1 Introduction

Random graph theory is one of the most important subjects in modern graph theory. Besides a rich theory of its own, random graphs have many connections and applications in the general area of combinatorics. Many existence results in graph theory are proved by using and modifying random graphs. Today, random graphs are widely used in computer science, engineering, physics and other branches of science.

There are many random graph models. The most classical models \({\mathscr {G}}(n,p)\) and \({\mathscr {G}}(n,m)\) were introduced by Erdős and Rényi [7, 8] more than half a century ago. The binomial model \({\mathscr {G}}(n,p)\) retains each edge of the complete graph \(K_n\) independently with probability p. The uniform model \({\mathscr {G}}(n,m)\) is simply \({\mathscr {G}}(n,p)\) conditioned on having exactly m edges. In other words, \({\mathscr {G}}(n,m)\) is the random graph on n vertices and m edges with the uniform distribution. These two models are the best studied and understood. The independence of the edges makes \({\mathscr {G}}(n,p)\) an easier model than many others, both for analysing its properties and for analysing algorithms on \({\mathscr {G}}(n,p)\). Some algorithms depend on the degrees of vertices, and unavoidably need to “expose” the degrees of the vertices as they proceed. For instance, the peeling algorithm [9, 13] for obtaining the k-core of a graph repeatedly deletes a vertex whose degree is below k.
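To make the degree-exposure point concrete, the following is a minimal sketch of such a peeling algorithm (our illustration; the adjacency-dictionary representation and the function name are ours, not taken from [9, 13]):

```python
from collections import deque

def k_core(adj, k):
    """Return the k-core of a graph by repeatedly deleting vertices of
    degree below k; adj maps each vertex to the set of its neighbours."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    queue = deque(v for v, nbrs in adj.items() if len(nbrs) < k)
    removed = set()
    while queue:
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)                    # delete vertex v
        for u in adj[v]:
            adj[u].discard(v)             # each neighbour loses one degree
            if u not in removed and len(adj[u]) < k:
                queue.append(u)           # newly low-degree vertices are queued
    return {v: nbrs for v, nbrs in adj.items() if v not in removed}
```

The order in which degrees drop below k is exactly the degree information that such an algorithm exposes as it proceeds.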

An important property of \({\mathscr {G}}(n,p)\) and \({\mathscr {G}}(n,m)\) is that, by conditioning on the vector of degrees of \({\mathscr {G}}(n,p)\) or \({\mathscr {G}}(n,m)\) being \({\varvec{d}}=(d_1,\ldots , d_n)^{\textrm{T}}\), with \(m=\frac{1}{2}\sum _{i=1}^n d_i\), the resulting random graph is exactly \({\mathscr {G}}(n,{\varvec{d}})\), the uniformly random graph with given degree sequence \({\varvec{d}}\). For the special case where \({\varvec{d}}=(d,\ldots ,d)^{\textrm{T}}\) for some constant d, that is, the random d-regular graph, we simply write \({\mathscr {G}}(n,d)\).

The model \({\mathscr {G}}(n,{\varvec{d}})\) is among the most important in the study of random graphs and large networks. It is often referred to as the Molloy–Reed model [21] in the network community. Unlike for \({\mathscr {G}}(n,p)\), probabilities of events in \({\mathscr {G}}(n,{\varvec{d}})\) such as two vertices u and v being adjacent are highly non-trivial to compute. The most common methods of analysis of \({\mathscr {G}}(n,{\varvec{d}})\) are the configuration model [2] for constant or slowly growing degrees, the switching method [18] for degrees bounded by a small power of n, and the complex-analytic method [11, 19] for very high degrees; see also the detailed survey by Wormald [24].

Nevertheless, many questions that deserve an affirmative answer remain open for \({\mathscr {G}}(n,{\varvec{d}})\) because the methods listed above have severe restrictions. For instance, is \({\mathscr {G}}(n,{\varvec{d}})\) Hamiltonian? What is the chromatic number of \({\mathscr {G}}(n,{\varvec{d}})\)? What is the connectivity of \({\mathscr {G}}(n,{\varvec{d}})\)? Using highly non-trivial switching arguments and enumeration results for d-regular graphs, these particular questions were answered [4, 15] for \({\mathscr {G}}(n,d)\). Using similar techniques it may be possible to work out the answers for the more general model \({\mathscr {G}}(n,{\varvec{d}})\). However, it would be desirable to have simpler approaches.

This is the motivation for the sandwich conjecture, proposed by Kim and Vu in 2004. They conjectured that for every \(d\gg \log n\), the random d-regular graph can be sandwiched between two binomial random graphs \({\mathscr {G}}(n,p_1)\) and \({\mathscr {G}}(n,p_2)\), the former with average degree slightly less than d, and the latter with average degree slightly greater. The formal statement is as follows. Recall that a coupling of random variables \(Z_1,\ldots ,Z_k\) is a random variable \((\hat{Z}_1,\ldots ,\hat{Z}_k)\) whose marginal distributions coincide with the distributions of \(Z_1,\ldots ,Z_k\), respectively. With slight abuse of notation, we use \((Z_1,\ldots ,Z_k)\) as a coupling of \(Z_1,\ldots ,Z_k\).
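For intuition, couplings of binomial random graphs are easy to realise directly: the standard monotone coupling sketched below (an elementary warm-up of ours, not the coupling constructed in this paper) draws one uniform variate per vertex pair, so \(G_1\subseteq G_2\) holds with probability 1 whenever \(p_1\leqslant p_2\). The difficulty in Conjecture 1 is that \({\mathscr {G}}(n,d)\) admits no such independent per-edge randomness.

```python
import random

def coupled_binomial_graphs(n, p1, p2, seed=None):
    """Monotone coupling of G(n,p1) and G(n,p2) for p1 <= p2: each pair
    {j,k} gets a single uniform U; the edge is placed in G1 iff U < p1 and
    in G2 iff U < p2, hence G1 is a subgraph of G2 with probability 1."""
    assert p1 <= p2
    rng = random.Random(seed)
    g1, g2 = set(), set()
    for j in range(n):
        for k in range(j + 1, n):
            u = rng.random()
            if u < p2:
                g2.add((j, k))
                if u < p1:
                    g1.add((j, k))
    return g1, g2
```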

Conjecture 1

(Sandwich Conjecture [14]) For \(d\gg \log n\), there are probabilities \(p_1=(1-o(1))d/n\) and \(p_2=(1+o(1))d/n\) and a coupling \(({G^{L}}, G, {G^U})\) such that \({G^{L}}\sim {\mathscr {G}}(n,p_1)\), \({G^U}\sim {\mathscr {G}}(n,p_2)\), \(G\sim {\mathscr {G}}(n,d)\) and \({\mathbb {P}}({G^{L}}\subseteq G\subseteq {G^U})=1-o(1)\).

The condition \(d\gg \log n\) in the conjecture is necessary. When \(p=O(\log n/n)\), with high probability there exist vertices in \({\mathscr {G}}(n,p)\) whose degrees differ from pn by a constant factor. Therefore, Conjecture 1 cannot hold for this range of d. For \(\log n\ll d\ll n^{1/3}/\log ^2 n\), Kim and Vu proved a weakened version of the sandwich conjecture where \(G\subseteq {G^U}\) is replaced by a bound on \(\varDelta (G\setminus {G^U})\) (see the precise statement in [14, Theorem 2]). Note that this weakened sandwich theorem already allows direct translation of many results from \({\mathscr {G}}(n,p)\) to \({\mathscr {G}}(n,d)\), including all increasing graph properties such as Hamiltonicity.

Dudek, Frieze, Ruciński and Šileikis [6] improved one side of Kim and Vu’s result, \({G^{L}}\subseteq G\), to cover all degrees d such that \(\log n \ll d \ll n\) and also extended it to the hypergraph setting. In particular, this new embedding theorem allows them to translate Hamiltonicity from binomial random hypergraphs to random regular hypergraphs.

In the graph case, the embedding result of [6] relies on an estimate for the probability of edge appearances in a random \(\varvec{t}\)-factor (spanning subgraph with degrees \(\varvec{t}\)) of a nearly complete graph S, where \(\varvec{t}\) is near-regular and sparse. This can be done using a switching argument, which has already appeared in several enumeration works; see for example [18]. Extending the results of [6] to \(d=\varTheta (n)\) requires new proof methods beyond switchings: one needs to consider the case when S is no longer a nearly complete graph, and components of \(\varvec{t}\) are all linear in n.

An immediate corollary of the sandwich conjecture, if it were true, is that one can couple two random regular graphs \(G_{1}\sim {\mathscr {G}}(n,d_1)\) and \(G_{2}\sim {\mathscr {G}}(n,d_2)\) such that asymptotically almost surely (a.a.s.) \(G_{1}\subseteq G_{2}\), provided \(d_2\) is sufficiently larger than \(d_1\). In fact, we conjecture that such a coupling exists whenever \(d_2\geqslant d_1\). However, the weakened versions of the sandwich conjecture, as proved in [14] and [6], are not strong enough to imply the existence of such a coupling, even when \(d_2\) is much greater than \(d_1\).

Conjecture 2

Let \(0\leqslant d_1\leqslant d_2\leqslant n-1\) be integers, other than \((d_1,d_2)=(1,2)\) or \((d_1,d_2)=(n-3,n-2)\). Assume \(d_1n\) and \(d_2n\) are both even. Then there exists a coupling \((G_{1}, G_{2})\) such that \(G_{1}\sim {\mathscr {G}}(n,d_1)\), \(G_{2}\sim {\mathscr {G}}(n,d_2)\), and \({\mathbb {P}}(G_{1}\subseteq G_{2})=1-o(1)\).

Remark 3

This conjecture, or some variant of it, has already been the subject of speculation and discussion in the community, but we have not found any written work about it. The case when \(d_1=1\) and \(3\leqslant d_2\leqslant n-1\) is simple, since almost all \(d_2\)-regular graphs have perfect matchings, which follows since they are at least \((d_2-1)\)-connected [4, 15]. Generate a random \(d_2\)-regular graph \(G_{2}\). If \(G_{2}\) has a perfect matching, select one uniformly at random; otherwise select a random 1-regular graph. By symmetry, this gives a random 1-regular graph which is a subgraph of \(G_{2}\) with probability \(1-o(1)\).

The two binomial random graphs in Conjecture 1 differ by o(d/n) in edge density. This gap gives enough room to sandwich a random graph with more relaxed degree sequences. We propose a stronger sandwich conjecture stated as Conjecture 4 below.

Given a vector \({\varvec{d}}=(d_1,\ldots , d_n)^{\textrm{T}}\in {\mathbb {R}}^n\), let \({\text {rng}}(\varvec{d})\) stand for the difference between the maximum and minimum components of \(\varvec{d}\). Denoting \(\varDelta (\varvec{d}):= \max _j d_j\), we can also write \({\text {rng}}(\varvec{d}) = \varDelta (\varvec{d}) + \varDelta (-\varvec{d})\). If \(\varvec{d}(G)\) is the degree sequence of a graph G, we will also use the notation \(\varDelta (G) := \varDelta (\varvec{d}(G))\).

Definition 1

A sequence \(\varvec{d}(n) \in \{0,\ldots , n-1\}^n\) is called near-regular as \(n\rightarrow \infty \) if

$$\begin{aligned} {\text {rng}}(\varvec{d}(n)) = o(\varDelta (\varvec{d}(n))) \quad \text {and}\quad {\text {rng}}(\varvec{d}(n)) = o(n- \varDelta (\varvec{d}(n))). \end{aligned}$$

Conjecture 4

Assume \({\varvec{d}}= \varvec{d}(n)\) is a near-regular degree sequence such that \(\varDelta (\varvec{d})\gg \log n\). Then, there are \(p_1=(1-o(1))\varDelta (\varvec{d})/n\) and \(p_2=(1+o(1))\varDelta (\varvec{d})/n\) and a coupling \(({G^{L}}, G, {G^U})\) such that \({G^{L}}\sim {\mathscr {G}}(n,p_1)\), \({G^U}\sim {\mathscr {G}}(n,p_2)\), \(G\sim {\mathscr {G}}(n,{\varvec{d}})\) and \({\mathbb {P}}({G^{L}}\subseteq G\subseteq {G^U})=1-o(1)\).

We categorise the family of near-regular degree sequences into the following three classes.

Definition 2

Let \(\varvec{d}\) be a near-regular degree sequence. We say \(\varvec{d}\) is sparse if \(\varDelta (\varvec{d})=o(n)\); dense if \(\min \{\varDelta (\varvec{d}), n-\varDelta (\varvec{d})\}=\varTheta (n)\); and co-sparse if \(n-\varDelta (\varvec{d})=o(n)\).

In this paper, we confirm Conjecture 4 for all dense near-regular \(\varvec{d}\), and we confirm Conjecture 1 for all d with \(\min \{d,n-d\}\gg n/\sqrt{\log n}\). This proves the sandwich conjecture of Kim and Vu for asymptotically almost all d. We will treat sparse and co-sparse near-regular degree sequences in a subsequent paper, as the proof techniques required for those ranges are very different from those in this work.

2 Main results

Throughout the paper we assume that \(\varvec{d}\) is a realisable degree sequence, i.e. \({\mathscr {G}}(n,\varvec{d})\) is nonempty. This necessarily requires that \({\varvec{d}}\) has nonnegative integer coordinates and even sum. All asymptotics in the paper refer to \(n\rightarrow \infty \), and there is an implicit assumption that statements about functions of n hold when n is large enough. For two sequences of real numbers \(a_n\) and \(b_n\), we say \(a_n=o(b_n)\) if \(b_n\ne 0\) eventually and \(\lim _{n\rightarrow \infty } a_n/b_n=0\). We say \(a_n=O(b_n)\) if there exists a constant \(C>0\) such that \(|a_n|\leqslant C\,|b_n|\) for all (large enough) n. We write \(a_n=\omega (b_n)\) or \(a_n=\varOmega (b_n)\) if \(a_n>0\) for all n and \(b_n=o(a_n)\) or \(b_n=O(a_n)\), respectively. If both \(a_n\) and \(b_n\) are positive sequences, we will also write \(a_n\ll b_n\) if \(a_n=o(b_n)\), and \(a_n\gg b_n\) if \(a_n=\omega (b_n)\). Our contributions towards Conjectures 1 and 4 are given by the following theorems.

Theorem 5

Conjecture 4 is true for all dense near-regular degree sequences.

Theorem 6

Conjecture 1 is true for all d where \(\min \{d,n-d\}\gg n/\sqrt{\log n}\,\).

Theorem 6 directly implies a weaker version of Conjecture 2.

Corollary 1

There is a coupling \((G_{d_1},G_{{d}_2})\) such that \(G_{{d}_1}\sim {\mathscr {G}}(n, {d}_1)\), \(G_{{d}_2}\sim {\mathscr {G}}(n, {d}_2)\) and \( {\mathbb {P}}(G_{{d}_1}\subseteq G_{{d}_2})=1-o(1)\), if \(d_1, n-d_2\gg n/\sqrt{\log n}\) and \(d_2/d_1 = 1 + \varOmega (1)\).

This section is organised as follows. First, we discuss some important properties that immediately translate from \({\mathscr {G}}(n,p)\) to random graphs with given degrees by our sandwich theorems. Then, we show that Theorem 5 and Theorem 6 follow from a more general and more accurate theorem for embedding \({\mathscr {G}}(n,p)\) inside \({\mathscr {G}}(n,\varvec{d})\) (see Theorem 8 in Sect. 2.2). The proof of the embedding theorem is long and technical, so we postpone it until Sect. 4. Nevertheless, in Sect. 2.3, we state its key ingredient, which may be of independent interest: the probability estimate for a random factor in a pseudorandom graph to contain or avoid a given small set of edges.

2.1 Translation from \({\mathscr {G}}(n,p)\) to \({\mathscr {G}}(n,\varvec{d})\)

Our sandwich theorem allows translation of many results from binomial random graphs to random graphs with dense near-regular degree sequences. Some of the translations can already be obtained from a one-sided sandwich, e.g. those of monotone properties. For instance, we can immediately transfer Hamiltonicity or the containment of other subgraphs from \({\mathscr {G}}(n,p)\) to \({\mathscr {G}}(n,\varvec{d})\). Other translations require sandwiching on both sides. For example, we can translate graph parameters such as the chromatic number, diameter, and independence number. We refer the reader to the conference version of this work [10, Section 3] for these applications. In this section, we only give one example showing how to translate threshold functions of phase transitions from \({\mathscr {G}}(n,p)\) to those of the random graph obtained by edge percolation on \({\mathscr {G}}(n,\varvec{d})\). A graph property \(\varGamma \) has threshold f(n) in \({\mathscr {G}}(n,p)\) if

$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {P}}({\mathscr {G}}(n,p)\in \varGamma ) = {\left\{ \begin{array}{ll} 0, &{} \text{ if }\, p\ll f(n), \\ 1, &{}\text{ if }\,p\gg f(n). \end{array}\right. } \end{aligned}$$

We say \(\varGamma \) has a sharp threshold f(n) in \({\mathscr {G}}(n,p)\) if for every fixed \(\varepsilon >0\),

$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {P}}({\mathscr {G}}(n,p)\in \varGamma ) = {\left\{ \begin{array}{ll} 0, &{} \text{ if }\, p<(1-\varepsilon ) f(n), \\ 1, &{}\text{ if }\, p>(1+\varepsilon ) f(n). \end{array}\right. } \end{aligned}$$

The concept of (sharp) threshold extends naturally to other random graph models such as \({\mathscr {G}}(n,m)\), \({\mathscr {G}}(n,d)\) and \({\mathscr {G}}(n,\varvec{d})\) where \(\varvec{d}\) is near-regular.

Theorem 7

(Percolation on \({\mathscr {G}}(n,{\varvec{d}})\)) Assume \({\varvec{d}}\) is near-regular and dense. Let \(G\sim {\mathscr {G}}(n,{\varvec{d}})\) and \(G_p\) be the subgraph of G obtained by independently keeping each edge with probability p. Let Q be a monotone property and let th(Q) denote a (sharp) threshold function of Q in \({\mathscr {G}}(n,p)\). Then \((n/\varDelta (\varvec{d}))\cdot th(Q)\) is a (sharp) threshold function of Q in \(G_p\).

We give one example of Theorem 7. A giant component in \({\mathscr {G}}(n,p)\) is a component of size linear in n. The determination of the sharp threshold for the emergence of a giant component in \({\mathscr {G}}(n,p)\) is a landmark result in random graph theory. The emergence threshold of a giant component in other random graph models has also been extensively studied. For instance, the emergence threshold of a giant component in \(G_p\) is known to be \(1/(d-1)\) in the special case where \(G\sim {\mathscr {G}}(n,d)\) with \(d\geqslant 3\), following from a sequence of results [15, 16, 22]. Theorem 7 extends this result to near-regular degree sequences where \(\varDelta (\varvec{d})=\varOmega (n)\).

Corollary 2

(Giant component) Assume \({\varvec{d}}\) is near-regular and \(\varDelta (\varvec{d})=\varOmega (n)\). The emergence of a giant component in \(G_p\) has a sharp threshold \(1/\varDelta (\varvec{d})\).
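Theorem 7 and Corollary 2 are easy to explore numerically. The sketch below (ours; it uses networkx and the regular case \(G\sim {\mathscr {G}}(n,d)\) from the discussion above, so it is an illustration rather than a verification) percolates a random d-regular graph at probabilities below, at, and above the threshold \(1/(d-1)\) and reports the relative size of the largest component:

```python
import random
import networkx as nx

def percolate(G, p, seed=None):
    """Keep each edge of G independently with probability p."""
    rng = random.Random(seed)
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    H.add_edges_from(e for e in G.edges if rng.random() < p)
    return H

def giant_fraction(G):
    """Fraction of vertices in the largest connected component."""
    return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

n, d = 20000, 4
G = nx.random_regular_graph(d, n, seed=1)
for p in (0.20, 1 / (d - 1), 0.50):   # below, at, and above the threshold
    print(p, giant_fraction(percolate(G, p, seed=2)))
```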

2.2 Embedding theorem

Our sandwich theorems are corollaries of the following more general theorem for embedding \({\mathscr {G}}(n,p)\) inside \({\mathscr {G}}(n,\varvec{d})\). In fact, the embedding theorem provides better \(p_1\) and \(p_2\) and sharper bounds on the probability of \({G^{L}}\subseteq G\subseteq {G^U}\) than Theorems 5 and 6.

Theorem 8

(The embedding theorem) Let \(\varvec{d}= \varvec{d}(n) \in {\mathbb {N}}^n\) be a degree sequence and \(\xi = \xi (n)>0\) be such that \(\xi (n) = o(1)\). Denote \(\varDelta = \varDelta (\varvec{d})\). Assume \( n\cdot {\text {rng}}(\varvec{d}) \leqslant \xi \varDelta (n-\varDelta )\) and \(n-\varDelta \gg \xi \varDelta \gg n / \log n\). Then there exist \(p = (1- O(\xi ))\varDelta /n\) and a coupling \(({G^{L}},G)\) with \({G^{L}}\sim {\mathscr {G}}(n,p)\) and \(G \sim {\mathscr {G}}(n,\varvec{d})\) such that

$$\begin{aligned} {\mathbb {P}}({G^{L}}\subseteq G) = 1 - e^{-\varOmega (\xi ^4\varDelta )} =1- e^{-\omega (n/\log ^4 n) }. \end{aligned}$$

We prove Theorem 8 in Sect. 4 using the coupling procedure described in Sect. 3. The proposition below shows that the probability bound of Theorem 8 is tight up to an additional three powers of \(\log n\) in the exponent.

Lemma 1

Assume \(\varvec{d}\) is d-regular, i.e. all components are equal to d. Let \(({G^{L}},G)\) be any coupling such that \( {G^{L}}\sim {\mathscr {G}}(n,p)\) and \(G\sim {\mathscr {G}}(n,d)\), where \(p(n-1)\leqslant d\). Then

$$\begin{aligned} {\mathbb {P}}({G^{L}}\not \subseteq G) = \varOmega \Bigl ( \Bigl (\frac{p(n-1)}{d}\Bigr )^{d+d^{1/2}}\Bigr ). \end{aligned}$$

In particular, if p is defined as in Theorem 8, then

$$\begin{aligned} {\mathbb {P}}({G^{L}}\not \subseteq G) = e^{-O(\xi d)} = e^{-o(\xi ^4 d\log ^3 n)}. \end{aligned}$$

Proof

We have

$$\begin{aligned} 1-{\mathbb {P}}({G^{L}}\subseteq G) \geqslant {\mathbb {P}}_{{\mathscr {G}}(n,p)}(v_1\,\text { has degree greater than}\, d)={\mathbb {P}}({\text {Bin}}(n-1,p) \geqslant d+1 ). \end{aligned}$$

The given bound follows by comparing the probabilities of the values \(\{d+1,\ldots ,d+d^{1/2}\}\) under \({\text {Bin}}(n-1,p)\) and under \({\text {Bin}}(n-1,d/(n-1))\). For the second part, observe that the assumptions imply \(\xi \gg 1/\log n\). \(\square \)
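As a numerical sanity check (ours, using scipy; the parameter values are arbitrary), one can compare the binomial upper tail in the proof with the bound \((p(n-1)/d)^{d+d^{1/2}}\) of Lemma 1:

```python
from scipy.stats import binom

n, d = 10000, 200
for ratio in (0.90, 0.95, 0.99):      # ratio = p(n-1)/d <= 1
    p = ratio * d / (n - 1)
    tail = binom.sf(d, n - 1, p)      # P(Bin(n-1, p) >= d + 1)
    bound = ratio ** (d + d ** 0.5)
    print(f"ratio={ratio}: tail={tail:.3e}, bound={bound:.3e}")
```

In each case the tail exceeds the bound, consistent with the \(\varOmega (\,)\) statement.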

Now we prove that Theorem 5 follows from Theorem 8.

Proof (of Theorem 5)

Let \(\varvec{d}'=(n-1)\varvec{1}-\varvec{d}\). Since \(\varvec{d}\) is near-regular and dense, there exists \(\xi =o(1)\) which satisfies all the conditions in Theorem 8 for both \(\varvec{d}\) and \(\varvec{d}'\). Hence, by Theorem 8 applied to \(\varvec{d}\), there exists \(p_1=(1-o(1))\varDelta (\varvec{d})/n\) and a coupling \(\pi \) which a.a.s. embeds \({\mathscr {G}}(n,p_1)\) into \({\mathscr {G}}(n,\varvec{d})\). Similarly, by Theorem 8 applied to \(\varvec{d}'\), there exists \(p_2= (1+o(1))\varDelta (\varvec{d})/n\) and a coupling \(\bar{\pi }\) which a.a.s. embeds \({\mathscr {G}}(n,1-p_2)\) into \({\mathscr {G}}(n,\varvec{d}')\). Let \(\pi '\) be the coupling obtained by complementing each component of \(\bar{\pi }\). Clearly, \(\pi '\) a.a.s. embeds \({\mathscr {G}}(n,\varvec{d})\) into \({\mathscr {G}}(n,p_2)\).

We can now stitch \(\pi \) and \(\pi '\) together to construct a coupling \(({G^{L}},{G},{G^U})\), where \({G^{L}}\sim {\mathscr {G}}(n,p_1)\), \({G}\sim {\mathscr {G}}(n,\varvec{d})\) and \({G^U}\sim {\mathscr {G}}(n,p_2)\). First uniformly generate a graph \(G\in {\mathscr {G}}(n,\varvec{d})\). Then, conditional on G, generate \({G^{L}}\) under \(\pi \) and generate \({G^U}\) under \(\pi '\). This yields \(({G^{L}},{G},{G^U})\) with the desired marginal distributions. Moreover, a.a.s. \({G^{L}}\subseteq {G}\subseteq {G^U}\). \(\square \)
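In schematic form (ours; the three samplers are abstract placeholders rather than concrete implementations), the stitching construction reads as follows:

```python
import random

def stitch(sample_G, sample_L_given_G, sample_U_given_G, seed=None):
    """Stitch two couplings that share the G(n,d)-marginal: draw G once,
    then draw G^L and G^U conditionally independently given G. Each marginal
    is preserved, and a.a.s. both inclusions G^L <= G and G <= G^U hold,
    by a union bound over the two failure events."""
    rng = random.Random(seed)
    G = sample_G(rng)                      # G ~ G(n, d)
    GL = sample_L_given_G(G, rng)          # G^L | G under the coupling pi
    GU = sample_U_given_G(G, rng)          # G^U | G under the coupling pi'
    return GL, G, GU
```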

Based on Theorem 8 we also establish the following result, which covers Theorem 6.

Theorem 9

Assume the degree sequence \(\varvec{d}\) is near-regular, with \(\varDelta (\varvec{d}), n-\varDelta (\varvec{d})\gg n/\sqrt{\log n}\) and \({\text {rng}}(\varvec{d})=O(\varDelta (\varvec{d})/\log n)\). Then, Conjecture 4 holds for \(\varvec{d}\).

Proof

Let \(\xi =1/\sqrt{\log n}\) and let \(\varvec{d}':=(n-1)\varvec{1}-\varvec{d}\) as in the proof of Theorem 5. Then the conditions of Theorem 8 are satisfied by both \(\varvec{d}\) and \(\varvec{d}'\). Hence, we obtain an embedding of \({\mathscr {G}}(n,p)\) inside \({\mathscr {G}}(n,\varvec{d})\) where \(p=(1-O(\xi ))\varDelta (\varvec{d})/n = (1-o(1))\varDelta (\varvec{d})/n\). We also obtain an embedding of \( {\mathscr {G}}(n,p')\) inside \({\mathscr {G}}(n,\varvec{d}')\) where

$$\begin{aligned} p'&=(1-O(\xi ))\frac{n-\varDelta (\varvec{d})+{\text {rng}}(\varvec{d})}{n}\\&=1-(1+O(\xi ))\frac{\varDelta (\varvec{d})}{n}+ O\Bigl (\xi +\frac{{\text {rng}}(\varvec{d})}{n}\Bigr )=1-(1+o(1))\frac{\varDelta (\varvec{d})}{n}. \end{aligned}$$

Complementing gives an embedding of \({\mathscr {G}}(n,\varvec{d})\) inside \({\mathscr {G}}(n,1-p')\), where \(1-p'=(1+o(1))\varDelta /n\) as required. Note that the error \(O\bigl (\xi +{\text {rng}}(\varvec{d})/n\bigr )\) in the last equation is absorbed because of the conditions \(\varDelta (\varvec{d})\gg n/\sqrt{\log n}\) and \({\text {rng}}(\varvec{d})=O(\varDelta (\varvec{d})/\log n)\). This proves that Conjecture 4 is true for near-regular \(\varvec{d}\) where \(\varDelta (\varvec{d}), n-\varDelta (\varvec{d})\gg n/\sqrt{\log n}\) and \({\text {rng}}(\varvec{d})=O(\varDelta (\varvec{d})/\log n)\). \(\square \)

2.3 Forced and forbidden edges in a random factor

As explained later, see Question 1 in Sect. 3, a key step towards proving Theorem 8 is to estimate, to the desired accuracy, the edge probability in a random \(\varvec{t}\)-factor \(S_{\varvec{t}}\) of a graph S, where \(\varvec{t}= (t_1,\ldots ,t_n)^{\textrm{T}}\) is a degree sequence.

We will estimate the edge probabilities by enumerating \(\varvec{t}\)-factors of S, using a complex-analytic approach which is presented in detail in Sect. 5. Here, we just give a quick overview. Given S, the generating function for subgraphs of S with given degrees is \(\prod _{jk\in S}(1+z_jz_k)\). Using Cauchy’s integral formula, we find that the number \( N(S, \varvec{t})\) of \(\varvec{t}\)-factors of S is given by

$$\begin{aligned} N(S, \varvec{t}) = \frac{1}{(2\pi i)^n} \oint \cdots \oint \, \frac{\prod _{jk\in S}(1+z_jz_k)}{ z_1^{t_1+1} \cdots \, z_{n}^{t_n+1}} \,dz_1 \cdots dz_n. \end{aligned}$$
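As a small sanity check of this identity (our illustration, not part of the proof), one can compare the coefficient extraction with brute-force enumeration on a tiny instance, here the 2-factors of \(K_4\):

```python
from functools import reduce
from itertools import combinations
from operator import mul
from sympy import symbols, Poly

n = 4
z = symbols(f"z0:{n}")
edges = list(combinations(range(n), 2))          # S = K_4
t = (2, 2, 2, 2)                                 # 2-factors of K_4

# Coefficient of z^t in prod_{jk in S} (1 + z_j z_k):
gen = reduce(mul, (1 + z[j] * z[k] for j, k in edges))
monom = reduce(mul, (z[v] ** t[v] for v in range(n)))
coeff = Poly(gen.expand(), *z).coeff_monomial(monom)

# Brute force: count spanning subgraphs with degree sequence t.
brute = sum(
    1
    for r in range(len(edges) + 1)
    for F in combinations(edges, r)
    if all(sum(v in e for e in F) == t[v] for v in range(n))
)
print(coeff, brute)  # both print 3, the number of 2-factors of K_4
```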

We will derive an asymptotic expression of \(N(S,\varvec{t})\) using a multidimensional variant of the saddle-point method. The integral is split into two parts. The first part corresponds to the neighbourhood of saddle points. Using the Laplace approximation, we need to estimate the moment-generating function of a polynomial with complex coefficients of an n-dimensional Gaussian random vector. To do this, we apply the general theory based on complex martingales developed in [11]. The second part consists of the integral over the other regions and has a negligible contribution. See Sect. 5 and Theorem 11 for these calculations.

Estimating both parts of the integral is highly non-trivial and this analysis was previously done in the literature only for the case when S is the complete graph \(K_n\) or not far from it, see [1, 11, 19, 20]. Extending these results to a general graph S requires significant improvements of known techniques. Our enumeration result (see Theorem 11) gives an asymptotic value of \(N(S,\varvec{t})\) for S such that every pair of vertices have \(\varTheta (\varDelta ^2(S)/n)\) common neighbours and under some technical conditions on \(\varvec{t}\).

We also investigate the connection between the random graph \(S_{\varvec{t}}\) and the so-called \(\beta \)-model which belongs to the exponential family of random graphs. We show that the probability of containing/avoiding a prescribed small set of edges is asymptotically the same for both models; see Sect. 6 and Theorem 12.

We would like to note that, even for \(S=K_n\), Theorem 11 and Theorem 12 extend previously known results. Below, we state a special case of Theorem 12 for the case when the degree sequence \(\varvec{t}\) is approximately proportional to the degree sequence of S. Here and throughout the paper, we use \(\Vert \cdot \Vert _p\) for \(p\in \{1,2,\infty \}\) to denote the customary vector norms and their induced matrix norms.

Theorem 10

Let S be a graph with vertex set [n] and degree sequence \((s_1,\ldots ,s_n)\). For a degree sequence \(\varvec{t}= (t_1,\ldots ,t_n)\), let

$$\begin{aligned} \lambda := \frac{t_1 +\cdots +t_n}{ s_1 + \cdots +s_n}. \end{aligned}$$

Let \(H^{+}\) and \(H^{-}\) be disjoint subgraphs of S, and let \(\varvec{h}\) be the degree sequence of \(H^{+}\cup H^{-}\). Assume that the following conditions hold:

(A1) for any two distinct vertices j and k and some constant \(\gamma >0\), we have

$$\begin{aligned} \frac{\gamma \varDelta ^2(S)}{n} \leqslant \left| \{\ell \mathrel {:}j\ell \in S \text { and } k\ell \in S\}\right| \leqslant \frac{\varDelta ^2(S)}{\gamma n}; \end{aligned}$$

(A2) \(\lambda (1-\lambda ) \varDelta (S) \gg \frac{n}{\log n}\);

(A3) \(\Vert \varvec{t}-\lambda \varvec{s}\Vert _\infty \,\Vert \varvec{h}\Vert _1 \ll \lambda (1-\lambda ) \varDelta (S)\);

(A4) \(\Vert \varvec{h}\Vert _2^2 \ll \lambda (1-\lambda ) \varDelta (S)\).

Let \(S_{\varvec{t}}\) be a random \(\varvec{t}\)-factor of S. Then, for any \(\varepsilon >0\),

$$\begin{aligned}&{\mathbb {P}}\bigl ( H^{+} \subseteq S_{\varvec{t}} \text { and } H^{-} \cap S_{\varvec{t}}=\emptyset \bigr )\\&\quad = \left( 1 + O\Bigl (n^{-1/2+\varepsilon } + \frac{\Vert \varvec{t}-\lambda \varvec{s}\Vert _\infty \,\Vert \varvec{h}\Vert _1 + \Vert \varvec{h}\Vert _2^2}{ \lambda (1-\lambda ) \varDelta (S)} \Bigr )\right) \lambda ^{m(H^+)} (1-\lambda )^{m(H^{-})}, \end{aligned}$$

where \(m(H^{+})\) and \(m(H^{-})\) denote the number of edges in \(H^+\) and \(H^-\), respectively.

Theorem 10 will be sufficient for us to prove the embedding theorem. In fact, for Theorem 8, we only need to consider the case when \(H^+\) is an edge and \(H^-\) is empty, for which (A4) is trivial; see Sect. 4. We prove Theorem 10 in Sect. 6.3.

3 Embedding \({\mathscr {G}}(n,p)\) inside \({\mathscr {G}}(n,\varvec{d})\)

To prove our embedding theorem, we will use a procedure called Coupling \((\,)\) which constructs a joint distribution of \(({G^{L}},{G})\) where \({G^{L}}\subseteq {G}\) a.a.s. and their marginal distributions follow \({\mathscr {G}}(n,p)\) and \({\mathscr {G}}(n,\varvec{d})\) respectively. The procedure is given in Fig. 1.

Fig. 1 Procedures Coupling \((\,)\) and IndSample \((\,)\)

3.1 The coupling procedure

Procedure Coupling \((\,)\) takes a graphical degree sequence \(\varvec{d}\), a positive integer \(\mathscr {I}\) and a positive real \(\zeta <1\) as an input, and outputs three random graphs \({G_\zeta }\), \({G}\), \({G_0}\), all on [n], such that \({G}\sim {\mathscr {G}}(n,{\varvec{d}})\) and \({G_\zeta }\subseteq {G_0}\). Roughly speaking, the procedure constructs \(({G_\zeta }^{(t)},{G}^{(t)},{G_0}^{(t)})\) by sequentially adding edges to the three graphs, and \({G_\zeta }^{(t)}\subseteq {G}^{(t)}\subseteq {G_0}^{(t)}\) is maintained up to step \(\mathscr {I}\). The outputs \({G_\zeta }\) and \({G_0}\) of Coupling \((\,)\) will be \({G_\zeta }^{(\mathscr {I})}\) and \({G_0}^{(\mathscr {I})}\), ignoring some technicalities. The output \({G}\) will be a “proper” completion of \({G}^{(\mathscr {I})}\) into a graph with degree sequence \(\varvec{d}\). For a careful choice of \(\mathscr {I}\) and \(\zeta \), procedure Coupling \((\,)\) typically produces an outcome in which \({G_\zeta }\subseteq {G}\) and \({G}- {G_0}\) is “small”. Moreover, if \(\mathscr {I}\) is chosen randomly according to a suitable distribution, which we specify later in this section, then \({G_\zeta }\sim {\mathscr {G}}(n,p_\zeta )\) and \({G_0}\sim {\mathscr {G}}(n, p_0)\), where \(p_{\zeta }\approx p_0\) for small \(\zeta >0\). (See the definition of \(p_{\zeta }\) in Lemma 2.) Even though we only need the coupling \(({G^{L}},{G})\) with \({G^{L}}= {G_\zeta }\) for our purposes, it will be convenient to include \({G_0}\) in our coupling construction in order to deduce certain properties of \({G}\) required for our proofs.

In rare cases, if certain parameters become too large, Coupling \((\,)\) calls another procedure IndSample \((\,)\). Procedure IndSample \((\,)\) also generates three random graphs \({G_\zeta }\sim {\mathscr {G}}(n,p_\zeta )\), \({G}\sim {\mathscr {G}}(n,{\varvec{d}})\) and \({G_0}\sim {\mathscr {G}}(n,p_0)\) but the relation \({G_\zeta }\subseteq {G}\) is not a.a.s. guaranteed. In fact, \({G}\) will be independent of \(({G_\zeta },{G_0})\). The main challenge will be to show that the probability for Coupling \((\,)\) to call IndSample \((\,)\) is rather small.

If M is a multigraph, we write \(G \lhd M\) if G is the simple graph obtained by suppressing multiple edges in M into single edges. With a slight abuse of notation, we write \(jk\in {\mathscr {G}}(n,{\varvec{d}})\) for the event that jk is an edge in a graph randomly chosen from \({\mathscr {G}}(n,{\varvec{d}})\). All graphs under consideration are defined on [n] and thus we can treat graphs as subsets of \(\left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \). Thus \(H\subseteq G\) is equivalent to \(E(H)\subseteq E(G)\). If \(X\subseteq K_n\), we write \({\mathbb {P}}(jk\in {\mathscr {G}}(n,{\varvec{d}})\mid X)\) for the probability that jk is an edge in G where G is randomly chosen from \({\mathscr {G}}(n,{\varvec{d}})\) conditioned on \(X\subseteq G\).

The details of procedures Coupling \((\,)\) and IndSample \((\,)\) are shown in Fig. 1. Note that Coupling \((\,)\) consists of two loops indexed by a contiguous sequence of values of \({\iota }\). When we refer to “step \({\iota }\)” or “\({\iota }\) iterations”, we refer to the point in Coupling \((\,)\) where \({\iota }\) has that value, regardless of which of the two loops we are in.

Our next lemma verifies that \({G_\zeta }\) and \({G_0}\) output by Coupling \((\varvec{d}, \mathscr {I}, \zeta )\) have the desired distributions if \(\mathscr {I}\) is an integer drawn from a Poisson distribution with a properly chosen mean. (With a slight abuse of notation, we write \(\mathscr {I}\sim {\text {Po}}(\mu )\), but note that the argument passed to Coupling \((\,)\) is not a random variable but a single integer drawn from the distribution \({\text {Po}}(\mu )\).) Denote by \( N=\left( {\begin{array}{c}n\\ 2\end{array}}\right) \) the number of edges in \(K_n\).

Lemma 2

Let \(\mathscr {I}\sim {\text {Po}}(\mu )\) and \(({G_\zeta }, {G}, {G_0})\) be the output of Coupling \((\varvec{d}, \mathscr {I}, \zeta )\). Then \({G_0}\sim {\mathscr {G}}(n,p_0)\) and \({G_\zeta }\sim {\mathscr {G}}(n, p_\zeta )\), where

$$\begin{aligned} p_0 := 1- e^{-\mu /N} \, \text { and } \, p_\zeta := 1-e^{-\mu (1-\zeta )/N}. \end{aligned}$$

Proof

By the definition of Coupling \((\,)\) and IndSample \((\,)\), whether IndSample \((\,)\) is called or not, the construction of \(G_{\zeta }\) and \(G_0\) lasts exactly \(\mathscr {I}\) steps. In each step \(1\leqslant {\iota }\leqslant \mathscr {I}\), a uniformly random edge jk of \(K_n\) is chosen. Then jk is always added to \(M_0^{({\iota })}\), and is added to \(M_{\zeta }^{({\iota })}\) with probability \(1-\zeta \).

Let \(e_1,\ldots ,e_N\) be an enumeration of the edges of \(K_n\). For \(1\leqslant z\leqslant N\), let \(X_{z}\) denote the number of times that edge \(e_z\) is chosen during these \(\mathscr {I}\) iterations. Clearly,

$$\begin{aligned} {\mathbb {P}}(X_z=0)&=\sum _{m=0}^{\infty } e^{-\mu }\frac{\mu ^m}{m!} (1-1/N)^m=e^{-\mu +\mu (1-1/N)}=e^{-\mu /N}. \end{aligned}$$

Moreover, the probability generating function for the random vector \(\varvec{X}=(X_z)_{z\in [N]}\) is

$$\begin{aligned}&\sum _{j_1,\ldots ,j_{N}} {\mathbb {P}}(X_1=j_1,\ldots ,X_{N}=j_{N}) x_1^{j_1}\cdots x_{N}^{j_{N}} =\sum _{m=0}^\infty e^{-\mu } \frac{\mu ^m}{m!}\left( \frac{\sum _{1\leqslant j\leqslant N}x_j}{N}\right) ^m\\&\quad =\exp \left( -\mu +\mu \left( \frac{\sum _{1\leqslant j\leqslant N}x_j}{N}\right) \right) =\prod _{1\leqslant j\leqslant N} \exp \left( -\frac{\mu }{N}+\frac{\mu x_j}{N}\right) . \end{aligned}$$

This implies that the components of \(\varvec{X}\) are independent. Hence, each edge of \(K_n\) is included in G independently with probability \({\mathbb {P}}(X_z\geqslant 1)=1-e^{-\mu /N}\). This verifies that \({G_0}\sim {\mathscr {G}}(n,p_0)\).

Next we consider the distribution of \({G_\zeta }\). By the definition of Coupling \((\varvec{d},\mathscr {I},\zeta )\), for every \(1\leqslant {\iota }\leqslant \mathscr {I}\), the chosen edge \(e_z\) is added to \( M^{({\iota })}_\zeta \) with probability \(1-\zeta \). Let \(Y_z\) denote the multiplicity of \(e_z\) in \(M^{(\mathscr {I})}_\zeta \). Observe that the distribution of \(\varvec{Y}=(Y_z)_{z\in [N]}\) is the same as that of \(\varvec{X}\) but with \(\mathscr {I}\) replaced by \(\mathscr {I}'\sim {\text {Bin}}(\mathscr {I}, 1-\zeta )\). Since a binomial thinning of a Poisson variable is again Poisson, \(\mathscr {I}'\sim {\text {Po}}(\lambda ')\) where \(\lambda '=\mu (1-\zeta )\). Thus, we conclude that \({G_\zeta }\sim {\mathscr {G}}(n,p_\zeta )\). \(\square \)
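The splitting computation in this proof can be checked quickly by simulation. The sketch below (ours; numpy-based, with arbitrary parameter values) tracks one fixed edge of \(K_n\) through the first-phase process and compares its empirical appearance probabilities in \({G_0}\) and \({G_\zeta }\) with \(p_0\) and \(p_\zeta \):

```python
import numpy as np

def check_lemma2(n=30, mu=300.0, zeta=0.1, trials=200_000, seed=0):
    """For one fixed edge of K_n: draw I ~ Po(mu) uniform edge choices;
    the edge enters M_0 every time it is chosen, and survives into
    M_zeta with probability 1 - zeta per choice."""
    rng = np.random.default_rng(seed)
    N = n * (n - 1) // 2
    I = rng.poisson(mu, size=trials)       # number of steps in each run
    X = rng.binomial(I, 1.0 / N)           # times the fixed edge is chosen
    Y = rng.binomial(X, 1.0 - zeta)        # copies surviving the zeta-thinning
    print((X >= 1).mean(), 1 - np.exp(-mu / N))                # vs p_0
    print((Y >= 1).mean(), 1 - np.exp(-mu * (1 - zeta) / N))   # vs p_zeta

check_lemma2()
```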

If \(G \sim {\mathscr {G}}(n,\varvec{d})\) and \(m \leqslant \frac{1}{2} \sum _j d_j = |E(G)|\), let \({\mathscr {G}}(n,{\varvec{d}},m)\) denote the probability space of all subgraphs of G containing exactly m edges with the uniform distribution. In the next lemma, we verify the marginal distribution of \({G}^{({\iota })}\) during the coupling procedure. Define \( m^{({\iota })}\) to be the number of edges in \(G^{({\iota })}\).

Lemma 3

Suppose IndSample \((\,)\) was not called during the first \({\iota }\) iterations of procedure Coupling \((\,)\). Then \(G^{({\iota })} \sim {\mathscr {G}}(n,\varvec{d},m^{({\iota })})\).

Proof

With a slight abuse of notation, let \(G^{({\iota })} \) be the graph where edges are labelled with \([m^{({\iota })}]\) in the order that they are added by Coupling \((\,)\). We will prove by induction that \(G^{({\iota })} \) has the same distribution as the graph obtained by uniformly labelling edges in \({\mathscr {G}}(n,\varvec{d},m^{({\iota })})\) with \([m^{({\iota })}]\). This is obviously true for \({\iota }=1\).

Without loss of generality, assume \(G^{({\iota }-1)} \) has \(m^{({\iota })}-1\) edges and has the claimed distribution, and assume that \(G^{({\iota })}\) contains \(m^{({\iota })}\) edges. Let \({\mathscr {L}}(G^{({\iota }-1)})\) be the set of edge-labelled graphs with degree sequence \({\varvec{d}}\) which contain \(G^{({\iota }-1)} \) as an edge-labelled subgraph. For every \(jk\notin G^{({\iota }-1)} \), let \({\mathscr {L}}(G^{({\iota }-1)},jk)\) be the set of edge-labelled graphs in \({\mathscr {L}}(G^{({\iota }-1)})\) which contain jk as the edge labelled \(m^{({\iota })}\). Define \({\mathscr {U}}(G^{({\iota }-1)})\) and \({\mathscr {U}}(G^{({\iota }-1)},jk)\) similarly except that edges not in \(G^{({\iota }-1)}\) are not labelled. Since every graph in \({\mathscr {U}}(G^{({\iota }-1)},jk)\) corresponds to exactly \((M-m^{({\iota })})!\) edge-labelled graphs in \({\mathscr {L}}(G^{({\iota }-1)},jk)\), and every graph in \({\mathscr {U}}(G^{({\iota }-1)})\) corresponds to exactly \((M-m^{({\iota })}+1)!\) edge-labelled graphs in \({\mathscr {L}}(G^{({\iota }-1)})\), where \(M=\frac{1}{2}\sum _{j=1}^n d_j\), we have

$$\begin{aligned} \frac{|{\mathscr {U}}(G^{({\iota }-1)},jk)|}{|{\mathscr {U}}(G^{({\iota }-1)})|} =(M-m^{({\iota })}+1)\frac{|{\mathscr {L}}(G^{({\iota }-1)},jk)|}{|{\mathscr {L}}(G^{({\iota }-1)})|}. \end{aligned}$$

Since

$$\begin{aligned} \frac{|{\mathscr {U}}(G^{({\iota }-1)},jk)|}{|{\mathscr {U}}(G^{({\iota }-1)})|}= {\mathbb {P}}(jk\in {\mathscr {G}}(n,\varvec{d})\mid G^{({\iota }-1)}), \end{aligned}$$

it follows that \(|{\mathscr {L}}(G^{({\iota }-1)},jk)|/|{\mathscr {L}}(G^{({\iota }-1)})|\) is proportional to the conditional probability \({\mathbb {P}}(jk\in {\mathscr {G}}(n,\varvec{d})\mid G^{({\iota }-1)})\). Hence, the random graph \(G^{({\iota })}\) also has the claimed distribution.

The above immediately implies the statement of the lemma for the non-edge-labelled \(G^{({\iota })}\), since there are exactly \(m^{({\iota })}!\) ways to label edges of \(G^{({\iota })}\) for any realisation of \(G^{({\iota })}\) with \(m^{({\iota })}\) edges. \(\square \)

Lemma 3 immediately yields the following corollary.

Corollary 3

If \(({G_\zeta }, {G}, {G_0})\) is the output of Coupling \((\varvec{d}, \mathscr {I}, \zeta )\), then \({G}\sim {\mathscr {G}}(n,{\varvec{d}})\).

Thus, procedure Coupling \((\varvec{d}, \mathscr {I}, \zeta )\) with \(\mathscr {I}\sim {\text {Po}}(\mu )\) always produces a random triple of graphs with suitable marginal distributions. Next, we need to choose parameters \(\mu \) and \(\zeta \) in such a way that \(p_\zeta \) approximates the density of \({\mathscr {G}}(n,\varvec{d})\) reasonably well and the probability of \({G_\zeta }\not \subseteq {G}\) is small. Note that \({G_\zeta }\subseteq {G}\) can only be violated when IndSample \((\,)\) is called, in which case \({G_\zeta }\) and \({G}\) are generated independently.

Define \(\mathscr {I}^*\) to be the value of \({\iota }\) when IndSample \((\,)\) is called, otherwise \(\mathscr {I}^*=\mathscr {I}+1\). Then we have

$$\begin{aligned} {\mathbb {P}}({G_\zeta }\not \subseteq {G})&\leqslant {\mathbb {P}}\bigl ( \mathscr {I}^*\leqslant \mathscr {I}\bigr ) \nonumber \\&={\mathbb {P}}\Bigl (\exists \, {\iota }\leqslant \mathscr {I}^*-1 \mathrel {:}\eta _{jk}^{({\iota }+1)}>\zeta \Bigr )\nonumber \\&\leqslant {\mathbb {P}}\Bigl (\exists \, {\iota }\leqslant \mathscr {I}^*-1 \mathrel {:}\frac{\min _{jk\notin G^{({\iota })} } {\mathbb {P}}(jk\in {\mathscr {G}}(n,{\varvec{d}})\mid G^{({\iota })})}{\max _{jk\notin G^{({\iota })} }{\mathbb {P}}(jk\in {\mathscr {G}}(n,{\varvec{d}})\mid G^{({\iota })})}<1-\zeta \Bigr ). \end{aligned}$$
(1)

For each \(0\leqslant {\iota }\leqslant \mathscr {I}^*-1\), define

$$\begin{aligned} S^{({\iota })}:=K_n - G^{({\iota })}, \end{aligned}$$

and let \(\varvec{g}^{({\iota })}\) be the degree sequence of \(G^{({\iota })}\). (We use \(A \setminus B\) for set subtraction. For simplicity we use \(G-H\) to denote \(E(G)\setminus E(H)\) for graphs G and H defined on the same set of vertices.) Denote by \(H(\varvec{t})\) the set of spanning subgraphs of a graph H with degree sequence \(\varvec{t}\). Thus, \(K_n(\varvec{d})\) is the set of graphs with degree sequence \({\varvec{d}}\), and \(S^{({\iota })}(\varvec{d}-\varvec{g}^{({\iota })})\) is the set of graphs disjoint from \(G^{({\iota })}\) whose union with \(G^{({\iota })}\) is a graph in \({\mathscr {G}}(n,\varvec{d})\). Note that in each step of the algorithm, a new edge jk is added to \(G^{({\iota })}\) only if \({\mathbb {P}}(jk\in {\mathscr {G}}(n,\varvec{d})\mid G^{({\iota }-1)})\) is non-zero. Inductively, this implies that the set of subgraphs of \(S^{({\iota })}\) with degree sequence \(\varvec{d}-\varvec{g}^{({\iota })}\) is never empty, for every \({\iota }\), as stated in the following observation.

Observation 1

For \(0\leqslant {\iota }\leqslant \mathscr {I}^*-1\) in Coupling \((\,)\), \(S^{({\iota })}(\varvec{d}-\varvec{g}^{({\iota })})\ne \emptyset \).

We find that

$$\begin{aligned} {\mathbb {P}}(jk\in {\mathscr {G}}(n,\varvec{d}) \mid G^{({\iota })})&= \frac{|\{G\in K_n(\varvec{d}): G^{({\iota })}\cup \{jk\}\subseteq G\}|}{|\{G\in K_n(\varvec{d}): G^{({\iota })}\subseteq G\}|} \nonumber \\&=\frac{|\{G \in S^{({\iota })}(\varvec{d}-\varvec{g}^{({\iota })}) : jk \in G \}|}{|S^{({\iota })}(\varvec{d}-\varvec{g}^{({\iota })})|}, \end{aligned}$$
(2)

where the second equality above holds since the denominators are nonzero by Observation 1. Thus, (1) and (2) motivate the following question.

Question 1

Let \(S_{\varvec{t}}\) be a uniform random \(\varvec{t}\)-factor (spanning subgraph with degree sequence \(\varvec{t}\)) of a graph S. Under which assumptions on S and \(\varvec{t}\) can one guarantee that

$$\begin{aligned} \frac{{\mathbb {P}}(z \in S_{\varvec{t}})}{{\mathbb {P}}(z' \in S_{\varvec{t}})} \approx 1 \end{aligned}$$

for any two edges \(z,z'\) of S?

Having an accurate estimate of the above probability ratio is crucial in our approach towards solving the sandwich conjecture, and tightening the density gap between the two binomial random graphs that sandwich \({\mathscr {G}}(n,\varvec{d})\). Theorem 10 answers Question 1 for a dense pseudorandom graph S and a dense near-regular \(\varvec{t}\). Resolving the full sandwich conjecture requires solving Question 1 for pseudorandom S with near-regular \(\varvec{t}\) in all density regimes: S can be sparse or dense and \(\varvec{t}\) can be sparse or dense relative to S. We will address this in the subsequent paper. Some partial solutions have been presented in the conference version [10].
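For very small instances, Question 1 can be explored directly by enumeration. The sketch below (ours; purely illustrative, since the regime of interest is asymptotic) computes \({\mathbb {P}}(z\in S_{\varvec{t}})\) for every edge z of a small graph S:

```python
from itertools import combinations

def t_factor_edge_probs(edges, t):
    """Brute-force P(z in S_t) for each edge z of S, by enumerating all
    t-factors (spanning subgraphs with degree sequence t) of S."""
    n = len(t)
    factors = [
        F
        for r in range(len(edges) + 1)
        for F in combinations(edges, r)
        if all(sum(v in e for e in F) == t[v] for v in range(n))
    ]
    return {z: sum(z in F for F in factors) / len(factors) for z in edges}

# S = K_5 minus the edge {0,1}, t = (2,...,2): the t-factors are the
# Hamilton cycles of S. Edges incident to vertex 0 or 1 receive a
# different probability (2/3) from the edges among {2,3,4} (1/3) --
# the kind of discrepancy that Question 1 asks to control.
n = 5
edges = [e for e in combinations(range(n), 2) if e != (0, 1)]
print(t_factor_edge_probs(edges, (2, 2, 2, 2, 2)))
```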

4 Proof of Theorem 8

We continue using all notation introduced in Sect. 3. In this paper, all graphs are defined on the vertex set [n]. When we do algebraic operations on graphs, we always operate on the edge sets of the graphs. In particular, for graphs G and H, \(G-H\) denotes \(E(G)\setminus E(H)\), \(G\cup H\) denotes \(E(G)\cup E(H)\), and \(G\cap H\) denotes \(E(G)\cap E(H)\).

In this section we show how to choose \(\mu \) and \(\zeta \) such that procedure Coupling \((\,)\) produces a desirable outcome. As explained before (in particular, see (1) and (2)) it is important that all edges of \(S^{({\iota })} = K_n - G^{({\iota })}\) are approximately equally likely to appear in the uniform random subgraph of \(S^{({\iota })}\) with degree sequence \(\varvec{d}- \varvec{g}^{({\iota })}\), where \(\varvec{g}^{({\iota })}\) denotes the degree sequence of \(G^{({\iota })}\). We will employ Theorem 10 for this purpose.

4.1 Preliminaries

We will need the following bounds.

Lemma 4

Let \(Y \sim {\text {Bin}}(K,p)\) for some positive integer K and \(p \in [0,1]\).

(a) For any \(\varepsilon \ge 0\), we have \({\mathbb {P}}(|Y - pK| \ge \varepsilon pK) \leqslant 2 e^{-\frac{\varepsilon ^2}{2+\varepsilon } pK }.\)

(b) If \(p = m/K\) for some integer \(m\in (0,K)\), then \({\mathbb {P}}(Y=m) \ge \frac{1}{3} \bigl (p(1-p)K \bigr )^{-1/2}.\)

(c) Let \(\mathscr {I}\sim {\text {Po}}(\mu )\) for some \(\mu >0\). Then, for any \(\varepsilon \ge 0\), \( {\mathbb {P}}(\mathscr {I}\ge \mu (1+\varepsilon ) ) \leqslant e^{-\frac{\varepsilon ^2}{2+\varepsilon } \mu }.\)

Proof

Bound (a) follows by combining the upper and lower Chernoff bounds in multiplicative form. For (b), we use the bounds \( \sqrt{2\pi k} \left( \frac{k}{e}\right) ^k \leqslant k! \leqslant \sqrt{2\pi k} \left( \frac{k}{e}\right) ^k e^{1/12}\) to estimate the factorials in the expression \({\mathbb {P}}(Y=m) = \frac{K!}{ K^K} \, \frac{m^m}{m!} \,\frac{(K-m)^{K-m}}{(K-m)!} \). Bound (c) comes from approximating \({\text {Po}}(\mu )\) with \({\text {Bin}}(K,\mu /K)\) as \(K\rightarrow \infty \) and using the upper Chernoff bound. \(\square \)

The next lemma will assist us in verifying assumption (A1) in Theorem 10.

Lemma 5

Let \(S \sim {\mathscr {G}}(n,m)\) for some integer \(m \gg n^{3/2}(\log n)^{1/2}\). Then, with probability \(1 - e^{-\varOmega (m^2/n^3)}\), assumption (A1) of Theorem 10 is satisfied with \(\gamma = \frac{1}{8}\).

Proof

Let \(\tilde{S}\sim {\mathscr {G}}(n,p)\) where \(p=m/N\). Observe that the degrees of \(\tilde{S}\) are distributed according to \({\text {Bin}}(n-1,p)\). Also, the number of common neighbours of any two vertices in \(\tilde{S}\) is distributed according to \({\text {Bin}}(n-2, p^2)\). Observing that \(n p^2 \gg \log n\) and combining Lemma 4(a) and the union bound, we get that, with probability \(1-e^{-\varOmega (p^2 n)}\),

$$\begin{aligned} \frac{\varDelta (\tilde{S})}{pn} \in [\tfrac{1}{2},2] \qquad \text {and} \qquad \frac{|\{\ell \mathrel {:}j\ell \in \tilde{S} \text { and } k\ell \in \tilde{S}\}|}{p^2 n} \in [\tfrac{1}{2},2] \end{aligned}$$

for all pairs of distinct vertices j and k. This implies that

$$\begin{aligned} \frac{\varDelta ^2(\tilde{S})}{8n}\leqslant |\{\ell \mathrel {:}j\ell \in \tilde{S} \text { and } k\ell \in \tilde{S}\}| \leqslant 8 \frac{\varDelta ^2(\tilde{S})}{n}. \end{aligned}$$

Note that S has the same distribution as \(\tilde{S}\) conditioned on the event that \(\tilde{S}\) has exactly m edges. From Lemma 4(b), we know that \({\mathbb {P}}(|E(\tilde{S})| = m) = \varOmega (m^{-1/2})\). Then observing that \(e^{-\varOmega (p^2 n)}/ {\mathbb {P}}( |E(\tilde{S})| = m) = e^{-\varOmega (m^2/n^3)}\) completes the proof. \(\square \)
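The codegree condition (A1) is also easy to probe empirically for a concrete \({\mathscr {G}}(n,m)\) sample (our sketch, using networkx and arbitrary parameters, with \(\gamma =\frac{1}{8}\) as in Lemma 5):

```python
from itertools import combinations
import networkx as nx

n, m = 400, 40_000
S = nx.gnm_random_graph(n, m, seed=3)
Delta = max(d for _, d in S.degree)
codegrees = [len(set(S[j]) & set(S[k])) for j, k in combinations(range(n), 2)]
lower, upper = Delta**2 / (8 * n), 8 * Delta**2 / n
print(lower <= min(codegrees), max(codegrees) <= upper)   # expect: True True
```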

4.2 Estimates for \(S^{({\iota })}\) and \(\varvec{g}^{({\iota })}\)

Recall that

$$\begin{aligned} N = \left( {\begin{array}{c}n\\ 2\end{array}}\right) , \qquad M = \frac{1}{2} \sum _{j=1}^n d_j. \end{aligned}$$

Lemma 4 is sufficient to extract some information about the density of \(S^{({\iota })}\) and the sequence \(\varvec{d}- \varvec{g}^{({\iota })}\), as described in the following lemma. Recall the definition of \(p_0\) and \(p_{\zeta }\) from Lemma 2 and that \(m^{({\iota })}\) denotes the number of edges in \(G^{({\iota })}\). Define

$$\begin{aligned} p^{({\iota })} := (M-m^{({\iota })})/M . \end{aligned}$$

Lemma 6

Let \(\xi \in (0,\frac{1}{3})\) be such that \(\varDelta =\varDelta (\varvec{d}) \gg \xi ^{-4}\log n\). Suppose \(n-\varDelta \gg \xi \varDelta \). Take \(\mathscr {I}\sim {\text {Po}}(\mu )\), where \(\mu \) is such that

$$\begin{aligned} p_0 = 1- e^{-\mu /N} \leqslant (1 -\xi )M/N. \end{aligned}$$

Suppose \(0\leqslant {\iota }\leqslant \mathscr {I}^*-1\). Then,

(a) \( p^{({\iota })} \ge \xi /2 \) with probability \(1-e^{-\varOmega (\xi ^2 M)}\);

(b) \(\Vert \varvec{d}- \varvec{g}^{({\iota })} - p^{({\iota })} \varvec{d}\Vert _\infty \leqslant \min \{\xi p^{({\iota })}\varDelta ,\xi (n-\varDelta )\} \) with probability \(1-e^{-\varOmega (\xi ^{4} \varDelta )}\).

Proof

By the assumption that IndSample \((\,)\) was not called during the first \({\iota }\) steps, and using Lemma 2, we have that \(G^{({\iota })} \subseteq G_0 \sim {\mathscr {G}}(n,p_0)\). Therefore, \(m^{({\iota })} = |E(G^{({\iota })})| \leqslant |E(G_0)| \sim {\text {Bin}}(N,p_0)\). Applying Lemma 4(a), we find that

$$\begin{aligned} {\mathbb {P}}\bigl ( p^{({\iota })} \leqslant \xi /2\bigr ) = {\mathbb {P}}\bigl (M-m^{({\iota })} \leqslant \xi M /2 \bigr ) = e^{-\varOmega (\xi ^2M )}. \end{aligned}$$

Since \(e^{-\varOmega (\xi ^2M )} = e^{-\varOmega (\xi ^4 \varDelta )}\) we can proceed conditioned on the event that \( p^{({\iota })} \ge \xi /2\). Take \(G \sim {\mathscr {G}}(n,\varvec{d})\) and let \(\varvec{h}=(h_1,\ldots , h_n)\) denote the degree sequence of the random graph \(G_{p^{({\iota })}}\) obtained by independently keeping every edge of G with probability \(p^{({\iota })}\). By Lemma 3, the sequence \(\varvec{d}- \varvec{g}^{({\iota })}\) has exactly the same distribution as \(\varvec{h}\) conditioned on the event \(|E(G_{p^{({\iota })}})| = M-m^{({\iota })}\); therefore

$$\begin{aligned} {\mathbb {P}}\bigl (\Vert \varvec{d}- \varvec{g}^{({\iota })} - p^{({\iota })} \varvec{d}\Vert _\infty \ge \xi p^{({\iota })}\varDelta \bigr ) \leqslant \frac{{\mathbb {P}}\left( \Vert \varvec{h}- p^{({\iota })} \varvec{d}\Vert _\infty \ge \xi p^{({\iota })}\varDelta \right) }{{\mathbb {P}}(|E(G_{p^{({\iota })}})| = M-m^{({\iota })}) }. \end{aligned}$$

Observing \(h_j \sim {\text {Bin}}(d_j,p^{({\iota })})\) and using Lemma 4(a), we find that

$$\begin{aligned} {\mathbb {P}}\bigl (\Vert \varvec{h}- p^{({\iota })} \varvec{d}\Vert _\infty \ge \xi p^{({\iota })} \varDelta \bigr )&\leqslant 2 \sum _{j=1}^n \exp \left( -\frac{ \xi \varDelta }{2 d_j + \xi \varDelta } \xi p^{({\iota })} \varDelta \right) \\&= ne^{- \varOmega (\xi ^2 p^{({\iota })} \varDelta )} = e^{-\varOmega (\xi ^4 \varDelta )}. \end{aligned}$$

If \(n-\varDelta > p^{({\iota })} \varDelta \) then

$$\begin{aligned} {\mathbb {P}}\bigl (\Vert \varvec{h}- p^{({\iota })} \varvec{d}\Vert _\infty \ge \xi (n-\varDelta ) \bigr ) \leqslant {\mathbb {P}}\bigl (\Vert \varvec{h}- p^{({\iota })} \varvec{d}\Vert _\infty \ge \xi p^{({\iota })} \varDelta \bigr ) = e^{-\varOmega (\xi ^4 \varDelta )}. \end{aligned}$$

Otherwise, if \(n-\varDelta \leqslant p^{({\iota })} \varDelta \) then, using Lemma 4(a) and the assumption \(n-\varDelta \gg \xi \varDelta \), we find that

$$\begin{aligned} {\mathbb {P}}\bigl (\Vert \varvec{h}- p^{({\iota })} \varvec{d}\Vert _\infty \ge \xi (n-\varDelta ) \bigr ) = ne^{- \varOmega \left( \xi ^2 \frac{(n-\varDelta )^2}{ p^{({\iota })} \varDelta } \right) } =ne^{-\varOmega (\xi ^4 \varDelta )}= e^{-\varOmega (\xi ^4 \varDelta )}. \end{aligned}$$

Finally, Lemma 4(b) gives a polynomial lower bound on \({\mathbb {P}}(|E(G_{p^{({\iota })}})| = M-m^{({\iota })})\), which is absorbed by the main error term by virtue of the assumption \(\varDelta (\varvec{d})\gg \xi ^{-4}\log n\). This completes the proof. \(\square \)

Alas, there is not much structural information available about the graphs \(S^{({\iota })}\). In fact, by virtue of Lemma 3, such questions are similar in some sense to investigating the model \({\mathscr {G}}(n,\varvec{d})\) that is the problem we started with. Nevertheless, it turns out that the following trivial observation will be sufficient for our purposes:

$$\begin{aligned} K_n - G_0^{({\iota })} \subseteq S^{({\iota })} \subseteq K_n - G_\zeta ^{({\iota })}, \quad \text{ for }\, {\iota }\leqslant \mathscr {I}^*-1 \end{aligned}$$

where

$$\begin{aligned} G_0^{({\iota })} \lhd M^{({\iota })}_0\quad \text{ and }\quad G_\zeta ^{({\iota })} \lhd M^{({\iota })}_\zeta . \end{aligned}$$

Lemma 7

Assume \({\iota }\leqslant \mathscr {I}^*-1\). Let \(m_0^{({\iota })}=|E(G_0^{({\iota })})|\) and \(m_\zeta ^{({\iota })} = |E(G_\zeta ^{({\iota })})|\). Then

$$\begin{aligned} G_0^{({\iota })}\sim {\mathscr {G}}(n,m_0^{({\iota })}), \qquad \text {and} \qquad G_\zeta ^{({\iota })}\sim {\mathscr {G}}(n,m_\zeta ^{({\iota })}). \end{aligned}$$

If \(\zeta = o(1)\) and \({\iota }\zeta = o(N)\) then we have

$$\begin{aligned} \bigl (1- \zeta - 3{\iota }\zeta /N\bigr ) (N-m^{({\iota })}) \leqslant N-m_0^{({\iota })} \leqslant N- m_\zeta ^{({\iota })} \leqslant \bigl (1+\zeta + 3{\iota }\zeta /N\bigr ) (N-m^{({\iota })}) \end{aligned}$$

with probability at least \(1 - e^{- \varOmega \left( \zeta ^2(N-M)^2/ {\iota }\right) }\).

Proof

The distributions of \(G_0^{({\iota })}\) and \(G_\zeta ^{({\iota })}\) follow directly from the definition. For the second part, it is sufficient to bound \({\mathbb {P}}\bigl ( m_0^{({\iota })}-m_\zeta ^{({\iota })} \ge (\zeta + 3{\iota }\zeta /N)(N- m^{({\iota })} ) \bigr )\) because

$$\begin{aligned} N- m_0^{({\iota })} \leqslant N- m^{({\iota })} \leqslant N- m_\zeta ^{({\iota })}. \end{aligned}$$

By the construction of \(G_0^{({\iota })}\) and \(G_\zeta ^{({\iota })}\), we have that

$$\begin{aligned} {\mathbb {E}}\left( m_0^{({\iota })} - m_\zeta ^{({\iota })} \right)&= N\left( 1 - \frac{1-\zeta }{N}\right) ^{\iota }- N\left( 1 - \frac{1}{N}\right) ^{\iota }\nonumber \\&= \left( N - {\mathbb {E}}m_0^{({\iota })} \right) \left( (1+\zeta /(N-1))^{\iota }-1 \right) \nonumber \\&\leqslant \left( N - {\mathbb {E}}m_0^{({\iota })} \right) 2 {\iota }\zeta /N. \end{aligned}$$
(3)

Note that \(m_0^{({\iota })}\) and \(m_\zeta ^{({\iota })}\) are functions of \(2{\iota }\) independent random variables (corresponding to the edge choices and rejections). Changing any one of these random variables affects the values of \(m_0^{({\iota })}\) and \(m_\zeta ^{({\iota })}\) by at most 1. Using McDiarmid’s concentration inequality [17], we get that

$$\begin{aligned} \begin{aligned} {\mathbb {P}}\left( \left| m_0^{({\iota })} - {\mathbb {E}}m_0^{({\iota })} \right| \ge \zeta (N-M)/2\right)&\leqslant e^{- \varOmega \left( \zeta ^2(N-M)^2/ {\iota }\right) },\\ {\mathbb {P}}\left( \left| m_\zeta ^{({\iota })} - {\mathbb {E}}m_\zeta ^{({\iota })} \right| \ge \zeta (N-M)/2\right)&\leqslant e^{- \varOmega \left( \zeta ^2(N-M)^2/ {\iota }\right) }. \end{aligned} \end{aligned}$$
(4)

Using \(\zeta = o(1)\) and \({\iota }\zeta = o(N)\), the inequalities \( \left| m_0^{({\iota })} - {\mathbb {E}}m_0^{({\iota })} \right| \leqslant \zeta (N-M)/2\) , \(\left| m_\zeta ^{({\iota })} - {\mathbb {E}}m_\zeta ^{({\iota })} \right| \leqslant \zeta (N-M)/2\), and \(m^{({\iota })}\leqslant M\) together with (3) imply that

$$\begin{aligned} N- m^{({\iota })} = (1+o(1)) \left( N - {\mathbb {E}}m_0^{({\iota })}\right) \ge \frac{2}{3} \left( N - {\mathbb {E}}m_0^{({\iota })}\right) \ge \frac{N}{3{\iota }\zeta } {\mathbb {E}}\left( m_0^{({\iota })} - m_\zeta ^{({\iota })} \right) . \end{aligned}$$

Thus, by (4), we get

$$\begin{aligned} \begin{aligned}&{\mathbb {P}}\left( m_0^{({\iota })}-m_\zeta ^{({\iota })} \ge (\zeta + 3{\iota }\zeta /N)(N- m^{({\iota })} ) \right) \\&\quad \leqslant {\mathbb {P}}\left( m_0^{({\iota })} - m_\zeta ^{({\iota })} \ge {\mathbb {E}}\left( m_0^{({\iota })} - m_\zeta ^{({\iota })} \right) + \zeta (N-M)\right) \\&\quad \leqslant e^{- \varOmega \left( \zeta ^2(N-M)^2/ {\iota }\right) }, \end{aligned} \end{aligned}$$

completing the proof. \(\square \)

Lemma 7 implies that \(N-m_0^{({\iota })}, N-m_\zeta ^{({\iota })} = (1+o(1)) |E(S^{({\iota })})|\) with high probability provided \({\iota }\zeta \ll N\). This enables us to derive all the necessary structural properties of \(S^{({\iota })}\) from the well-studied model \({\mathscr {G}}(n,m)\).

4.3 Specifying \(\zeta \) and \(\mu \)

Take \(\mathscr {I}\sim {\text {Po}}(\mu )\), where \(\mu \) is the unique solution of

$$\begin{aligned} \left( 1- \xi \right) M/N = p_0 = 1- e^{-\mu /N}. \end{aligned}$$
(5)

Let \(\zeta = C\xi \) for some sufficiently large constant \(C>0\) (which depends only on the implicit constant in \(O(\,)\) of Theorem 10 with \(\gamma = \frac{1}{9}\) and \(\varepsilon =\frac{1}{4}\)).

4.4 Completing the proof of Theorem 8

First, by the assumptions, observe that

$$\begin{aligned} \xi ^4 \varDelta \ge \frac{\xi ^4 \varDelta ^4 }{ n^3 } \gg \frac{n}{ (\log n)^4}. \end{aligned}$$

Thus, it is sufficient to prove the assertion with probability \(1-e^{-\varOmega (\xi ^4\varDelta )}\).

We prove that if IndSample \((\,)\) was not called during the first \({\iota }\) steps of Coupling \((\varvec{d}, \mathscr {I}, \zeta )\) then the probability that it is called in step \({\iota }+1\) is \(e^{-\varOmega (\xi ^4\varDelta )}\). Then our assertion holds by taking the union bound over the \(\mathscr {I}\) steps, since \(\mathscr {I}\) is bounded by \(n^2\) with probability at least \(1-e^{-\varOmega (\xi ^4\varDelta )}\). Suppose IndSample \((\,)\) was not called during the first \({\iota }\) steps of Coupling \((\varvec{d}, \mathscr {I}, \zeta )\). To bound the probability that it is called at the next iteration, we use Theorem 10 with

$$\begin{aligned} S:=S^{({\iota })}, \qquad \varvec{t}:= \varvec{d}- \varvec{g}^{({\iota })}, \qquad H^+:=\{jk\}, \qquad H^- := \emptyset . \end{aligned}$$

By Observation 1, the set of \(\varvec{t}\)-factors of S is not empty. By the assumptions of Theorem 8, for all j,

$$\begin{aligned} \varDelta \ge d_j \ge \varDelta - \xi \, \frac{\varDelta (n-\varDelta )}{n}. \end{aligned}$$
(6)

Using Lemma 6, we get that, with probability \(1 - e^{-\varOmega (\xi ^4 \varDelta )}\),

$$\begin{aligned} p^{({\iota })} \ge \xi /2 \quad \text { and } \quad t_j=p^{({\iota })}\varDelta +O(\xi ) \min \{p^{({\iota })}\varDelta , n-\varDelta \} . \end{aligned}$$
(7)

Let \(\varvec{s}\) denote the degree sequence of \(S^{({\iota })}\) and \(\lambda = \frac{t_1 + \cdots + t_n}{ s_1 + \cdots + s_n}\). Then, by (6) and the two bounds of \(t_j\) in (7),

$$\begin{aligned} s_j =n-1+t_j-d_j= (n-\varDelta +p^{({\iota })}\varDelta )+O(\xi )\left( \min \{ p^{({\iota })}\varDelta , n -\varDelta \} + \frac{\varDelta (n-\varDelta )}{n}\right) . \end{aligned}$$

Consequently, letting \(\bar{t}=\Vert \varvec{t}\Vert _1/n\) and \(\bar{s}=\Vert \varvec{s}\Vert _1/n\) we have

$$\begin{aligned} t_j - \lambda s_j = t_j - \frac{\bar{t}}{\bar{s}} s_j = (t_j-\bar{t}) - \frac{\bar{t}}{\bar{s}} (s_j-\bar{s}) = O\bigl (|t_j-\bar{t}|+\lambda |s_j-\bar{s}|\bigr ). \end{aligned}$$

Using the two bounds for \(t_j\) in (7) and the corresponding bounds for \(s_j\), we have

$$\begin{aligned} t_j - \lambda s_j = O(\xi )\left( \min \{ p^{({\iota })} \varDelta , n-\varDelta \} + \frac{\lambda \varDelta (n-\varDelta )}{n} \right) , \quad \text{ for }\, j\in [n]. \end{aligned}$$
(8)

Note that

$$\begin{aligned} \lambda =\frac{p^{({\iota })}M}{N-M+p^{({\iota })}M} = (1+o(1)) \frac{p^{({\iota })} \varDelta }{ p^{({\iota })} \varDelta + n -\varDelta }. \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{\lambda \varDelta (n-\varDelta )}{n} = O(\min \{ p^{({\iota })} \varDelta , n-\varDelta \} ). \end{aligned}$$
(9)

Combining the bounds above, we get that

$$\begin{aligned} \lambda (1-\lambda )\varDelta (S^{({\iota })})&= \frac{p^{({\iota })}M (N-M)}{(N-M +p^{({\iota })}M)^2} \varDelta (S^{({\iota })})\\&\ge \frac{2p^{({\iota })}M (N-M)}{ n (N-M +p^{({\iota })}M)} = (1+o(1)) \frac{2p^{({\iota })} \varDelta (n-\varDelta )}{p^{({\iota })} \varDelta + n -\varDelta }. \end{aligned}$$

Next, we prove that

$$\begin{aligned} \frac{\Vert \varvec{t}-\lambda \varvec{s}\Vert _{\infty }}{\lambda (1-\lambda )\varDelta (S^{({\iota })})}=O(\xi ), \quad \frac{n/\log n}{\lambda (1-\lambda )\varDelta (S^{({\iota })})}=o(1). \end{aligned}$$
(10)

For the first bound in (10), note that

$$\begin{aligned} \frac{2p^{({\iota })} \varDelta (n-\varDelta )}{p^{({\iota })} \varDelta + n-\varDelta }&\ge \min \{p^{({\iota })} \varDelta , n-\varDelta \} =\varOmega \left( \xi ^{-1}{ \Vert \varvec{t}-\lambda \varvec{s}\Vert _{\infty }}\right) , \end{aligned}$$
(11)

where the equality in (11) holds by (8) and (9). The second bound in (10) follows from (11), the bound \(p^{({\iota })}\geqslant \xi /2\) from (7), and the theorem assumption that \(n-\varDelta \gg \xi \varDelta \gg n/\log n\). It follows from (10) that \(\lambda (1-\lambda )\varDelta (S^{({\iota })})\gg \Vert \varvec{t}-\lambda \varvec{s}\Vert _{\infty } + n / \log n\), and thus assumptions (A2) and (A3) of Theorem 10 are satisfied. Assumption (A4) is also immediate since \(H = H^+\cup H^-\) consists of one edge.

Next, observe that \(1 - p_0 \ge \xi \), so \(\mu \leqslant N \log \tfrac{1}{\xi }\). From Lemma 4(c) we get that

$$\begin{aligned} {\mathbb {P}}\left( \mathscr {I}>2N \log \frac{1}{\xi } \right) = e^{-\varOmega (N \log \frac{1}{\xi })} = e^{-\varOmega (\xi ^4 \varDelta )}. \end{aligned}$$
(12)

Then, given that \(\mathscr {I}\leqslant 2N \log \frac{1}{\xi } \), using \({\iota }\leqslant \mathscr {I}^*-1\leqslant \mathscr {I}\) and \(\zeta =O(\xi )\) from its definition, we get that \({\iota }\zeta = O(N \xi \log \frac{1}{\xi })\ll N\). For each \({\iota }\leqslant \mathscr {I}^*-1\), by Lemma 7, we get that

$$\begin{aligned} N - m_0^{({\iota })} = (1+o(1)) (N- m^{({\iota })}), \qquad N - m_\zeta ^{({\iota })} =(1+o(1)) (N- m^{({\iota })}) \end{aligned}$$

with probability at least \( 1-e^{-\varOmega (\zeta ^2(N- M)^2/{\iota })}. \) Combining this with Lemma 5 and the assumption \(n-\varDelta \gg \xi \varDelta \), and using the monotonicity of the number of common neighbours, we find that, with probability at least

$$\begin{aligned}&1 - e^{-\varOmega (\xi ^4 \varDelta )}- e^{-\varOmega (\zeta ^2(N- M)^2/{\iota })}- e^{-\varOmega ((N- m)^2/n^3)}\\&\quad = 1- e^{-\varOmega (\xi ^4 \varDelta )}- e^{-\varOmega (\xi ^2 n^2(n-\varDelta )^2 /N \log \frac{1}{\xi } )} - e^{-\varOmega ((n-\varDelta )^2/n)}\\&\quad = 1 - e^{-\varOmega (\xi ^4 \varDelta )}- e^{-\varOmega (\xi ^3 n \varDelta /\log \frac{1}{\xi } )} - e^{-\varOmega (\xi \varDelta (n-\varDelta )/n)} = 1-e^{-\varOmega (\xi ^4 \varDelta )}, \end{aligned}$$

assumption (A1) of Theorem 10 holds for \(S=S^{({\iota })}\) with \(\gamma =1/9\).

Observing that \(\xi \gg (\log n)^{-1}\) by the assumptions, and applying Theorem 10 with \(\varepsilon =1/4\) and \(\gamma =1/9\) together with (10), we get that, with probability \(1 - e^{-\varOmega (\xi ^4 \varDelta )}\),

$$\begin{aligned} \frac{{\mathbb {P}}(jk\in {\mathscr {G}}(n,\varvec{d}) \mid G^{({\iota })})}{ {\mathbb {P}}(j'k'\in {\mathscr {G}}(n,\varvec{d}) \mid G^{({\iota })})} = 1 + O\left( n^{-1/4} + \frac{ \Vert \varvec{t}-\lambda \varvec{s}\Vert _{\infty }}{\lambda (1-\lambda ) \varDelta (S^{({\iota })})}\right) = 1+ O(\xi )> 1- \zeta \end{aligned}$$

for any \(jk, j'k' \notin G^{({\iota })}\), where the last inequality holds by choosing sufficiently large C in the definition of \(\zeta \). Applying the union bound for all such \(jk, j'k' \) we get that the probability that IndSample \((\,)\) is called at step \({\iota }+1\) is \(e^{-\varOmega (\xi ^4 \varDelta )}\).

Using (1) and (12), we conclude that procedure Coupling \((\varvec{d}, \mathscr {I}, \zeta )\) produces a “bad” output \((G_\zeta ,G,G_0)\) with probability

$$\begin{aligned} {\mathbb {P}}(G_\zeta \not \subseteq G) = O(N \log \frac{1}{\xi }) e^{-\varOmega (\xi ^4 \varDelta )} = e^{-\varOmega (\xi ^4 \varDelta )}. \end{aligned}$$

To complete the proof we take \(({G^{L}},G) = (G_\zeta ,G)\) and \(p=p_\zeta \), recall that \(G \sim {\mathscr {G}}(n,\varvec{d})\) by Corollary 3, and \(G_\zeta \sim {\mathscr {G}}(n,p_\zeta )\) by Lemma 2, where

$$\begin{aligned} p_\zeta&= 1 - e^{-\mu (1-\zeta )/N} = 1 - e^{-\mu /N} + e^{-\mu /N} \bigl (1- e^{\mu \zeta /N}\bigr )\\ {}&= p_0 + O\bigl (\xi (1-p_0)\log (1-p_0)\bigr ) = (1- O(\xi ) )p_0 = (1 - O(\xi )) \varDelta /n. \end{aligned}$$

The last equality follows from (5) and the assumption that \(\varvec{d}\) is near-regular. \(\square \)

5 Enumeration of factors

In this section we establish an asymptotic formula for the number of factors (subgraphs with given degree sequence) of a graph in the dense case.

Let S be a simple graph. We start from the observation that \(\prod _{jk\in S}(1+z_jz_k)\) is the generating function for subgraphs of S with powers of \(z_1, \ldots , z_n\) corresponding to degrees. In particular, the number \( N(S, \varvec{t})\) of \(\varvec{t}\)-factors of S is given by

$$\begin{aligned} N(S, \varvec{t}) = [z_1^{t_1}\cdots z_n^{t_n}] \prod _{jk\in S}(1+z_jz_k), \end{aligned}$$

where \([\,\cdot \,]\) denotes coefficient extraction. Using Cauchy’s integral formula, it follows that

$$\begin{aligned} N(S, \varvec{t}) = \frac{1}{(2\pi i)^n} \oint \cdots \oint \, \frac{\prod _{jk\in S}(1+z_jz_k)}{ z_1^{t_1+1} \cdots z_{n}^{t_n+1}} \,dz_1 \cdots dz_n. \end{aligned}$$
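For a small graph, the coefficient-extraction identity for \(N(S, \varvec{t})\) can be verified directly. Below is a minimal sketch in Python with sympy; the graph (\(K_4\) minus an edge) and the degree sequence are illustrative choices, not from the paper.

```python
# A minimal sketch (not from the paper): checking the coefficient-extraction
# identity for N(S, t) on a small graph.  S = K_4 minus the edge {2,3} and
# t = (2, 2, 1, 1) are illustrative choices.
from itertools import combinations
from sympy import Poly, prod, symbols

S = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]
n, t = 4, (2, 2, 1, 1)

def brute_force(S, n, t):
    """Count spanning subgraphs of S whose degree sequence equals t."""
    count = 0
    for r in range(len(S) + 1):
        for sub in combinations(S, r):
            deg = [0] * n
            for j, k in sub:
                deg[j] += 1
                deg[k] += 1
            count += tuple(deg) == t
    return count

z = symbols(f'z0:{n}')
gen = Poly(prod(1 + z[j] * z[k] for j, k in S), *z)
coeff = gen.coeff_monomial(prod(z[j] ** t[j] for j in range(n)))
assert coeff == brute_force(S, n, t)   # both equal N(S, t) = 2 here
```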

Let

$$\begin{aligned} U_n(\rho )=\{\varvec{\theta }= (\theta _1,\ldots , \theta _n)\in {\mathbb {R}}^n : \Vert \varvec{\theta }\Vert _{\infty } \leqslant \rho \}. \end{aligned}$$

Substituting \(z_j = e^{\beta _j + i \theta _j}\), we get that

$$\begin{aligned} N(S, \varvec{t})= & {} \frac{1}{(2\pi )^n} \int _{-\pi }^{\pi }\cdots \int _{-\pi }^{\pi } \frac{\prod _{jk\in S}(1+e^{\beta _j+\beta _k+i(\theta _j+\theta _k)})}{e^{\sum _{j=1}^n t_j(\beta _j+i\theta _j)}} d \theta _1\cdots d \theta _n \nonumber \\= & {} \frac{\prod _{jk\in S}(1+e^{\beta _j+\beta _k})}{(2\pi )^n e^{\sum _{j=1}^n t_j\beta _j}} \int _{-\pi }^{\pi }\cdots \int _{-\pi }^{\pi } \frac{\prod _{jk\in S}\frac{1+e^{\beta _j+\beta _k+i(\theta _j+\theta _k)}}{1+e^{\beta _j+\beta _k}}}{ e^{\sum _{j=1}^n i t_j\theta _j} }\,d \theta _1\cdots d \theta _n\nonumber \\= & {} \frac{\prod _{jk\in S}(1+e^{\beta _j+\beta _k})}{(2\pi )^n e^{\sum _{j=1}^n t_j\beta _j}} \int _{U_n(\pi )} F_{S, \varvec{t}} (\varvec{\theta })\, d \varvec{\theta }, \end{aligned}$$
(13)

where

$$\begin{aligned} F_{S, \varvec{t}} (\varvec{\theta }) := \frac{ \prod _{jk \in S} \bigl (1+\lambda _{jk}(e^{i(\theta _j+\theta _k)}-1)\bigr )}{e^{\sum _{j=1}^n i t_j \theta _j}} \end{aligned}$$

and

$$\begin{aligned} \lambda _{jk} = \lambda _{jk}(\varvec{\beta }):= \frac{e^{\beta _j + \beta _k} }{1+e^{\beta _j + \beta _k}}, \text { for } jk \in S. \end{aligned}$$
(14)

The choice of parameters \(\varvec{\beta }= (\beta _1,\ldots ,\beta _n)\) will be specified later.

The values \((\lambda _{jk})\) defined in (14) have an interesting property: if we consider a random subgraph \(S_{(\lambda _{jk})}\) of S with independent adjacencies where, for each \(jk \in S\), the probability that vertices j and k are adjacent in \(S_{(\lambda _{jk})}\) equals \(\lambda _{jk}\), then the probability of each outcome depends only on its degree sequence \(\varvec{t}=(t_1,\ldots , t_n)\). In other words, the conditional distribution of \(S_{(\lambda _{jk})}\), given its degree sequence \(\varvec{t}\), is uniform. The random model \(S_{(\lambda _{jk})}\) is referred to as the \(\beta \)-model and is a special case of the exponential family of random graphs; see [3, 11] for more details. A further connection between \(S_{(\lambda _{jk})}\) and \(S_{\varvec{t}}\) is established in Sect. 6.
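This conditional uniformity is easy to check exhaustively on a tiny example. In the hedged sketch below, \(S = K_4\) and the vector \(\varvec{\beta }\) are arbitrary test data; the probability of every spanning subgraph under the independent model is computed and grouped by degree sequence.

```python
# Hedged sketch: exhaustive check of conditional uniformity of the beta-model
# on S = K_4 with arbitrary parameters beta (test data, not from the paper).
from collections import defaultdict
from itertools import combinations
from math import exp, isclose

S = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
beta = [0.3, -0.5, 0.1, 0.7]
lam = {(j, k): exp(beta[j] + beta[k]) / (1 + exp(beta[j] + beta[k]))
       for j, k in S}

probs = defaultdict(list)   # degree sequence -> probabilities of outcomes
for r in range(len(S) + 1):
    for sub in combinations(S, r):
        p = 1.0
        deg = [0] * 4
        for e in S:
            p *= lam[e] if e in sub else 1 - lam[e]
        for j, k in sub:
            deg[j] += 1
            deg[k] += 1
        probs[tuple(deg)].append(p)

# outcomes sharing a degree sequence are equally likely under the beta-model
assert all(isclose(p, ps[0]) for ps in probs.values() for p in ps)
```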

The integral (13) can rarely be evaluated exactly. Instead, we will approximate it. The complex-analytic approach consists of the following steps:

  (i) estimate the contribution of critical regions around concentration points, where the integrand achieves its maximum value;

  (ii) show that other regions give a negligible contribution.

The maximum absolute value of \(F_{S, \varvec{t}} (\varvec{\theta })\) is 1. It is achieved at points \((0,\ldots ,0)\) and \((\pm \pi , \ldots , \pm \pi )\). If S does not contain a bipartite component then \(|F_{S,\varvec{t}}(\varvec{\theta })|\) is strictly less than 1 at any other point of \(U_n(\pi )\) because there will be at least one pair \(jk\in S\) such that \(e^{i(\theta _j+\theta _k)} \ne 1\). Since \(\varvec{t}\) is a degree sequence, we have that \(t_1+\cdots + t_n\) is even. Then the contributions of neighbourhoods of \((0,\ldots ,0)\) and \((\pm \pi , \ldots , \pm \pi )\) to the integral (13) are identical because \(F_{S,\varvec{t}}(\varvec{\theta })\) is \(2\pi \)-periodic with respect to each component of \(\varvec{\theta }\) and

$$\begin{aligned} F_{S, \varvec{t}} (\theta _1 + \pi , \ldots , \theta _n+\pi ) = e^{i(t_1+\cdots +t_n)\pi } F_{S, \varvec{t}} (\theta _1, \ldots , \theta _n) = F_{S, \varvec{t}} (\theta _1, \ldots , \theta _n). \end{aligned}$$
(15)

Thus, we can focus on estimates around the origin and then multiply by 2.

By Taylor’s theorem, for \(a \in [0,1]\) and \(x \in [-\pi /4, \pi /4]\), we have

$$\begin{aligned}&1+a(e^{i x} -1)= \exp \Bigl ( i ax - \frac{1}{2} a(1-a) x^2 - \frac{1}{6} i a(1-a)(1-2a) x^3 \\&\quad +\frac{1}{24} a(1-a) (1-6a+6a^2) x^4 +O(x^5)\Bigr ). \end{aligned}$$
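This expansion can be confirmed symbolically; a minimal sketch with sympy, at the sample value \(a=1/3\) (any fixed \(a \in (0,1)\) works the same way):

```python
# Sketch: symbolic confirmation of the expansion above at a = 1/3.
import sympy as sp

x = sp.symbols('x')
a = sp.Rational(1, 3)
lhs = sp.log(1 + a * (sp.exp(sp.I * x) - 1))
rhs = (sp.I * a * x
       - sp.Rational(1, 2) * a * (1 - a) * x**2
       - sp.I * sp.Rational(1, 6) * a * (1 - a) * (1 - 2 * a) * x**3
       + sp.Rational(1, 24) * a * (1 - a) * (1 - 6 * a + 6 * a**2) * x**4)
assert sp.simplify(sp.series(lhs - rhs, x, 0, 5).removeO()) == 0
```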

Using this to expand the multipliers of \(F_{S,\varvec{t}}(\varvec{\theta })\), we find that

$$\begin{aligned} F_{S, \varvec{t}}(\varvec{\theta })= & {} \exp \Bigl ( - i\sum _{j=1}^n \theta _j t_j + i \sum _{jk \in S} \lambda _{jk} (\theta _j + \theta _k) \nonumber \\{} & {} - \varvec{\theta }^{\textrm{T}}\hspace{-1.111pt}Q \varvec{\theta }+ u(\varvec{\theta }) - i v(\varvec{\theta }) + O\bigl (\Vert \varvec{\theta }\Vert _\infty ^5\, |E(S)|\bigr ) \Bigr ), \end{aligned}$$
(16)

where the \(n \times n\) symmetric matrix Q is defined by

$$\begin{aligned} \varvec{\theta }^{\textrm{T}}\hspace{-1.111pt}Q \varvec{\theta }= \frac{1}{2} \sum _{jk \in S} \lambda _{jk}(1-\lambda _{jk})(\theta _{j}+\theta _k)^2 \end{aligned}$$
(17)

and the multivariable polynomials u and v are defined by

$$\begin{aligned} \begin{aligned} u(\varvec{\theta })&:= \frac{1}{24}\sum _{jk \in S} \lambda _{jk}(1-\lambda _{jk}) (1-6\lambda _{jk} + 6\lambda _{jk}^2 ) (\theta _{j}+\theta _k)^4,\\ v(\varvec{\theta })&:= \frac{1}{6}\sum _{jk \in S} \lambda _{jk}(1-\lambda _{jk}) (1-2\lambda _{jk}) (\theta _{j}+\theta _k)^3. \end{aligned} \end{aligned}$$
(18)

Observe that \( \varvec{\theta }^{\textrm{T}}\hspace{-1.111pt}Q \varvec{\theta }\ge 0, \) so Q is a positive semidefinite matrix. Moreover, it is positive definite if S does not contain a bipartite component.

The optimal choice for \(\varvec{\beta }\) is such that the linear part in (16) disappears, which corresponds to the case when our contours in the complex plane pass through the saddle point. Thus, we get the following system of equations:

$$\begin{aligned} t_j = \sum _{k: jk \in S} \lambda _{jk} = \sum _{k: jk \in S} \frac{e^{\beta _j+\beta _k}}{1+ e^{\beta _j + \beta _k}} \qquad \text { for all}\, 1\leqslant j\leqslant n. \end{aligned}$$
(19)

For the case \(S=K_n\), the existence and the uniqueness of the solution were studied in [1, 3, 23]: the necessary and sufficient condition is that \(\varvec{t}\) lies in the interior of the polytope defined by the Erdős–Gallai inequalities. When S is the complete graph, it is also known that system (19) is equivalent to (i) maximisation of the likelihood with respect to the parameters of the \(\beta \)-model given observations of the degrees, and (ii) finding the random model with independent adjacencies and given expected degrees that maximises the entropy. Unfortunately, analogues of these results are not available for general S, even though the methods used in the literature will certainly carry over. Since such results are not needed for our purposes here, we leave these questions for a subsequent paper.
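Even without a general existence theory, system (19) is straightforward to solve numerically when a solution exists. Below is a hedged sketch of a Newton iteration; the graph (\(K_5\)) and target degrees are illustrative, and the identification of the Jacobian with 2Q is justified in Sect. 6.1.

```python
# A minimal sketch (not code from the paper): solving system (19) by Newton's
# method.  The Jacobian of the residual map equals 2Q (see Sect. 6.1).
# Convergence is only expected when t is feasible for S.
import numpy as np

def solve_beta(adj, t, iters=100):
    beta = np.zeros(len(t))
    for _ in range(iters):
        lam = adj / (1.0 + np.exp(-np.add.outer(beta, beta)))  # lambda_jk on edges
        r = lam.sum(axis=1) - t                                # residuals of (19)
        L = lam * (1.0 - lam)
        J = np.diag(L.sum(axis=1)) + L                         # Jacobian = 2Q
        beta -= np.linalg.solve(J, r)
    return beta

adj = np.ones((5, 5)) - np.eye(5)            # S = K_5
t = np.array([3.0, 2.0, 2.0, 2.0, 1.0])      # illustrative target degrees
beta = solve_beta(adj, t)
lam = adj / (1.0 + np.exp(-np.add.outer(beta, beta)))
print(lam.sum(axis=1))                       # recovers t up to numerical error
```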

Denote

$$\begin{aligned} \lambda := \frac{\sum _{ jk \in S} \lambda _{jk} }{|E(S)| }. \end{aligned}$$
(20)

If system (19) holds then we have \(\lambda = \frac{t_1+\cdots +t_n}{2 |E(S)|}\), which is the relative density of a \(\varvec{t}\)-factor in S. We are ready to state our main result of this section.

Theorem 11

Let \(\varepsilon , \gamma \) and c be fixed positive constants. Suppose a graph S on n vertices and a degree sequence \(\varvec{t}\) satisfy the following assumptions:

  (B1) for any two distinct vertices j and k, we have

    $$\begin{aligned} \frac{\gamma \varDelta ^2(S)}{n} \leqslant \left| \{\ell \mathrel {:}j\ell \in S \text { and } k\ell \in S\}\right| \leqslant \frac{\varDelta ^2(S)}{\gamma n}; \end{aligned}$$

  (B2) there exists a solution \(\varvec{\beta }\) of system (19) such that \({\text {rng}}(\varvec{\beta }) \leqslant c\);

  (B3) \(\lambda (1-\lambda ) \varDelta (S) \gg \frac{n}{\log n}\).

Let \(\varvec{X}\) be a random vector in \({\mathbb {R}}^n\) with the normal density \(\pi ^{-n/2} |Q|^{1/2} e^{-\varvec{x}^{\textrm{T}}\hspace{-1.111pt}Q \varvec{x}}\). Then,

$$\begin{aligned} N(S,\varvec{t}) = \frac{ 2 \prod _{jk\in S} (1 + e^{\beta _j+\beta _k}) }{ (4 \pi )^{n/2} |Q|^{1/2} \, \prod _{j=1}^n e^{t_j \beta _j} } \exp \left( {\mathbb {E}}u(\varvec{X}) - \frac{1}{2} {\mathbb {E}}v^2(\varvec{X}) + O(n^{-1/2 + \varepsilon })\right) , \end{aligned}$$

where the constant implicit in \(O(\,)\) depends on \(\gamma , \varepsilon \) and c only.

There is a vast literature on the asymptotic enumeration of dense subgraphs with given degrees in the case when S is the complete graph or not far from it; see, for example, [1, 11, 19, 20] and the references therein. An important advantage of Theorem 11 over previous results is that it allows S to be essentially different from \(K_n\), and it holds for a very wide range of degrees. Theorem 11 follows immediately from Eqs. (13), (15), Lemma 9 and Corollary 4.

5.1 The integral in the critical regions

For given S and \(\varvec{t}\), denote

$$\begin{aligned} \varLambda := \lambda (1-\lambda )\qquad \text {and} \qquad \varDelta := \varDelta (S). \end{aligned}$$

In the following, we always assume that \(\varLambda \varDelta \gg n/\log n\) which is the assumption (B3) of Theorem 11. We also assume that (19) is satisfied. Let \(\varepsilon \) be a fixed positive constant required to be sufficiently small in several places of the argument. Define

$$\begin{aligned} \eta := \frac{ n^{\varepsilon }}{ (\varLambda \varDelta )^{1/2}} = o(1). \end{aligned}$$

Given \(x \in {\mathbb {R}}\), define

$$\begin{aligned} |x|_{2\pi } := \min \{|y| \,:\, y \equiv x \mod 2\pi \}. \end{aligned}$$

It is easily seen that \(|\cdot |_{2\pi }\) is a seminorm on \({\mathbb {R}}\) that induces a norm on \({\mathbb {R}}/(2\pi {\mathbb {Z}})\), the real numbers modulo \(2\pi \). Our critical regions are

$$\begin{aligned} \mathscr {B}_0 := U_n(\eta ) \qquad \text {and} \qquad \mathscr {B}_{\pi }:= \{\varvec{\theta }\in {\mathbb {R}}^n \,:\, |\theta _j -\pi |_{2\pi } \leqslant \eta \text{ for } \text{ all } j\}. \end{aligned}$$

As explained above (see (15)), the contributions of these two regions to the integral in (13) are identical so we can focus on \(\mathscr {B}_0\). From (16), we have

$$\begin{aligned} \int _{\mathscr {B}_0} F_{S, \varvec{t}}(\varvec{\theta }) \,d\varvec{\theta }= \int _{U_n(\eta )} e^{-\varvec{\theta }^{\textrm{T}}\hspace{-1.111pt}Q \varvec{\theta }+ u(\varvec{\theta })-iv(\varvec{\theta }) + h(\varvec{\theta })} d \varvec{\theta }, \end{aligned}$$
(21)

where \(h(\varvec{\theta }) = O(n^{-1/2 + 6\varepsilon })\) uniformly for \(\varvec{\theta }\in \mathscr {B}_0\). A general theory for the estimation of such integrals was developed in [11], based on the second-order approximation of complex martingales. We will apply the tools from [11] here and, for the reader's convenience, also quote them in the appendix; see Sect. A.2.

We will need the following bounds.

Lemma 8

Let \(\lambda \) be defined in (20). If \({\text {rng}}(\varvec{\beta }) \leqslant c\) for some fixed \(c>0\), then

  (a) uniformly over all \(jk\in S\), \( \lambda _{jk} = \varTheta (\lambda )\) and \(1 - \lambda _{jk} = \varTheta (1-\lambda )\).

Furthermore, suppose \(\varDelta = \varOmega (n^{1/2})\) and assumption (B1) of Theorem 11 holds. Then Q is positive definite and the following hold.

  (b) If \(Q^{-1} = (\sigma _{jk})\), then \(\sigma _{jk} = {\left\{ \begin{array}{ll} \varTheta \Bigl (\frac{1 }{\varLambda \varDelta }\Bigr ), &{} \text {if } j=k;\\ O\Bigl (\frac{1 }{\varLambda \varDelta ^2 }\Bigr ), &{} \text {if } jk \in S;\\ O\Bigl (\frac{1 }{\varLambda \varDelta n}\Bigr ), &{} \text {otherwise}. \end{array}\right. }\)

  (c) There exists a real matrix T such that \(T^{\textrm{T}}\hspace{-1.111pt}Q T=I\) and

    $$\begin{aligned} \Vert T\Vert _1, \Vert T\Vert _{\infty } = O\bigl ((\varLambda \varDelta )^{-1/2}\bigr ), \qquad \Vert T^{-1}\Vert _1, \Vert T^{-1}\Vert _{\infty } = O\bigl ((\varLambda \varDelta )^{1/2}\bigr ). \end{aligned}$$

Proof

Observe that \(1 \leqslant \frac{1+e^y}{1+ e^x} \leqslant e^{y-x}\) for any real \(x \leqslant y\). Since all \(\beta _j+\beta _k\) and \(\beta _{j'}+\beta _{k'}\) are at most 2c apart, this implies that \(\frac{\lambda _{jk}}{\lambda _{j'k'}} = \varTheta (1)\) and \(\frac{1-\lambda _{jk}}{1-\lambda _{j'k'}} = \varTheta (1)\) for all \(jk, j'k' \in S\). Recalling the definition (20), we have proved (a).

Note that assumption (B1) of Theorem 11 implies that S is a connected non-bipartite graph. Thus, Q is positive definite. Parts (b) and (c) follow from Lemma 20 (see appendix) applied to the scaled matrix \(Q/\varLambda \). \(\square \)

We are ready to establish asymptotic estimates for the critical region \(\mathscr {B}_0\). Note that in the next lemma we allow the components of \(\varvec{t}\) to be non-integers.

Lemma 9

Suppose a graph S and a real vector \(\varvec{t}\in {\mathbb {R}}^n\) satisfy assumptions (B1)–(B3) of Theorem 11. Then, for any sufficiently small fixed \(\varepsilon >0\), we have

$$\begin{aligned} \int _{U_n(\eta )} F_{S, \varvec{t}}(\varvec{\theta })\, d\varvec{\theta }= \frac{ \pi ^{n/2} }{ |Q|^{1/2} } \exp \Bigl ( {\mathbb {E}}u(\varvec{X}) - \frac{1}{2} {\mathbb {E}}v^2(\varvec{X}) + O(n^{-1/2 + 13\varepsilon })\Bigr ), \end{aligned}$$

where \(\varvec{X}\) is a random vector in \({\mathbb {R}}^n\) with the normal density \(\pi ^{-n/2} |Q|^{1/2} e^{-\varvec{x}^{\textrm{T}}\hspace{-1.111pt}Q \varvec{x}}\). Furthermore,

$$\begin{aligned} {\mathbb {E}}u(\varvec{X}) = O\Bigl (\frac{n}{\varLambda \varDelta }\Bigr ), \qquad {\mathbb {E}}v^2(\varvec{X}) = O\Bigl (\frac{n}{\varLambda \varDelta }\Bigr ), \end{aligned}$$

and for any fixed \(c>0\)

$$\begin{aligned} \int _{U_n(c\eta )} |F_{S, \varvec{t}}(\varvec{\theta })| \,d\varvec{\theta }= \frac{ \pi ^{n/2} }{ |Q|^{1/2} }e^{O\bigl (\frac{n}{\varLambda \varDelta }\bigr )}. \end{aligned}$$

Proof

The proof is based on [11,  Theorem 4.4] which is quoted as Theorem 13 for the reader’s convenience. Let

$$\begin{aligned} \varOmega := U_n(\eta ), \qquad f(\varvec{\theta }) := u(\varvec{\theta }) -i v(\varvec{\theta }), \qquad g(\varvec{\theta }) := u(\varvec{\theta }). \end{aligned}$$

From Lemma 8, we know that Q is positive definite. Let T be the matrix from Lemma 8(c). Define

$$\begin{aligned} \rho _1 := \Vert T\Vert _\infty ^{-1}\,\eta , \qquad \rho _2 := \Vert T^{-1}\Vert _\infty \, \eta . \end{aligned}$$

Then we have \( U_n( \rho _1 ) \subseteq T^{-1}( \varOmega ) \subseteq U_n( \rho _2)\) and

$$\begin{aligned} \rho _2 \ge \rho _1 = \varOmega \Bigl ( \frac{n^{\varepsilon } }{(\varLambda \varDelta )^{1/2}} \, (\varLambda \varDelta )^{1/2}\Bigr ) = \varOmega ( n^{\varepsilon }). \end{aligned}$$

Thus, \(\rho _1\) and \(\rho _2\) satisfy assumption (a) of Theorem 13. Similarly, observe \(\rho _2 = O(n^{\varepsilon })\).

Next, we estimate the partial derivatives of \(f(\varvec{\theta })\). Recalling the definitions of u and v from (18) and using Lemma 8(a), we get that, provided \(\Vert \varvec{\theta }\Vert _\infty \leqslant 1\),

$$\begin{aligned} \frac{\partial f}{\partial \theta _j}(\varvec{\theta })&= \frac{1}{6}\sum _{k: jk \in S} \lambda _{jk}(1-\lambda _{jk}) (1-6\lambda _{jk} + 6\lambda _{jk}^2 ) (\theta _{j}+\theta _k)^3 \\&\quad - \frac{i}{2}\sum _{k: jk \in S} \lambda _{jk}(1-\lambda _{jk}) (1-2\lambda _{jk}) (\theta _{j}+\theta _k)^2 = O\left( \varLambda \varDelta \Vert \varvec{\theta }\Vert _\infty ^2\right) , \end{aligned}$$

and, if \( jk \in S\),

$$\begin{aligned} \frac{\partial ^2 f}{\partial \theta _j \partial \theta _k} (\varvec{\theta })&= \frac{1}{2} \lambda _{jk}(1-\lambda _{jk}) (1-6\lambda _{jk} + 6\lambda _{jk}^2 ) (\theta _{j}+\theta _k)^2 \\&\quad - i \lambda _{jk}(1-\lambda _{jk}) (1-2\lambda _{jk}) (\theta _{j}+\theta _k) = O(\varLambda \Vert \varvec{\theta }\Vert _\infty ). \end{aligned}$$

Again using Lemma 8(c), we find that assumption (b) of Theorem 13 holds with \(\phi _1 = n^{-1/6 + 4\varepsilon }\). Exactly the same calculation shows that assumption (c)(ii) holds with \(\phi _2 =n^{-1/6 + 4\varepsilon }\). Assumption (d) also holds because u and v are polynomials. Applying Theorem 13 to the integral of (21), we obtain that

$$\begin{aligned} \int _{U_n(\eta )} F_{S, \varvec{t}}(\varvec{\theta }) \,d\varvec{\theta }= (1+K)\frac{\pi ^{n/2} }{ |Q|^{1/2} } \exp \Bigl ( {\mathbb {E}}f(\varvec{X}) + \frac{1}{2}\, {\mathbb {E}}\bigl (f(\varvec{X}) - {\mathbb {E}}f(\varvec{X})\bigr )^2 \Bigr ), \end{aligned}$$
(22)

where \( K = O( n^{-1/2+ 12\varepsilon }) e^{\frac{1}{2} {\text {Var}}v(\varvec{X}) }. \) Similarly, using (16) and Theorem 13, we get

$$\begin{aligned} \int _{U_n(c\eta )}|F_{S, \varvec{t}}(\varvec{\theta }) | \,d\varvec{\theta }&= \bigl (1+ O(n^{-1/2 + 6\varepsilon })\bigr ) \int _{U_n(c\eta ) } e^{-\varvec{\theta }^{\textrm{T}}\hspace{-1.111pt}Q \varvec{\theta }+ u(\varvec{\theta }) } d \varvec{\theta }\\&= \bigl (1+ O( n^{-1/2 + 12\varepsilon })\bigr ) \frac{\pi ^{n/2} }{ |Q|^{1/2} } \exp \bigl ( {\mathbb {E}}u(\varvec{X}) - \frac{1}{2} {\text {Var}}(u(\varvec{X})) \bigr ). \end{aligned}$$

Next, we need to estimate some moments of \(u(\varvec{X})\) and \(v(\varvec{X})\). Let \(\varSigma = (\sigma _{jk, \ell m})\) denote the covariance matrix of the variables \(X_j + X_k\) for \(jk\in S\):

$$\begin{aligned} \sigma _{jk,\ell m} := {\text {Cov}}(X_j+X_k, X_\ell +X_m). \end{aligned}$$
(23)

Since \(\varvec{X}\) is Gaussian with density \(\pi ^{-n/2} |Q|^{1/2} e^{-\varvec{x}^{\textrm{T}}\hspace{-1.111pt}Q\varvec{x}}\), the covariances \({\text {Cov}}(X_j,X_k)\) equal the corresponding entries of \((2Q)^{-1}\). Using the bounds of Lemma 8(b), we find that

$$\begin{aligned} \sigma _{jk,\ell m} = {\left\{ \begin{array}{ll} O\Bigl ( \frac{1}{\varLambda \varDelta }\Bigr ), &{} \text {if } \{j,k\}\cap \{\ell ,m\} \ne \emptyset ;\\ O\Bigl ( \frac{1}{\varLambda \varDelta ^2}\Bigr ), &{} \text {if } \{j,k\}\cap \{\ell ,m\} = \emptyset \text { and } \{j\ell ,jm,k\ell , km\}\cap S \ne \emptyset ;\\ O\Bigl ( \frac{1}{n\varLambda \varDelta }\Bigr ), &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
(24)

The expectation of any polynomial whose terms all have odd degree is zero (due to the symmetry of the distribution), so \( {\text {Cov}}(u(\varvec{X}),v(\varvec{X})) = {\mathbb {E}}v (\varvec{X}) = 0\). The following are special cases of Isserlis' theorem (see [12]), which is also known as Wick's formula in quantum field theory:

$$\begin{aligned} \begin{aligned} {\mathbb {E}}(X_j + X_k)^4&= 3 \sigma _{jk,jk}^2, \qquad {\mathbb {E}}(X_j + X_k)^6 = 15 \sigma _{jk,jk}^3\\ {\mathbb {E}}(X_j+X_k)^3(X_\ell +X_m)^3&= 9 \sigma _{jk, jk} \, \sigma _{\ell m,\ell m} \,\sigma _{jk,\ell m} + 6 \sigma _{jk, \ell m}^3,\\ {\mathbb {E}}(X_j+X_k)^4(X_\ell +X_m)^4&= 9 \sigma _{jk,jk}^2 \sigma _{\ell m, \ell m}^2 + 72\, \sigma _{jk,jk}\, \sigma _{\ell m, \ell m}\, \sigma _{jk,\ell m}^2 + 24 \sigma _{jk,\ell m}^4. \end{aligned} \end{aligned}$$
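These identities can be sanity-checked by Monte Carlo simulation. In the sketch below, an arbitrary \(2\times 2\) covariance stands in for the pair \(X_j+X_k\), \(X_\ell +X_m\); the empirical moments are printed next to the right-hand sides and agree only up to sampling error.

```python
# Monte Carlo sanity check (a sketch): empirical moments of a centred
# bivariate Gaussian versus the Isserlis right-hand sides.  Here Z1 plays
# the role of X_j + X_k and Z2 of X_l + X_m; the covariance is test data.
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[2.0, 0.7], [0.7, 1.5]])
s11, s12, s22 = cov[0, 0], cov[0, 1], cov[1, 1]
Z1, Z2 = rng.multivariate_normal([0.0, 0.0], cov, size=2_000_000).T

print(np.mean(Z1**4), 3 * s11**2)
print(np.mean(Z1**6), 15 * s11**3)
print(np.mean(Z1**3 * Z2**3), 9 * s11 * s22 * s12 + 6 * s12**3)
print(np.mean(Z1**4 * Z2**4),
      9 * s11**2 * s22**2 + 72 * s11 * s22 * s12**2 + 24 * s12**4)
```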

Recalling (18) and using (24), we obtain that

$$\begin{aligned} {\mathbb {E}}u (\varvec{X}) = \frac{1}{8} \sum _{jk \in S} \lambda _{jk} (1-\lambda _{jk}) (1- 6\lambda _{jk}+6\lambda _{jk}^2) \sigma _{jk,jk}^2 = O\Bigl (\frac{n}{\varLambda \varDelta }\Bigr ). \end{aligned}$$
(25)

Similarly as above, we derive that

$$\begin{aligned} {\text {Var}}v (\varvec{X})= {\mathbb {E}}v^2(\varvec{X})= & {} O\biggl ( \varLambda ^2\sum _{jk\in S}\,\sum _{\ell m\in S} (|\sigma _{jk,jk}\sigma _{\ell m,\ell m}\sigma _{jk,\ell m}|+|\sigma ^3_{jk,\ell m}|) \biggr ) \nonumber \\= & {} O\biggl ( \varLambda ^2 \sum _{jk \in S}\,\sum _{\ell m \in S} \frac{|\sigma _{jk,\ell m}|}{(\varLambda \varDelta )^2} \biggr ) \nonumber \\= & {} O\Bigl ( \frac{1}{\varDelta ^2}\Bigr ) \Bigl ( \frac{n\varDelta ^2 }{\varLambda \varDelta } + \frac{n\varDelta ^3 }{\varLambda \varDelta ^2} + \frac{n^2\varDelta ^2 }{ n\varLambda \varDelta } \Bigr ) = O\Bigl (\frac{n}{\varLambda \varDelta }\Bigr ) \end{aligned}$$
(26)

and

$$\begin{aligned} {\text {Var}}u (\varvec{X})&= {\mathbb {E}}u^2(\varvec{X}) -({\mathbb {E}}u(\varvec{X}))^2\\&= O\biggl ( \varLambda ^2\sum _{jk\in S}\,\sum _{\ell m\in S} (|\sigma _{jk,jk}\sigma _{\ell m,\ell m}\sigma _{jk,\ell m}^2|+\sigma _{jk,\ell m}^4) \biggr ) \\&= O\biggl ( \varLambda ^2 \sum _{jk \in S}\,\sum _{\ell m \in S} \frac{|\sigma _{jk,\ell m}|^2}{(\varLambda \varDelta )^2} \biggr )\\&=O\Bigl ( \frac{1}{\varDelta ^2}\Bigr ) \left( \frac{n \varDelta ^2}{(\varLambda \varDelta )^2} + \frac{n\varDelta ^3 }{(\varLambda \varDelta ^2)^2} + \frac{n^2 \varDelta ^2}{(n \varLambda \varDelta )^2}\right) = o\Bigl (\frac{\log ^2 n}{n} \Bigr ), \end{aligned}$$

noting that the leading term containing \(\sigma _{jk,jk}^2\sigma _{\ell m,\ell m}^2\) appears in both \({\mathbb {E}}u^2(\varvec{X})\) and \(({\mathbb {E}}u(\varvec{X}))^2\) and cancels in the subtraction. Substituting these bounds into (22) and bounding \(e^{\frac{1}{2} {\text {Var}}v(\varvec{X})} = e^{o(\log n)} = n^{o(1)}\), the proof is complete. \(\square \)

5.2 Estimates outside of the critical regions

In this section, we show that the contribution to the integral (13) of the remaining region \(\mathscr {B}= U_n(\pi ) - \mathscr {B}_0- \mathscr {B}_\pi \) is negligible, where the critical regions \( \mathscr {B}_0\) and \( \mathscr {B}_{\pi }\) are defined in Sect. 5.1. Observe that

$$\begin{aligned} |F_{S,\varvec{t}}(\varvec{\theta })| = \prod _{j k \in S}\, \bigl | 1+\lambda _{jk}(e^{i(\theta _j+\theta _k)}-1)\bigr | \end{aligned}$$

depends on S and \((\lambda _{jk})\) only but does not depend on \(\varvec{t}\). To bound the factors of \(|F_{S,\varvec{t}}(\varvec{\theta })|\), we use the following inequality, whose uninteresting proof we omit.

Lemma 10

For \(x\in {\mathbb {R}}\) and \(a\in [0,1]\), we have \(|1 + a(e^{ix}-1)| \leqslant e^{-\frac{1}{5} a(1-a)|x|_{2\pi }^2 }\).
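Although we omit the proof, the inequality of Lemma 10 is easy to check numerically on a grid; a small sketch:

```python
# Quick numerical check of Lemma 10 on a grid (a sketch, not a proof).
import numpy as np

def mod_2pi(x):
    """|x|_{2pi}: distance from x to the nearest multiple of 2*pi."""
    return np.abs((x + np.pi) % (2 * np.pi) - np.pi)

a = np.linspace(0.0, 1.0, 201)[:, None]
x = np.linspace(-3 * np.pi, 3 * np.pi, 601)[None, :]
lhs = np.abs(1 + a * (np.exp(1j * x) - 1))
rhs = np.exp(-a * (1 - a) * mod_2pi(x) ** 2 / 5)
assert np.all(lhs <= rhs + 1e-12)
```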

Throughout this section, including the lemma statements, we always assume that the assumptions of Theorem 11 hold. Recall that

$$\begin{aligned} \mathscr {B}_0 = U_n(\eta ), \qquad \eta = \frac{n^{\varepsilon }}{(\varLambda \varDelta )^{1/2}}. \end{aligned}$$

Lemma 8(a,b) implies that all the eigenvalues of Q are \(\varTheta \left( \varLambda \varDelta \right) \) (by bounding the 1-norms of Q and \(Q^{-1}\)). From Lemma 9, we find that

$$\begin{aligned} J_0 := \int _{\mathscr {B}_0} F_{S, \varvec{t}}(\varvec{\theta }) \,d\varvec{\theta }= \pi ^{n/2} |Q|^{-1/2} e^{O\left( \tfrac{n}{\varLambda \varDelta }\right) } \geqslant \exp \bigl (-\frac{1}{2}n\log n+O(n)\bigr ). \end{aligned}$$
(27)

As a first step, we show that the domain where many components of \(\varvec{\theta }\in U_n(\pi )\) lie sufficiently far from 0 and \(\pm \pi \) gives a negligible contribution. Define

$$\begin{aligned} \mathscr {B}'&:=\bigl \{ \varvec{\theta }\in U_n(\pi ) \\&\quad : ~\text {more than}\, \tfrac{1}{2} n^{1-\varepsilon }\,\text {components}\,\theta _j\, \text {satisfy }\, \eta /2\leqslant |\theta _j|_{2\pi }\leqslant \pi -\eta /2 \bigr \}. \end{aligned}$$

The following lemma depends on a technical lemma (Lemma 18) which we present in the appendix.

Lemma 11

We have

$$\begin{aligned} \int _{\mathscr {B}'} |F_{S,\varvec{t}}(\varvec{\theta })| \,d\varvec{\theta }= e^{-\varOmega (n^{1+\varepsilon })} J_0. \end{aligned}$$

Proof

Without loss of generality (using the symmetry \(\varvec{\theta }\mapsto -\varvec{\theta }\)), at least \(\frac{1}{4} n^{1-\varepsilon }\) components \(\theta _j\) lie in \([\eta /2,\pi -\eta /2]\). Denote \(U = \{ j : \theta _j\in [\eta /2,\pi -\eta /2] \}\). We estimate the number \(N_T(U)\) of triangles \(\{j,k,\ell \}\) (i.e. \(jk,j\ell ,k\ell \in S\)) such that \(\{j,k,\ell \}\cap U \ne \emptyset \). Using Lemma 18(a), we find that the degree of any vertex of U is at least \(\gamma \varDelta \). For any \(jk \in S\) with \(\{j,k\} \cap U \ne \emptyset \), there are at least \(\frac{\gamma \varDelta ^2}{n}\) common neighbours, each of which gives rise to a triangle contributing to \(N_T(U)\). Since every triangle is counted at most 3 times, we get that

$$\begin{aligned} N_T(U) \ge \frac{ \gamma \varDelta |U|}{2} \cdot \frac{ \gamma \varDelta ^2}{3n} = \frac{ \gamma ^2 \varDelta ^3 |U|}{6n}. \end{aligned}$$

For each such triangle \(\{j,k,\ell \}\) with \(j\in U\), observe that

$$\begin{aligned} |\theta _j + \theta _k|_{2\pi } + |\theta _k + \theta _\ell |_{2\pi } + |\theta _\ell + \theta _j|_{2\pi } \ge |\theta _j +\theta _k - \theta _k- \theta _\ell + \theta _\ell + \theta _j|_{2\pi } \ge \eta . \end{aligned}$$

Therefore, we can mark one edge \(j'k'\) from this triangle such that \(|\theta _{j'}+\theta _{k'}|_{2\pi }\geqslant \eta /3\). Repeating this argument for all such triangles and observing that any edge is present in at most \(\frac{\varDelta ^2}{\gamma n}\) triangles, we show that at least \(\gamma ^3 \varDelta |U|/6\) edges were marked. Using Lemma 8(a) and Lemma 10, we get that

$$\begin{aligned} |F_{S,\varvec{t}}(\varvec{\theta })| \leqslant e^{- \varOmega \left( \varLambda \varDelta |U| \eta ^2 \right) } = e^{-\varOmega (n^{1+\varepsilon })}. \end{aligned}$$

Multiplying by the volume of \(\mathscr {B}'\), which is less than \((2\pi )^n\), and comparing with (27), completes the proof. \(\square \)

If Lemma 11 doesn’t apply, we have at least \(n-\frac{1}{2} n^{1-\varepsilon }\) components of \(\varvec{\theta }\) lying in neighbourhoods of 0 and \(\pm \pi \). Next we will use a similar argument to show that most of these components lie in one of those two intervals (on a circle). Define

$$\begin{aligned} \mathscr {B}''&:=\bigl \{ \varvec{\theta }\in U_n(\pi ) \setminus \mathscr {B}' : |\theta _j| \leqslant \eta /2\,\text { holds for more than}\, n^{2\varepsilon }\, \text {components}\, \theta _j\\&\quad \text {and}\, |\theta _j-\pi |_{2\pi } \leqslant \eta /2\,\text { holds for more than}\, n^{2\varepsilon }\,\text { components}\, \theta _j \bigr \}. \end{aligned}$$

Lemma 12

We have

$$\begin{aligned} \int _{\mathscr {B}''} |F_{S,\varvec{t}}(\varvec{\theta })| \,d\varvec{\theta }= e^{-\varOmega (n^{1+\varepsilon })} J_0. \end{aligned}$$

Proof

Let \(U_1=\{ j : |\theta _j| \leqslant \eta /2\}\) and \(U_2=\{ j : |\theta _j-\pi |_{2\pi } \leqslant \eta /2\}\). Since \(\varvec{\theta }\notin \mathscr {B}'\), we have \(|U_1|+|U_2| \ge n - \frac{1}{2} n^{1-\varepsilon }\). For \(j \in U_1\), \(k \in U_2\) and any \(\ell \) such that \(j\ell , k\ell \in S\), we have

$$\begin{aligned} |\theta _j + \theta _\ell |_{2\pi } + |\theta _k + \theta _\ell |_{2\pi } \ge |\theta _j + \theta _\ell - \theta _k - \theta _\ell |_{2\pi } \ge \pi -\eta . \end{aligned}$$

Thus, we can mark some \(j'k' \in \{j\ell , k\ell \}\) such that \(|\theta _{j'}+ \theta _{k'}|_{2\pi } = \varOmega (1)\). By the assumptions, the number of choices for \((j,k,\ell )\) is at least \(|U_1|\, |U_2| \frac{\gamma \varDelta ^2}{n}\). Dividing by \(2 \varDelta \) to compensate for over-counting, we get that at least \(|U_1|\, |U_2| \frac{\gamma \varDelta }{2n}\) edges were marked. Using Lemma 8(a) and Lemma 10, we find that

$$\begin{aligned} |F_{S,\varvec{t}}(\varvec{\theta })| = e^{- \varOmega (|U_1| |U_2| \varLambda \varDelta /n )} = e^{-\varOmega (n^{1+2\varepsilon }/\log n)} = e^{-\varOmega (n^{1+\varepsilon })}. \end{aligned}$$

The proof now follows the same lines as in the previous lemma. \(\square \)

Since adding \(\pi \) to each component is a symmetry (see (15)), we can now assume that at least \(n-n^{1-\varepsilon }\) components of \(\varvec{\theta }\) lie in \([-\eta /2,\eta /2]\). If \(\varvec{\theta }\notin \mathscr {B}_0\), then some components satisfy \(|\theta _j| > \eta \). Let \(\mathscr {B}(m)\) denote the region of \(\varvec{\theta }\in \mathscr {B}\setminus (\mathscr {B}' \cup \mathscr {B}'')\) such that exactly m components of \(\varvec{\theta }\) lie outside of \([-\eta ,\eta ]\), where \(1\leqslant m \leqslant n^{1-\varepsilon }\). Let

$$\begin{aligned} J(m) = \int _{\mathscr {B}(m)} |F_{S,\varvec{t}}(\varvec{\theta })| \,d\varvec{\theta }. \end{aligned}$$

For notational simplicity, we first prove a bound for the integral over the region \(\mathscr {B}^*(m) \subset \mathscr {B}(m)\), where the set of m components of \(\varvec{\theta }\) lying outside of \([-\eta ,\eta ]\) is exactly \(\{\theta _1,\ldots ,\theta _m\}\). Our bound is in fact independent of the choice of these m components, so it suffices to multiply it by \(\left( {\begin{array}{c}n\\ m\end{array}}\right) \leqslant n^m\).

Note that

$$\begin{aligned} m \leqslant n^{1-\varepsilon } = o\Bigl ( \frac{n}{\log ^2 n}\Bigr ) = o\Bigl (\frac{\varDelta ^2}{n}\Bigr ) = o(\varDelta ). \end{aligned}$$
(28)

Take any \(j\leqslant m\). Using Lemma 18(a), we find that there are at least \(\gamma \varDelta - n^{1-\varepsilon }= \varTheta (\varDelta )\) vertices k such that \(jk\in S\) and \(|\theta _k|_{2\pi } \leqslant \eta /2\). For such k, we have \(|\theta _j+ \theta _k|_{2\pi }\ge \eta /2 \). Similarly as before, by Lemma 10, for \(\varvec{\theta }\in \mathscr {B}^*(m) \),

$$\begin{aligned} \prod _{j=1}^m \,\prod _{\begin{array}{c} k=m+1\\ jk\in S \end{array}}^n\, \bigl |1+\lambda _{jk}(e^{i(\theta _j+\theta _k)}-1)\bigr | = e^{-\varOmega (m \varLambda \varDelta \eta ^2)} =e^{-\varOmega (m n^{2\varepsilon })}. \end{aligned}$$

Thus, we can bound

$$\begin{aligned} \int _{\mathscr {B}^*(m)} |F_{S,\varvec{t}}(\varvec{\theta })| \,d\varvec{\theta }\leqslant \int _{U_{m}(\pi )} e^{-\varOmega (mn^{2\varepsilon })} \left( \int _{U_{n-m}(\eta )} |F_{S',\varvec{t}'}(\varvec{\theta }^1)| \,d \varvec{\theta }^{1} \right) \, d \varvec{\theta }^{2} , \end{aligned}$$

where \(\varvec{\theta }^{1}\in {\mathbb {R}}^{n-m}\), \(\varvec{\theta }^{2} \in {\mathbb {R}}^m\) and \(S'\) is obtained from S by deletion of the first m vertices. Recall that \(|F_{S',\varvec{t}'}(\varvec{\theta }^1)|\) does not depend on \(\varvec{t}'\), but we define \(\varvec{t}'\) anyway by

$$\begin{aligned} t_j' := \sum _{k : jk\in S'} \lambda _{jk} \text { for all}\, j. \end{aligned}$$

By (28), \(S'\) and \(\varvec{t}'\) satisfy all the assumptions of Lemma 9 and Lemma 20. Thus,

$$\begin{aligned} \int _{U_{n-m}(\eta )} |F_{S',\varvec{t}'}(\varvec{\theta }^1)| d \varvec{\theta }^{1} = \frac{\pi ^{(n-m)/2}}{ |Q'|^{1/2} } e^{O\left( \frac{n}{\varLambda \varDelta }\right) }, \end{aligned}$$

where \(Q'\) is the matrix of (17) for the graph \(S'\) and \((\lambda _{jk})_{jk \in S'}\). Applying Lemma 20(d) (see appendix) m times for the scaled matrix \(Q/\varLambda \), we find that

$$\begin{aligned} |Q|/ |Q'| = (\varLambda \varDelta )^m e^{O(m)}. \end{aligned}$$

Allowing \(n^m\) for the choice of the set of m big components and using (27), (28), we obtain that

$$\begin{aligned} J(m)&\leqslant n^m e^{-\varOmega (mn^{2\varepsilon })} (\varLambda \varDelta )^{m/2} e^{O(m)} \frac{\pi ^{(n-m)/2}}{ |Q|^{1/2} } e^{O\left( \frac{n}{\varLambda \varDelta }\right) } = e^{-\varOmega (m n^{2\varepsilon })} J_0. \end{aligned}$$

Summing over m and multiplying by 2 for the symmetry of \((0,\ldots ,0)\) and \((\pi ,\ldots ,\pi )\), we find that

$$\begin{aligned} \int _{ \mathscr {B}\setminus (\mathscr {B}' \cup \mathscr {B}'')} |F_{S,\varvec{t}}(\varvec{\theta })| \,d\varvec{\theta }\leqslant 2 \sum _{m=1}^{n^{1-\varepsilon }} J(m) = e^{-\varOmega (n^{2\varepsilon })} J_0. \end{aligned}$$

Using Lemma 11 and Lemma 12, we conclude the following.

Corollary 4

Under the assumptions of Theorem 11 and for sufficiently small \(\varepsilon \),

$$\begin{aligned} \int _{ \mathscr {B}} |F_{S,\varvec{t}}(\varvec{\theta })| \, d\varvec{\theta }= e^{-\varOmega (n^{2\varepsilon })} \int _{ \mathscr {B}_0} F_{S,\varvec{t}}(\varvec{\theta }) \,d\varvec{\theta }. \end{aligned}$$

6 Random \(\varvec{t}\)-factors and the beta model

In this section we establish a deep relation between \(S_{\varvec{t}} \) (a uniform random element of the set of \(\varvec{t}\)-factors of S) and the corresponding \(\beta \)-model: the probabilities of any forced or forbidden small structure are asymptotically the same.

Theorem 12

Suppose a graph S and a degree sequence \(\varvec{t}\) satisfy the assumptions of Theorem 11. Let \(H^{+}\) and \(H^{-}\) be disjoint subgraphs of S such that \(\Vert \varvec{h}\Vert _2 \ll (\varLambda \varDelta )^{1/2}\), where \(\varvec{h}\) is the degree sequence of \(H^{+}\cup H^{-}\). Then, for any \(\varepsilon >0\),

$$\begin{aligned} {\mathbb {P}}( H^{+} \subseteq S_{\varvec{t}} \text { and } H^{-} \cap S_{\varvec{t}}=\emptyset ) = \left( 1 + O\Bigl (n^{-1/2+\varepsilon } + \frac{\Vert \varvec{h}\Vert _2^2}{ \varLambda \varDelta } \Bigr )\right) \prod _{jk \in H^+} \lambda _{jk} \prod _{jk \in H^-} (1-\lambda _{jk}). \end{aligned}$$

For the case \(S=K_n\), the estimate of Theorem 12 was previously established by Isaev and McKay in [11,  Theorem 5.2] under the additional constraint that \(\Vert \varvec{h}\Vert _\infty = O(n^{1/6})\). When \(S = K_n\) and \(\varvec{t}\) is near-regular, a more precise formula for \({\mathbb {P}}( H^{+} \subseteq S_{\varvec{t}} \text { and } H^{-} \cap S_{\varvec{t}}=\emptyset )\) can be derived from [19,  Theorem 1.3], provided \(\Vert \varvec{h}\Vert _1 \leqslant n^{1+\varepsilon }\) and \(\Vert \varvec{h}\Vert _\infty \leqslant n^{1/2+\varepsilon }\). The latter result shows that the error term \(\Vert \varvec{h}\Vert _2^2/(\varLambda \varDelta )\) in Theorem 12 cannot be improved in general; see [19,  Corollary 2.6].

In this section, we first give some preliminary estimates for the solution of system (19). Then we prove Theorem 12. Finally, in Sect. 6.3, we show that Theorem 10 follows from Theorem 12.

6.1 The solution of the beta system

The following lemma will be useful for investigating system (19).

Lemma 13

Let \(\varvec{r}: {\mathbb {R}}^n \rightarrow {\mathbb {R}}^n\), \(\delta >0\), \(\varvec{x}^{(0)}\in {\mathbb {R}}^n\), and \(U = \{\varvec{x}\in {\mathbb {R}}^n \mathrel {:}\Vert \varvec{x}- \varvec{x}^{(0)}\Vert \leqslant \delta \Vert \varvec{r}(\varvec{x}^{(0)})\Vert \}\), where \(\Vert \cdot \Vert \) is any vector norm in \({\mathbb {R}}^n\). Assume that

$$\begin{aligned} \varvec{r}\,\text { is analytic in}\, U \qquad \text {and} \qquad \sup _{\varvec{x}\in U} \,\Vert J^{-1}(\varvec{x})\Vert < \delta , \end{aligned}$$

where J denotes the Jacobian matrix of \(\varvec{r}\) and \(\Vert \cdot \Vert \) stands for the induced matrix norm. Then there exists \(\varvec{x}^*\in U\) such that \(\varvec{r}(\varvec{x}^*) = \varvec{0}\).

Proof

Let \(\varvec{y}^{(0)} = \varvec{r}(\varvec{x}^{(0)})\) and note that \(\varvec{x}^{(0)}\in U\). If \(\varvec{y}^{(0)} =\varvec{0}\) there is nothing to prove, so we may assume otherwise. Using the Cauchy–Kovalevskaya theorem, define the curve \(\varvec{x}(t)\) by \(\varvec{x}(0) = \varvec{x}^{(0)}\) and \(\frac{d \varvec{x}(t)}{dt} = -J^{-1}(\varvec{x}(t))\varvec{y}^{(0)}\). Note that \(\varvec{x}(t)\) remains in U for \(0\leqslant t \leqslant 1\), because

$$\begin{aligned} \varvec{x}(t) -\varvec{x}(0) = - \int _{0}^t J^{-1}(\varvec{x}(\tau )) \varvec{y}^{(0)} d \tau \end{aligned}$$

and \(\Vert \varvec{x}(t) -\varvec{x}(0)\Vert \leqslant t \sup _{\varvec{x}\in U} \Vert J^{-1}(\varvec{x}) \varvec{y}^{(0)}\Vert < \delta \Vert \varvec{y}^{(0)}\Vert \). Observe that \(\frac{d\, \varvec{r}(\varvec{x}(t))}{d\, t} = -\varvec{y}^{(0)}\). Therefore, \(\varvec{r}(\varvec{x}(t)) = (1-t)\varvec{y}^{(0)} \). Taking \(\varvec{x}^* = \varvec{x}(1)\), the proof is complete. \(\square \)
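The curve used in this proof is the classical Newton homotopy, following \(\varvec{r}(\varvec{x}(t)) = (1-t)\varvec{y}^{(0)}\) down to a root. The sketch below runs it with Euler steps on a toy analytic map; the map and starting point are illustrative, not from the paper.

```python
# Sketch: the continuation curve from the proof, integrated with Euler steps
# on a toy analytic map r (the map and starting point are illustrative).
import numpy as np

def r(x):   # toy system with a root at (1, 2)
    return np.array([x[0]**2 - 1.0, x[0] * x[1] - 2.0])

def J(x):   # its Jacobian matrix
    return np.array([[2 * x[0], 0.0], [x[1], x[0]]])

x = np.array([1.3, 1.5])      # starting point x^(0)
y0 = r(x)
steps = 1000
for _ in range(steps):        # dx/dt = -J(x)^{-1} y^(0) on 0 <= t <= 1
    x -= np.linalg.solve(J(x), y0) / steps
print(x, r(x))                # x(1) is (numerically) a root: r(x(1)) ~ 0
```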

Using Lemma 13, we can estimate the difference between the exact solution of (19) and a given approximate solution \(\varvec{\beta }^{(0)}\) in terms of the deviation of the corresponding expected degrees.

Corollary 5

Let S satisfy assumption (B1) of Theorem 11 and \(\varDelta = \varOmega (n^{1/2})\). For \(\varvec{t}\in {\mathbb {R}}^n\), let \(\lambda = \frac{t_1+\cdots + t_n}{2 |E(S)|}\). For \(\varvec{\beta }\in {\mathbb {R}}^n\), define \(\varvec{r}(\varvec{\beta }) = (r_1,\ldots ,r_n)\) by

$$\begin{aligned} r_j = r_j(\varvec{\beta }) := -t_j + \sum _{k: jk\in S}\frac{e^{\beta _j+\beta _k}}{1 +e^{\beta _j+\beta _k}} \qquad \text {for all } j. \end{aligned}$$

Suppose, for some \(\varvec{\beta }^{(0)}\), we have \({\text {rng}}(\varvec{\beta }^{(0)}) \leqslant c\) and \(\Vert \varvec{r}(\varvec{\beta }^{(0)})\Vert _\infty \ll \lambda (1-\lambda ) \varDelta \). Then there exists a solution \(\varvec{\beta }^*\) of system (19) such that

$$\begin{aligned} \Vert \varvec{\beta }^* - \varvec{\beta }^{(0)}\Vert _p = O\Bigl (\frac{\Vert \varvec{r}(\varvec{\beta }^{(0)})\Vert _p}{ \lambda (1-\lambda ) \varDelta }\Bigr ), \qquad \text { for any } p\in \{1,2,\infty \}. \end{aligned}$$

Proof

Observe that

$$\begin{aligned} \frac{\partial }{\partial \beta _j} \Bigl ( \frac{e^{\beta _j+ \beta _k}}{1+e^{\beta _j+ \beta _k}}\Bigr ) = \frac{e^{\beta _j+ \beta _k} }{1+e^{\beta _j+ \beta _k} } \Bigl (1 - \frac{e^{\beta _j+ \beta _k}}{1+e^{\beta _j+ \beta _k}}\Bigr ) = \lambda _{jk}(1-\lambda _{jk}). \end{aligned}$$

Therefore, the Jacobian matrix \(J(\varvec{\beta })\) of \(\varvec{r}(\varvec{\beta }) \) coincides with \(2Q(\varvec{\beta })\), where \(Q(\varvec{\beta })\) is the matrix defined in (17) for \(\varvec{\beta }\). Using the bounds of Lemma 8(b), for any \(\varvec{\beta }\in {\mathbb {R}}^n\) with \(\Vert \varvec{\beta }- \varvec{\beta }^{(0)}\Vert _\infty \leqslant c\), we have

$$\begin{aligned} \Vert J^{-1}(\varvec{\beta })\Vert _2 \leqslant \Vert J^{-1}(\varvec{\beta })\Vert _1 = \Vert J^{-1}(\varvec{\beta })\Vert _\infty = O\Bigl (\frac{1}{\varLambda (\varvec{\beta })\varDelta }\Bigr ), \end{aligned}$$

where \(\varLambda (\varvec{\beta }) = \lambda (\varvec{\beta })(1- \lambda (\varvec{\beta }))\) and \(\lambda (\varvec{\beta })\) is defined according to (20). Note that if \(\Vert \varvec{r}(\varvec{\beta })\Vert _\infty \ll \lambda (1-\lambda ) \varDelta \), then \(\lambda (\varvec{\beta })= \varTheta (\lambda )\) and \(1- \lambda (\varvec{\beta }) = \varTheta (1-\lambda )\). Applying Lemma 13 with \(\delta =C/\bigl (\lambda (1-\lambda )\varDelta \bigr )\), where \(C>0\) is sufficiently large, the proof is complete. \(\square \)

6.2 Proof of Theorem 12

Let \(S' = S-(H^{+} \cup H^{-})\) and \(\varvec{t}' \in {\mathbb {N}}^n\) be such that \(\varvec{t}- \varvec{t}'\) is the degree sequence of \(H^+\). Then, by definition,

$$\begin{aligned} {\mathbb {P}}(H^{+} \subseteq S_{\varvec{t}} \text { and } H^{-} \cap S_{\varvec{t}}=\emptyset ) =\frac{N(S', \varvec{t}')}{N(S, \varvec{t})}. \end{aligned}$$

Since \(\varvec{h}\) is an integer vector, we have that

$$\begin{aligned} \Vert \varvec{h}\Vert _1 \leqslant \Vert \varvec{h}\Vert _2^2 \ll \varLambda \varDelta , \qquad \Vert \varvec{h}\Vert _\infty \leqslant \Vert \varvec{h}\Vert _2 \ll (\varLambda \varDelta )^{1/2}. \end{aligned}$$
(29)

Using \(\varvec{\beta }\) as \(\varvec{\beta }^{(0)}\) in Corollary 5, we find a solution \(\varvec{\beta }'\) of system (19) for the graph \(S'\) and the vector \(\varvec{t}'\) such that

$$\begin{aligned} \begin{aligned} \Vert \varvec{\beta }' - \varvec{\beta }\Vert _\infty&\leqslant \Vert \varvec{\beta }' - \varvec{\beta }\Vert _2 = O\Bigl ( \frac{\Vert \varvec{h}\Vert _2}{ \varLambda \varDelta }\Bigr ) = o\bigl ((\varLambda \varDelta )^{-1/2}\bigr ),\\ \Vert \varvec{\beta }' - \varvec{\beta }\Vert _1&= O\Bigl ( \frac{\Vert \varvec{h}\Vert _1}{ \varLambda \varDelta }\Bigr ) = O\Bigl ( \frac{\Vert \varvec{h}\Vert _2^2}{ \varLambda \varDelta }\Bigr ) = o(1). \end{aligned} \end{aligned}$$
(30)

Observe that \({\text {rng}}(\varvec{\beta }') = {\text {rng}}(\varvec{\beta }) + o(1)\) and \(\Vert \varvec{h}\Vert _\infty \ll (\varLambda \varDelta )^{1/2} \ll \frac{\varDelta ^2}{n}\). Therefore, \(S'\) and \(\varvec{t}'\) also satisfy the assumptions of Theorem 11. Applying Theorem 11 twice, we find that

$$\begin{aligned} \frac{N(S', \varvec{t}')}{N(S, \varvec{t})} = \bigl (1+ O(n^{-1/2+\varepsilon })\bigr )\, \frac{|Q|^{1/2} \exp \bigl ( {\mathbb {E}}u'(\varvec{X}') - \tfrac{1}{2} {\mathbb {E}}v'^2(\varvec{X}') \bigr )}{|Q'|^{1/2} \exp \bigl ( {\mathbb {E}}u(\varvec{X}) - \tfrac{1}{2} {\mathbb {E}}v^2(\varvec{X}) \bigr )} \,R, \end{aligned}$$

where Q, u, v and \(Q'\), \(u'\), \(v'\) are matrices of (17) and polynomials of (18) for S, \(\varvec{t}\) and \(S'\), \(\varvec{t}'\), respectively, \(\varvec{X}\) and \(\varvec{X}'\) are the corresponding normally distributed vectors and

$$\begin{aligned} R := \frac{\prod _{jk \in S'} (1+ e^{\beta _j'+\beta _k'})}{ \prod _{jk \in S} (1+ e^{\beta _j+\beta _k})} \; \prod _{j=1}^n \,e^{t_j \beta _j - t_j' \beta _j'}. \end{aligned}$$

Lemma 14

Under the assumptions of Theorem 12, we have

$$\begin{aligned} R = \left( 1 + O\Bigl (\frac{\Vert \varvec{h}\Vert _2^2}{ \varLambda \varDelta }\Bigr )\right) \prod _{jk \in H^+} \lambda _{jk} \prod _{jk \in H^-} (1-\lambda _{jk}). \end{aligned}$$

Proof

Let \((\lambda _{jk})\) and \((\lambda _{jk}')\) be defined as in (14) for S, \(\varvec{t}\) and \(S'\), \(\varvec{t}'\), respectively. From (30), we have

$$\begin{aligned} \lambda _{jk}'(1-\lambda _{jk}') = \bigl (1+ O\bigl (\beta _j+\beta _k - \beta _j' -\beta _k'\bigr ) \bigr )\,\lambda _{jk} (1-\lambda _{jk}) =O(\varLambda ). \end{aligned}$$
(31)

Applying Taylor’s theorem to \(\log (1 + e^{x})\) and using the symmetry \(\lambda _{jk}'=\lambda _{kj}'\), we obtain that

$$\begin{aligned} \sum _{jk \in S'} \log \Bigl ( \frac{1+ e^{\beta _j+\beta _k}}{ 1+ e^{\beta _j'+\beta _k'}}\Bigr )&= \sum _{jk \in S'} \Bigl ( \lambda _{jk}' (\beta _j+ \beta _k- \beta _j' - \beta _k') \\&\quad + O\bigl ( \varLambda ( |\beta _j- \beta _j'|^2 + |\beta _k- \beta _k'|^2) \bigr ) \Bigr )\\&= \sum _{j=1}^n \sum _{k \mathrel {:}jk \in S'} \Bigl ( \lambda _{jk}' (\beta _j - \beta _j') + O\bigl ( \varLambda ( |\beta _j- \beta _j'|^2) \bigr ) \Bigr )\\&= O\bigl (\varLambda \varDelta \Vert \varvec{\beta }' - \varvec{\beta }\Vert _2^2 \bigr ) + \sum _{j =1}^n t_j' (\beta _j - \beta _j'). \end{aligned}$$

Then, using (30) again, we get that

$$\begin{aligned} R&= \bigl (1 + O(\varLambda \varDelta \Vert \varvec{\beta }' - \varvec{\beta }\Vert _2^2 )\bigr ) \frac{\prod _{j =1}^n e^{(t_j- t_j') \beta _j}}{\prod _{jk \in S- S'} (1+ e^{\beta _j+\beta _k})}\\&= \left( 1 + O\Bigl (\frac{\Vert \varvec{h}\Vert _2^2}{ \varLambda \varDelta }\Bigr )\right) \prod _{jk \in H^+} \lambda _{jk} \prod _{jk \in H^-} (1-\lambda _{jk}). \end{aligned}$$

\(\square \)

To complete the proof of Theorem 12, it remains to show that

$$\begin{aligned} \log \left( \frac{ |Q|}{|Q'|}\right) + |{\mathbb {E}}u(\varvec{X})- {\mathbb {E}}u'(\varvec{X}')| + |{\mathbb {E}}v^2(\varvec{X}) - {\mathbb {E}}v'^2(\varvec{X}')| = O\Bigl ( n^{-1/2+\varepsilon } + \frac{\Vert \varvec{h}\Vert _2^2}{ \varLambda \varDelta }\Bigr ). \end{aligned}$$
(32)

Let \(Q = (q_{jk})\) and \(Q' = (q_{jk}')\). Using (31), we find that

$$\begin{aligned} q_{jk}' - q_{jk} = {\left\{ \begin{array}{ll} O(\varLambda ) \Bigl ( h_j + \sum _{k: jk \in S'} |\beta _j +\beta _k - \beta _j'-\beta _k'| \Bigr ) , &{} \text {if } j=k;\\ O(\varLambda )|\beta _j +\beta _k - \beta _j'-\beta _k'| ,&{} \text {if } jk \in S'; \\ O(\varLambda ), &{}\text {if } jk \in S- S';\\ 0, &{}\text {otherwise.} \end{array}\right. } \end{aligned}$$
(33)

Lemma 15

Under the assumptions of Theorem 12, we have

$$\begin{aligned} \log \left( \frac{ |Q|}{|Q'|}\right) = O\Bigl ( \frac{\Vert \varvec{h}\Vert _2^2}{ \varLambda \varDelta }\Bigr ). \end{aligned}$$

Proof

If U and V are symmetric positive definite matrices of equal size, then the matrix UV has positive real eigenvalues. To see this, note that UV is similar to the matrix \(U^{-1/2}(UV)U^{1/2}=(V^{1/2}U^{1/2})^{\textrm{T}}\hspace{-1.111pt}\, (V^{1/2}U^{1/2})\), which is symmetric positive definite. In particular, \(Q^{-1}Q'\) and \((Q')^{-1}Q\) have positive real eigenvalues, since Q and \(Q'\) are symmetric positive definite (see Lemma 8). Therefore, using \(\log x\leqslant x-1\), we can bound

$$\begin{aligned} \frac{|Q'|}{|Q|} \leqslant e^{{\text {tr}}(Q^{-1}Q') - n} = e^{{\text {tr}}(Q^{-1}(Q' - Q))}, \quad \frac{|Q|}{|Q'|} \leqslant e^{{\text {tr}}((Q')^{-1}Q) - n} = e^{{\text {tr}}((Q')^{-1}(Q - Q'))}. \end{aligned}$$

Using (30), (33) and the bounds of Lemma 8(b), we get that

$$\begin{aligned} {\text {tr}}(Q^{-1}(Q' - Q))&= O(\varDelta ^{-1}) \sum _{j=1}^n\, \biggl (h_j + \sum _{k: jk \in S'} |\beta _j +\beta _k - \beta _j'-\beta _k'| \biggr ) \\&= O(\varDelta ^{-1}) \bigl (\Vert \varvec{h}\Vert _1 + \varDelta \Vert \varvec{\beta }'-\varvec{\beta }\Vert _1\bigr ) = O\Bigl (\frac{\Vert \varvec{h}\Vert _2^2}{ \varLambda \varDelta }\Bigr ). \end{aligned}$$

The same argument applies to \({\text {tr}}((Q')^{-1}(Q - Q'))\) and thus \(\log \frac{|Q|}{|Q'|} = O\Bigl (\frac{\Vert \varvec{h}\Vert _2^2}{ \varLambda \varDelta }\Bigr )\). \(\square \)
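The linear-algebra fact used at the start of this proof (a product of two symmetric positive definite matrices has positive real eigenvalues) can be illustrated numerically; a tiny sketch with randomly generated matrices:

```python
# Tiny numerical illustration (a sketch): eigenvalues of a product of two
# symmetric positive definite matrices are real and positive.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); U = A @ A.T + 4 * np.eye(4)
B = rng.standard_normal((4, 4)); V = B @ B.T + 4 * np.eye(4)
eig = np.linalg.eigvals(U @ V)
assert np.all(eig.real > 0) and np.allclose(eig.imag, 0)
print(np.sort(eig.real))
```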

To show the remaining part of (32), we will use the following estimate

$$\begin{aligned} |{\mathbb {E}}u(\varvec{X})- {\mathbb {E}}u'(\varvec{X}')| \leqslant |{\mathbb {E}}u(\varvec{X})- {\mathbb {E}}u(\varvec{X}')| + |{\mathbb {E}}u(\varvec{X}')- {\mathbb {E}}u'(\varvec{X}')| \end{aligned}$$

and similarly for \( |{\mathbb {E}}v^2(\varvec{X}) - {\mathbb {E}}v'^2(\varvec{X}')|\).

Lemma 16

Under the assumptions of Theorem 12, we have

$$\begin{aligned} |{\mathbb {E}}(u(\varvec{X}') - u'(\varvec{X}'))| + |{\mathbb {E}}v^2(\varvec{X}') - {\mathbb {E}}v'^2(\varvec{X}')| =O(n^{-1/2+\varepsilon }). \end{aligned}$$

Proof

Repeating the arguments of Lemma 9 (see (25) and (26)) and using (29), (30), (31), we derive that

$$\begin{aligned} {\mathbb {E}}(u(\varvec{X}') - u'(\varvec{X}'))&= O\biggl ( \varLambda \Vert \varvec{\beta }'-\varvec{\beta }\Vert _\infty \sum _{jk \in S} \frac{1}{ (\varLambda \varDelta )^2}+ \varLambda \sum _{jk \in S- S'} \frac{1}{(\varLambda \varDelta )^2}\biggr )\\&= O\Bigl ( n \frac{(\varLambda \varDelta )^{-1/2}}{\varLambda \varDelta }+ \frac{\Vert \varvec{h}\Vert _1}{\varLambda \varDelta ^2} \Bigr )= O(n^{-1/2+\varepsilon }) \end{aligned}$$

and

$$\begin{aligned} {\mathbb {E}}\bigl ((v(\varvec{X}') - v'(\varvec{X}'))^2\bigr ) = O\Bigl ( \frac{n}{\varLambda \varDelta } \Vert \varvec{\beta }'-\varvec{\beta }\Vert _\infty ^2 + \varLambda ^2 \frac{\Vert \varvec{h}\Vert _1^2}{ (\varLambda \varDelta )^3}\Bigr ) = O\Bigl (\frac{n}{(\varLambda \varDelta )^2}\Bigr ). \end{aligned}$$

Observe also that (using the arguments of (26))

$$\begin{aligned} {\mathbb {E}}\bigl ((v(\varvec{X}') + v'(\varvec{X}'))^2\bigr ) \leqslant 2 {\mathbb {E}}v^2(\varvec{X}') + 2 {\mathbb {E}}v'^2(\varvec{X}') = O\Bigl (\frac{n}{\varLambda \varDelta }\Bigr ). \end{aligned}$$

Applying the Cauchy–Schwarz inequality, we find that

$$\begin{aligned} {\mathbb {E}}v^2(\varvec{X}') - {\mathbb {E}}v'^2(\varvec{X}')&= {\mathbb {E}}(v(\varvec{X}') -v'(\varvec{X}')) (v(\varvec{X}')+ v'(\varvec{X}')) \\ {}&= O\Bigl (\sqrt{\frac{n}{(\varLambda \varDelta )^2 } \cdot \frac{n}{\varLambda \varDelta }}\,\,\Bigr ) = O(n^{-1/2+\varepsilon }). \end{aligned}$$

\(\square \)

We complete the proof of (32) and of Theorem 12 with the following lemma.

Lemma 17

Under the assumptions of Theorem 12, we have

$$\begin{aligned} |{\mathbb {E}}u(\varvec{X})- {\mathbb {E}}u(\varvec{X}')| + |{\mathbb {E}}v^2(\varvec{X}) - {\mathbb {E}}v^2(\varvec{X}')| = O(n^{-1/2+\varepsilon }). \end{aligned}$$

Proof

First, we need to establish a few more bounds on the difference of the covariance matrices of \(\varvec{X}\) and \(\varvec{X}'\). From (29), (30) and (33), we get that

$$\begin{aligned} q_{jj}' - q_{jj} = O(\varLambda ) \left( \Vert \varvec{h}\Vert _\infty + \varDelta \Vert \varvec{\beta }-\varvec{\beta }'\Vert _\infty \right) = O(\Vert \varvec{h}\Vert _2) = O((\varLambda \varDelta )^{1/2}) \end{aligned}$$
(34)

and, for \(jk \in S'\),

$$\begin{aligned} q_{jk}' - q_{jk} = O(\varLambda \Vert \varvec{\beta }-\varvec{\beta }'\Vert _\infty ) = O\Bigl (\frac{\Vert \varvec{h}\Vert _2}{\varDelta }\Bigr ) = O\bigl ((\varLambda /\varDelta )^{1/2}\bigr ). \end{aligned}$$
(35)

Let \(Q^{-1}= (\sigma _{jk})\) and \((Q')^{-1}= (\sigma _{jk}')\). Observe that

$$\begin{aligned} Q^{-1} - (Q')^{-1}= Q^{-1} (Q'- Q)(Q')^{-1}. \end{aligned}$$

Then, using (29), (33), (34), (35), and the bounds of Lemma 8(b) for Q and \(Q'\), we obtain that

$$\begin{aligned} \sigma _{jj}' - \sigma _{jj}&= O\biggl (\frac{|q_{jj}'-{q_{jj}|}}{ \varLambda ^2\varDelta ^2} + \sum _{k=1}^n \frac{|q_{kk}'-{q_{kk}|}}{ \varLambda ^2\varDelta ^4} + \sum _{k\ell \in S} \frac{|q_{k\ell }'-{q_{k\ell }|}}{\varLambda ^2 \varDelta ^3} \biggr ) \\ {}&=O\biggl (\frac{|q_{jj}'-{q_{jj}|}}{ \varLambda ^2\varDelta ^2} + \sum _{k=1}^n \frac{|q_{kk}'-{q_{kk}|}}{ \varLambda ^2\varDelta ^4} + \sum _{k\ell \in S'} \frac{|q_{k\ell }'-{q_{k\ell }|}}{\varLambda ^2 \varDelta ^3} + \frac{\Vert \varvec{h}\Vert _1 \varLambda }{\varLambda ^2 \varDelta ^3} \biggr ) \\ {}&= O\Bigl ( \frac{(\varLambda \varDelta )^{1/2}}{\varLambda ^2\varDelta ^{2}} + \frac{n(\varLambda \varDelta )^{1/2}}{\varLambda ^2\varDelta ^4} + \frac{ n \varDelta (\varLambda /\varDelta )^{1/2}}{\varLambda ^2\varDelta ^3} + \frac{\Vert \varvec{h}\Vert _1 \varLambda }{\varLambda ^2 \varDelta ^3} \Bigr ) \\&= O(n^{-3/2 + \varepsilon }). \end{aligned}$$

Similarly, for \(jk\in S'\) or \(jk \notin S\), we have

$$\begin{aligned} \sigma _{jk}' - \sigma _{jk}= O\biggl (\frac{|q_{jk}'-{q_{jk}|}}{ \varLambda ^2\varDelta ^2} + \sum _{\ell =1}^n \, \Bigl ( \frac{|q_{j\ell }'-q_{j\ell }|+|q_{k\ell }'-q_{k\ell }|}{\varLambda ^2 \varDelta ^3} + \frac{|q_{\ell \ell }'-q_{\ell \ell }|}{ \varLambda ^2\varDelta ^4}\Bigr ) + \sum _{\ell m \in S } \frac{|q_{\ell m}'-{q_{\ell m}|}}{\varLambda ^2 \varDelta ^4} \biggr ). \end{aligned}$$

Next, using (33) and (35), we estimate

$$\begin{aligned} \sum _{\ell =1}^n \, |q_{j\ell }'-q_{j\ell }| = \sum _{\ell \mathrel {:}j\ell \in S-S'} O(\varLambda ) + \sum _{\ell \mathrel {:}j\ell \in S'} O((\varLambda /\varDelta )^{1/2}) = O\bigl (\Vert \varvec{h}\Vert _\infty \varLambda + \varDelta (\varLambda /\varDelta )^{1/2}\bigr ). \end{aligned}$$

The same bound holds for \( \sum _{\ell =1}^n \, |q_{k\ell }'-q_{k\ell }| \). By (33), (34) and (35), we also have that

$$\begin{aligned} |q_{jk}'-q_{jk}|&= O((\varLambda /\varDelta )^{1/2}),\\ \sum _{\ell m \in S } |q_{\ell m}'-q_{\ell m}|&=\sum _{\ell m \in S' } |q_{\ell m}'-q_{\ell m}| + O(\Vert \varvec{h}\Vert _1 \varLambda ) = O\bigl ( \varDelta n (\varLambda /\varDelta )^{1/2} + \Vert \varvec{h}\Vert _1 \varLambda \bigr ). \end{aligned}$$

Combining the above and using (29), (34), (35) we get, for any \(jk \notin S-S'\),

$$\begin{aligned} \sigma _{jk}' - \sigma _{jk}= O\Bigl (\frac{(\varLambda /\varDelta )^{1/2}}{ \varLambda ^2 \varDelta ^2} + \frac{ \varDelta (\varLambda /\varDelta )^{1/2}}{\varLambda ^2\varDelta ^3} + \frac{ n (\varLambda \varDelta )^{1/2}}{\varLambda ^2\varDelta ^4} +\frac{\Vert \varvec{h}\Vert _\infty \varLambda }{\varLambda ^2 \varDelta ^4} + \frac{\Vert \varvec{h}\Vert _1 \varLambda }{\varLambda ^2 \varDelta ^4} \Bigr ) =O(n^{-5/2 + \varepsilon }). \end{aligned}$$

For random vectors \(\varvec{X}\) and \(\varvec{X}'\), define \((\sigma _{jk,\ell m})\) and \((\sigma _{jk,\ell m}')\) as in (23). From the above and Lemma 8(b), we obtain that

$$\begin{aligned} \sigma _{jk,\ell m} - \sigma _{jk,\ell m}' = {\left\{ \begin{array}{ll} O(n^{-3/2 + \varepsilon }),&{} \text {if } \{j,k\} \cap \{\ell , m\} \ne \emptyset ;\\ O(n^{-5/2 + \varepsilon }), &{} \text {if } \{j,k\} \cap \{\ell , m\} = \emptyset \\ &{} \text { and } \{j\ell , jm, k\ell , km \} \cap (S- S') = \emptyset . \end{array}\right. } \end{aligned}$$

Now, using the arguments of (25) and (26), we get that

$$\begin{aligned} {\mathbb {E}}u(\varvec{X})- {\mathbb {E}}u(\varvec{X}') = O\biggl (\varLambda \sum _{jk\in S} |\sigma _{jk, jk}' - \sigma _{jk,jk}| \cdot |\sigma _{jk,jk}' + \sigma _{jk,jk}| \biggr ) = O(n^{-1/2 + \varepsilon }). \end{aligned}$$

Note that, if real numbers \(x,y,z,x',y',z'\) admit bounds \(|x|,|x'| \leqslant a\), \(|y|,|y'| \leqslant b\) and \(|z|,|z'| \leqslant c\) for some positive \(a\), \(b\), \(c\), then

$$\begin{aligned} |xyz - x'y'z'| \leqslant \left( \frac{|x-x'|}{a} + \frac{|y-y'|}{b} + \frac{|z-z'|}{c}\right) abc. \end{aligned}$$

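This follows from the telescoping decomposition

$$\begin{aligned} xyz - x'y'z' = (x-x')yz + x'(y-y')z + x'y'(z-z'), \end{aligned}$$

after bounding the two undifferenced factors in each term by the corresponding pair among \(a\), \(b\), \(c\).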
Thus, using (24), (26) and (29) for \((\sigma _{jk,\ell m})\) and \((\sigma _{jk,\ell m}')\), we find that

$$\begin{aligned}&{\mathbb {E}}v^2 (\varvec{X}) - {\mathbb {E}}v^2(\varvec{X}') \\&\quad = O(\varLambda ^2) \sum _{jk\in S}\, \sum _{\ell m \in S} \Bigl ( \bigl |\sigma _{jk,jk} \sigma _{\ell m, \ell m} \sigma _{jk, \ell m} -\sigma _{jk,jk}' \sigma _{\ell m, \ell m}' \sigma _{jk, \ell m}' \bigr | + |\sigma _{jk,\ell m}^3 - (\sigma _{jk,\ell m}')^3 |\Bigr )\\&\quad = O\biggl ( \left( \frac{n^{-3/2 + \varepsilon }}{( \varLambda \varDelta )^{-1}} + \frac{n^{-5/2 + \varepsilon }}{(n \varLambda \varDelta )^{-1}} \right) \frac{n}{\varLambda \varDelta } + \varLambda ^2 \sum _{jk \in S}\, \sum _{\ell m \in U_{jk}} \frac{1}{\varLambda ^3 \varDelta ^4}\biggr )\\&\quad = O\Bigl (n^{-1/2 + \varepsilon } + \frac{ n\varDelta ^2 \Vert \varvec{h}\Vert _\infty }{\varLambda \varDelta ^4} \Bigr ) =O(n^{-1/2 + \varepsilon }), \end{aligned}$$

where \(U_{jk} = \left\{ \ell m \in S \mathrel {:}\{j,k\} \cap \{\ell , m\} = \emptyset \text { and } \{j\ell , jm, k\ell , km\} \cap (S- S') \ne \emptyset \right\} \). This completes the proof. \(\square \)

6.3 Proof of Theorem 10

Since Theorem 10 is trivially true for \(\varvec{h}=\varvec{0}\), we assume from now on that \(\varvec{h}\ne \varvec{0}\).

Let \(\varvec{\beta }^{(0)} = (\beta ^{(0)}, \ldots , \beta ^{(0)})^{\textrm{T}}\hspace{-1.111pt}\), where \(\beta ^{(0)}\) is defined by

$$\begin{aligned} \frac{e^{2\beta ^{(0)}}}{1+ e^{2\beta ^{(0)}}} = \lambda = \frac{t_1 + \cdots + t_n}{s_1 + \cdots + s_n} . \end{aligned}$$

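Equivalently, solving this equation for \(\beta ^{(0)}\) explicitly,

$$\begin{aligned} e^{2\beta ^{(0)}} = \frac{\lambda }{1-\lambda }, \qquad \beta ^{(0)} = \frac{1}{2} \log \frac{\lambda }{1-\lambda }, \end{aligned}$$

which is well defined whenever \(0< \lambda < 1\).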
By assumption (A3), we get that

$$\begin{aligned} \Vert \varvec{r}(\varvec{\beta }^{(0)})\Vert _\infty = \Vert \varvec{t}- \lambda \varvec{s}\Vert _\infty \ll \varLambda \varDelta , \end{aligned}$$

where \(\varvec{r}(\cdot )\) is defined in Corollary 5. Applying that corollary, we find a solution \(\varvec{\beta }\) of system (19) such that

$$\begin{aligned} \Vert \varvec{\beta }-\varvec{\beta }^{(0)}\Vert _\infty = O\Bigl ( \frac{\Vert \varvec{t}-\lambda \varvec{s}\Vert _\infty }{ \varLambda \varDelta }\Bigr ). \end{aligned}$$
(36)

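Note that, by assumption (A3), the right-hand side of (36) is \(o(1)\):

$$\begin{aligned} \Vert \varvec{\beta }-\varvec{\beta }^{(0)}\Vert _\infty = O\Bigl ( \frac{\Vert \varvec{t}-\lambda \varvec{s}\Vert _\infty }{ \varLambda \varDelta }\Bigr ) = o(1), \end{aligned}$$

so each \(\beta _j\) lies within \(o(1)\) of \(\beta ^{(0)}\), which justifies the Taylor expansions below.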
In particular, the assumptions of Theorem 10 and (36) imply that all the assumptions of Theorem 12 hold. By Taylor’s theorem, we have that

$$\begin{aligned} \lambda _{jk}&= \bigl (1 + O( \beta _j -\beta ^{(0)} ) + O( \beta _k -\beta ^{(0)} ) \bigr )\lambda , \\ 1- \lambda _{jk}&= \bigl (1 + O( \beta _j -\beta ^{(0)} ) + O( \beta _k -\beta ^{(0)} ) \bigr )(1-\lambda ). \end{aligned}$$

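A minimal justification of this expansion, assuming the logistic form \(\lambda _{jk}= e^{\beta _j+\beta _k}/(1+e^{\beta _j+\beta _k})\) of the edge probabilities suggested by the definition of \(\beta ^{(0)}\): writing \(g(x) = e^x/(1+e^x)\), so that \(\lambda _{jk} = g(\beta _j+\beta _k)\) and \(\lambda = g(2\beta ^{(0)})\), the mean value theorem gives

$$\begin{aligned} \log \frac{\lambda _{jk}}{\lambda } = O( \beta _j -\beta ^{(0)} ) + O( \beta _k -\beta ^{(0)} ), \end{aligned}$$

since \((\log g)'(x) = 1 - g(x) \in (0,1)\); the expansion of \(1-\lambda _{jk}\) follows in the same way, using \(|(\log (1-g))'(x)| = g(x) < 1\).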
Then, we get that

$$\begin{aligned} \prod _{jk \in H^+} \lambda _{jk} \prod _{jk \in H^-} (1-\lambda _{jk}) = \lambda ^{m(H^+)} (1-\lambda )^{m(H^-)} \exp \biggl (\, \sum _{j\in [n]} O( (\beta _j - \beta ^{(0)})h_j ) \biggr ). \end{aligned}$$

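The exponential factor collects the per-edge errors by vertex: each edge \(jk\) contributes \(1 + O( \beta _j -\beta ^{(0)} ) + O( \beta _k -\beta ^{(0)} ) = e^{O( \beta _j -\beta ^{(0)} ) + O( \beta _k -\beta ^{(0)} )}\), and (with \(h_j\) counting, as the notation suggests, the edges of \(H^+\cup H^-\) incident to \(j\))

$$\begin{aligned} \sum _{jk \in H^+ \cup H^-} \bigl ( O( \beta _j -\beta ^{(0)} ) + O( \beta _k -\beta ^{(0)} ) \bigr ) = \sum _{j\in [n]} O\bigl ( (\beta _j - \beta ^{(0)})h_j \bigr ). \end{aligned}$$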
Using (36), we find that

$$\begin{aligned} \sum _{j\in [n]}\, \bigl |(\beta _j - \beta ^{(0)})h_j\bigr | \leqslant \Vert \varvec{\beta }-\varvec{\beta }^{(0)}\Vert _\infty \, \Vert \varvec{h}\Vert _1 = O\Bigl ( \frac{\Vert \varvec{t}-\lambda \varvec{s}\Vert _\infty \, \Vert \varvec{h}\Vert _1}{ \varLambda \varDelta }\Bigr ). \end{aligned}$$

Thus, applying Theorem 12 gives the required probability bound. \(\square \)