Abstract
The Rado graph, also known as the random graph G(∞, p), is a classical limit object for finite graphs. We study natural ball walks as a way of understanding the geometry of this graph. For the walk started at i, we show that order \(\log _2^*i\) steps are sufficient, and for infinitely many i, necessary for convergence to stationarity. The proof involves an application of Hardy’s inequality for trees.
Dedicated to our friend and coauthor Harold Widom.
References
A. Bendikov, L. Saloff-Coste, Random walks on some countable groups, in Groups, Graphs and Random Walks (2011), pp. 77–103
I. Benjamini, O. Schramm, Every graph with a positive Cheeger constant contains a tree with a positive Cheeger constant. Geom. Funct. Anal. 7(3), 403–419 (1997)
N. Berestycki, E. Lubetzky, Y. Peres, A. Sly, Random walks on the random graph. Ann. Probab. 46(1), 456–490 (2018)
C. Bordenave, P. Caputo, D. Chafaï, Spectrum of large random reversible Markov chains: heavy-tailed weights on the complete graph. Ann. Probab. 39(4), 1544–1590 (2011)
C. Bordenave, P. Caputo, J. Salez, Random walk on sparse random digraphs. Probab. Theory Related Fields 170(3–4), 933–960 (2018)
P.J. Cameron, The random graph, in The Mathematics of Paul Erdős II (1997), pp. 333–351
P.J. Cameron, The random graph revisited, in European Congress of Mathematics (Birkhäuser, Basel, 2001), pp. 267–274
P. Diaconis, M. Malliaris, Complexity and randomness in the Heisenberg groups (and beyond) (2021). arXiv:2107.02923v2
P. Diaconis, P.M. Wood, Random doubly stochastic tridiagonal matrices. Random Struct. Algor. 42(4), 403–437 (2013)
W.D. Evans, D.J. Harris, L. Pick, Weighted Hardy and Poincaré inequalities on trees. J. Lond. Math. Soc. (2) 52(1), 121–136 (1995)
N. Fountoulakis, B.A. Reed, The evolution of the mixing rate of a simple random walk on the giant component of a random graph. Random Struct. Algor. 33(1), 68–86 (2008)
A. Georgakopoulos, J. Haslegrave, Percolation on an infinitely generated group. Combin. Probab. Comput. 29(4), 587–615 (2020)
G.H. Hardy, J.E. Littlewood, G. Pólya, Inequalities, 2nd edn. (Cambridge University Press, Cambridge, 1952)
M. Hildebrand, A survey of results on random random walks on finite groups. Probab. Surv. 2, 33–63 (2005)
D. Hunter, An upper bound for the probability of a union. J. Appl. Probab. 13, 597–603 (1976)
J. Jorgenson, S. Lang, The ubiquitous heat kernel, in Mathematics Unlimited—2001 and Beyond (Springer, Berlin, Heidelberg, 2001), pp. 655–683
C.A.J. Klaassen, J.A. Wellner, Hardy’s inequality and its descendants: a probability approach. Electron. J. Probab. 26, 1–34 (2021)
E.G. Kounias, Bounds for the probability of a union, with applications. Ann. Math. Stat. 39, 2154–2158 (1968)
D.A. Levin, Y. Peres, Markov Chains and Mixing Times (American Mathematical Society, Providence, 2017)
S. Martineau, Locally infinite graphs and symmetries. Grad. J. Math. 2(2), 42–50 (2017)
L. Miclo, Relations entre isopérimétrie et trou spectral pour les chaînes de Markov finies. Probab. Theory Related Fields 114(4), 431–485 (1999)
A. Nachmias, Y. Peres, Critical random graphs: diameter and mixing time. Ann. Probab. 36(4), 1267–1286 (2008)
L. Saloff-Coste, Lectures on finite Markov chains, in Lectures on Probability Theory and Statistics (Saint-Flour, 1996). Lecture Notes in Mathematics, vol. 1665 (Springer, Berlin, 1997), pp. 301–413
Wikipedia contributors, Pairwise independence, in Wikipedia, The Free Encyclopedia (2021). https://en.wikipedia.org/wiki/Pairwise_independence. Accessed 18 September 2021
W. Woess, Random Walks on Infinite Graphs and Groups (Cambridge University Press, Cambridge, 2000)
K.J. Worsley, An improved Bonferroni inequality and applications. Biometrika 69, 297–302 (1982)
Acknowledgements
We thank Peter Cameron, Maryanthe Malliaris, Sebastien Martineau, Yuval Peres, and Laurent Saloff-Coste for their help. S. C.'s research was partially supported by NSF grants DMS-1855484 and DMS-2113242. P. D.'s research was partially supported by NSF grant DMS-1954042. L. M. acknowledges funding from ANR grant ANR-17-EUR-0010.
Appendices
Appendix 1: Dirichlet–Cheeger Inequalities
We begin by proving the Dirichlet–Cheeger inequality that we have been using in the previous sections. It is a direct extension (even a simplification) of the proof of the Cheeger inequality given in Saloff-Coste [23]. We end this appendix by proving that the Dirichlet–Cheeger constant of an absorbed Markov chain cannot, in general, be linearly compared with the largest Dirichlet–Cheeger constant induced on a spanning subtree.
Let us work in continuous time. Consider a sub-Markovian generator L on a finite set V . Namely, \(L=(L(x,y))_{x,y\in V}\) is a matrix whose off-diagonal entries are non-negative and whose row sums are non-positive. Assume that L is irreducible and reversible with respect to a probability π on V .
Let λ(L) be the smallest eigenvalue of − L (often called the Dirichlet eigenvalue). The variational formula for eigenvalues shows that
$$\displaystyle \lambda(L)\;=\;\min_{f\in\mathbb{R}^V\setminus\{0\}}\frac{\langle f,-Lf\rangle_\pi}{\langle f,f\rangle_\pi}. $$(30)
The Dirichlet–Cheeger constant ι(L) is defined similarly, except that only indicator functions of non-empty sets A ⊂ V are considered in the minimum:
$$\displaystyle \iota(L)\;=\;\min_{\emptyset\neq A\subset V}\frac{\langle \mathbb{1}_A,-L\mathbb{1}_A\rangle_\pi}{\pi(A)}. $$(31)
Here is the Dirichlet–Cheeger inequality:
Theorem 5
Assuming L ≠ 0, we have
where .
When L is Markovian, the above inequalities are trivial and reduce to ι(L) = λ(L) = 0. Indeed, it is sufficient to consider \(f=\mathbb{1}\) and A = V respectively in the r.h.s. of (30) and (31). Thus there is no harm in supposing furthermore that L is strictly sub-Markovian: at least one of the row sums is negative. To bring this situation back to a Markovian setting, it is usual to extend V into \(\overline V := V\sqcup\{0\}\), where 0∉V is a new point. Then one introduces the extended Markov generator \(\overline L\) on \(\overline V\) via
Note that the point 0 is absorbing for the Markov processes associated to \(\overline L\).
It is convenient to give another expression for ι(L). Consider \(\overline E\), the set of (unoriented) edges of \(\overline V\). We define a measure μ on \(\overline E\):
(Note that the reversibility assumption was used to ensure that the first line is well-defined.) Extend any \(f\in \mathbb {R}^V\) into the function \(\overline f\) on \(\overline V\) by making it vanish at 0 and define
With these definitions we can check that
These notations enable us to see (31) as an \(L^1\) version of (30):
Proposition 8
We have
Proof
Restricting the minimum in the r.h.s. to indicator functions, we recover the r.h.s. of (31). It is thus sufficient to show that for any given \(f\in \mathbb {R}^V\setminus \{0\}\),
Note that \(\vert d\overline f \vert (e)\geqslant \vert d \vert \overline f\vert \vert (e)\) for any \( e\in \overline E\), so without loss of generality, we can assume \(f\geqslant 0\). For any \(t\geqslant 0\), consider the set \(F_t\) and its indicator function given by
Note that
so that by integration,
Furthermore, we have
where for any A ⊂ V , we define
Note that for any such A, we have , so that
showing (32). □
Proof (Of Theorem 5)
Given \(g\in \mathbb {R}^V\), let \(f = g^2\). By Proposition 8, we compute
Thus, we have
which gives the desired lower bound for λ(L). The upper bound is immediate. □
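As a quick illustration of the upper bound \(\lambda(L)\leqslant\iota(L)\) (the constant in the lower bound is left aside here), both quantities can be computed by brute force on a toy generator. The matrix below is our own example, not one from the paper; it is symmetric, hence reversible with respect to the uniform probability.

```python
# Sanity check (toy example, not from the paper): for a strictly sub-Markovian
# generator L reversible w.r.t. uniform pi, the Dirichlet eigenvalue lambda(L)
# is at most the Dirichlet-Cheeger constant iota(L), since indicator functions
# are admissible test functions in the variational formula.
from itertools import chain, combinations

import numpy as np

# Symmetric, strictly sub-Markovian: off-diagonal entries non-negative,
# row sums non-positive, at least one row sum negative.
L = np.array([[-1.5, 1.0, 0.0],
              [ 1.0, -2.0, 1.0],
              [ 0.0, 1.0, -1.2]])
n = L.shape[0]
pi = np.full(n, 1.0 / n)

# lambda(L): smallest eigenvalue of -L (symmetric case, uniform pi).
lam = np.linalg.eigvalsh(-L).min()

# iota(L) = min over non-empty A of mu(boundary of A)/pi(A), where
# mu({x,y}) = pi(x) L(x,y) and mu({x,0}) = -pi(x) * (row sum of L at x).
kill = -L.sum(axis=1)          # rates toward the absorbing point 0
def boundary_ratio(A):
    Ac = [x for x in range(n) if x not in A]
    m = sum(pi[x] * L[x, y] for x in A for y in Ac)  # edges leaving A inside V
    m += sum(pi[x] * kill[x] for x in A)             # edges from A to 0
    return m / pi[list(A)].sum()

subsets = chain.from_iterable(combinations(range(n), k) for k in range(1, n + 1))
iota = min(boundary_ratio(A) for A in subsets)

print(lam, iota)  # expect 0 < lambda(L) <= iota(L)
```

Here the minimizing set is A = V, whose boundary consists only of the killing edges toward 0.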
The unoriented graph associated to L is \(\overline G := (\overline V, \overline E)\). Consider \(\mathbb {T}\), the set of all subtrees of \(\overline G\), and for any \(T\in \mathbb {T}\), consider the sub-Markovian generator \(L_T\) on V associated to T via
where x, y ∈ V and \(\overline E(T)\) is the set of (unoriented) edges of T.
Note that \(L_T\) is also reversible with respect to π (it is irreducible if and only if 0 belongs to a unique edge of \(\overline E(T)\)). Denote by \(\mu_T\) the corresponding measure on \(\overline E\). It is clear that \(\mu _T\leqslant \mu \), so we get \( \iota (L_T)\leqslant \iota (L)\). In the spirit of Benjamini and Schramm [2], we may wonder if, conversely, ι(L) could be bounded above in terms of \(\max _{T\in \mathbb {T}}\iota (L_T)\). A linear comparison is not possible:
Proposition 9
There does not exist a universal constant χ > 0 such that for any L as above, \( \chi \iota (L)\leqslant \max _{T\in \mathbb {T}}\iota (L_T)\).
Proof
Let us construct a family \((L^{(n)})_{n\in \mathbb {N}_+}\) of sub-Markovian generators such that
$$\displaystyle \lim_{n\rightarrow\infty} \frac{\max_{T\in\mathbb{T}^{(n)}}\iota\big(L^{(n)}_T\big)}{\iota\big(L^{(n)}\big)} \;=\; 0. $$(33)
For any \(n\in \mathbb {N}_+\), the state space \(V^{(n)}\) of \(L^{(n)}\) is the disjoint union \(V^{(n)}=V^{(n)}_0\sqcup V^{(n)}_1\) of two sets of cardinality n (more generally, all notions associated to \(L^{(n)}\) will be marked by the exponent (n)). We take
where \(x, y \in V^{(n)}\), and 𝜖 > 0, which will depend on n, is such that n𝜖 < 1∕2.
Recall that 0 is the cemetery point added to \(V^{(n)}\); we have
Note that \(\pi^{(n)}\) is the uniform probability on \(V^{(n)}\). Let us show that
$$\displaystyle \iota\big(L^{(n)}\big) \;=\; n\epsilon. $$(34)
Consider any \(\emptyset \neq A \subset V^{(n)}\), and decompose \(A = A_0 \sqcup A_1\), with \(A_0 := A\cap V^{(n)}_0\) and \(A_1 := A\cap V^{(n)}_1\). Denote \(a_0 := |A_0|\) and \(a_1 := |A_1|\). We have that ∂A is given by
and thus \( \mu ^{(n)}(\partial A)=\frac 1{2n}(\epsilon (a_0(n-a_1)+a_1(n-a_0))+a_0)\). It follows that
Taking into account that \(1 - 2\epsilon a_1 > 0\), the r.h.s. is minimized with respect to \(a_0\) at \(a_0 = 0\), and we then get (independently of \(a_1\)) \(\mu^{(n)}(\partial A)/\pi^{(n)}(A) = n\epsilon\). We deduce (34).
Consider any \(T\in \mathbb {T}^{(n)}\) and let us check that
$$\displaystyle \iota\big(L^{(n)}_T\big)\;\leqslant\;\epsilon. $$(35)
Observe that there exists \(x\in V^{(n)}_1\) such that there is a unique \(y\in V^{(n)}_0\) with {x, y} an edge of T. Indeed, orient the edges of T toward the root 0. Then from any vertex \(x\in V_1^{(n)}\) there is a unique exiting edge (though there may be several incoming edges). Necessarily, there is a vertex in \(V^{(n)}_0\) whose exiting edge points to 0, so at most n − 1 vertices of \(V^{(n)}_0\) have their exiting edge pointing toward \(V^{(n)}_1\). In particular, at least one vertex of \(V^{(n)}_1\) is not pointed to by a vertex of \(V^{(n)}_0\). We take x to be such a vertex of \(V^{(n)}_1\), and \(y\in V^{(n)}_0\) to be the vertex pointed to by the oriented edge exiting from x.
Considering the singleton {x}, we get
implying (35) (a little more work would prove that equality holds there). As a consequence, we see that \( \max _{T\in \mathbb {T}^{(n)}}\iota (L^{(n)}_T)\leqslant \epsilon \). Taking any 𝜖 fulfilling the condition n𝜖 < 1∕2, we obtain \( \frac {\max _{T\in \mathbb {T}^{(n)}}\iota (L^{(n)}_T)}{\iota (L^{(n)})}\leqslant \frac 1{n}\), and (33) follows. □
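The value \(\iota(L^{(n)}) = n\epsilon\) in (34) can be double-checked by brute force on a small instance. The sketch below assumes the construction just described: two blocks \(V^{(n)}_0, V^{(n)}_1\) of size n, jump rate 𝜖 across every bipartite pair, unit killing rate on \(V^{(n)}_0\), and uniform \(\pi^{(n)}\).

```python
# Brute-force check (under the assumed construction above) that the
# Dirichlet-Cheeger constant of L^(n) equals n * eps, using the boundary
# formula mu(dA) = (1/2n)(eps(a0(n-a1) + a1(n-a0)) + a0) from the proof.
from itertools import chain, combinations

n, eps = 3, 0.1            # n * eps = 0.3 < 1/2, as required
V0 = range(n)              # the block carrying the unit killing rate
V1 = range(n, 2 * n)       # the other block
pi = 1.0 / (2 * n)         # uniform probability on the 2n states

def mu_boundary(A):
    A = set(A)
    a0 = len(A & set(V0)); a1 = len(A & set(V1))
    # eps-edges crossing the boundary, plus killing edges from A cap V0 to 0
    return pi * (eps * (a0 * (n - a1) + a1 * (n - a0)) + a0)

states = list(V0) + list(V1)
subsets = chain.from_iterable(
    combinations(states, k) for k in range(1, 2 * n + 1))
iota = min(mu_boundary(A) / (pi * len(A)) for A in subsets)
print(iota)  # expect n * eps = 0.3
```

The minimum is attained on sets contained in \(V^{(n)}_1\), in line with the optimization over \(a_0\) in the proof.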
Appendix 2: Hardy’s Inequalities
Our goal here is to extend the validity of Hardy's inequalities on finite trees to general denumerable trees, without any assumption of local finiteness. We begin by recalling Hardy's inequalities on finite trees. Consider \(\mathcal {T}=(\overline V, \overline E,0)\) a finite tree rooted at 0, whose vertex and (undirected) edge sets are \(\overline V\) and \(\overline E\). Denote \(V:=\overline V\setminus \{0\}\); for each x ∈ V , the parent p(x) of x is the neighbor of x in the direction of 0. The other neighbors of x are called the children of x and their set is written C(x). For x = 0, by convention C(0) is the set of neighbors of 0. Let μ, ν be two given positive measures on V . Consider c(μ, ν), the best constant \(c\geqslant 0\) in the inequality
$$\displaystyle \forall\, f\in\mathbb{R}^V,\qquad \sum_{x\in V}\mu(x)f(x)^2\;\leqslant\; c\sum_{x\in V}\nu(x)\big(f(x)-f(p(x))\big)^2, $$(36)
where f is extended to \(\overline V\) via f(0) := 0.
According to [21] (see also Evans, Harris and Pick [10]), c(μ, ν) can be estimated up to a factor 16 via Hardy's inequalities for trees; see (39) below. To describe them we need some notation.
Let \(\mathbb {T}\) be the set of subsets T of V satisfying the following conditions:
- T is non-empty and connected (in \(\mathcal {T}\)),
- T does not contain 0,
- if x ∈ T has a child in T, then all children of x belong to T.
Note that any \(T\in \mathbb {T}\) admits a closest element to 0; call it m(T). We have m(T) ≠ 0. When T is not reduced to the singleton {m(T)}, the connected components of T ∖{m(T)} are indexed by the children of m(T), namely C(m(T)). For y ∈ C(m(T)), denote by \(T_y\) the connected component of T ∖{m(T)} containing y. Note that \(T_{y}\in \mathbb {T}\).
We extend ν as a functional on \(\mathbb {T}\), via the iteration
- when T is the singleton {m(T)}, we take \(\nu(T) := \nu(m(T))\),
- when T is not a singleton, decompose T as \(\{m(T)\}\sqcup \bigsqcup _{y\in C(m(T))}T_y\); then ν satisfies
$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \frac 1{\nu(T)}& =&\displaystyle \frac 1{\nu(m(T))}+\frac 1{\sum_{y\in C(m(T))} \nu(T_{y})}.\end{array} \end{aligned} $$(37)
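Computationally, (37) is a harmonic-sum recursion. Here is a minimal sketch (the dictionary encoding of the tree is ours), applied to the full subtrees consisting of a vertex together with all its descendants, which satisfy the child-completeness condition above.

```python
# Sketch of the iteration (37): nu extended to subtrees via
#   1/nu(T) = 1/nu(m(T)) + 1/sum_y nu(T_y),
# evaluated here on the subtree of x together with all of its descendants.
def nu_subtree(x, children, nu):
    """nu of the subtree rooted at x (x plus all descendants), via (37)."""
    kids = children.get(x, [])
    if not kids:                       # a leaf: the subtree is a singleton
        return nu[x]
    below = sum(nu_subtree(y, children, nu) for y in kids)
    return 1.0 / (1.0 / nu[x] + 1.0 / below)

# Toy tree: a has children b and c, with nu = 1 on every vertex.
children = {"a": ["b", "c"]}
nu = {"a": 1.0, "b": 1.0, "c": 1.0}
print(nu_subtree("a", children, nu))  # 1/(1/1 + 1/2) = 2/3
```

The recursion combines the mass at m(T) with the total mass of the branches above it as resistances in series, which is why singleton leaves serve as the base case.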
For x ∈ V , let \(S_x\) be the set of vertices y ∈ V whose path to 0 passes through x. To any \(T\in \mathbb {T}\) we associate the subset
$$\displaystyle T^* \;:=\; \bigsqcup_{x\in L(T)} S_x, $$
where L(T) is the set of leaves of T, namely the x ∈ T having no children in T. Equivalently, T ∗ is the set of all descendants of the leaves of T, themselves included.
Consider \(\mathbb {S}\subset \mathbb {T}\), the set of \(T\in \mathbb {T}\) such that m(T) is a child of 0. Finally, define
$$\displaystyle b(\mu,\nu)\;:=\;\max_{T\in\mathbb{S}}\frac{\mu(T^*)}{\nu(T)}. $$(38)
We are interested in this quantity because of the Hardy inequality:
$$\displaystyle b(\mu,\nu)\;\leqslant\; c(\mu,\nu)\;\leqslant\; 16\, b(\mu,\nu). $$(39)
Our goal here is to extend this inequality to the situation where V is denumerable and where μ and ν are two positive measures on V , with ∑x ∈ V μ(x) < +∞.
Remark 3
Without loss of generality, we can assume that 0 has only one child, because what happens on different \(S_x\) and \(S_y\), where both x and y are children of 0, can be treated separately.
More precisely, while V is now (denumerably) infinite, we first assume that the height of \(\mathcal {T}\) is finite (implying that \(\mathcal {T}\) cannot be locally finite). Recall that the height h(x) of a vertex \(x\in \overline V\) is the smallest number of edges linking x to 0. The assumption that \(\sup _{x\in \overline V} h(x)<+\infty \) has the advantage that the iteration (37) enables us to compute ν on \(\mathbb {T}\), starting from the highest vertices of an element of \(\mathbb {T}\). Then b(μ, ν) is defined exactly as in (38), except that the maximum has to be replaced by a supremum. Extend c(μ, ν) as the minimal constant \(c\geqslant 0\) such that (36) is satisfied, with the possibility that c(μ, ν) = +∞ when there is no such c. Note that in (36), the space \(\mathbb {R}^V\) can be replaced by \(\mathcal {B}(V)\), the space of bounded mappings on V :
Lemma 10
We have
Proof
Denote by \(\widetilde c(\mu ,\nu )\) the above r.h.s. A priori we have \(c(\mu ,\nu )\geqslant \widetilde c(\mu ,\nu )\). To prove the reverse bound, consider any \(f\in \mathbb {R}^V\) and, for M > 0, let \(f_M := (f\wedge M)\vee(-M)\). Note that
(This is a general property of Dirichlet forms and comes from the 1-Lipschitz character of the mapping \(\mathbb {R}\ni r\mapsto (r\wedge M)\vee (-M)\).) Since \(f_M\in \mathcal {B}(V)\), we have
Letting M go to infinity, we get in the limit, by monotone convergence,
Since this is true for all \(f\in \mathbb {R}^V\), we deduce that \(c(\mu ,\nu )\leqslant \widetilde c(\mu ,\nu )\). □
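The contraction step in this proof lends itself to a quick numerical illustration (the snippet is ours, only checking the 1-Lipschitz property invoked above, from which the comparison of Dirichlet forms follows term by term).

```python
# Random check that the truncation r -> (r ∧ M) ∨ (−M) is 1-Lipschitz, hence
# can only decrease each squared increment (f(x) − f(y))^2 in the Dirichlet form.
import random

def trunc(r, M):
    return max(-M, min(M, r))

random.seed(0)
ok = all(
    (trunc(x, M) - trunc(y, M)) ** 2 <= (x - y) ** 2
    for _ in range(1000)
    for x, y, M in [(
        random.uniform(-10, 10), random.uniform(-10, 10), random.uniform(0, 5)
    )]
)
print(ok)  # True
```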
Consider \((x_n)_{n\in \mathbb {N}}\) an exhaustive sequence of \(\overline V\), with \(x_0 = 0\) and such that for any \(n\in \mathbb {N}_+\), \(\overline V_n := \{x_0, x_1, \ldots, x_n\}\) is connected. We denote by \(\mathcal {T}_n\) the tree rooted at 0 induced by \(\mathcal {T}\) on \(\overline V_n\) and, as above, \(V_n := \overline V_n\setminus \{0\}\). For any \(n\in \mathbb {N}_+\) and \(x \in V_n\), introduce the set
In words, this is the set of elements of V whose path to 0 first enters V n at x.
From now on, we assume that 0 has only one child, taking into account Remark 3. It follows that
Let \(\mu_n\) and \(\nu_n\) be the measures defined on \(V_n\) via
The advantage of \(\mu_n\) and \(\nu_n\) is that they bring us back to the finite situation while enabling us to approximate c(μ, ν):
Proposition 10
We have \(\lim_{n\rightarrow\infty} c(\mu_n, \nu_n) = c(\mu,\nu)\).
Proof
We first check that the limit exists. For \(n\in \mathbb {N}_+\), consider the sigma-field \(\mathcal {F}_n\) generated by the partition (40). To each \(\mathcal {F}_n\)-measurable function f, associate the function \(f_n\) defined on \(V_n\) by
This function determines f, since for any \(x \in V_n\) and any \(y \in R_n(x)\), \(f(y) = f_n(x)\). Furthermore, we have:
It follows that
where \(\mathcal {B}(\mathcal {F}_n)\) is the set of \(\mathcal {F}_n\)-measurable functions, which are necessarily bounded, i.e., belong to \(\mathcal {B}(V)\). Since \(\mathcal {F}_n\subset \mathcal {F}_{n+1}\) for any \(n\in \mathbb {N}_+\), the sequence \((c(\mu _n,\nu _n))_{n\in \mathbb {N}_+}\) is non-decreasing and, taking into account Lemma 10,
To get the reverse bound, first assume that c(μ, ν) < +∞. For given 𝜖 > 0, find a function \(f\in \mathcal {B}(V)\) with
Consider π, the normalization of μ into a probability measure, and let \(f_n\) be the conditional expectation of f with respect to π and the sigma-field \(\mathcal {F}_n\). Note that the \(f_n\) are uniformly bounded by \(\left \Vert f\right \Vert { }_{\infty }\). Thus, by the bounded martingale convergence theorem, and since π gives positive weight to every point of V , we have
From Fatou’s lemma, we deduce
By another application of the bounded martingale convergence theorem, we get
so that
It follows that \( \lim _{n\rightarrow \infty } c(\mu _n,\nu _n)\geqslant c(\mu ,\nu )-\epsilon \), and since 𝜖 > 0 can be chosen arbitrarily small,
It remains to deal with the case where c(μ, ν) = +∞. Then for any M > 0, we can find a function \(f\in \mathcal {B}(V)\) with
By the above arguments, we end up with \( \lim _{n\rightarrow \infty } c(\mu _n,\nu _n)\geqslant M\), and since M can be arbitrarily large, \(\lim_{n\rightarrow\infty} c(\mu_n, \nu_n) = +\infty = c(\mu,\nu)\). □
Our next goal is to show that the same result holds for b(μ, ν). We need some additional notation. The integer \(n\in \mathbb {N}_+\) being fixed, denote by \(\mathbb {T}_n\) and \(\mathbb {S}_n\) the sets \(\mathbb {T}\) and \(\mathbb {S}\) associated to \(\mathcal {T}_n\). The functional \(\nu_n\) is extended to \(\mathbb {T}_n\) via the iteration (37), understood in \(\mathcal {T}_n\). To any \(T\in \mathbb {T}_n\), associate \(T_n\), the minimal element of \(\mathbb {T}\) containing T. It is obtained in the following way: for any x ∈ T, if x has a child in T, then add all the children of x in V ; otherwise do not add any other points.
Lemma 11
We have the comparisons \(\nu_n(T)\geqslant \nu(T_n)\) and \(\mu_n(T^*)\leqslant \mu(T_n^*)\),
where T ∗ is understood in \(\mathcal {T}_n\) (and \(T_n^*\) in \(\mathcal {T}\) ).
Proof
The first bound is proven by induction on the height of \(T\in \mathbb {T}_n\).
- If this height is zero, then T is a singleton and \(T_n\) is the same singleton, so that \(\nu_n(T) = \nu(T_n)\).
- If the height h(T) of T is at least 1, decompose
$$\displaystyle \begin{aligned} \begin{array}{rcl} T& =&\displaystyle \{m_n(T)\}\sqcup \bigsqcup_{y\in C_n(m_n(T))}T_{n,y}\end{array} \end{aligned} $$where m n(⋅), C n(⋅) and T n,⋅ are the notions corresponding to m(⋅), C(⋅) and T ⋅ in \(\mathcal {T}_n\).
Note that T and \(T_n\) have the same height, and decompose
$$\displaystyle \begin{aligned} \begin{array}{rcl} T_n& =&\displaystyle \{m( T_n)\}\sqcup \bigsqcup_{z\in C(m( T_n))} T_{n,z}.\end{array} \end{aligned} $$
On the one hand, we have \(m(T_n) = m_n(T)\) and \(C_n(m_n(T)) \subset C(m_n(T))\); on the other hand, for any \(y \in C_n(m_n(T))\), \( \nu _n(T_y)\geqslant \nu ( (T_y)_n) =\nu ( T_{n,y})\), due to the induction hypothesis and to the fact that the common height of \(T_y\) and \((T_y)_n\) is at most h(T) − 1. The equality \((T_y)_n = T_{n,y}\) holds because \(T_{n,y}\) is obtained by the same completion of \(T_y\) as the one presented for T just above the statement of Lemma 11, and thus coincides with \((T_y)_n\). It follows that
$$\displaystyle \begin{aligned} \begin{array}{rcl} \frac 1{\nu_n(T)}& =&\displaystyle \frac 1{\nu_n(m_n(T))}+\frac 1{\sum_{y\in C_n(m_n(T))} \nu_n(T_{y})}\\& =&\displaystyle \frac 1{\nu(m( T_n))}+\frac 1{\sum_{y\in C_n(m_n(T))} \nu_n(T_{y})} \leqslant \frac 1{\nu(m( T_n))} + \frac 1{\sum_{y\in C_n(m_n(T))} \nu(T_{n,y})}\\ & \leqslant &\displaystyle \frac 1{\nu(m( T_n))}+\frac 1{\sum_{y\in C(m( T_n))} \nu(T_{n,y})} =\frac 1{\nu( T_n)}, \end{array} \end{aligned} $$
establishing the wanted bound \(\nu _n(T)\geqslant \nu ( T_n)\). We now come to the second bound of the lemma. By definition, we have
$$\displaystyle \begin{aligned} \begin{array}{rcl} T^*& =&\displaystyle \bigsqcup_{x\in L_n(T)}S_{n,x},\end{array} \end{aligned} $$
where \(L_n(T)\) is the set of leaves of T in \(\mathcal {T}_n\) and \(S_{n,x}\) is the subtree rooted at x in \(\mathcal {T}_n\).
Note that L n(T) ⊂ L(T n) and by definition of μ n, we have
It follows that
□
Let \(\widetilde {\mathbb {S}}_n\) be the image of \(\mathbb {S}_n\) under the mapping \(\mathbb {S}_n\ni T\mapsto T_n\in \mathbb {S}\). Since \(\mathbb {S}_n\ni T\mapsto T_n\in \widetilde {\mathbb {S}}_n\) is a bijection, we get from Lemma 11,
so that
Let us show more precisely:
Proposition 11
We have \(\lim_{n\rightarrow\infty} b(\mu_n, \nu_n) = b(\mu,\nu)\).
Proof
According to (41), it remains to show that
Consider \(T\in \mathbb {S}\) such that the ratio μ(T ∗)∕ν(T) serves to approximate b(μ, ν), namely up to an arbitrarily small 𝜖 > 0 if b(μ, ν) < +∞, or is an arbitrarily large quantity if b(μ, ν) = +∞. Define
Arguing as at the end of the proof of Proposition 10, we will deduce (42) from
where (T (n))∗ is understood in \(\mathcal {T}_n\). This convergence will be the consequence of
For (43), note that
and as we have seen at the end of the proof of Lemma 11,
Thus (43) follows by dominated convergence (since μ(V ) < +∞), from
To show the latter convergences, consider two cases:
- If x ∈ L(T), then we will have \(x \in L_n(T^{(n)})\) as soon as \(x \in V_n\).
- If x ∈ T ∖ L(T), then we will have \(x\notin L_n(T^{(n)})\) as soon as \(V_n\) contains one of the children of x in T.
We now come to (44); more generally, let us prove by induction over their height that, for any \(\widetilde T\in \mathbb {T}\) with \(\widetilde T\subset T\), we have
i.e., the limit is non-decreasing. Indeed, if \(\widetilde T\) has height 0, it is a singleton {x}, and we have \(\nu_n(\widetilde T)=\nu(\widetilde T)\) as soon as x belongs to \(V_n\), ensuring (45).
Assume that \(\widetilde T\) has height \(h\geqslant 1\) and that (45) holds for any \(\widetilde T\) whose height is at most h − 1. Write as usual
Assume that n is large enough so that and in particular \(m(\widetilde T)\in V_n\) and . Thus we also have
On the one hand, the set \(C_n(m(\widetilde T))\) is non-decreasing and its limit is \(C(m(\widetilde T))\), and on the other hand, due to the induction hypothesis, we have for any \(y\in C(m(\widetilde T))\),
By monotone convergence, we get
which leads to (45), via (46) and (47). This ends the proof of (42). □
The conjunction of Propositions 10 and 11 leads to the validity of (39), when V is denumerable with \(\mathcal {T}\) of finite height.
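Both sides of the Hardy estimate can be computed exactly on the smallest non-trivial finite tree. The sketch below is our own toy check; it assumes (36) reads \(\sum_x \mu(x)f(x)^2 \leqslant c \sum_x \nu(x)(f(x)-f(p(x)))^2\) with f(0) = 0, and that (39) reads \(b \leqslant c \leqslant 16\,b\), consistent with the factor 16 mentioned above.

```python
# Toy verification of the Hardy estimate on the tree: root 0 with one child a,
# and a with two leaf children b, c; mu = nu = 1 on V = {a, b, c}.
import numpy as np

mu = nu = {"a": 1.0, "b": 1.0, "c": 1.0}

# c(mu, nu): largest generalized eigenvalue of (D_mu, A), where A is the
# quadratic form of the nu-weighted parent differences. Coordinate order: a, b, c.
A = np.array([[nu["a"] + nu["b"] + nu["c"], -nu["b"], -nu["c"]],
              [-nu["b"], nu["b"], 0.0],
              [-nu["c"], 0.0, nu["c"]]])
D_mu = np.diag([mu["a"], mu["b"], mu["c"]])
c = max(np.linalg.eigvals(np.linalg.solve(A, D_mu)).real)

# b(mu, nu) = max over T in S of mu(T*)/nu(T). Here S = {{a}, {a,b,c}}:
#  - T = {a}:      nu(T) = nu(a),                          T* = S_a = {a,b,c}
#  - T = {a,b,c}:  1/nu(T) = 1/nu(a) + 1/(nu(b)+nu(c)),    T* = {b,c}
nu_full = 1.0 / (1.0 / nu["a"] + 1.0 / (nu["b"] + nu["c"]))  # via (37)
b = max((mu["a"] + mu["b"] + mu["c"]) / nu["a"],
        (mu["b"] + mu["c"]) / nu_full)

print(b, c)  # b = 3, c = 2 + sqrt(3) ~ 3.732, so b <= c <= 16 b
```

On this example c = 2 + √3, so the lower bound b ≤ c is nearly sharp while the factor 16 is far from saturated.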
Let us now remove the assumption of finite height. The arguments are very similar to the previous ones, except that the definition of b(μ, ν) has to be modified (μ and ν are still positive measures on V , with μ of finite total mass). More precisely, for any \(M\in \mathbb {N}_+\), consider \(V_M := \{x\in V\,:\, h(x)\leqslant M\}\). Define on \(V_M\) the measure \(\nu_M\) as the restriction of ν to \(V_M\), and \(\mu_M\) via
By definition, we take \(b(\mu,\nu) := \lim_{M\rightarrow\infty} b(\mu_M,\nu_M)\).
This limit exists and the convergence is monotone, since we have, for any \( M\in \mathbb {N}_+\), \(b(\mu _M,\nu _M)=\max _{T\in \mathbb {S}_M} \frac {\mu (T^*)}{\nu (T)}\), where \(\mathbb{S}_M\) is the set \(\mathbb{S}\) associated to the finite tree induced on \(\{0\}\sqcup V_M\). Note that a direct definition of b(μ, ν) via the iteration (37) is not possible: we could not start from leaves that are singletons.
By definition, c(μ, ν) is the best constant in (36). It also satisfies \(c(\mu,\nu)=\lim_{M\rightarrow\infty} c(\mu_M,\nu_M)\), as can be seen by adapting the proof of Proposition 10. We conclude that (39) holds by passing to the limit in the inequalities \(b(\mu_M,\nu_M)\leqslant c(\mu_M,\nu_M)\leqslant 16\, b(\mu_M,\nu_M)\).
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Chatterjee, S., Diaconis, P., Miclo, L. (2022). A Random Walk on the Rado Graph. In: Basor, E., Böttcher, A., Ehrhardt, T., Tracy, C.A. (eds) Toeplitz Operators and Random Matrices. Operator Theory: Advances and Applications, vol 289. Birkhäuser, Cham. https://doi.org/10.1007/978-3-031-13851-5_13