
A Random Walk on the Rado Graph

Chapter in Toeplitz Operators and Random Matrices

Part of the book series: Operator Theory: Advances and Applications (OT, volume 289)

Abstract

The Rado graph, also known as the random graph \(G(\infty , p)\), is a classical limit object for finite graphs. We study natural ball walks as a way of understanding the geometry of this graph. For the walk started at i, we show that order \(\log _2^*i\) steps are sufficient, and for infinitely many i, necessary, for convergence to stationarity. The proof involves an application of Hardy’s inequality for trees.

Dedicated to our friend and coauthor Harold Widom.


References

  1. A. Bendikov, L. Saloff-Coste, Random walks on some countable groups, in Groups, Graphs and Random Walks (2011), pp. 77–103

  2. I. Benjamini, O. Schramm, Every graph with a positive Cheeger constant contains a tree with a positive Cheeger constant. Geom. Funct. Anal. 7(3), 403–419 (1997)

  3. N. Berestycki, E. Lubetzky, Y. Peres, A. Sly, Random walks on the random graph. Ann. Probab. 46(1), 456–490 (2018)

  4. C. Bordenave, P. Caputo, D. Chafaï, Spectrum of large random reversible Markov chains: heavy-tailed weights on the complete graph. Ann. Probab. 39(4), 1544–1590 (2011)

  5. C. Bordenave, P. Caputo, J. Salez, Random walk on sparse random digraphs. Probab. Theory Related Fields 170(3–4), 933–960 (2018)

  6. P.J. Cameron, The random graph, in The Mathematics of Paul Erdős II (1997), pp. 333–351

  7. P.J. Cameron, The random graph revisited, in European Congress of Mathematics (Birkhäuser, Basel, 2001), pp. 267–274

  8. P. Diaconis, M. Malliaris, Complexity and randomness in the Heisenberg groups (and beyond) (2021). arXiv:2107.02923v2

  9. P. Diaconis, P.M. Wood, Random doubly stochastic tridiagonal matrices. Random Struct. Algor. 42(4), 403–437 (2013)

  10. W.D. Evans, D.J. Harris, L. Pick, Weighted Hardy and Poincaré inequalities on trees. J. Lond. Math. Soc. (2) 52(1), 121–136 (1995)

  11. N. Fountoulakis, B.A. Reed, The evolution of the mixing rate of a simple random walk on the giant component of a random graph. Random Struct. Algor. 33(1), 68–86 (2008)

  12. A. Georgakopoulos, J. Haslegrave, Percolation on an infinitely generated group. Combin. Probab. Comput. 29(4), 587–615 (2020)

  13. G.H. Hardy, J.E. Littlewood, G. Pólya, Inequalities, 2nd edn. (Cambridge University Press, Cambridge, 1952)

  14. M. Hildebrand, A survey of results on random random walks on finite groups. Probab. Surv. 2, 33–63 (2005)

  15. D. Hunter, An upper bound for the probability of a union. J. Appl. Probab. 13, 597–603 (1976)

  16. J. Jorgenson, S. Lang, The ubiquitous heat kernel, in Mathematics Unlimited—2001 and Beyond (Springer, Berlin, Heidelberg, 2001), pp. 655–683

  17. C.A.J. Klaassen, J.A. Wellner, Hardy’s inequality and its descendants: a probability approach. Electron. J. Probab. 26, 1–34 (2021)

  18. E.G. Kounias, Bounds for the probability of a union, with applications. Ann. Math. Stat. 39, 2154–2158 (1968)

  19. D.A. Levin, Y. Peres, Markov Chains and Mixing Times (American Mathematical Society, Providence, 2017)

  20. S. Martineau, Locally infinite graphs and symmetries. Grad. J. Math. 2(2), 42–50 (2017)

  21. L. Miclo, Relations entre isopérimétrie et trou spectral pour les chaînes de Markov finies. Probab. Theory Related Fields 114(4), 431–485 (1999)

  22. A. Nachmias, Y. Peres, Critical random graphs: diameter and mixing time. Ann. Probab. 36(4), 1267–1286 (2008)

  23. L. Saloff-Coste, Lectures on finite Markov chains, in Lectures on Probability Theory and Statistics (Saint-Flour, 1996). Lecture Notes in Mathematics, vol. 1665 (Springer, Berlin, 1997), pp. 301–413

  24. Wikipedia contributors, Pairwise independence, Wikipedia, The Free Encyclopedia (2021). https://en.wikipedia.org/wiki/Pairwise_independence. Accessed 18 September 2021

  25. W. Woess, Random Walks on Infinite Graphs and Groups (Cambridge University Press, Cambridge, 2000)

  26. K.J. Worsley, An improved Bonferroni inequality and applications. Biometrika 69, 297–302 (1982)


Acknowledgements

We thank Peter Cameron, Maryanthe Malliaris, Sebastien Martineau, Yuval Peres, and Laurent Saloff-Coste for their help. S. C.’s research was partially supported by NSF grants DMS-1855484 and DMS-2113242. P. D.’s research was partially supported by NSF grant DMS-1954042. L. M. acknowledges funding from ANR grant ANR-17-EUR-0010.

Correspondence to Sourav Chatterjee.

Appendices

Appendix 1: Dirichlet–Cheeger Inequalities

We begin by showing the Dirichlet–Cheeger inequality that we have been using in the previous sections. It is a direct extension (even simplification) of the proof of the Cheeger inequality given in Saloff-Coste [23]. We end this appendix by proving that it is in general not possible to linearly compare the Dirichlet–Cheeger constant of an absorbed Markov chain with the largest Dirichlet–Cheeger constant induced on a spanning subtree.

Let us work in continuous time. Consider a sub-Markovian generator L on a finite set V: namely, a matrix \(L=(L(x,y))_{x,y\in V}\) whose off-diagonal entries are non-negative and whose row sums are non-positive. Assume that L is irreducible and reversible with respect to a probability π on V.

Let λ(L) be the smallest eigenvalue of − L (often called the Dirichlet eigenvalue). The variational formula for eigenvalues shows that

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \lambda(L)& =&\displaystyle \min_{f\in \mathbb{R}^V\setminus\{0\}}\frac{-\pi[fL[f]]}{\pi[f^2]}.\end{array} \end{aligned} $$
(30)

The Dirichlet–Cheeger constant ι(L) is defined similarly, except that only indicator functions are considered in the minimum:

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \iota(L)& :=&\displaystyle \min_{\emptyset\neq A\subset V}\frac{-\pi[\mathbb{1}_AL[\mathbb{1}_A]]}{\pi(A)}.\end{array} \end{aligned} $$
(31)

Here is the Dirichlet–Cheeger inequality:

Theorem 5

Assuming L ≠ 0, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{\iota(L)^2}{2\ell(L)}\ \leqslant \ \lambda(L)\ \leqslant \ \iota(L)\end{array} \end{aligned} $$

where \(\ell (L):=\max _{x\in V}\vert L(x,x)\vert \).

When L is Markovian, the above inequalities are trivial and reduce to ι(L) = λ(L) = 0. Indeed, it is sufficient to consider \(f=\mathbb {1}\) and A = V  respectively in the r.h.s. of (30) and (31). Thus there is no harm in supposing furthermore that L is strictly sub-Markovian: at least one of the row sums is negative. To bring this situation back to a Markovian setting, it is usual to extend V  into \(\overline V:=V\sqcup \{0\}\), where 0∉V  is a new point. Then one introduces the extended Markov generator \(\overline L\) on \(\overline V\) via

$$\displaystyle \begin{aligned} \begin{array}{rcl} \forall\ x,y\in\overline V,\qquad \overline L(x,y)& :=&\displaystyle \left\{\begin{array}{ll} L(x,y)&\displaystyle \mbox{if }x,y\in V,\\ -\sum_{z\in V}L(x,z)& \mbox{if }x\in V,\ y=0,\\ 0& \mbox{if }x=0. \end{array}\right.\end{array} \end{aligned} $$

Note that the point 0 is absorbing for the Markov processes associated to \(\overline L\).

It is convenient to give another expression for ι(L). Consider the set of edges \(\overline E:=\{\{x,y\}\,:\, x\neq y\in \overline V,\ \overline L(x,y)>0\mbox{ or }\overline L(y,x)>0\}\). We define a measure μ on \(\overline E\):

$$\displaystyle \begin{aligned} \begin{array}{rcl} \forall\ \{x,y\}\in\overline E,\qquad \mu(\{x,y\})& :=&\displaystyle \left\{\begin{array}{ll} \pi(x)L(x,y)&\displaystyle \mbox{if }x,y\in V,\\ \pi(x)\overline L(x,0)& \mbox{if }y=0. \end{array}\right.\end{array} \end{aligned} $$

(Note that the reversibility assumption was used to ensure that the first line is well-defined.) Extend any \(f\in \mathbb {R}^V\) into the function \(\overline f\) on \(\overline V\) by making it vanish at 0 and define, for any edge \(e=\{x,y\}\in \overline E\), \(\vert d\overline f\vert (e):=\vert \overline f(x)-\overline f(y)\vert \).

With these definitions we can check that

$$\displaystyle \begin{aligned} \begin{array}{rcl} \forall\ f\in\mathbb{R}^V,\qquad -\pi[fL[f]]& =&\displaystyle \sum_{e\in\overline E}\vert d\overline f\vert^2(e) \mu(e).\end{array} \end{aligned} $$

These notations allow us to see (31) as an \(L^1\) version of (30):

Proposition 8

We have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \iota(L)& =&\displaystyle \min_{f\in\mathbb{R}^V\setminus\{0\}}\frac{\sum_{e\in\overline E}\vert d\overline f\vert(e) \mu(e)}{\pi[\vert f\vert]}.\end{array} \end{aligned} $$

Proof

Restricting the minimum in the r.h.s. to indicator functions, we recover the r.h.s. of (31). It is thus sufficient to show that for any given \(f\in \mathbb {R}^V\setminus \{0\}\),

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \frac{\sum_{e\in\overline E}\vert d\overline f\vert(e) \mu(e)}{\pi[\vert f\vert]}& \geqslant &\displaystyle \iota(L).\end{array} \end{aligned} $$
(32)

Note that \(\vert d\overline f \vert (e)\geqslant \vert d \vert \overline f\vert \vert (e)\) for any \( e\in \overline E\), so without loss of generality, we can assume \(f\geqslant 0\). For any \(t\geqslant 0\), consider the set \(F_t:=\{x\in V\,:\, f(x)>t\}\) and its indicator function \(f_t:=\mathbb {1}_{F_t}\).

Note that

$$\displaystyle \begin{aligned} \begin{array}{rcl} \forall\ x\in V,\qquad f(x)& =&\displaystyle \int_0^{+\infty} f_t(x)\, dt,\end{array} \end{aligned} $$

so that by integration,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \pi[f]& =&\displaystyle \int_0^{+\infty}\pi[F_t]\, dt.\end{array} \end{aligned} $$

Furthermore, by the co-area formula, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{e\in\overline E}\vert d\overline f\vert(e) \mu(e)& =&\displaystyle \int_0^{+\infty}\mu(\partial F_t)\, dt,\end{array} \end{aligned} $$

where for any A ⊂ V , we define

$$\displaystyle \begin{aligned} \begin{array}{rcl} \partial A& :=&\displaystyle \{\{x,y\}\in\overline E\,:\, x\in A,\ y\in\overline V\setminus A\}.\end{array} \end{aligned} $$

Note that for any such A, we have \(\mu (\partial A)=-\pi [\mathbb {1}_AL[\mathbb {1}_A]]\geqslant \iota (L)\pi (A)\), so that

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{e\in\overline E}\vert d\overline f\vert(e) \mu(e)& =&\displaystyle -\int_{0}^{+\infty}\pi[f_tL[f_t]]\, dt \geqslant \iota(L) \int_0^{+\infty}\pi[F_t]\, dt =\iota(L) \pi[f],\end{array} \end{aligned} $$

showing (32). □
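Proposition 8 can be illustrated numerically. The sketch below is not from the paper: it builds a small random reversible sub-Markovian generator and assumes the reconstructed edge measure μ({x, y}) = π(x)L(x, y) for x, y ∈ V and μ({x, 0}) = −π(x)∑z L(x, z); it then checks that the ratio of Proposition 8 is minimized by indicator functions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 4
pi = rng.random(n); pi /= pi.sum()                 # reversing probability
S = rng.random((n, n)); S = (S + S.T) / 2          # symmetric conductances: pi_x L(x,y) = S_xy
L = S / pi[:, None]                                # off-diagonal rates, reversible w.r.t. pi
np.fill_diagonal(L, 0.0)
kill = rng.random(n)                               # killing rates: row sums become negative
np.fill_diagonal(L, -(L.sum(axis=1) + kill))

# Assumed edge measure: mu({x,y}) = pi_x L(x,y), mu({x,0}) = pi_x kill_x
def l1_form(f):
    s = sum(pi[x] * L[x, y] * abs(f[x] - f[y])
            for x in range(n) for y in range(x + 1, n))
    return s + sum(pi[x] * kill[x] * abs(f[x]) for x in range(n))

# iota(L): minimum of the ratio over indicator functions, as in (31)
def indicator(A):
    f = np.zeros(n); f[list(A)] = 1.0
    return f

iota = min(l1_form(indicator(A)) / (pi @ indicator(A))
           for r in range(1, n + 1) for A in itertools.combinations(range(n), r))

# Proposition 8: no function does better than the best indicator
min_ratio = min(l1_form(f) / (pi @ np.abs(f))
                for f in rng.standard_normal((1000, n)))
assert min_ratio >= iota - 1e-12
```

The minimum over all functions is attained at the optimal indicator itself, in line with the co-area argument of the proof.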

Proof (Of Theorem 5)

Given \(g\in \mathbb {R}^V\), let f = g 2. By Proposition 8 and the Cauchy–Schwarz inequality, we compute

$$\displaystyle \begin{aligned} \begin{array}{rcl} \iota(L)\pi[g^2]& \leqslant &\displaystyle \sum_{e\in\overline E}\vert d\overline f\vert(e) \mu(e) =\sum_{\{x,y\}\in\overline E}\vert \overline g(x)-\overline g(y)\vert\,\vert \overline g(x)+\overline g(y)\vert\, \mu(\{x,y\})\\ & \leqslant &\displaystyle \sqrt{\sum_{e\in\overline E}\vert d\overline g\vert^2(e) \mu(e)}\,\sqrt{\sum_{\{x,y\}\in\overline E}(\overline g(x)+\overline g(y))^2 \mu(\{x,y\})}\\ & \leqslant &\displaystyle \sqrt{-\pi[gL[g]]}\,\sqrt{2\ell(L)\pi[g^2]}.\end{array} \end{aligned} $$

Thus, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{\iota(L)^2}{2\ell(L)}\pi[g^2]& \leqslant &\displaystyle -\pi[gL[g]],\end{array} \end{aligned} $$

which gives the desired lower bound for λ(L). The upper bound is immediate. □
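The two-sided bound of Theorem 5 can be checked by brute force on a small example. This is only a sketch: λ(L) is computed from the symmetrization of −L in L²(π), ι(L) by enumerating all subsets, and ℓ(L) is taken to be max_x |L(x, x)|, an assumption consistent with the Cauchy–Schwarz step of the proof.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 5
pi = rng.random(n); pi /= pi.sum()
S = rng.random((n, n)); S = (S + S.T) / 2
L = S / pi[:, None]                      # reversible off-diagonal rates
np.fill_diagonal(L, 0.0)
np.fill_diagonal(L, -(L.sum(axis=1) + rng.random(n)))  # strictly negative row sums

# lambda(L): smallest eigenvalue of -L, via the symmetrization D(-L)D^{-1}
# with D = diag(sqrt(pi)), symmetric thanks to reversibility
D = np.sqrt(pi)
lam = np.linalg.eigvalsh((-L) * D[:, None] / D[None, :]).min()

# iota(L): brute force over all nonempty subsets, using -pi[1_A L 1_A] = mu(dA)
def ratio(A):
    f = np.zeros(n); f[list(A)] = 1.0
    return -(pi * f) @ L @ f / (pi @ f)

iota = min(ratio(A) for r in range(1, n + 1)
           for A in itertools.combinations(range(n), r))

ell = np.abs(np.diag(L)).max()           # assumed normalization: max_x |L(x,x)|
assert iota**2 / (2 * ell) - 1e-12 <= lam <= iota + 1e-12
```

The upper bound λ(L) ≤ ι(L) holds simply because the indicator minimum is a restriction of the variational formula (30).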

The unoriented graph associated to L is \(\overline G:=(\overline V,\overline E)\). Consider \(\mathbb {T}\), the set of all subtrees of \(\overline G\), and for any \(T\in \mathbb {T}\), consider the sub-Markovian generator L T on V  associated to T via

$$\displaystyle \begin{aligned} \begin{array}{rcl} L_T(x,y)& :=&\displaystyle \left\{\begin{array}{ll} L(x,y)\,\mathbb{1}_{\{x,y\}\in\overline E(T)}&\displaystyle \mbox{if }x\neq y,\\ -\sum_{z\in\overline V\setminus\{x\}}\overline L(x,z)\,\mathbb{1}_{\{x,z\}\in\overline E(T)}& \mbox{if }x=y, \end{array}\right.\end{array} \end{aligned} $$

where x, y ∈ V  and \(\overline E(T)\) is the set of (unoriented) edges of T.

Note that L T is also reversible with respect to π (it is irreducible if and only if 0 belongs to a unique edge of \(\overline E(T)\)). Denote by μ T the corresponding measure on \(\overline E\). It is clear that \(\mu _T\leqslant \mu \), so we get \( \iota (L_T)\leqslant \iota (L)\). In the spirit of Benjamini and Schramm [2], we may wonder whether, conversely, ι(L) could be bounded above in terms of \(\max _{T\in \mathbb {T}}\iota (L_T)\). A linear comparison is not possible:

Proposition 9

There does not exist a universal constant χ > 0 such that for any L as above, \( \chi \iota (L)\leqslant \max _{T\in \mathbb {T}}\iota (L_T)\).

Proof

Let us construct a family \((L^{(n)})_{n\in \mathbb {N}_+}\) of sub-Markovian generators such that

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \lim_{n\rightarrow\infty}\frac{\max_{T\in\mathbb{T}}\iota(L^{(n)}_T)}{\iota(L^{(n)})}& =&\displaystyle 0\end{array} \end{aligned} $$
(33)

For any \(n\in \mathbb {N}_+\), the state space of L (n) is \(V^{(n)}:=V^{(n)}_0\sqcup V^{(n)}_1\), where \(V^{(n)}_0\) and \(V^{(n)}_1\) are two disjoint sets of cardinality n (more generally, all notions associated to L (n) will be marked by the exponent (n)). We take

$$\displaystyle \begin{aligned} \begin{array}{rcl} L^{(n)}(x,y)& :=&\displaystyle \left\{\begin{array}{ll} \epsilon&\displaystyle \mbox{if }x\neq y\mbox{ do not belong to the same }V^{(n)}_i,\\ 0& \mbox{if }x\neq y\mbox{ belong to the same }V^{(n)}_i,\\ -(n\epsilon+1)&\mbox{if }x= y\in V^{(n)}_0,\\ -n\epsilon& \mbox{if }x= y\in V^{(n)}_1, \end{array}\right.\end{array} \end{aligned} $$

where x, y ∈ V (n), and 𝜖 > 0, which will depend on n, is such that n𝜖 < 1∕2.

Recalling that 0 is the cemetery point added to V (n), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \forall\ x\in V^{(n)},\qquad \overline L^{(n)}(x,0)& =&\displaystyle \left\{\begin{array}{ll} 1&\displaystyle \mbox{if }x\in V^{(n)}_0,\\ 0& \mbox{if }x\in V^{(n)}_1. \end{array}\right.\end{array} \end{aligned} $$

Note that π (n) is the uniform probability on V (n). Let us show that

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \iota(L^{(n)})& =&\displaystyle n\epsilon.\end{array} \end{aligned} $$
(34)

Consider any ∅ ≠ A ⊂ V (n), and decompose A = A 0 ⊔ A 1, with \(A_0:=A\cap V^{(n)}_0\) and \(A_1:=A\cap V^{(n)}_1\). Denote \(a_0:=\vert A_0\vert \) and \(a_1:=\vert A_1\vert \). We have that ∂A is given by

$$\displaystyle \begin{aligned} \begin{array}{rcl} \{\{x,y\}: x\in A_0,\, y\in V^{(n)}_1\setminus A_1\}\sqcup\{\{x,y\}: x\in V^{(n)}_0\setminus A_0,\, y\in A_1\}\sqcup\{\{x,0\}: x\in A_0\},\end{array} \end{aligned} $$

and thus \( \mu ^{(n)}(\partial A)=\frac 1{2n}(\epsilon (a_0(n-a_1)+a_1(n-a_0))+a_0)\). It follows that

$$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{\mu^{(n)}(\partial A)}{\pi^{(n)}(A)}& =&\displaystyle n\epsilon +\frac{a_0(1-2\epsilon a_1)}{a_0+a_1}.\end{array} \end{aligned} $$

Taking into account that 1 − 2𝜖a 1 > 0, the r.h.s. is minimized with respect to a 0 when a 0 = 0, and we then get (independently of a 1) that μ (n)(∂A)∕π (n)(A) = n𝜖. We deduce (34).

Consider any \(T\in \mathbb {T}^{(n)}\) and let us check that

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \iota(L^{(n)}_{T})& \leqslant &\displaystyle \epsilon.\end{array} \end{aligned} $$
(35)

Observe that there exists \(x\in V^{(n)}_1\) such that there is a unique \(y\in V^{(n)}_0\) with {x, y} an edge of T. Indeed, put on the edges of T the orientation toward the root 0. Then from any vertex \(x\in V_1^{(n)}\) there is a unique exiting edge (though there may be several incoming edges). Necessarily, there is a vertex in \(V^{(n)}_0\) whose exiting edge points to 0. So there are at most n − 1 vertices of \(V^{(n)}_0\) whose exiting edge points toward \(V^{(n)}_1\). In particular, at least one vertex of \(V^{(n)}_1\) is not the target of the exiting edge of any vertex of \(V^{(n)}_0\). We take x to be such a vertex of \(V^{(n)}_1\), and \(y\in V^{(n)}_0\) to be the vertex pointed to by the oriented edge exiting from x.

Considering the singleton {x}, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mu^{(n)}_T(\partial \{x\})=\mu^{(n)}_T(\{x,y\})\ =\ \frac{\epsilon}{2n}& \mbox{ and }&\displaystyle \pi^{(n)}(x)=\frac 1{2n},\end{array} \end{aligned} $$

implying (35) (a little more work would prove that equality holds there). As a consequence, we see that \( \max _{T\in \mathbb {T}^{(n)}}\iota (L^{(n)}_T)\leqslant \epsilon \). Taking any 𝜖 ∈ (0, 1∕(2n)), which fulfills the condition n𝜖 < 1∕2, we obtain \( \frac {\max _{T\in \mathbb {T}^{(n)}}\iota (L^{(n)}_T)}{\iota (L^{(n)})}\leqslant \frac 1{n}\), and (33) follows. □

Appendix 2: Hardy’s Inequalities

Our goal here is to extend the validity of Hardy’s inequalities on finite trees to general denumerable trees, without the assumption of local finiteness. We begin by recalling Hardy’s inequalities on finite trees. Consider \(\mathcal {T}=(\overline V, \overline E,0)\) a finite tree rooted at 0, whose vertex and (undirected) edge sets are \(\overline V\) and \(\overline E\). Denote \(V:=\overline V\setminus \{0\}\); for each x ∈ V , the parent p(x) of x is the neighbor of x in the direction of 0. The other neighbors of x are called the children of x and their set is written C(x). For x = 0, by convention C(0) is the set of neighbors of 0. Let two positive measures μ, ν be given on V . Consider c(μ, ν), the best constant \(c\geqslant 0\) in the inequality

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \forall\ f\in \mathbb{R}^V,\qquad \mu[f^2]& \leqslant &\displaystyle c\sum_{x\in V} (f(p(x))-f(x))^2\nu(x)\end{array} \end{aligned} $$
(36)

where f is extended to 0 via \(f(0):=0\).

According to [21] (see also Evans, Harris and Pick [10]), c(μ, ν) can be estimated up to a factor 16 via Hardy’s inequalities for trees, see (39) below. To describe them we need some notation.

Let \(\mathbb {T}\) be the set of subsets T of V  satisfying the following conditions:

  • T is non-empty and connected (in \(\mathcal {T}\)),

  • T does not contain 0,

  • if x ∈ T has a child in T, then all children of x belong to T.

Note that any \(T\in \mathbb {T}\) admits a closest element to 0, call it m(T); we have m(T) ≠ 0. When T is not reduced to the singleton {m(T)}, the connected components of T ∖{m(T)} are indexed by the children of m(T), namely the elements of C(m(T)). For y ∈ C(m(T)), denote by T y the connected component of T ∖{m(T)} containing y. Note that \(T_{y}\in \mathbb {T}\).

We extend ν as a functional on \(\mathbb {T}\), via the iteration

  • when T is the singleton {m(T)}, we take \(\nu (T):=\nu (m(T))\),

  • when T is not a singleton, decompose T as \(\{m(T)\}\sqcup \bigsqcup _{y\in C(m(T))}T_{y}\); then ν satisfies

    $$\displaystyle \begin{aligned} \begin{array}{rcl} {} \frac 1{\nu(T)}& =&\displaystyle \frac 1{\nu(m(T))}+\frac 1{\sum_{y\in C(m(T))} \nu(T_{y})}.\end{array} \end{aligned} $$
    (37)

For x ∈ V , let S x be the set of vertices y ∈ V  whose path to 0 passes through x. To any \(T\in \mathbb {T}\) we associate the subset

$$\displaystyle \begin{aligned} \begin{array}{rcl} T^*& :=&\displaystyle \bigsqcup_{x\in L(T)}S_x,\end{array} \end{aligned} $$

where L(T) is the set of leaves of T, namely the x ∈ T having no children in T. Equivalently, T ∗ is the set of all descendants of the leaves of T, themselves included.

Consider \(\mathbb {S}\subset \mathbb {T}\), the set of \(T\in \mathbb {T}\) which are such that m(T) is a child of 0. Finally, define

(38)

We are interested in this quantity because of the Hardy inequality:

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} b(\mu,\nu)\ \leqslant \ {c}(\mu,\nu)\ \leqslant\ 16\,b(\mu,\nu).\end{array} \end{aligned} $$
(39)
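The two-sided estimate (39) can be explored on a toy finite tree. In the sketch below the tree, μ and ν are arbitrary test data; ν is extended by the recursion (37), b(μ, ν) is computed as a maximum over the family 𝕊 (assuming the reconstruction of (38) above), and c(μ, ν) is obtained as the top generalized eigenvalue of μ against the quadratic form on the r.h.s. of (36).

```python
import itertools
import numpy as np

parent = {1: 0, 2: 1, 3: 1, 4: 3, 5: 3}           # toy tree rooted at 0
V = sorted(parent)
children = {x: [y for y in V if parent[y] == x] for x in [0] + V}
rng = np.random.default_rng(3)
mu = dict(zip(V, rng.random(len(V)) + 0.1))
nu = dict(zip(V, rng.random(len(V)) + 0.1))

def S_x(x):                                       # S_x: x together with its descendants
    out = {x}
    for y in children[x]:
        out |= S_x(y)
    return out

def nu_T(T):                                      # the recursion (37)
    m = next(x for x in T if parent[x] not in T)  # m(T): unique vertex with parent outside T
    comps = [T & S_x(y) for y in children[m] if y in T]
    if not comps:
        return nu[m]
    return 1.0 / (1.0 / nu[m] + 1.0 / sum(nu_T(c) for c in comps))

def in_S(T):  # connected, child-saturated, not containing 0, with m(T) a child of 0
    if 1 not in T or len([x for x in T if parent[x] not in T]) != 1:
        return False
    return all(set(children[x]) <= T or not (set(children[x]) & T) for x in T)

def T_star(T):                                    # union of S_x over the leaves of T
    leaves = [x for x in T if not (set(children[x]) & T)]
    return set().union(*(S_x(x) for x in leaves))

family = [set(c) for r in range(1, len(V) + 1)
          for c in itertools.combinations(V, r) if in_S(set(c))]
b = max(sum(mu[z] for z in T_star(T)) / nu_T(T) for T in family)

# c(mu, nu): maximize mu[f^2] / sum_x nu(x)(f(p(x)) - f(x))^2, with f(0) = 0
idx = {x: i for i, x in enumerate(V)}
M = np.diag([mu[x] for x in V])
N = np.zeros((len(V), len(V)))
for x in V:
    i = idx[x]
    N[i, i] += nu[x]
    if parent[x] != 0:
        j = idx[parent[x]]
        N[j, j] += nu[x]; N[i, j] -= nu[x]; N[j, i] -= nu[x]
C = np.linalg.inv(np.linalg.cholesky(N))
c = np.linalg.eigvalsh(C @ M @ C.T).max()
assert b <= c + 1e-9 <= 16 * b + 1e-9             # the sandwich (39)
```

Here the root 0 has a single child, so Remark 3 applies and the maximum over 𝕊 suffices.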

Our goal here is to extend this inequality to the situation where V  is denumerable and where μ and ν are two positive measures on V , with \(\sum _{x\in V}\mu (x)<+\infty \).

Remark 3

Without loss of generality, we can assume that 0 has only one child, because what happens on different S x and S y, where both x and y are children of 0, can be treated separately.

More precisely, while V  is now (denumerably) infinite, we first assume that the height of \(\mathcal {T}\) is finite (implying that \(\mathcal {T}\) cannot be locally finite). Recall that the height h(x) of a vertex \(x\in \overline V\) is the smallest number of edges linking x to 0. The assumption that \(\sup _{x\in \overline V} h(x)<+\infty \) has the advantage that the iteration (37) enables us to compute ν on \(\mathbb {T}\), starting from the highest vertices of an element of \(\mathbb {T}\). Then b(μ, ν) is defined exactly as in (38), except that the maximum has to be replaced by a supremum. Extend c(μ, ν) as the minimal constant \(c\geqslant 0\) such that (36) is satisfied, with the possibility that c(μ, ν) = +∞ when there is no such c. Note that in (36), the space \(\mathbb {R}^V\) can be reduced and replaced by \(\mathcal {B}(V)\), the space of bounded mappings on V :

Lemma 10

We have

$$\displaystyle \begin{aligned} \begin{array}{rcl} c(\mu,\nu)& =&\displaystyle \sup_{f\in\mathcal{B}(V)\setminus\{0\}}\frac{\mu[f^2]}{ \sum_{x\in V} (f(p(x))-f(x))^2\nu(x)}.\end{array} \end{aligned} $$

Proof

Denote by \(\widetilde c(\mu ,\nu )\) the above r.h.s. A priori we have \(c(\mu ,\nu )\geqslant \widetilde c(\mu ,\nu )\). To prove the reverse bound, consider any \(f\in \mathbb {R}^V\) and, for M > 0, set \(f_M:=(f\wedge M)\vee (-M)\). Note that

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{x\in V} (f_M(p(x))-f_M(x))^2\nu(x)& \leqslant &\displaystyle \sum_{x\in V} (f(p(x))-f(x))^2\nu(x). \end{array} \end{aligned} $$

(This is a general property of Dirichlet forms and comes from the fact that the mapping \(\mathbb {R}\ni r\mapsto (r\wedge M)\vee (-M)\) is 1-Lipschitz.) Since \(f_M\in \mathcal {B}(V)\), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mu[f_M^2]& \leqslant &\displaystyle \widetilde c(\mu,\nu)\sum_{x\in V} (f_M(p(x))-f_M(x))^2\nu(x)\\ & \leqslant &\displaystyle \widetilde c(\mu,\nu)\sum_{x\in V} (f(p(x))-f(x))^2\nu(x).\end{array} \end{aligned} $$

Letting M go to infinity, we get in the limit, by monotone convergence,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mu[f^2] & \leqslant &\displaystyle \widetilde c(\mu,\nu)\sum_{x\in V} (f(p(x))-f(x))^2\nu(x).\end{array} \end{aligned} $$

Since this is true for all \(f\in \mathbb {R}^V\), we deduce that \(c(\mu ,\nu )\leqslant \widetilde c(\mu ,\nu )\). □

Consider \((x_n)_{n\in \mathbb {N}_+}\) an exhaustive sequence of \(\overline V\), with x 0 = 0 and such that for any \(n\in \mathbb {N}_+\), \(\overline V_n:=\{x_0,x_1,\dots ,x_n\}\) is connected. We denote by \(\mathcal {T}_n\) the tree rooted at 0 induced by \(\mathcal {T}\) on \(\overline V_n\) and, as above, \(V_n:=\overline V_n\setminus \{0\}\). For any \(n\in \mathbb {N}_+\) and x ∈ V n, introduce the set

$$\displaystyle \begin{aligned} \begin{array}{rcl} R_n(x)& :=&\displaystyle \{y\in V\,:\, \mbox{the path from }y\mbox{ to }0\mbox{ first meets }V_n\mbox{ at }x\}.\end{array} \end{aligned} $$

In words, this is the set of elements of V  whose path to 0 first enters V n at x.

From now on, we assume that 0 has only one child, taking into account Remark 3. It follows that

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} V& =&\displaystyle \bigsqcup_{x\in V_n} R_n(x).\end{array} \end{aligned} $$
(40)

Let μ n and ν n be the measures defined on V n via

$$\displaystyle \begin{aligned} \begin{array}{rcl} \forall\ x\in V_n,\qquad \mu_n(x)\ :=\ \mu(R_n(x))& \mbox{ and }&\displaystyle \nu_n(x)\ :=\ \nu(x).\end{array} \end{aligned} $$

The advantage of μ n and ν n is that they bring us back to the finite situation while enabling us to approximate c(μ, ν):

Proposition 10

We have \(\lim _{n\rightarrow \infty } c(\mu _n,\nu _n)=c(\mu ,\nu )\).

Proof

We first check that the limit exists. For \(n\in \mathbb {N}_+\), consider the sigma-field \(\mathcal {F}_n\) generated by the partition (40). To each \(\mathcal {F}_n\)-measurable function f, associate the function f n defined on V n by \(f_n(x):=f(x)\) (recall that such an f is constant on each R n(x), and x ∈ R n(x)). This function determines f, since for any x ∈ V n and any y ∈ R n(x), f(y) = f n(x). Furthermore, we have:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mu[f^2]& = &\displaystyle \mu_n[f_n^2]\\ \sum_{x\in V} (f(p(x))-f(x))^2\nu(x)& =&\displaystyle \sum_{x\in V_n} (f_n(p(x))-f_n(x))^2\nu_n(x). \end{array} \end{aligned} $$

It follows that

$$\displaystyle \begin{aligned} \begin{array}{rcl} c(\mu_n,\nu_n)& =&\displaystyle \sup_{f\in\mathcal{B}(\mathcal{F}_n)\setminus\{0\}}\frac{\mu[f^2]}{ \sum_{x\in V} (f(p(x))-f(x))^2\nu(x)},\end{array} \end{aligned} $$

where \(\mathcal {B}(\mathcal {F}_n)\) is the set of \(\mathcal {F}_n\)-measurable functions, which are necessarily bounded, i.e., belong to \(\mathcal {B}(V)\). Since for any \(n\in \mathbb {N}_+\) we have \(\mathcal {F}_n\subset \mathcal {F}_{n+1}\), we get that the sequence \((c(\mu _n,\nu _n))_{n\in \mathbb {N}_+}\) is non-decreasing and, taking into account Lemma 10, that

$$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{n\rightarrow\infty} c(\mu_n,\nu_n)& \leqslant &\displaystyle c(\mu,\nu).\end{array} \end{aligned} $$

To get the reverse bound, first assume that c(μ, ν) < +∞. For given 𝜖 > 0, find a function \(f\in \mathcal {B}(V)\) with

$$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{\mu[f^2]}{ \sum_{x\in V} (f(p(x))-f(x))^2\nu(x)}& \geqslant &\displaystyle c(\mu,\nu)-\epsilon.\end{array} \end{aligned} $$

Let π be the normalization of μ into a probability measure and let f n be the conditional expectation of f with respect to π and the sigma-field \(\mathcal {F}_n\). Note that the f n are uniformly bounded by \(\left \Vert f\right \Vert { }_{\infty }\). Thus, by the bounded martingale convergence theorem and since π gives a positive weight to every point of V , we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \forall\ x\in V,\qquad \lim_{n\rightarrow\infty}f_n(x)& =&\displaystyle f(x).\end{array} \end{aligned} $$

From Fatou’s lemma, we deduce

$$\displaystyle \begin{aligned} \begin{array}{rcl} \liminf_{n\rightarrow\infty}\sum_{x\in V} (f_n(p(x))-f_n(x))^2\nu(x)& \geqslant &\displaystyle \sum_{x\in V} (f(p(x))-f(x))^2\nu(x).\end{array} \end{aligned} $$
By another application of the bounded martingale convergence theorem, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{n\rightarrow\infty} \mu_n[f_n^2]=\lim_{n\rightarrow\infty} \mu[f_n^2] =\mu[f^2],\end{array} \end{aligned} $$

so that

$$\displaystyle \begin{aligned} \begin{array}{rcl} \limsup_{n\rightarrow\infty}\frac{\mu_n[f_n^2]}{ \sum_{x\in V} (f_n(p(x))-f_n(x))^2\nu(x)}& \geqslant &\displaystyle \frac{\mu[f^2]}{ \sum_{x\in V} (f(p(x))-f(x))^2\nu(x)}.\end{array} \end{aligned} $$

It follows that \( \lim _{n\rightarrow \infty } c(\mu _n,\nu _n)\geqslant c(\mu ,\nu )-\epsilon \), and since 𝜖 > 0 can be chosen arbitrarily small,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{n\rightarrow\infty} c(\mu_n,\nu_n)& \geqslant &\displaystyle c(\mu,\nu).\end{array} \end{aligned} $$

It remains to deal with the case where c(μ, ν) = +∞. Then for any M > 0, we can find a function \(f\in \mathcal {B}(V)\) with

$$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{\mu[f^2]}{ \sum_{x\in V} (f(p(x))-f(x))^2\nu(x)}& \geqslant &\displaystyle M.\end{array} \end{aligned} $$

By the above arguments, we end up with \( \lim _{n\rightarrow \infty } c(\mu _n,\nu _n)\geqslant M\), and since M can be arbitrarily large, \(\lim _{n\rightarrow \infty } c(\mu _n,\nu _n)=+\infty =c(\mu ,\nu )\). □

Our next goal is to show that the same result holds for b(μ, ν). We need some additional notation. The integer \(n\in \mathbb {N}_+\) being fixed, denote by \(\mathbb {T}_n\) and \(\mathbb {S}_n\) the sets \(\mathbb {T}\) and \(\mathbb {S}\) associated to \(\mathcal {T}_n\). The functional ν n is extended to \(\mathbb {T}_n\) via the iteration (37) understood in \(\mathcal {T}_n\). To any \(T\in \mathbb {T}_n\), associate T n, the minimal element of \(\mathbb {T}\) containing T. It is obtained in the following way: for any x ∈ T, if x has a child in T, then add all the children of x in V , and otherwise do not add any other points.

Lemma 11

We have the comparisons

$$\displaystyle \begin{aligned} \begin{array}{rcl} \nu_n(T)\geqslant \nu( T_n) & \mathit{\mbox{ and }}&\displaystyle \mu_n(T^*)\leqslant \mu( T_n^*),\end{array} \end{aligned} $$

where T ∗ is understood in \(\mathcal {T}_n\) (and \(T_n^*\) in \(\mathcal {T}\) ).

Proof

The first bound is proven by induction on the height of \(T\in \mathbb {T}_n\).

  • If this height is zero, then T is a singleton and T n is the same singleton, so that ν n(T) = ν(T n).

  • If the height h(T) of T is at least equal to 1, decompose

    $$\displaystyle \begin{aligned} \begin{array}{rcl} T& =&\displaystyle \{m_n(T)\}\sqcup \bigsqcup_{y\in C_n(m_n(T))}T_{n,y}\end{array} \end{aligned} $$

    where m n(⋅), C n(⋅) and T n,⋅ are the notions corresponding to m(⋅), C(⋅) and T in \(\mathcal {T}_n\).

    Note that T and T n have the same height and decompose

    $$\displaystyle \begin{aligned} \begin{array}{rcl} T_n& =&\displaystyle \{m( T_n)\}\sqcup \bigsqcup_{z\in C(m( T_n))} T_{n,z}.\end{array} \end{aligned} $$

    On the one hand, we have m(T n) = m n(T) and C n(m n(T)) ⊂ C(m n(T)), and on the other hand, we have for any y ∈ C n(m n(T)), \( \nu _n(T_y)\geqslant \nu ( (T_y)_n) =\nu ( T_{n,y})\), due to the induction hypothesis and to the fact that the common height of T y and (T y)n is at most equal to h(T) − 1. The equality (T y)n = T n,y comes from the fact that T n,y is obtained by the same completion of T y as the one presented for T just above the statement of Lemma 11. It follows that

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac 1{\nu_n(T)}& =&\displaystyle \frac 1{\nu_n(m_n(T))}+\frac 1{\sum_{y\in C_n(m_n(T))} \nu_n(T_{y})}\\& =&\displaystyle \frac 1{\nu(m( T_n))}+\frac 1{\sum_{y\in C_n(m_n(T))} \nu_n(T_{y})} \ \leqslant\ \frac 1{\nu(m( T_n))} + \frac 1{\sum_{y\in C_n(m_n(T))} \nu(T_{n,y})}\\ & \leqslant &\displaystyle \frac 1{\nu(m( T_n))}+\frac 1{\sum_{y\in C(m( T_n))} \nu(T_{n,y})} =\frac 1{\nu( T_n)}, \end{array} \end{aligned} $$

    establishing the wanted bound \(\nu _n(T)\geqslant \nu ( T_n)\). We now come to the second bound of the above lemma. By definition, we have

    $$\displaystyle \begin{aligned} \begin{array}{rcl} T^*& =&\displaystyle \bigsqcup_{y\in L_n(T)}S_{n,y},\end{array} \end{aligned} $$

    where L n(T) is the set of leaves of T in \(\mathcal {T}_n\) and S n,y is the subtree rooted in y in \(\mathcal {T}_n\).

Note that L n(T) ⊂ L(T n) and by definition of μ n, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \forall\ y \in L_n(T),\qquad \mu_n(S_{n,y})& =&\displaystyle \mu(S_y).\end{array} \end{aligned} $$

It follows that

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mu_n(T^*)& =&\displaystyle \sum_{y\in L_n(T)}\mu_n(S_{n,y}) =\sum_{y\in L_n(T)}\mu(S_y) \leqslant \sum_{y\in L(T_n)}\mu(S_y) =\mu(T_n^*).\end{array} \end{aligned} $$

□

Let \(\widetilde {\mathbb {S}}_n\) be the image of \(\mathbb {S}_n\) under the mapping \(\mathbb {S}_n\ni T\mapsto T_n\in \mathbb {S}\). Since \(\mathbb {S}_n\ni T\mapsto T_n\in \widetilde {\mathbb {S}}_n\) is a bijection, we get from Lemma 11,

$$\displaystyle \begin{aligned} \begin{array}{rcl} b(\mu_n,\nu_n)\ =\ \max_{T\in\mathbb{S}_n}\frac{\mu_n(T^*)}{\nu_n(T)}\ \leqslant \ \max_{T\in\mathbb{S}_n}\frac{\mu(T_n^*)}{\nu(T_n)}\ =\ \max_{\widetilde T\in\widetilde{\mathbb{S}}_n}\frac{\mu(\widetilde T^*)}{\nu(\widetilde T)}\ \leqslant \ b(\mu,\nu),\end{array} \end{aligned} $$

so that

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \limsup_{n\rightarrow\infty}b(\mu_n,\nu_n)& \leqslant &\displaystyle b(\mu,\nu).\end{array} \end{aligned} $$
(41)

Let us show more precisely:

Proposition 11

We have \(\lim _{n\rightarrow \infty } b(\mu _n,\nu _n)=b(\mu ,\nu )\).

Proof

According to (41), it remains to show that

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \liminf_{n\rightarrow\infty}b(\mu_n,\nu_n)& \geqslant &\displaystyle b(\mu,\nu).\end{array} \end{aligned} $$
(42)

Consider \(T\in \mathbb {S}\) such that the ratio μ(T ∗)∕ν(T) approximates b(μ, ν): namely, it is at least b(μ, ν) − 𝜖 for an arbitrarily small 𝜖 > 0 if b(μ, ν) < +∞, or it is an arbitrarily large quantity if b(μ, ν) = +∞. Define

$$\displaystyle \begin{aligned} \begin{array}{rcl} T^{(n)}& :=&\displaystyle T\cap V_n.\end{array} \end{aligned} $$
Arguing as at the end of the proof of Proposition 10, we will deduce (42) from

$$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{n\rightarrow\infty} \frac{\mu_n((T^{(n)})^*)}{\nu_n(T^{(n)})}& = &\displaystyle \frac{\mu(T^*)}{\nu(T)},\end{array} \end{aligned} $$

where (T (n)) is understood in \(\mathcal {T}_n\). This convergence will be the consequence of

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \lim_{n\rightarrow\infty} \mu_n((T^{(n)})^*)& = &\displaystyle \mu(T^*), \end{array} \end{aligned} $$
(43)
$$\displaystyle \begin{aligned} \begin{array}{rcl} {}\lim_{n\rightarrow\infty} {\nu_n(T^{(n)})}& = &\displaystyle \nu(T).\end{array} \end{aligned} $$
(44)

For (43), note that

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mu(T^*)& =&\displaystyle \sum_{y\in L(T)}\mu(S_y),\end{array} \end{aligned} $$

and as we have seen at the end of the proof of Lemma 11,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mu_n((T^{(n)})^*)& =&\displaystyle \sum_{y\in L_n(T^{(n)})}\mu(S_y).\end{array} \end{aligned} $$

Thus (43) follows, by dominated convergence (since μ(V ) < +∞), from

$$\displaystyle \begin{aligned} \begin{array}{rcl} \forall\ y\in T,\qquad \lim_{n\rightarrow\infty}\mathbb{1}_{L_n(T^{(n)})}(y)& =&\displaystyle \mathbb{1}_{L(T)}(y).\end{array} \end{aligned} $$

To show the latter convergences, consider two cases:

  • If x ∈ L(T), then we will have x ∈ L n(T (n)) as soon as x ∈ V n.

  • If x ∈ T ∖ L(T), then we will have xL n(T (n)) as soon as V n contains one of the children of x in T.

We now come to (44), and more generally let us prove by induction over their height that, for any \(\widetilde T\in \mathbb {T}\) with \(\widetilde T\subset T\), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \lim_{n\rightarrow\infty}\nu_n(\widetilde T\cap V_n)& =&\displaystyle \nu(\widetilde T),\end{array} \end{aligned} $$
(45)

and the convergence is non-decreasing. Indeed, if \(\widetilde T\) has height 0, it is a singleton {x}, and we have \(\nu _n(\widetilde T\cap V_n)=\nu (x)=\nu (\widetilde T)\) as soon as x belongs to V n, ensuring (45).

Assume that \(\widetilde T\) has height \(h\geqslant 1\) and that (45) holds for any element of \(\mathbb {T}\) included in T whose height is at most equal to h − 1. Write as usual

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \frac 1{\nu(\widetilde T)}& =&\displaystyle \frac 1{\nu(m(\widetilde T))}+\frac 1{\sum_{y\in C(m(\widetilde T))} \nu(\widetilde T_{y})}.\end{array} \end{aligned} $$
(46)

Assume that n is large enough so that \(\widetilde T\cap V_n\neq \emptyset \); in particular \(m(\widetilde T)\in V_n\) and \(C_n(m(\widetilde T))\neq \emptyset \). Thus we also have

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \frac 1{\nu_n(\widetilde T\cap V_n)}& =&\displaystyle \frac 1{\nu_n(m(\widetilde T))}+\frac 1{\sum_{y\in C_n(m(\widetilde T))} \nu_n(\widetilde T_{y}\cap V_n)}.\end{array} \end{aligned} $$
(47)

On the one hand, the set \(C_n(m(\widetilde T))\) is non-decreasing and its limit is \(C(m(\widetilde T))\), and on the other hand, due to the induction hypothesis, we have for any \(y\in C(m(\widetilde T))\),

$$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{n\rightarrow\infty}\nu_n(\widetilde T_y\cap V_n)& =&\displaystyle \nu(\widetilde T_y),\end{array} \end{aligned} $$

with non-decreasing convergence. By monotone convergence, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{n\rightarrow\infty}\sum_{y\in C_n(m(\widetilde T))} \nu_n(\widetilde T_{y}\cap V_n)& =&\displaystyle \sum_{y\in C(m(\widetilde T))} \nu(\widetilde T_{y}),\end{array} \end{aligned} $$

which leads to (45), via (46) and (47). This ends the proof of (42). □

The conjunction of Propositions 10 and 11 leads to the validity of (39), when V  is denumerable with \(\mathcal {T}\) of finite height.

Let us now remove the assumption of finite height. The arguments are very similar to the previous ones, except that the definition of b(μ, ν) has to be modified (μ and ν are still positive measures on V , with μ of finite total mass). More precisely, for any \(M\in \mathbb {N}_+\), consider \(V_M:=\{x\in V\,:\, h(x)\leqslant M\}\). Define on V M the measure ν M as the restriction to V M of ν, and μ M via

$$\displaystyle \begin{aligned} \begin{array}{rcl} \forall\ x\in V_M,\qquad \mu_M(x)& :=&\displaystyle \left\{\begin{array}{ll} \mu(x)&\displaystyle \mbox{if }h(x)<M,\\ \mu(S_x)& \mbox{if }h(x)=M. \end{array}\right.\end{array} \end{aligned} $$
By definition, we take

$$\displaystyle \begin{aligned} \begin{array}{rcl} b(\mu,\nu)& :=&\displaystyle \lim_{M\rightarrow\infty}b(\mu_M,\nu_M).\end{array} \end{aligned} $$

This limit exists and the convergence is monotone, since we have, for any \( M\in \mathbb {N}_+\), \(b(\mu _M,\nu _M)=\max _{T\in \mathbb {S}_M} \frac {\mu (T^*)}{\nu (T)}\), where \(\mathbb {S}_M\) is the set \(\mathbb {S}\) associated to the tree induced by \(\mathcal {T}\) on \(\{0\}\sqcup V_M\). Note that a direct definition of b(μ, ν) via the iteration (37) is not possible: we could not start from leaves that are singletons.

By definition, c(μ, ν) is the best constant in (36). It also satisfies \(c(\mu ,\nu )=\lim _{M\rightarrow \infty }c(\mu _M,\nu _M)\), as can be seen by adapting the proof of Proposition 10. We conclude that (39) holds by passing to the limit in

$$\displaystyle \begin{aligned} \begin{array}{rcl} \forall\ M\in\mathbb{N}_+,\qquad b(\mu_M,\nu_M)\ \leqslant \ {c}(\mu_M,\nu_M)\ \leqslant\ 16\,b(\mu_M,\nu_M).\end{array} \end{aligned} $$


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Chatterjee, S., Diaconis, P., Miclo, L. (2022). A Random Walk on the Rado Graph. In: Basor, E., Böttcher, A., Ehrhardt, T., Tracy, C.A. (eds) Toeplitz Operators and Random Matrices. Operator Theory: Advances and Applications, vol 289. Birkhäuser, Cham. https://doi.org/10.1007/978-3-031-13851-5_13
