Scaling Limits and Generic Bounds for Exploration Processes

Abstract

We consider exploration algorithms of the random sequential adsorption type, both for homogeneous random graphs and for random geometric graphs based on spatial Poisson processes. At each step, a vertex of the graph becomes active and its neighboring nodes become blocked. Given an initial number of vertices N growing to infinity, we study statistical properties of the proportion of explored (active or blocked) nodes in time using scaling limits. We obtain exact limits for homogeneous graphs and prove an explicit central limit theorem for the final proportion of active nodes, known as the jamming constant, through a diffusion approximation for the exploration process, which can be described as a one-dimensional process. We then focus on bounding the trajectories of such exploration processes on random geometric graphs, i.e., random sequential adsorption. As opposed to exploration processes on homogeneous random graphs, these do not allow for such a dimensional reduction. Instead, we derive a fundamental relationship between the number of explored nodes and the discovered volume in the spatial process, and we obtain generic bounds for the fluid limit and the jamming constant: bounds that are independent of the dimension of space and of the detailed shape of the volume associated with a discovered node. Lastly, using coupling techniques, we give trajectorial interpretations of the generic bounds.
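
As an informal illustration of the exploration dynamics described above (not part of the original article), the following minimal Python sketch simulates the process on an Erdős–Rényi graph \(G(N, c/N)\): vertices are activated in uniformly random order, the unexplored neighbors of each newly active vertex are blocked, and the final proportion of active vertices estimates the jamming constant. The values of N and c are arbitrary illustrative choices.

```python
import random

def jamming_fraction(N=2000, c=3.0, seed=0):
    """Estimate the jamming constant of G(N, c/N) by running the exploration
    (random sequential adsorption) process once: activate vertices in a
    uniformly random order, block unexplored neighbors of each new active
    vertex, until every vertex is explored (active or blocked)."""
    rng = random.Random(seed)
    p = c / N
    adj = [[] for _ in range(N)]            # adjacency lists of G(N, c/N)
    for i in range(N):
        for j in range(i + 1, N):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    order = list(range(N))
    rng.shuffle(order)                      # uniformly random activation order
    state = ["unexplored"] * N
    for v in order:
        if state[v] != "unexplored":        # already active or blocked: skip
            continue
        state[v] = "active"
        for w in adj[v]:
            if state[w] == "unexplored":
                state[w] = "blocked"
    return sum(s == "active" for s in state) / N

if __name__ == "__main__":
    print(jamming_fraction())               # proportion of active nodes at jamming
```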


Notes

  1.

    \(B( \varvec{x} ,r)\) denotes a sphere of radius r centered around the point \( \varvec{x} \).

  2.

    We solved (58) numerically by reformulating it as a system of differential equations. Specifically, we solved \(\dot{w_1}(t) = 1 + \max { \{ 0, c ( 1 - ( 3ct w_2(t) ) / ( 1 - w_1(t) ) ) w_2(t) \} }\) and \(\dot{w_2}(t) = - w_2(t) / ( 1 - w_1(t) )\) for \(w_1(t)\), with initial conditions \(w_1(0) = 0\), \(w_2(0) = 1\). A minimal numerical sketch of this procedure is given after these notes.

  3.

    Note that we use the notation that \(\int _a^b = - \int _b^a\) when \(a > b\).
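
The numerical procedure described in footnote 2 can be sketched as follows (this sketch is not from the original article; the value of c, the time horizon, and the stopping tolerance are illustrative choices):

```python
from scipy.integrate import solve_ivp

def rhs(t, w, c):
    """Right-hand side of the system in footnote 2: w = (w1, w2)."""
    w1, w2 = w
    drift = c * (1.0 - (3.0 * c * t * w2) / (1.0 - w1)) * w2
    dw1 = 1.0 + max(0.0, drift)
    dw2 = -w2 / (1.0 - w1)
    return [dw1, dw2]

def near_one(t, w, c):
    """Terminate the integration shortly before w1(t) reaches 1."""
    return 1.0 - w[0] - 1e-6
near_one.terminal = True

c = 0.5                                      # illustrative value of the constant c
sol = solve_ivp(rhs, t_span=(0.0, 2.0), y0=[0.0, 1.0], args=(c,),
                events=near_one, max_step=1e-3)
print(sol.t[-1], sol.y[0, -1])               # time and value of w1 at termination
```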

References

  1. Sanders, J., Jonckheere, M., Kokkelmans, S.: Sub-Poissonian statistics of jamming limits in ultracold Rydberg gases. Phys. Rev. Lett. 115, 043002 (2015)

  2. Bermolen, P., Jonckheere, M., Moyal, P.: The jamming constant of uniform random graphs. Stoch. Process. Appl. 127, 2138–2178 (2016)

  3. Evans, J.W.: Random and cooperative sequential adsorption. Rev. Mod. Phys. 65, 1281–1329 (1993)

  4. Bermolen, P., Jonckheere, M., Larroca, F., Moyal, P.: Estimating the transmission probability in wireless networks with configuration models. ACM Trans. Model. Perform. Eval. Comput. Syst. 1(2), 9:1–9:23 (2016)

  5. Gallagher, T.F.: Rydberg Atoms (Cambridge Monographs on Atomic, Molecular and Chemical Physics). Cambridge University Press, Cambridge (1994)

  6. Darling, R., Norris, J.: Differential equation approximations for Markov chains. Probab. Surv. 5, 37–79 (2008)

  7. Berkes, I., Liu, W., Wu, W.B.: Komlós–Major–Tusnády approximation under dependence. Ann. Probab. 42(2), 794–817 (2014)

  8. Sanders, J.: Stochastic optimization of large-scale complex systems. Ph.D. thesis, Technische Universiteit Eindhoven (2016)

  9. Bermolen, P., Jonckheere, M., Sanders, J.: Scaling limits for exploration algorithms. Technical Report (2015)

  10. Erdős, P., Rényi, A.: On random graphs, I. Publ. Math. (Debrecen) 6, 290–297 (1959)

  11. Grimmett, G., Stirzaker, D.: Probability and Random Processes. Oxford University Press, Oxford (2001)

  12. Steele, J.M.: Stochastic Calculus and Financial Applications. Springer, New York (2001)

  13. Aiello, W., Graham, F.C., Lu, L.: A random graph model for power law graphs. Exp. Math. 10, 53–66 (2001)

  14. Chung, F., Lu, L.: Connected components in random graphs with given expected degree sequences. Ann. Comb. 6(2), 125–145 (2002)

  15. Chung, F., Lu, L.: The average distance in a random graph with given expected degrees. Internet Math. 1(1), 91–113 (2004)

  16. Dhara, S., van Leeuwaarden, J.S.H., Mukherjee, D.: Generalized random sequential adsorption on Erdős–Rényi random graphs. J. Stat. Phys. 164(5), 1217–1232 (2016)

  17. Dhara, S., van Leeuwaarden, J.S.H., Mukherjee, D.: Solvable random network model for disordered sphere packing. arXiv:1611.05019 (2016)

  18. Kurtz, T.G.: Strong approximation theorems for density dependent Markov chains. Stoch. Process. Appl. 6(3), 223–240 (1978)

  19. Komlós, J., Major, P., Tusnády, G.: An approximation of partial sums of independent RV's, and the sample DF. I. Z. Wahrscheinlichkeitstheor. Verw. Geb. 32(1–2), 111–131 (1975)

  20. Komlós, J., Major, P., Tusnády, G.: An approximation of partial sums of independent RV's, and the sample DF. II. Z. Wahrscheinlichkeitstheor. Verw. Geb. 34(1), 33–58 (1976)

  21. McDiarmid, C.: Colouring random graphs. Ann. Oper. Res. 1(3), 183–200 (1984)

  22. Teerapabolarn, K.: A bound on the binomial-Poisson relative error. Int. J. Pure Appl. Math. 87(4), 535–540 (2013)

  23. Penrose, M.D., Yukich, J.: Limit theory for random sequential packing and deposition. Ann. Appl. Probab. 12(1), 272–301 (2002)

  24. Penrose, M.D.: Random parking, sequential adsorption, and the jamming limit. Commun. Math. Phys. 218(1), 153–176 (2001)

  25. Penrose, M.D.: Random Geometric Graphs, vol. 5. Oxford University Press, Oxford (2003)

  26. Rudin, W.: Real and Complex Analysis. Tata McGraw-Hill, New York (1987)


Author information

Correspondence to Matthieu Jonckheere.

Appendices


A Proof of Proposition 3.2

Proof

Doob’s martingale decomposition [12] for the Markov process \( \{ Z_n \}_{ n \ge 0 } \) gives that for \(n \ge 0\),

$$\begin{aligned} Z_n = \sum _{i=0}^n ( 1 + \gamma _N(Z_i) ) + M_n. \end{aligned}$$
(74)

Here, we have used that \(Z_0 = 0\), and \(M_n\) denotes a local martingale, which is in fact a true martingale since the state space is finite.

We will now examine the scaled random variable \(Z_t^N\), for which

$$\begin{aligned} Z^N_t&= \frac{Z_{[tN]}}{N} = \frac{1}{N} \sum _{i=0}^{[tN]} \bigl ( 1+\gamma _N(Z_i) \bigr ) + \frac{M_{[tN]}}{N} \nonumber \\&\overset{ (\mathrm{i}) }{=}\frac{1}{N} \int _{0}^{[tN]} \bigl ( 1+\gamma _N(Z_s) \bigr ) {\text {d}}\,\!{s} + \frac{M_{[tN]}}{N} \overset{ (\mathrm{ii}) }{=} \int _{0}^{ \frac{[tN]}{N} } \bigl ( 1 + \gamma _N(Z_{uN}) \bigr ) {\text {d}}\,\!{u} + M^N_t, \end{aligned}$$
(75)

since we (i) view each trajectory as being path-wise continuous, and (ii) use the change of variables \(u = s / N\), and introduce the notation \( M^N_t = M_{[tN]} / N \) for a scaled martingale.

We can replace the integral \(\int _0^{[tN]/N} \cdots {\text {d}}\,\!{u} \) by the integral \(\int _0^t \cdots {\text {d}}\,\!{u}\), which introduces an error \(\Delta _{N,t}\). Specifically, we can write

$$\begin{aligned} \int _0^{ \frac{[tN]}{N} } \bigl ( 1 + \gamma _N(Z_{uN}) \bigr ) {\text {d}}\,\!{u} = \int _0^t \bigl ( 1 + \gamma _N(Z_{uN}) \bigr ) {\text {d}}\,\!{u} + \Delta _{N,t} \end{aligned}$$
(76)

where

$$\begin{aligned} \Delta _{N,t} = \int _0^{ \frac{[tN]}{N} } \bigl ( 1 + \gamma _N(Z_{uN}) \bigr ) {\text {d}}\,\!{u} - \int _0^t \bigl ( 1 + \gamma _N(Z_{uN}) \bigr ) {\text {d}}\,\!{u}. \end{aligned}$$
(77)

For large N such replacement has negligible impact, since independently of t,

$$\begin{aligned} | \Delta _{N,t} | \le \sup _{u\in [0,1]} \{ 1 + \gamma _N(Z_{uN}) \} \Bigl | \frac{[tN]}{N} - t \Bigr | \le \frac{1 + \bar{\gamma }_N }{N}, \end{aligned}$$
(78)

where in the last inequality we have used that \(\bar{\gamma }_N = \sup _x \gamma _N(x)\) and \(| [tN] - tN | \le 1\).

Using (i) the integral version of (19), the triangle inequality [26], and (ii) Lipschitz continuity of \(\gamma \), condition (17) and bound (78), we find that

$$\begin{aligned} \sup _{s\in [0,t]}| Z^N_s - z(s) |&\overset{ (\mathrm{i}) }{\le }\sup _{s \in [0,t]} \left( \int _0^{s} \bigl | \gamma _N(Z_{uN}) - \gamma (z(u)) \bigr | {\text {d}}\,\!{u} + |\Delta _{N,s}| + | M_s^N | \right) \nonumber \\&\overset{ (\mathrm{ii}) }{\le }C_L \int _0^{t} \sup _{u\in [0,s]}| Z^N_u - z(u)| {\text {d}}\,\!{s} + \delta _N t + \frac{1+\bar{\gamma }_N}{N} + \sup _{s\in [0,t]}|M_s^N|. \end{aligned}$$
(79)

Next, we define \(\epsilon _N(T)= \sup _{s \in [0,T]} |Z^{N}_s- z(s)|\) for notational convenience and to prepare for an application of Grönwall’s lemma [12]. Eq. (79) then shortens for \(T > 0\) to

$$\begin{aligned} \epsilon _N(T) \le \delta _N T + \frac{1+\bar{\gamma }_N}{N} + \sup _{s\in [0,T]}|M_s^N| + C_L \int _0^T \epsilon _N(s) {\text {d}}\,\!{s}. \end{aligned}$$
(80)

Because \(\delta _N T + (1+\bar{\gamma }_N)/N + \sup _{s\in [0,T]}|M_s^N|\) is nondecreasing in T, it follows from Grönwall’s lemma that

$$\begin{aligned} \epsilon _N(T) \le \left( \delta _N T + \frac{1+\bar{\gamma }_N}{N} + \sup _{s\in [0,T]}|M_s^N| \right) {\mathrm {e}}^{ C_L T } . \end{aligned}$$
(81)

Using Minkowski's inequality for \(p \in [1,\infty )\), strict monotonicity of \(\exp {( C_L T )}\) and \(\delta _N T\), and the triangle inequality, we find that

$$\begin{aligned} ||\epsilon _N(T) ||_p \le \left( \delta _N T + \frac{1+\bar{\gamma }_N}{N} + ||\sup _{s \in [0,T]} |M_s^N| ||_p \right) {\mathrm {e}}^{ C_L T } . \end{aligned}$$
(82)

Finally, using Doob’s martingale inequality [12] for \(p > 1\), we obtain

$$\begin{aligned} ||\epsilon _N(T) ||_p \le \left( \delta _N T + \frac{1+\bar{\gamma }_N}{N} + \kappa _p ||M_T^N ||_p \right) {\mathrm {e}}^{ C_L T } , \end{aligned}$$
(83)

completing the first part of the proof.

For \(p = 2\), this inequality can be further simplified by computing the increasing process associated to the martingale. Note specifically that for \(l \ge 0\) we have

$$\begin{aligned} \mathbb {E} [ (M_l)^2 ] = \mathbb {E} [ \langle M_l \rangle ] = \mathbb {E} \left[ \sum _{i=0}^{l} {\text {Var}}[ \gamma _N(Z_i) ] \right] \end{aligned}$$
(84)

where

$$\begin{aligned} \mathrm {Var} [ \gamma _N(x) ] = \sum _{k=0}^{N-x-1} (k+1)^2 p_{x, x+k+1} - \left( \sum _{k=0}^{N-x-1} (k+1) p_{x, x+k+1} \right) ^2 = \psi _N(x). \end{aligned}$$
(85)

Therefore for the scaled martingale \(M_t^N\), we find by combining (84) and (85) that for \(t > 0\)

$$\begin{aligned} ||M_t^N ||_2^{2} = \mathbb {E} [ (M_{t}^N)^2 ] = \frac{ \mathbb {E} [ M^2_{ [ tN ] } ] }{N^2} = \frac{1}{N^2} \mathbb {E} \left[ \sum _{i=0}^{[tN]} \psi _N(Z_i) \right] \le \frac{\bar{\psi }_N t}{N}. \end{aligned}$$
(86)

This completes the second part of the proof. \(\square \)
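
As an informal numerical illustration of the decomposition (74) and the variance computation (84)–(86) (not part of the proof), the following sketch simulates a chain with a hypothetical binomial jump law, for which \(\gamma _N(x)\) and \(\psi _N(x)\) are explicit, and checks that the martingale part is centered and that its second moment matches the accumulated conditional variances. The predictable sum here runs over \(i = 0, \ldots , n-1\), the standard Doob-decomposition indexing.

```python
import numpy as np

rng = np.random.default_rng(1)
N, c = 500, 2.0
p = c / N
n_steps = 100                                   # number of exploration steps per run

def gamma_N(x):
    """Conditional mean of the extra jump, for K ~ Binomial(N - x - 1, c/N)."""
    return max(N - x - 1, 0) * p

def psi_N(x):
    """Conditional variance of the jump, cf. (85), for the same hypothetical law."""
    return max(N - x - 1, 0) * p * (1 - p)

def one_run():
    Z, drift, var_sum = 0, 0.0, 0.0
    for _ in range(n_steps):
        drift += 1.0 + gamma_N(Z)               # predictable increment
        var_sum += psi_N(Z)                     # accumulated conditional variance
        Z += 1 + rng.binomial(max(N - Z - 1, 0), p)
    return Z - drift, var_sum                   # martingale part M_n, cf. (74)

samples = np.array([one_run() for _ in range(5000)])
M, V = samples[:, 0], samples[:, 1]
print("E[M_n]     =", M.mean())                 # close to 0
print("E[M_n^2]   =", (M ** 2).mean())          # matches E[sum_i psi_N(Z_i)], cf. (84)
print("E[sum psi] =", V.mean())
```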

B Proof of Proposition 3.4

Proof

We adapt the results of Kurtz, which were derived for continuous-time Markov jump processes. To this end, we replace the Poisson processes involved in the construction of the jump processes by random walks, from which discrete-time Markov chains can be constructed. We can then follow exactly the same steps as in [18], first comparing the original process \(Z^N\) to a diffusion of the form

$$\begin{aligned} \tilde{Z}^N_t = \frac{1}{N} \sum _{l \le N} l B_l \left( N \sum _0^t p_N(l,\tilde{Z}^N_s) {\text {d}}\,\!{s} \right) , \end{aligned}$$
(87)

which is a sum of finitely many scaled, independent Brownian motions \(B_l\).

Rewriting the inequalities in [18, (3.6)], and using a random walk version of the approximation lemma of Komlós–Major–Tusnády [7], we obtain

$$\begin{aligned} \mathbb {E} \left[ \sup _{t \le T} |\tilde{Z}^N_t - Z^N_t| \right] \le C_2 \frac{\log (N)}{N}. \end{aligned}$$
(88)

Using the results of [18, Sect. 3], this leads to

$$\begin{aligned} \mathbb {E} \left[ \sup _{t \le T} | W^N_t- W_t | \right] \le C_3 \frac{\log (N)}{\sqrt{N}}, \end{aligned}$$
(89)

which concludes the proof. \(\square \)

C Proof of Proposition 3.5

Proof

Remark that if \(| z(s) - Z_{s}^N | \le \delta / 2\) for all \(s \in [0,1]\), then

$$\begin{aligned} \Bigl | \frac{T^*_N}{N} - T^* \Bigr | \le \Big | \big (z-\tfrac{1}{2}\delta \big )^{-1}(1) - \big (z+\tfrac{1}{2}\delta \big )^{-1}(1)\Big | \le \big |\big (T^*+\tfrac{1}{2}\delta \big ) - \big (T^*-\tfrac{1}{2}\delta \big )\big | = \delta . \end{aligned}$$
(90)

Here, the last inequality follows from the fact that \(\dot{z}(s) = 1 + \gamma (z(s)) \ge 1\), since

$$\begin{aligned} \big (z-\tfrac{1}{2}\delta \big )\big (T^{*}+\tfrac{1}{2}\delta \big )&= \big (z-\tfrac{1}{2}\delta \big )\big (z^{-1}(1)+\tfrac{1}{2}\delta \big ) \nonumber \\&\ge z\big (z^{-1}(1))+ \tfrac{1}{2}\delta - \tfrac{1}{2}\delta = 1 =\big (z-\tfrac{1}{2}\delta \big )\big ( \big (z-\tfrac{1}{2}\delta \big )^{-1}(1)\big ). \end{aligned}$$
(91)

Thus the first claim follows directly from the observation that the event

$$\begin{aligned} \left\{ \left| \frac{T^*_N}{N} - T^* \right| \ge \delta \right\} \subseteq \Big \{ \sup _{s \le 1} | z(s) - Z_{s}^N | \ge \tfrac{1}{2}\delta \Big \}, \end{aligned}$$
(92)

and then using (i) Markov's inequality [12] and (ii) Proposition 3.2, so that

$$\begin{aligned} \mathbb {P} \left[ \Bigl | \frac{T^*_N}{N} - T^* \Bigr | \ge \delta \right] \overset{ (92) }{\le } \mathbb {P} \left[ \sup _{s \le 1} | z(s) - Z_{s}^N | \ge \tfrac{1}{2}\delta \right] \overset{ (\mathrm{i}) }{\le }\frac{2}{\delta } \mathbb {E} \left[ \sup _{s \le 1} | z(s) - Z_{s}^N | \right] \overset{ (\mathrm{ii}) }{\le }\frac{2 \omega _N}{\delta }. \end{aligned}$$
(93)

Now (i) using that \(Z_{T_N^*} / N = z(T^*) = 1\) together with (19) and (75), and (ii) after expanding the integrals, we find that

$$\begin{aligned} \frac{T^*_N}{N}- T^*&\overset{ (\mathrm{i}) }{=} \int _0^{T^*} \gamma (z(s)) {\text {d}}\,\!{s} - \int _0^{\frac{T^*_N}{N}} \gamma _N(Z_{sN}) {\text {d}}\,\!{s} - \frac{M_{T_N^*}}{N} \nonumber \\&\overset{ (\mathrm{ii}) }{=} \int _0^{ \frac{T_N^*}{N} \wedge T^* } ( \gamma (z(s)) - \gamma _N(Z_{sN}) ) {\text {d}}\,\!{s} - \frac{M_{T_N^*}}{N} \nonumber \\&\quad + \int _{ \frac{T_N^*}{N} \wedge T^* }^{ T^* } \gamma (z(s)) {\text {d}}\,\!{s} - \int _{ \frac{T_N^*}{N} \wedge T^* }^{ \frac{T_N^*}{N} } \gamma _N(Z_{sN}) {\text {d}}\,\!{s}. \end{aligned}$$
(94)

Then taking the absolute value and using the triangle inequality, it follows that

$$\begin{aligned} \Bigl | \frac{T^*_N}{N} - T^* \Bigr | \le&\int _0^{ \frac{T_N^*}{N} \wedge T^*} |\gamma (z(s)) - \gamma _N(Z_{sN})| {\text {d}}\,\!{s} + |M_{T_N^*/N}^N| \nonumber \\&+ \int _{ \frac{T_N^*}{N} \wedge T^*}^{ T^* } | \gamma (z(s))| {\text {d}}\,\!{s} + \int _{ \frac{T_N^*}{N} \wedge T^* }^{ \frac{T_N^*}{N} } | \gamma _N(Z_{sN}) | {\text {d}}\,\!{s}. \end{aligned}$$
(95)

Approximating \(\gamma _N\) by \(\gamma \) via (17), using Lipschitz continuity of \(\gamma \), and recalling that \(\max \{ T_N^* / N, T^* \} \le 1\), we find that

$$\begin{aligned} \Bigl | \frac{T^*_N}{N} - T^* \Bigr | \le 2 C_L \sup _{s \le 1 } | z(s) - Z_{s}^N | + 2 \delta _N + |M_{T_N^*/N}^N| + \int _{ \frac{T_N^*}{N} \wedge T^* }^{ \frac{T_N^*}{N} \vee T^* } | \gamma (z(s)) | {\text {d}}\,\!{s}. \end{aligned}$$
(96)

The continuity of \(\gamma (x)\) guarantees that there exist constants \(C_1, \varepsilon > 0\) such that (i) \(\gamma (z(s)) \le 1 - \varepsilon \) for all \(s \ge C_1\), and (ii) \(C_1 < T^* - \delta \), provided that \(\delta \) is sufficiently small. There are now two possible cases: either (a) \(C_1 \le T_N^* / N \wedge T^*\), or (b) \(T_N^* / N \wedge T^*< C_1 < T_N^* / N \vee T^*\); the case \(C_1 \ge T_N^* / N \vee T^*\) is excluded because \(C_1 < T^* \le T_N^* / N \vee T^*\). For convenience, we first split the integral according to

$$\begin{aligned} \int _{\frac{T_N^*}{N} \wedge T^*}^{\frac{T_N^*}{N} \vee T^*} | \gamma (z(s)) | {\text {d}}\,\!{s} = \int _{\frac{T_N^*}{N} \wedge T^*}^{\frac{T_N^*}{N} \vee T^*} | \gamma (z(s)) | ( \mathbb {1} [ s < C_1 ] + \mathbb {1} [ s \ge C_1 ] ) {\text {d}}\,\!{s}. \end{aligned}$$
(97)

Then splitting further into case (a), we have that

$$\begin{aligned} \int _{\frac{T_N^*}{N} \wedge T^*}^{\frac{T_N^*}{N} \vee T^*} | \gamma (z(s)) | \mathbb {1} \left[ s < C_1, C_1 \le \frac{T_N^*}{N} \wedge T^* \right] {\text {d}}\,\!{s} = 0, \end{aligned}$$
(98)

and

$$\begin{aligned}&\int _{\frac{T_N^*}{N} \wedge T^*}^{\frac{T_N^*}{N} \vee T^*} | \gamma (z(s)) | \mathbb {1} \left[ s \ge C_1, C_1 \le \frac{T_N^*}{N} \wedge T^* \right] {\text {d}}\,\!{s} \nonumber \\&\le (1-\varepsilon ) \left| \frac{T_N^*}{N} - T^* \right| \mathbb {1} \left[ C_1 \le \frac{T_N^*}{N} \wedge T^* \right] . \end{aligned}$$
(99)

Next let \(C_2\) be a deterministic constant such that \(C_2 \ge \int _{ T_N^* / N \wedge T^* }^{C_1} | \gamma (z(s)) | {\text {d}}\,\!{s}\); since the integrand is nonnegative, \(C_2 = \int _0^{C_1} | \gamma (z(s)) | {\text {d}}\,\!{s}\) suffices. We can then, after splitting further into case (b), bound

$$\begin{aligned}&\int _{\frac{T_N^*}{N} \wedge T^*}^{\frac{T_N^*}{N} \vee T^*} | \gamma (z(s)) | \mathbb {1} \left[ s< C_1, \frac{T_N^*}{N} \wedge T^*< C_1< \frac{T_N^*}{N} \vee T^* \right] {\text {d}}\,\!{s} \nonumber \\&\quad = \int _{ \frac{T_N^*}{N} \wedge T^* }^{C_1} | \gamma (z(s)) | {\text {d}}\,\!{s} \mathbb {1} \left[ \frac{T_N^*}{N} \wedge T^*< C_1< \frac{T_N^*}{N} \vee T^* \right] \nonumber \\&\quad \le C_2 \mathbb {1} \left[ \frac{T_N^*}{N} \wedge T^*< C_1< \frac{T_N^*}{N} \vee T^* \right] \le C_2 \mathbb {1} \left[ \frac{T_N^*}{N} < C_1 \right] , \end{aligned}$$
(100)

since if \( \mathbb {1} [ T_N^* / N \wedge T^*< C_1 < T_N^* / N \vee T^* ] = 1\), clearly \(T_N^* / N \wedge T^* < C_1\). But by construction \(C_1 < T^*\), so it must hold that \(T_N^* / N < C_1\) and thus \( \mathbb {1} [ T_N^* / N < C_1 ] = 1\). Next, we bound

$$\begin{aligned}&\int _{\frac{T_N^*}{N} \wedge T^*}^{\frac{T_N^*}{N} \vee T^*} | \gamma (z(s)) | \mathbb {1} \left[ s \ge C_1, \frac{T_N^*}{N} \wedge T^*< C_1< \frac{T_N^*}{N} \vee T^* \right] {\text {d}}\,\!{s} \nonumber \\&\quad \le (1-\varepsilon ) \left| \frac{T_N^*}{N} - T^* \right| \mathbb {1} \left[ \frac{T_N^*}{N} \wedge T^*< C_1 < \frac{T_N^*}{N} \vee T^* \right] . \end{aligned}$$
(101)

Summarizing, we thus have that

$$\begin{aligned} \left| \frac{T^*_N}{N} - T^* \right|&\le 2 C_L \sup _{s \le 1 } | z(s) - Z_{s}^N | + 2 \delta _N + |M_{T_N^*/N}^N| \nonumber \\&\quad + (1-\varepsilon ) \left| \frac{T_N^*}{N} - T^* \right| + C_2 \mathbb {1} \left[ \frac{T_N^*}{N} < C_1 \right] . \end{aligned}$$
(102)

Now recall that if \(|z(s) - Z_s^N | \le \delta / 2\) for all \(s \in [0,1]\), then \(| T_N^* / N - T^* | \le \delta \). Moreover, then also \(T_N^* / N \ge C_1\), since \(C_1 < T^* - \delta \). Hence,

$$\begin{aligned} \Bigl \{ \frac{T_N^*}{N} < C_1 \Bigr \} \subset \bigl \{ \sup _{s \le 1} |z(s) - Z_s^N | > \tfrac{1}{2} \delta \bigr \}. \end{aligned}$$
(103)

Then by (i) collecting terms in (102), and (ii) applying Minkowski's inequality [26], we obtain

$$\begin{aligned}&\varepsilon \left\| \frac{T_N^*}{N} - T^* \right\| _2 \overset{ (\mathrm{i}) }{\le }\left\| 2 C_L \sup _{s \le 1 } | z(s) - Z_{s}^N | + 2 \delta _N + | M^N_{T_N^*/N} | + C_2 \mathbb {1} \left[ \frac{T_N^*}{N}< C_1 \right] \right\| _2 \nonumber \\&\quad \overset{ (\mathrm{ii}) }{\le }2 C_L ||\sup _{s \le 1 } | z(s) - Z_{s}^N | ||_2 + 2 \delta _N + ||M^N_{T_N^*/N} ||_2 + C_2 \left\| \mathbb {1} \left[ \frac{T_N^*}{N} < C_1 \right] \right\| _2. \end{aligned}$$
(104)

We now note that (iii) since \(f(y) = y^2\) is monotonically increasing for \(y \ge 0\) and (iv) by Markov’s inequality,

$$\begin{aligned}&\left\| \mathbb {1} \left[ \frac{T_N^*}{N}< C_1 \right] \right\| _2 = \mathbb {P} \left[ \frac{T_N^*}{N} < C_1 \right] ^{\frac{1}{2}} \overset{ (103) }{\le } \mathbb {P} \Big [ \sup _{s \le 1} | z(s) - Z_s^N |> \tfrac{1}{2} \delta \Big ] ^{\frac{1}{2}} \nonumber \\&\quad \overset{ (\mathrm{iii}) }{=} \mathbb {P}\Big [{ \sup _{s \le 1} \big | z(s) - Z_s^N \big |^2 > \tfrac{1}{4} \delta ^2 }\Big ]^{\frac{1}{2}} \overset{ (\mathrm{iv}) }{\le }\frac{2}{\delta } \mathbb {E}\Big [{ \sup _{s \le 1} \big | z(s) - Z_s^N \big |^2 }\Big ]^{\frac{1}{2}} = \frac{2}{\delta } \Big \Vert \sup _{s \le 1} \big | z(s) - Z_s^N \big | \Big \Vert _2. \end{aligned}$$
(105)

Therefore,

$$\begin{aligned} \varepsilon \left\| \frac{T^*_N}{N} - T^* \right\| _2 \le \left( 2 C_L + \frac{2C_2}{\delta } \right) ||\sup _{s \le 1 } | z(s) - Z_{s}^N | ||_2 + 2\delta _N + \Big \Vert M_{T_N^*/N}^N \Big \Vert _2. \end{aligned}$$
(106)

Thus by finally using Proposition 3.2 and (86), we have that there exist constants \(C_3\), \(C_4\) so that

$$\begin{aligned} \varepsilon \left\| \frac{T^*_N}{N} - T^* \right\| _2 \le C_3 \omega _N + 2\delta _N + \sqrt{ \frac{\bar{\psi }_N}{N} } \le C_4 \omega _N, \end{aligned}$$
(107)

which concludes the proof. \(\square \)
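
As an informal illustration of Proposition 3.5 (not part of the proof), the following sketch uses the same hypothetical binomial jump law as in the sketch following Appendix A, for which \(\gamma (z) = c(1-z)\), and compares the normalized exploration time \(T_N^*/N\) with the time \(T^*\) at which the fluid limit \(\dot{z} = 1 + \gamma (z)\) reaches 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
c = 2.0

def explore_time(N):
    """Steps until all N nodes are explored for the hypothetical chain
    Z_{i+1} = Z_i + 1 + Binomial(N - Z_i - 1, c/N)."""
    Z, steps = 0, 0
    while Z < N:
        Z += 1 + rng.binomial(max(N - Z - 1, 0), c / N)
        steps += 1
    return steps

def reach_one(t, y):
    return 1.0 - y[0]                        # z(t) = 1 defines T*
reach_one.terminal = True

sol = solve_ivp(lambda t, y: [1.0 + c * (1.0 - y[0])], (0.0, 2.0), [0.0],
                events=reach_one, max_step=1e-3)
T_star = sol.t[-1]

for N in (100, 1000, 10000):
    ratio = np.mean([explore_time(N) / N for _ in range(200)])
    print(N, ratio, "vs T* =", T_star)       # T_N^*/N concentrates around T*
```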

D Proof of Proposition 3.7

Proof

First, recall that by (19) and (75), see (94),

$$\begin{aligned} \frac{T^*_N}{N} - T^* = \int _0^{T^*} \gamma (z(s)) {\text {d}}\,\!{s} - \int _0^{\frac{T^*_N}{N}} \gamma _N(Z_{sN}) {\text {d}}\,\!{s} - M_{T_N^*/N}^N, \end{aligned}$$
(108)

Note furthermore that

$$\begin{aligned} W_{T^*}^N&\overset{ (23) }{=} \sqrt{N} \bigl ( Z_{T^*}^N - z(T^*) \bigr ) \overset{ (75) }{=} \sqrt{N} \left( \int _0^{ \frac{[T^*N]}{N} } ( 1 + \gamma _N(Z_{sN}) ) {\text {d}}\,\!{s} + M_{T^*}^N - z(T^*) \right) \nonumber \\&= \sqrt{N} \left( \int _0^{T^*} ( 1 + \gamma _N(Z_{sN}) ) {\text {d}}\,\!{s} + M_{T^*}^N - z(T^*) \right) + \sqrt{N} \Delta _{N,T^*} \end{aligned}$$
(109)

where \(M_{T^*}^N = M_{[T^*N]} / N\). Recall that the error \(\Delta _{N,T^*}\), introduced by replacing the upper integration boundary, can readily be bounded by \(|\Delta _{N,T^*}| \le (1+\bar{\gamma }_N)/N\), see (78).

Comparing (108) and (109), a natural series of steps is to (i) add comparison terms \(\pm W_{T^*}^N\) and use the triangle inequality, and then (ii) substitute (109), use the triangle inequality, and upper bound \(|\Delta _{N,T^*}| \le (1+\bar{\gamma }_N)/N\), after which we arrive at

$$\begin{aligned}&\Bigl | \sqrt{N} \Bigl ( \frac{T_N^*}{N} - T^* \Bigr ) + W_{T^*} \Bigr | \nonumber \\&\quad \overset{\text {(i)}}{\le }\Bigl | \sqrt{N} \Bigl ( \int _0^{T^*} \gamma (z(s)) {\text {d}}\,\!{s} - \int _0^{\frac{T^*_N}{N}} \gamma _N(Z_{sN}) {\text {d}}\,\!{s} - M_{T_N^*/N}^N \Bigr ) + W_{T^*}^N \Bigr | + | W_{T^*} - W_{T^*}^N | \nonumber \\&\quad \overset{\text {(ii)}}{\le }\Bigl | \sqrt{N} \Bigl ( \int _0^{T^*} \gamma (z(s)) {\text {d}}\,\!{s} - \int _0^{\frac{T^*_N}{N}} \gamma _N(Z_{sN}) {\text {d}}\,\!{s} - M_{T_N^*/N}^N \Bigr ) \nonumber \\&\qquad + \sqrt{N} \Bigl ( \int _0^{T^*} ( 1 + \gamma _N(Z_{sN}) ) {\text {d}}\,\!{s} + M_{T^*}^N - z(T^*) \Bigr ) \Bigr | + | W_{T^*} - W_{T^*}^N | + \frac{1+\bar{\gamma }_N}{\sqrt{N}} \nonumber \\&\quad = \text {term I } + \text { term II } + \frac{1+\bar{\gamma }_N}{\sqrt{N}}. \end{aligned}$$
(110)

We now proceed to bound terms I and II.

Note that the expectation of term II can be directly bounded by Proposition 3.4, i.e., there exists a constant \(C_2\) such that

$$\begin{aligned} \mathbb {E} [ \text {term II} ] = \mathbb {E} [ | W_{T^*} - W_{T^*}^N | ] \le \mathbb {E}\big [{ \sup _{t \le 1} | W_{t} - W_t^N | }\big ] \le C_2 \frac{\log (N)}{\sqrt{N}}. \end{aligned}$$
(111)

Bounding term I requires more work. Using (109), the integral version of \(\dot{z} = 1 + \gamma (z)\), and the triangle inequality, we find that

$$\begin{aligned} \text {term I}&\le \sqrt{N} \Bigl | \int _0^{T^*} \gamma _N(Z_{sN}) {\text {d}}\,\!{s} - \int _0^{\frac{T^*_N}{N}} \gamma _N(Z_{sN}) {\text {d}}\,\!{s} \Bigr | + \sqrt{N} \big | M_{T^*}^N -M_{T_N^*/N}^N \big | \nonumber \\&= \text {term Ia} + \text {term Ib}, \end{aligned}$$
(112)

and we now proceed to bound terms Ia and Ib separately.

In order to bound term Ia, we add comparison terms \(\pm \gamma ( Z_{sN} / N )\) and \(\pm \gamma (z(s))\) and use the triangle inequality, so that (recall the integration convention of footnote 3)

$$\begin{aligned} \text {term Ia}&\le \sqrt{N} \left| \int _{\frac{T^*_N}{N}}^{T^*} \gamma _N(Z_{sN}) - \gamma \left( \frac{Z_{sN}}{N} \right) {\text {d}}\,\!{s} \right| + \sqrt{N} \left| \int _{\frac{T^*_N}{N}}^{T^*} \gamma \left( \frac{Z_{sN}}{N} \right) -\gamma (z(s)) {\text {d}}\,\!{s} \right| \nonumber \\&\quad + \sqrt{N} \left| \int _{\frac{T^*_N}{N}}^{T^*} \gamma (z(s)) {\text {d}}\,\!{s} \right| \end{aligned}$$
(113)

Then by approximating \(\gamma _N\) by \(\gamma \), using the Lipschitz continuity of \(\gamma \), and upper bounding the first two integrands, we find that

$$\begin{aligned} \text {term Ia}&\le \sqrt{N} \delta _N \left| \frac{T_N^*}{N} - T^* \right| + \sqrt{N}C_L \sup _{s \le 1} \left| \frac{Z_{sN}}{N} - z(s) \right| \left| \frac{T_N^*}{N} - T^* \right| \nonumber \\&\quad + \sqrt{N} \left| \int _{\frac{T^*_N}{N}}^{T^*} \gamma (z(s)) {\text {d}}\,\!{s} \right| \end{aligned}$$
(114)

Taking the expectation and using the triangle inequality, it follows that

$$\begin{aligned} \mathbb {E} [ \text {term Ia} ]&\le \sqrt{N}\delta _N \mathbb {E} \left[ \left| \frac{T_N^*}{N} - T^* \right| \right] + \sqrt{N} C_L \mathbb {E} \left[ \sup _{s \le 1} \left| \frac{Z_{sN}}{N} - z(s) \right| \left| \frac{T_N^*}{N} - T^* \right| \right] \nonumber \\&\quad + \sqrt{N} \mathbb {E} \left[ \int _{\frac{T^*_N}{N}}^{T^*} \left| \gamma (z(s)) \right| {\text {d}}\,\!{s} \right] . \end{aligned}$$
(115)

Applying Hölder’s inequality [26],

$$\begin{aligned} \mathbb {E} [ \text {term Ia} ]&\le \sqrt{N}\delta _N \left\| \frac{T_N^*}{N} - T^* \right\| _2 + \sqrt{N}C_L \left\| \sup _{s \le 1} \Bigl | \frac{Z_{sN}}{N} - z(s) \Bigr | \right\| _2 \left\| \frac{T_N^*}{N} - T^* \right\| _2 \nonumber \\&\quad + \sqrt{N} \mathbb {E} \left[ \int _{\frac{T^*_N}{N}}^{T^*} \left| \gamma (z(s)) \right| {\text {d}}\,\!{s} \right] , \end{aligned}$$
(116)

and finally Propositions 3.2 and 3.5, we end up with

$$\begin{aligned} \mathbb {E} [ \text {term Ia} ] \le \Omega _N \sqrt{N} ( \delta _N + C_L \omega _N ) + \sqrt{N} \mathbb {E} \left[ \int _{\frac{T^*_N}{N}}^{T^*} \left| \gamma (z(s)) \right| {\text {d}}\,\!{s} \right] . \end{aligned}$$
(117)

In order to deal with the last term in (117), we will use a first-order Taylor expansion of \(\gamma (z(s))\) around \(s = T^*\). Specifically, we write

$$\begin{aligned} \gamma (z(s)) = \gamma (z(T^*))+ c (s-T^*) + R_2 = c(s-T^*) + R_2, \end{aligned}$$
(118)

where we have recalled that \(\gamma (1) = 0\) by assumption and \(z(T^*) = 1\). Then, by (i) the triangle inequality, (ii) upper bounding the integrand, and (iii) evaluating the integral, there exists a constant \(C_3\) so that

$$\begin{aligned}&\sqrt{N} \int _{\frac{T^*_N}{N}}^{T^*} | \gamma (z(s)) | {\text {d}}\,\!{s} \overset{ (\mathrm{i}) }{\le }\sqrt{N} \int _{\frac{T^*_N}{N}}^{T^*}c | s - T^* | + | R_2 | {\text {d}}\,\!{s} \nonumber \\&\overset{ (\mathrm{ii}) }{\le }\sqrt{N} \int _{\frac{T^*_N}{N}}^{T^*}c \left| \frac{T^*_N}{N}-T^* \right| + | R_2 | {\text {d}}\,\!{s} \overset{ (\mathrm{iii}) }{\le }C_3 \sqrt{N} \left| \frac{T^*_N}{N}-T^* \right| ^2, \end{aligned}$$
(119)

where for the second term we have used that \(| R_2 | = O( (s-T^*)^2 ) \), and that \( | s - T^* | \le 1\) for \(s \in [ T^*_N / N, T^* ]\). Therefore, by Proposition 3.5,

$$\begin{aligned} \sqrt{N} \mathbb {E} \left[ \int _{\frac{T^*_N}{N}}^{T^*} | \gamma (z(s)) | {\text {d}}\,\!{s} \right]&\le C_3 \sqrt{N} \mathbb {E} \left[ \left| \frac{T^*_N}{N} - T^* \right| ^2 \right] \nonumber \\&\le C_3 \sqrt{N} \left\| \frac{T_N^*}{N} - T^* \right\| _2 ^2 \le C_3 \Omega _N^2 \sqrt{N}. \end{aligned}$$
(120)

Ultimately bounding (117) using (120), we conclude that there exists a constant \(C_1\) such that

$$\begin{aligned} \mathbb {E} [ \text {term Ia} ] \le \Omega _N \sqrt{N} ( \delta _N + C_L \omega _N + C_3 \Omega _N ) \le C_1 \omega _N^2 \sqrt{N}. \end{aligned}$$
(121)

To finish the proof we still need to bound the expectation of term Ib, that is, \(\sqrt{N} \mathbb {E} [ | M_{T^*}^N - M_{T_N^*/N}^N | ] \). By (i) the Cauchy–Schwarz inequality, (ii) the definition of the scaled martingale, (iii) calculating the increasing process as in (84)–(86), and (iv) \( \mathbb {E} [ ( M_t - M_s )^2 ] = \mathbb {E} [ M_t^2 ] - \mathbb {E} [ M_s^2 ] \) for \(t > s\), a consequence of \(M_t\) being a martingale, we find

$$\begin{aligned}&\sqrt{N} \mathbb {E}\big [{ \big | M_{T_N^*/N}^N - M_{T^*}^N \big | }\big ] \overset{ (\mathrm{i}) }{\le }\sqrt{N} \mathbb {E}\big [{ \big | M_{T_N^*/N}^N - M_{T^*}^N \big |^2 }\big ]^{\frac{1}{2}} \overset{ (\mathrm{ii}) }{=} \frac{1}{\sqrt{N}} \mathbb {E}\big [{ \big | M_{T_N^*} - M_{[T^*N]} \big |^2 }\big ]^{\frac{1}{2}} \nonumber \\&\quad \overset{ (\mathrm{iii}) }{=} \frac{1}{\sqrt{N}} \mathbb {E}\big [{\langle M_{T_N^*} - M_{[T^*N]} \rangle }\big ]^{\frac{1}{2}} \overset{ (\mathrm{iv}) }{=} \mathbb {E} \left[ \frac{1}{N} \sum _{i = T_N^* \wedge [ T^* N ] }^{ T_N^* \vee [ T^* N ] } \psi _N(Z_i) \right] ^{\frac{1}{2}}. \end{aligned}$$
(122)

Then (v) upper bounding \(\psi _N(Z_i) \le \bar{\psi }_N\), (vi) adding comparison terms \(\pm T^*\), applying the triangle inequality and upper bounding \(|T^*N-[T^*N]| \le 1\), it follows (vii) from Proposition 3.5 that

$$\begin{aligned} \mathbb {E} [ \text {term Ib} ]&= \sqrt{N} \mathbb {E}\big [{ \big | M_{T_N^*/N}^N - M_{T^*}^N \big | }\big ] \overset{ (\mathrm{v}) }{\le } \mathbb {E} \left[ \bar{\psi }_N \Bigl | \frac{T_N^*}{N} - \frac{[T^*N]}{N} \Bigr | \right] ^{\frac{1}{2}} \nonumber \\&\overset{ (\mathrm{vi}) }{\le }\left( \bar{\psi }_N \left\| \frac{T_N^*}{N} - T^* \right\| _1 + \frac{\bar{\psi }_N}{N} \right) ^{\frac{1}{2}} \overset{ (\mathrm{vii}) }{\le }\Bigl ( \bar{\psi }_N \Omega _N + \frac{\bar{\psi }_N}{N} \Bigr )^{\frac{1}{2}}. \end{aligned}$$
(123)

Finally, we combine all bounds, resulting in

$$\begin{aligned}&\mathbb {E} \left[ \Bigl | \sqrt{N} \Bigl ( \frac{T_N^*}{N} - T^* \Bigr ) + W_{T^*} \Bigr | \right] \le \mathbb {E} [ \text {term Ia} ] + \mathbb {E} [ \text {term Ib} ] + \mathbb {E} [ \text {term II} ] + \frac{1+\bar{\gamma }_N}{\sqrt{N}} \nonumber \\&\quad \le C_1 \omega _N^2 \sqrt{N} + \Bigl ( \bar{\psi }_N \Omega _N + \frac{\bar{\psi }_N}{N} \Bigr )^{\frac{1}{2}} + C_2 \frac{\log (N)}{\sqrt{N}} + \frac{1 + \bar{\gamma }_N}{\sqrt{N}}. \end{aligned}$$
(124)

If the distribution of the number of neighbors is such that \(\delta _N = o( 1 / \sqrt{N} ) \), \(\bar{\gamma }_N = o( \sqrt{N} ) \) and \(\bar{\psi }_N = o( N^{1/4} ) \), then \(\omega _N = o( 1 / N^{3/8} ) \) and \(\Omega _N = o( 1 / N^{3/8} ) \), and all terms on the right-hand side of (124) converge to 0 as \(N \rightarrow \infty \). We have thus proven that, under these conditions, the limit of \(\sqrt{N} ( T_N^* / N - T^* )\) is a Gaussian random variable with variance

$$\begin{aligned} \sigma ^2 = \mathbb {E} [ W_{T^*}^2 ] . \end{aligned}$$
(125)

Defining \(m(t) = \mathbb {E} [ W_{t}^2 ] \), and using Itô’s formula [12], note that

$$\begin{aligned} \mathbb {E} [ W_t^2 ] = \mathbb {E} \left[ \int _0^t 2 W_s d W_s + \frac{1}{2} 2 \beta _{t} \right] = 2\int _0^{t} \gamma '(z(s)) \mathbb {E} [ W_s^2 ] {\text {d}}\,\!{s} + \beta (t), \end{aligned}$$
(126)

and hence m(t) satisfies the ordinary differential equation

$$\begin{aligned} \dot{m}(t) = 2 \gamma '(z(t)) m(t) + \dot{\beta }(t), \quad \text {with} \quad m(0) = 0. \end{aligned}$$
(127)

This finishes the proof. \(\square \)
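
The limiting variance (125) can be obtained numerically by integrating the variance equation (127) jointly with the fluid limit \(\dot{z} = 1 + \gamma (z)\) up to the time \(T^*\) at which \(z\) reaches 1. The sketch below (not from the article) does this for a hypothetical choice of \(\gamma \) and of the diffusion coefficient \(\dot{\beta }\); both placeholders would have to be replaced by the model-specific expressions.

```python
from scipy.integrate import solve_ivp

# Hypothetical placeholders (not the paper's expressions): a linear drift
# correction gamma(z) = c(1 - z), so that gamma(1) = 0, and a diffusion
# coefficient beta_dot(z) = c(1 - z).
c = 2.0
gamma = lambda z: c * (1.0 - z)
dgamma = lambda z: -c                           # gamma'(z)
beta_dot = lambda z: c * (1.0 - z)

def rhs(t, y):
    z, m = y
    return [1.0 + gamma(z),                     # fluid limit: z' = 1 + gamma(z)
            2.0 * dgamma(z) * m + beta_dot(z)]  # variance equation (127)

def reach_one(t, y):
    return 1.0 - y[0]                           # z(T*) = 1
reach_one.terminal = True

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0], events=reach_one, max_step=1e-3)
print("T* =", sol.t[-1], "  sigma^2 = E[W_{T*}^2] =", sol.y[1, -1])
```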

E Proof of Corollary 4.4

Proof

Consider the expansion \(u(t) = t + \sum _{i = 1}^\infty c^i u_i(t)\). Substitute into (59), and Taylor expand the right-hand side to obtain

$$\begin{aligned} 1 + c u_1'(t) + O( c^2 )&= 1 + c \exp { \left( - \int _0^t \frac{ds}{ 1 - u(s) } \right) } \nonumber \\&= 1 + c \exp { \left( - \int _0^t \frac{1}{1-s} + \sum _{i=1}^\infty \frac{c^i u_i(s)}{ (1-s)^{i+1} } {\text {d}}\,\!{s} \right) } \nonumber \\&= 1 + c (1-t) \exp { \left( - \int _0^t \sum _{i=1}^\infty \frac{c^i u_i(s)}{ (1-s)^{i+1} } {\text {d}}\,\!{s} \right) } = 1 + c (1-t) ( 1 + O(c) ). \end{aligned}$$
(128)

Comparing terms we find that \(u_1'(t) = (1-t)\) with initial condition \(u_1(0) = 0\), leading to the conclusion that \(u_1(t) = t ( 1 - \frac{1}{2} t )\). Therefore,

$$\begin{aligned} u(t) = t + c t ( 1 - \tfrac{1}{2} t ) + O(c^2) = (1+c) t - \tfrac{1}{2} c t^2 + O(c^2) . \end{aligned}$$
(129)

Exactly the same expansion is obtained for l(t) when applying the approach up to and including order \( O(c) \). Since u(t) is an upper bound and l(t) is a lower bound for the fluid limit z(t) of the spatial process, and both bounds have the same asymptotic behavior as \(c \downarrow 0\), this completes the proof. \(\square \)
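
As a numerical sanity check of the expansion (129) (not part of the proof), one can integrate the equation for \(u\) and compare with the first-order approximation for a small value of \(c\). The form of Eq. (59) is not reproduced in this excerpt; the sketch below assumes, consistently with (128), that it reads \(\dot{u}(t) = 1 + c \exp ( - \int _0^t {\text {d}}s / ( 1 - u(s) ) )\) with \(u(0) = 0\).

```python
import numpy as np
from scipy.integrate import solve_ivp

c = 0.05                                      # small parameter for the expansion

def rhs(t, y):
    u, I = y                                  # I(t) = int_0^t ds / (1 - u(s))
    return [1.0 + c * np.exp(-I), 1.0 / (1.0 - u)]

def near_one(t, y):
    return 1.0 - y[0] - 1e-6                  # stop shortly before u reaches 1
near_one.terminal = True

sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0], events=near_one,
                dense_output=True, max_step=1e-3)
ts = np.linspace(0.0, min(0.9, sol.t[-1]), 10)
u_num = sol.sol(ts)[0]
u_exp = ts + c * ts * (1.0 - 0.5 * ts)        # first-order expansion (129)
print(np.max(np.abs(u_num - u_exp)))          # difference should be O(c^2)
```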


Cite this article

Bermolen, P., Jonckheere, M. & Sanders, J. Scaling Limits and Generic Bounds for Exploration Processes. J Stat Phys 169, 989–1018 (2017). https://doi.org/10.1007/s10955-017-1902-z


Keywords

  • Random sequential adsorption
  • Scaling limits
  • Random graphs