1 Introduction

A classical result of Glivenko and Cantelli from 1933 states that the sequence of empirical distribution functions associated with i.i.d. copies of a real random variable converges uniformly to its cumulative distribution function, almost surely. Nearly 20 years later, Donsker determined the asymptotic behavior of the fluctuations; let us recall his result. Let \(U_1, U_2, \ldots \) be i.i.d. uniform random variables on [0, 1]; then the sequence of (uniform) empirical processes,

$$\begin{aligned} G_n(x)=\frac{1}{\sqrt{n}} \sum _{i=1}^n\left( {\mathbf{1}}_{U_i \le x}-x\right) , \qquad x\in [0,1], \end{aligned}$$

converges in distribution as \(n\rightarrow \infty \) in the sense of Skorokhod towards a Brownian bridge \((G(x))_{0\le x \le 1}\). The purpose of the present work is to analyze how Donsker’s Theorem is affected by an elementary random reinforcement algorithm that we shall now describe.

Consider a sequence \(\varepsilon _2, \varepsilon _3, \ldots \) of i.i.d. Bernoulli variables with fixed parameter \(p\in (0,1)\). These variables determine when repetitions occur, in the sense that the n-th step of the algorithm is a repetition if \(\varepsilon _n=1\), and an innovation if \(\varepsilon _n=0\). For every \(n\ge 2\), let also v(n) be a uniform random variable on \(\{1, \ldots , n-1\}\) such that \(v(2), v(3), \ldots \) are independent; these variables specify which of the preceding items is copied when a repetition occurs. More precisely, we set \(\varepsilon _1=0\) for definiteness and construct recursively a sequence of random variables \({\hat{U}}_1, {\hat{U}}_2, \ldots \) by setting

$$\begin{aligned} {\hat{U}}_n=\left\{ \begin{array}{ll} {\hat{U}}_{v(n)}&{}\quad \text { if }\varepsilon _n=1, \\ U_{{\mathrm i}(n)}&{}\quad \text { if }\varepsilon _n=0,\\ \end{array} \right. \end{aligned}$$

where

$$\begin{aligned} {\mathrm i}(n)=\sum _{j=1}^n(1-\varepsilon _j)\qquad \text {for }n\ge 1 \end{aligned}$$

denotes the total number of innovations after n steps. We always assume without further mention that the sequences \((v(n))_{n\ge 2}\), \((\varepsilon _n)_{n\ge 2}\), and \((U_j)_{j\ge 1}\) are independent.
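The recursion above is straightforward to simulate. The following sketch is our own illustration (the function name `simon_sequence` is ours, not from the source); it generates the reinforced sequence \(({\hat{U}}_n)\) together with the underlying innovations.

```python
import random

def simon_sequence(n, p, rng):
    """Run Simon's reinforcement algorithm for n steps.

    Step 1 is forced to be an innovation; at step k >= 2, with probability
    p the new item repeats U_hat_{v(k)} for v(k) uniform on {1,...,k-1},
    and otherwise a fresh uniform innovation is drawn.
    """
    innovations = [rng.random()]          # U_1
    u_hat = [innovations[0]]              # U_hat_1 = U_1
    for k in range(2, n + 1):
        if rng.random() < p:              # repetition: epsilon_k = 1
            u_hat.append(u_hat[rng.randrange(k - 1)])
        else:                             # innovation: epsilon_k = 0
            innovations.append(rng.random())
            u_hat.append(innovations[-1])
    return u_hat, innovations

u_hat, innovations = simon_sequence(10_000, p=0.3, rng=random.Random(42))
```

Each \({\hat{U}}_n\) is uniform on [0, 1], and the number of innovations after n steps concentrates around \((1-p)n\), as used repeatedly below.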

This random algorithm was introduced by Simon [28], who singled out in this setting a remarkable one-parameter family of power-tail distributions on \(\mathbb {N}\) that arise in a variety of data. Nowadays, Simon’s algorithm should be viewed as a linear reinforcement procedure, in the sense that, provided that \({\mathrm i}(n)\ge j\) (i.e. the variable \(U_j\) has already appeared at the n-th step of the algorithm), the probability that \(U_j\) is repeated at the \((n+1)\)-th step is proportional to the number of its previous occurrences. In this direction, we refer henceforth to the parameter p of the Bernoulli variables \(\varepsilon _n\) as the reinforcement parameter.

Obviously, each variable \({\hat{U}}_n\) has the uniform distribution on [0, 1]; note however that the reinforced sequence \(({\hat{U}}_n)_{n\ge 1}\) is clearly not stationary, nor exchangeable, nor even partially exchangeable. We shall furthermore point out in Remark 2.3 that mixing does not hold either, so the study of the empirical distribution functions for the reinforced sequence does not fit the framework developed for weakly dependent processes (see for instance Chapter 7 in Rio [26] and references therein). Nonetheless, it is easy to show that the conclusion of the Glivenko–Cantelli theorem remains valid in our setting:

Proposition 1.1

With probability one, it holds that

$$\begin{aligned} \lim _{n\rightarrow \infty } \sup _{0\le x \le 1} \left| \frac{1}{n} \sum _{i=1}^n\left( {\mathbf{1}}_{{\hat{U}}_i\le x}-x\right) \right| =0. \end{aligned}$$

We are chiefly interested in the empirical processes \({\hat{G}}_n\) associated with the reinforced sequence:

$$\begin{aligned} {\hat{G}}_n(x)=\frac{1}{\sqrt{n}} \sum _{i=1}^n\left( {\mathbf{1}}_{{\hat{U}}_i \le x}-x\right) , \qquad x\in [0,1]. \end{aligned}$$

Our main result shows that their asymptotic behavior as \(n\rightarrow \infty \) exhibits a phase transition at the critical parameter \(p_c=1/2\). Roughly speaking, when the reinforcement parameter p is smaller than 1/2, the analog of Donsker’s Theorem holds for \({\hat{G}}_n\), except that the limit is now only proportional to the Brownian bridge. At criticality, i.e. for \(p=1/2\), convergence in distribution to the Brownian bridge holds after an additional rescaling of \({\hat{G}}_n\) by a factor \(1/\sqrt{\log n}\). Finally, for \(p>1/2\), \(n^{-p+1/2}{\hat{G}}_n\) converges in probability, and its limit is described in terms of a bridge with exchangeable increments and discontinuous sample paths.

Here is a precise statement, where the needed background on bridges with exchangeable increments in the supercritical case \(p>1/2\) is postponed to the next section. Recall that \(G=(G(x))_{0\le x \le 1}\) denotes the standard Brownian bridge. We further write \({\mathbb D}\) for the space of càdlàg paths \(\omega : [0,1]\rightarrow \mathbb {R}\) endowed with the Skorokhod topology (see Chapter 3 in [8] or Chapter VI in [17]). The notation \( \Rightarrow \) is used to indicate convergence in distribution of a sequence of processes in \({\mathbb D}\).

Theorem 1.2

The following convergences hold as \(n\rightarrow \infty \):

  (i)

    If \(p<1/2\), then

    $$\begin{aligned} {\hat{G}}_n \ \Longrightarrow \ \frac{1}{\sqrt{1-2p}}\, G. \end{aligned}$$
  (ii)

    If \(p=1/2\), then

    $$\begin{aligned} \frac{1}{\sqrt{\log n}}\, {\hat{G}}_n \ \Longrightarrow \ G. \end{aligned}$$
  (iii)

    If \(p>1/2\), then

    $$\begin{aligned} \lim _{n\rightarrow \infty } n^{-p+1/2}{\hat{G}}_n = B^{(p)} \qquad \text { in probability on }{\mathbb D}, \end{aligned}$$

    where \(B^{(p)}=(B^{(p)}(x))_{0\le x \le 1}\) is the bridge with exchangeable increments described in the forthcoming Definition 2.4.

Our approach to Theorem 1.2 relies, at least in part, on a natural interpretation of Simon’s algorithm in terms of Bernoulli bond percolation on random recursive trees. Specifically, we view \(\{1,2,\ldots ,n\}\) as a set of vertices and each pair (j, v(j)) for \(j=2, \ldots , n\) as an edge; the resulting graph \({\mathbb T}_n\) is known as the random recursive tree of size n, see Section 1.3 and Chapter 7 in [13]. We next delete each edge (j, v(j)) if and only if \(\varepsilon _j=0\); in other words, we perform a Bernoulli bond percolation with parameter p on \({\mathbb T}_n\). The percolation clusters are then given by the subsets of indices at which the same variable is repeated, namely \(\{i\le n: {\hat{U}}_i=U_j\}\) for \(j=1, \ldots , {\mathrm i}(n)\).
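This correspondence can be made concrete with a short simulation. The sketch below is illustrative only (the union–find implementation and all names are ours): it grows a random recursive tree, keeps each edge with probability p, and recovers the percolation clusters, whose sizes are the occupation numbers of the repeated items.

```python
import random

def percolation_clusters(n, p, rng):
    """Random recursive tree on {1,...,n} with Bernoulli(p) bond percolation.

    Edge (k, v(k)) is kept iff epsilon_k = 1; the resulting clusters are
    exactly the index sets {i <= n : U_hat_i = U_j}.
    """
    parent = list(range(n + 1))           # union-find over vertices 1..n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for k in range(2, n + 1):
        v = rng.randrange(1, k)           # uniform attachment vertex v(k)
        if rng.random() < p:              # epsilon_k = 1: keep the edge
            parent[find(k)] = find(v)

    clusters = {}
    for i in range(1, n + 1):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

clusters = percolation_clusters(5_000, p=0.5, rng=random.Random(1))
S2 = sum(len(c) ** 2 for c in clusters)   # sum of squared cluster sizes
```

The quantity `S2` computed at the end is the sum of squared cluster sizes, which is precisely the statistic analyzed next.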

The sum of the squares of the cluster sizes

$$\begin{aligned} {\mathcal {S}}^2(n)=\sum _{j\ge 1}N_j(n)^2, \qquad \text {with }N_j(n) =\#\{i\le n: {\hat{U}}_i=U_j\}, \end{aligned} \tag{1}$$

lies at the heart of the analysis of the reinforced empirical process \({\hat{G}}_n\). We shall see that its asymptotic behavior is given by

$$\begin{aligned} {\mathcal {S}}^2(n)\sim \left\{ \begin{array}{ll} n/(1-2p) &{}\text { if }p<1/2,\\ n\log n &{}\text { if }p=1/2,\\ n^{2p}R&{}\text { if }p>1/2,\\ \end{array} \right. \end{aligned} \tag{2}$$

where R is some non-degenerate random variable. A rough explanation for the phase transition in (2) is that the main contribution to \({\mathcal {S}}^2(n)\) comes from a large number of microscopic clusters in the subcritical case \(p<1/2\), and rather from a few mesoscopic clusters of size \(\approx n^p\) in the supercritical case \(p>1/2\). Even though (2) is not quite sufficient to establish Theorem 1.2, it is nonetheless a major step for its proof. More precisely, we shall rely on general results due to Kallenberg [19] on the structure of processes with exchangeable increments and explicit criteria for the weak convergence of sequences of the latter, and (2) appears as a key element in this setting.

The rest of this work is organized as follows. Section 2 is devoted to several preliminaries. We shall first present some key results due to Kallenberg on bridges with exchangeable increments and their canonical representations. We shall then recall a limit theorem for the numbers of occurrences \(N_j(n)\), which has been obtained in the framework of Bernoulli percolation on random recursive trees, as well as the fundamental result of H.A. Simon about the frequency of microscopic clusters. Last, we shall compute explicitly the average \(\mathbb {E}({\mathcal {S}}^2(n))\) using a simple recurrence identity and establish Proposition 1.1 on our way. Theorem 1.2 is then proven in Sect. 3. Finally, in Sect. 4, we discuss some connections between Theorem 1.2 and closely related results in the literature on step-reinforced random walks, including correlated Bernoulli processes and the so-called elephant random walk.

2 Preliminaries

2.1 Bridges with exchangeable increments

This section is adapted from Kallenberg [19], who uses the terminology interchangeable rather than exchangeable, and whose results are given in a more general setting. We also refer to [20] for many interesting properties of the sample paths of such processes.

Let \(B=(B(x))_{0\le x \le 1}\) be a real valued process with càdlàg sample paths, and which is continuous in probability. We say that B has exchangeable increments if for every \(n\ge 2\), the sequence of its increments \(B(k/n)-B((k-1)/n)\) for \( k=1, \ldots , n\), is exchangeable, i.e. its distribution is invariant under permutations. We further say that B is a bridge provided that \(B(0)=B(1)=0\) a.s.

According to Theorem 2.1 in [19], any bridge with exchangeable increments can be expressed in the form

$$\begin{aligned} B(x) = \sigma G(x) + \sum _{j=1}^{\infty }\beta _j({\mathbf{1}}_{U_j\le x} - x), \qquad x\in [0,1], \end{aligned} \tag{3}$$

where \(\sigma \) is a nonnegative random variable, G a Brownian bridge, \(\varvec{\beta }=(\beta _j)_{j\ge 1}\) a sequence of real random variables with \(\sum _{j=1}^{\infty } \beta _j^2<\infty \) a.s., and \(\varvec{U}=(U_j)_{j\ge 1}\) a sequence of i.i.d. uniform random variables, such that \(\sigma , G, \varvec{\beta }\) and \(\varvec{U}\) are independent. More precisely, if we further assume that the sequence \((|\beta _j|)_{j\ge 1}\) is nonincreasing, which induces no loss of generality, then the series in (3) converges a.s. uniformly on [0, 1].

One calls \((\sigma , \varvec{\beta })\) the canonical representation of B. Roughly speaking, (3) shows that the continuous component of B is a Brownian bridge with random standard deviation \(\sigma \), while \(\varvec{\beta }\) describes the sequence of the jumps of B, each occurring at an independent uniform random location on [0, 1]. The laws of \(\sigma \) and of \(\varvec{\beta }\) then entirely determine that of B.
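To illustrate the representation above, one may sample such a bridge on a regular grid, here with a deterministic, square-summable jump sequence \(\beta _j = 2^{-j}\) (our choice, purely for illustration; the name `eis_bridge` is ours). The bridge property \(B(0)=B(1)=0\) holds exactly by construction.

```python
import random, math

def eis_bridge(sigma, betas, grid_n, rng):
    """Sample a bridge with exchangeable increments on a regular grid,
    following B = sigma*G + sum_j beta_j * (1_{U_j <= x} - x)."""
    # Brownian bridge G on the grid, via a scaled random walk W(x) - x*W(1)
    steps = [rng.gauss(0.0, math.sqrt(1.0 / grid_n)) for _ in range(grid_n)]
    w = [0.0]
    for s in steps:
        w.append(w[-1] + s)
    u = [rng.random() for _ in betas]     # jump locations U_j
    bridge = []
    for k in range(grid_n + 1):
        x = k / grid_n
        g = w[k] - x * w[-1]              # bridge value of the Gaussian part
        jumps = sum(b * ((uj <= x) - x) for b, uj in zip(betas, u))
        bridge.append(sigma * g + jumps)
    return bridge

B = eis_bridge(sigma=1.0, betas=[2.0 ** (-j) for j in range(1, 20)],
               grid_n=1000, rng=random.Random(7))
```

At \(x=0\) the indicator and the Gaussian part both vanish, and at \(x=1\) each summand \(\beta _j({\mathbf{1}}_{U_j\le 1}-1)\) is zero, so the endpoints are pinned exactly.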

The next lemma plays a key role in the proof of Theorem 1.2; it states two criteria that are tailored for our purposes, for the convergence of a sequence of bridges with exchangeable increments in terms of the canonical representations. The first part is a special case of Theorem 2.3 in [19]. The second part can be seen as an immediate consequence of the first and the well-known facts that Skorokhod’s topology is metrizable and that convergence of a sequence of functions in \({\mathbb D}\) to a continuous limit is equivalent to convergence for the supremum distance (see e.g. Section VI.1 in [17]); it can also be checked by direct calculation.

Lemma 2.1

For each \(n\ge 1\), let \(B_n\) denote a bridge with exchangeable increments and canonical representation \(\sigma _n=0\) and \(\varvec{\beta }_n=(\beta _{n,j})_{j\ge 1}\).

  (i)

    Suppose that

    $$\begin{aligned} \lim _{n\rightarrow \infty } \sup _{j\ge 1} |\beta _{n,j}|=0 \text { in probability,} \end{aligned}$$

    and that

    $$\begin{aligned} \sum _{j=1}^{\infty } \beta _{n,j}^2 \ \Longrightarrow \ \sigma ^2 \end{aligned}$$

    for some random variable \(\sigma \ge 0\). Then there is the convergence in distribution

    $$\begin{aligned} B_n \ \Longrightarrow \ \sigma G, \end{aligned}$$

    where G is a standard Brownian bridge independent of \(\sigma \).

  (ii)

    Suppose that

    $$\begin{aligned} \lim _{n\rightarrow \infty } \sum _{j=1}^{\infty } \beta _{n,j}^2=0\quad \text {in probability}. \end{aligned}$$

    Then

    $$\begin{aligned} \lim _{n\rightarrow \infty } \sup _{0\le x \le 1}|B_n(x)| = 0 \quad \text {in probability}. \end{aligned}$$

2.2 Asymptotic behavior of occurrence numbers

Recall that the reinforcement parameter \(p\in (0,1)\) in Simon’s algorithm is fixed; for the sake of simplicity, it will be omitted from several notations even though it always plays an important role.

Recall from (1) that for every \(j\in \mathbb {N}\), \(N_j(n)\) denotes the number of occurrences of the variable \(U_j\) up to the n-th step of the algorithm. Plainly \(N_j(n)=0\) if and only if the number of innovations up to the n-th step is less than j, i.e. \({\mathrm i}(n)<j\).

The starting point of our analysis is that the reinforced empirical process can be expressed in the form

$$\begin{aligned} {\hat{G}}_n(x)= \frac{1}{\sqrt{n}} \sum _{j=1}^{\infty } N_j(n) ({\mathbf{1}}_{U_j\le x}-x), \quad x\in [0,1]. \end{aligned} \tag{4}$$

Hence \({\hat{G}}_n\) is a bridge with exchangeable increments, with canonical representation 0 and \(\varvec{\beta }_n=(\beta _{n,j})_{j\ge 1}\), where \(\beta _{n,j}=N_j(n)/\sqrt{n}\). We aim to determine its asymptotic behavior as \(n\rightarrow \infty \) by applying Lemma 2.1. In this direction, the interpretation of Simon’s algorithm as a Bernoulli bond percolation on a random recursive tree, as sketched in the Introduction, enables us to lift from [1] the following result about the asymptotic behavior of mesoscopic clusters.
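The identity expressing \({\hat{G}}_n\) through the occupation numbers amounts to grouping the summands of \({\hat{G}}_n\) according to which innovation they repeat. The sketch below is our own check (helper names are ours): it evaluates both expressions on a simulated run and confirms they agree.

```python
import random, math

def reinforced_empirical(n, p, rng):
    """Return G_hat_n evaluated directly over the reinforced sequence and
    through the occupation numbers N_j(n); the two must coincide."""
    labels = []                            # j such that U_hat_i = U_j
    u = []                                 # the innovations U_1, U_2, ...
    for k in range(1, n + 1):
        if k >= 2 and rng.random() < p:    # repetition
            labels.append(labels[rng.randrange(k - 1)])
        else:                              # innovation
            u.append(rng.random())
            labels.append(len(u) - 1)
    counts = [0] * len(u)                  # N_j(n)
    for j in labels:
        counts[j] += 1

    def direct(x):
        # sum over steps i of (1_{U_hat_i <= x} - x); bool arithmetic
        return sum((u[j] <= x) - x for j in labels) / math.sqrt(n)

    def via_counts(x):
        # same sum, grouped by innovation: sum_j N_j(n) (1_{U_j <= x} - x)
        return sum(c * ((uj <= x) - x) for c, uj in zip(counts, u)) / math.sqrt(n)

    return direct, via_counts

direct, via_counts = reinforced_empirical(2_000, p=0.7, rng=random.Random(3))
```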

Lemma 2.2

The limit

$$\begin{aligned} \lim _{n\rightarrow \infty } n^{-p} N_j(n) =: X^{(p)}_j \end{aligned} \tag{5}$$

exists a.s. for every \(j\ge 1\). For \(p>1/2\), there is furthermore the identity

$$\begin{aligned} \mathbb {E}\left( \sum _{j=1}^{\infty } (X^{(p)}_j)^2\right) = \frac{1}{(2p-1)\Gamma (2p)}. \end{aligned}$$


Proof

Simon’s algorithm induces a natural partition \(\mathbb {N}=\bigsqcup _{j\ge 1} \Pi _j\) of the set of positive integers into blocks \(\Pi _j=\{k\in \mathbb {N}: {\hat{U}}_k=U_j\}\), which we can see as the result of a Bernoulli bond percolation with parameter p on the (infinite) random recursive tree. In this setting, we have \(N_j(n)=\#(\Pi _j\cap \{1, \ldots , n\})\), and the first claim of the statement has been observed in Section 3.2 of [1], right after the proof of Lemma 3.3 there. Moreover, Equation (3.4) there shows that for every \(q>1/p\), there is the identity

$$\begin{aligned} \mathbb {E}\left( \sum _{j=1}^{\infty } (X^{(p)}_j)^q\right) = \frac{\Gamma (q)}{\Gamma (pq)} + \frac{q(1-p)\Gamma (q)}{(pq-1)\Gamma (pq)}. \end{aligned}$$

Specializing this for \(q=2\) yields our second claim. \(\square \)

We write \({\mathbf{X}}^{(p)}=(X^{(p)}_j)_{j\ge 1}\), where the \(X^{(p)}_j\) are defined by (5). It is known that \(X^{(p)}_1\) has the Mittag-Leffler distribution with parameter p (see Theorem 3.1 in [1] and also [24]); nonetheless the law of the whole sequence \({\mathbf{X}}^{(p)}\) does not seem to have any simple expression (see Proposition 3.7 in [1]).

Remark 2.3

Denote by \(J^*={\mathrm {argmax}}\, {\mathbf{X}}^{(p)}\) the index at which the sequence \((X^{(p)}_j)_{j\ge 1}\) is maximal (it follows from Section 3.2 of [1] that \(J^*<\infty \) and is unique a.s.). We deduce from (5) that for every \(k\ge 1\), the probability that \(U_{J^*}\) is the most repeated quantity in the sequence \({\hat{U}}_k, {\hat{U}}_{k+1}, \ldots , {\hat{U}}_{k+n}\) tends to 1 as \(n\rightarrow \infty \). Hence the variable \(U_{J^*}\) is measurable with respect to the tail sigma-algebra

$$\begin{aligned} \hat{\mathcal {G}}=\cap _{k\ge 1}\sigma ({\hat{U}}_n: n\ge k). \end{aligned}$$

On the other hand, it is readily checked that \(\mathbb {P}(J^*=1)>0\); as a consequence, \(\hat{\mathcal {G}}\) and \(U_1\) are not independent and the mixing property does not hold for the reinforced sequence \(({\hat{U}}_n)_{n\ge 1}\).

When \(p>1/2\), Lemma 2.2 enables us to view \({\mathbf{X}}^{(p)}\) as a random variable with values in the space \(\ell ^2(\mathbb {N})\) of square summable series, and this leads us to the following definition of the process \(B^{(p)}\) that appears as a limit in Theorem 1.2(iii).

Definition 2.4

For \(p>1/2\), we define \(B^{(p)}=(B^{(p)}(x))_{0\le x \le 1}\) as the bridge with exchangeable increments with canonical representation 0 and \({\mathbf{X}}^{(p)}\). That is

$$\begin{aligned} B^{(p)}(x) = \sum _{j=1}^{\infty }X^{(p)}_j({\mathbf{1}}_{U_j\le x} - x), \qquad x\in [0,1], \end{aligned}$$

where \({\mathbf{U}}=(U_j)_{j\ge 1}\) is a sequence of i.i.d. uniform variables, independent of \({\mathbf{X}}^{(p)}\).

We conclude this section by recalling the key result of Simon [28] about the asymptotic frequency of microscopic percolation clusters. Note that the number of innovations up to the n-th step is approximately \((1-p)n\) for \(n\gg 1\).

Lemma 2.5

For each \(k\ge 1\), write

$$\begin{aligned} C_k(n) := \# \{j\ge 1: N_j(n)=k\}, \end{aligned}$$

for the number of variables \(U_j\) which have occurred exactly k times at the n-th step of Simon’s algorithm. Then

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{C_k(n)}{(1-p)n} = \frac{1}{p} {\mathrm {B}}(k, 1+1/p) \qquad \text {in probability for every }k\in \mathbb {N}, \end{aligned} \tag{6}$$

where \({\mathrm {B}}\) denotes the beta function.

The right-hand side of (6) is a probability measure on \(\mathbb {N}\) which is known as the Yule-Simon distribution with parameter 1/p. Actually, it is only proved in [28] that

$$\begin{aligned} \mathbb {E}(C_k(n))\sim \frac{1-p}{p} {\mathrm {B}}(k, 1+1/p)n \qquad \text { as } n\rightarrow \infty ; \end{aligned}$$

nonetheless the stronger statement (6) is known to hold; see e.g. Section 3.1 and more specifically Equation (3.10) in [25].
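As a sanity check on the limit law, one can verify numerically that the Yule–Simon probabilities \(\rho \,{\mathrm B}(k, \rho +1)\) with \(\rho =1/p\) sum to 1, and that for \(p=1/2\) (so \(\rho =2\)) the mean equals \(\rho /(\rho -1)=2\). A minimal sketch (ours), using log-gamma for numerical stability:

```python
import math

def yule_simon_pmf(k, rho):
    """Yule-Simon mass function rho * B(k, rho + 1), via lgamma."""
    return rho * math.exp(math.lgamma(k) + math.lgamma(rho + 1)
                          - math.lgamma(k + rho + 1))

rho = 2.0                                  # corresponds to p = 1/2
total = sum(yule_simon_pmf(k, rho) for k in range(1, 100_001))
mean = sum(k * yule_simon_pmf(k, rho) for k in range(1, 100_001))
```

For \(\rho =2\) the mass function reduces to \(4/(k(k+1)(k+2))\), whose partial sums telescope, so both checks can also be verified by hand.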

2.3 A first moment calculation

Recall that we want to apply Lemma 2.1 to investigate the asymptotic behavior of reinforced empirical processes. In this direction, (4) leads us to introduce, for every \(n\ge 1\),

$$\begin{aligned} {\mathcal {S}}^2(n)=\sum _{j=1}^{\infty } N_j(n)^2. \end{aligned}$$

The proof of Theorem 1.2 will use the following explicit calculation for the expectation of this quantity, which already points in the same direction as (2).

Lemma 2.6

For every \(n\ge 1\), we have

$$\begin{aligned} \mathbb {E}({\mathcal {S}}^2(n)) =\frac{\Gamma (n+2p)}{\Gamma (n)}\sum _{i=1}^n \frac{\Gamma (i)}{\Gamma (i+2p)} . \end{aligned}$$

As a consequence, we have as \(n\rightarrow \infty \) that

$$\begin{aligned} \mathbb {E}({\mathcal {S}}^2(n))\sim \left\{ \begin{array}{ll}n/(1-2p) &{}\text { if }p<1/2,\\ n\log n &{}\text { if }p=1/2,\\ ((2p-1)\Gamma (2p))^{-1} n^{2p} &{}\text { if }p>1/2.\\ \end{array} \right. \end{aligned} \tag{7}$$


Proof

Write \({\mathcal {F}}_n\) for the sigma-field generated by \(((\varepsilon _i,v(i)): 2\le i \le n)\). Plainly, \(\sum _{j\ge 1} N_j(n)=n\), and we see from the very definition of Simon’s algorithm that

$$\begin{aligned} \mathbb {E}({\mathcal {S}}^2(n+1)\mid {\mathcal {F}}_n)&= {\mathcal {S}}^2(n) + p\left( \frac{1}{n} \sum _{j\ge 1} N_j(n)(2N_j(n)+1) \right) + (1-p)\\&=(1+2p/n){\mathcal {S}}^2(n) + 1. \end{aligned}$$
Indeed, at the \((n+1)\)-th step, the j-th item is repeated with probability \(pN_j(n)/n\), in which case \({\mathcal {S}}^2(n)\) increases by \(2N_j(n)+1\), whereas an innovation occurs with probability \(1-p\) and increases \({\mathcal {S}}^2(n)\) by 1; the second line then follows from the identity \(\sum _{j\ge 1} N_j(n)=n\).

This yields the recurrence equation for the first moments

$$\begin{aligned} \mathbb {E}({\mathcal {S}}^2(n+1))=(1+2p/n)\mathbb {E}({\mathcal {S}}^2(n))+1. \end{aligned}$$

To solve the latter, we set \(a(n)=\Gamma (n+2p)/\Gamma (n)\), so that \(a(n+1)/a(n)= 1+2p/n\), and then

$$\begin{aligned} \mathbb {E}({\mathcal {S}}^2(n+1))/a(n+1)=\mathbb {E}({\mathcal {S}}^2(n))/a(n)+1/a(n+1). \end{aligned}$$

Since \({\mathcal {S}}^2(1)=1\), we arrive at

$$\begin{aligned} \mathbb {E}({\mathcal {S}}^2(n))= a(n)\sum _{i=1}^{n}\frac{1}{a(i)}, \end{aligned}$$

which is the identity of our claim.

In turn, the estimates as \(n\rightarrow \infty \) in the statement follow immediately from the facts that \(\Gamma (n+2p)/\Gamma (n) \sim n^{2p}\), that \(\Gamma (i)/\Gamma (i+2p)\sim i^{-2p}\) as \(i\rightarrow \infty \), and that when \(p>1/2\), one has

$$\begin{aligned} \sum _{i=1}^{\infty } \frac{\Gamma (i)}{\Gamma (i+2p)}&= \frac{1}{\Gamma (2p)} \sum _{i=1}^{\infty } {\mathrm B}(i,2p)\nonumber \\&= \frac{1}{\Gamma (2p)}\int _0^1 \left( \sum _{i=1}^{\infty } x^{i-1}\right) (1-x)^{2p-1} \hbox {d}x \nonumber \\&= \frac{1}{\Gamma (2p)}\int _0^1 (1-x)^{2p-2}\hbox {d}x \nonumber \\&= \frac{1}{(2p-1)\Gamma (2p)}. \end{aligned}$$

The proof is now complete. \(\square \)
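The closed form just established can be cross-checked numerically against the recurrence it solves, \(\mathbb {E}({\mathcal {S}}^2(n+1))=(1+2p/n)\mathbb {E}({\mathcal {S}}^2(n))+1\). A minimal sketch (function names are ours):

```python
import math

def expected_S2_closed(n, p):
    """Closed form Gamma(n+2p)/Gamma(n) * sum_{i=1}^n Gamma(i)/Gamma(i+2p),
    evaluated through lgamma to avoid overflow for moderate n."""
    a = lambda m: math.exp(math.lgamma(m + 2 * p) - math.lgamma(m))
    return a(n) * sum(1.0 / a(i) for i in range(1, n + 1))

def expected_S2_recursive(n, p):
    """Iterate the recurrence m(k+1) = (1 + 2p/k) m(k) + 1 from m(1) = 1."""
    m = 1.0
    for k in range(1, n):
        m = (1 + 2 * p / k) * m + 1
    return m
```

Both functions agree (up to floating-point error) for all n and all reinforcement parameters, across the three regimes.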

As a first application, we establish the reinforced version of the Glivenko–Cantelli theorem.

Proof of Proposition 1.1

The proof is classically reduced to establishing the following reinforced version of the strong law of large numbers,

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n} \sum _{i=1}^n {\mathbf{1}}_{{\hat{U}}_i\le x}=x \qquad \text { a.s. for each }x\in [0,1]. \end{aligned} \tag{8}$$

Indeed, the almost sure convergence in (8) holds simultaneously for all dyadic rational numbers, and uniform convergence on [0, 1] can then be derived by a monotonicity argument à la Dini.

So fix \(x\in [0,1]\) and set

$$\begin{aligned} \Sigma (n)= \sum _{i=1}^n {\mathbf{1}}_{{\hat{U}}_i\le x}= \sum _{j=1}^{\infty } N_j(n){\mathbf{1}}_{U_j\le x} . \end{aligned}$$

Clearly, \(\mathbb {E}(\Sigma (n))=nx\), and, by conditioning on \({\mathcal {F}}_n\), we get

$$\begin{aligned} {\mathrm {Var}}(\Sigma (n)) = (x-x^2)\mathbb {E}({\mathcal {S}}^2(n)). \end{aligned}$$

From Lemma 2.6 and Chebyshev’s inequality, we now see that we can choose \(r>1\) sufficiently large such that

$$\begin{aligned} \sum _{k=1}^{\infty } \mathbb {P}(|\Sigma (k^r)-k^rx|>k^{r-1})<\infty . \end{aligned}$$

One concludes from the Borel-Cantelli lemma that (8) holds along the subsequence \(n=k^r\), and the general case follows by another argument of monotonicity. \(\square \)

3 Proof of Theorem 1.2

As its title indicates, the purpose of this section is to establish Theorem 1.2 in each of the three regimes.

3.1 Subcritical regime \(p<1/2\)

Throughout this section, we assume that the reinforcement parameter satisfies \(p<1/2\). Our approach in the subcritical regime relies on the following strengthening of Lemma 2.5 (recall the notation there).

Lemma 3.1

Define for every \(i\ge 1\)

$$\begin{aligned} c_i^{(p)}= \frac{(1-p) }{p} {\mathrm {B}}(i, 1+1/p). \end{aligned}$$

Then we have

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbb {E}\left( \sum _{i=1}^{\infty } i^2 \left| \frac{C_i(n)}{n} -c_i^{(p)}\right| \right) =0. \end{aligned}$$


Proof

For each \(n=1,2, \ldots \), write \({\mathbf{C}}(n)= (C_i(n))_{i\ge 1}\) and view \({\mathbf{C}}(n)\) as a function on the space \(\Omega \times \mathbb {N}\) endowed with the product measure \(\mathbb {P}\otimes \#^2\), where \(\#^2\) stands for the measure on \(\mathbb {N}\) which assigns mass \(i^2\) to every \(i\in \mathbb {N}\). Consider an arbitrary subsequence extracted from \(({\mathbf{C}}(n))_{n\ge 1}\). From Lemma 2.5 and a diagonal-extraction argument, we can construct a further subsequence, say indexed by \(\ell (n)\) for \(n=1,2, \ldots \), such that

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{{\mathbf{C}}(\ell (n))}{\ell (n)} = {\mathbf{c}}^{(p)} \qquad (\mathbb {P}\otimes \#^2) \text {-almost everywhere,} \end{aligned} \tag{9}$$

where \({\mathbf{c}}^{(p)}=(c_i^{(p)})_{i\ge 1}\).

On the one hand, we observe that

$$\begin{aligned}\sum _{i=1}^{\infty } i^2{\mathrm {B}}(i, 1+1/p)&=\int _0^1 \left( \sum _{i=1}^{\infty } i^2 x^{i-1}\right) (1-x)^{1/p}\hbox {d}x\\&=\int _0^1(1+x)(1-x)^{-3+1/p}\hbox {d}x\\&=\frac{p}{(1-p)(1-2p)}, \end{aligned}$$

so that

$$\begin{aligned} \sum _{i=1}^{\infty } i^2 c_i^{(p)}= \frac{1}{1-2p}. \end{aligned} \tag{10}$$

On the other hand, we note the basic identity

$$\begin{aligned} {\mathcal {S}}^2(n) = \sum _{j=1}^{\infty } N_j(n)^2= \sum _{i=1}^{\infty } i^2 C_i(n). \end{aligned} \tag{11}$$

Since \(\Gamma (n+2p)/\Gamma (n)\sim n^{2p}\) and \(2p<1\), we see from Lemma 2.6 and (11) that

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbb {E}\left( \sum _{i=1}^{\infty } i^2 \frac{C_i(n)}{n}\right) = \frac{1}{1-2p}. \end{aligned}$$

Thanks to (10), we deduce from the Vitali-Scheffé theorem (see e.g. Theorem 2.8.9 in [9]) that the convergence (9) also holds in \(L^1(\mathbb {P}\otimes \#^2)\), that is

$$\begin{aligned}\lim _{n\rightarrow \infty } \mathbb {E}\left( \sum _{i=1}^{\infty } i^2 \left| \frac{C_i(\ell (n))}{\ell (n)} -c_i^{(p)}\right| \right) =0.\end{aligned}$$

Since the convergence above holds for any (initial) subsequence, our claim is proven. \(\square \)
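The series computation in the proof can be checked numerically for subcritical p: partial sums of \(\sum _i i^2 {\mathrm B}(i, 1+1/p)\) should approach \(p/((1-p)(1-2p))\). A small sketch (ours); convergence is fast since the terms decay like \(i^{1-1/p}\):

```python
import math

def beta_fn(a, b):
    """Beta function via lgamma, stable for large first argument."""
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

def second_moment_series(p, terms):
    """Partial sum of sum_i i^2 * B(i, 1 + 1/p); for p < 1/2 it should
    approach p / ((1 - p)(1 - 2p))."""
    return sum(i * i * beta_fn(i, 1 + 1 / p) for i in range(1, terms + 1))

approx = second_moment_series(0.3, 200_000)   # target: 0.3 / (0.7 * 0.4)
```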

We next point out the following consequence of Lemma 3.1.

Corollary 3.2

We have

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{{\mathcal {S}}^2(n)}{n}=\frac{1}{1-2p} \qquad \text {in }L^1(\mathbb {P}), \end{aligned}$$

and

$$\begin{aligned} \lim _{n\rightarrow \infty } \sup _{j\ge 1} \frac{N_j(n)}{\sqrt{n}} =0\qquad \text {in probability}. \end{aligned}$$


Proof

Observe from (10), (11), and the triangle inequality that

$$\begin{aligned} \mathbb {E}\left( \left| \frac{{\mathcal {S}}^2(n)}{n}-\frac{1}{1-2p}\right| \right)&= \mathbb {E}\left( \left| \sum _{i=1}^{\infty } i^2 \frac{C_i(n)}{n} -\sum _{i=1}^{\infty } i^2 c_i^{(p)}\right| \right) \\&\le \mathbb {E}\left( \sum _{i=1}^{\infty } i^2 \left| \frac{C_i(n)}{n} -c_i^{(p)}\right| \right) . \end{aligned}$$

Our first assertion thus follows from Lemma 3.1.

For the second assertion, observe that

$$\begin{aligned} \sup _{j\ge 1} N_j(n)=\sup \{i\ge 1: C_i(n)\ge 1\}. \end{aligned}$$

We then have for every \(\eta >0\) arbitrarily small

$$\begin{aligned} \mathbb {P}\left( \sup _{j\ge 1}N_j(n)>\eta \sqrt{n} \right)&=\mathbb {P}(\exists i\ge \eta \sqrt{n}: C_i(n)\ge 1)\\&\le \frac{1}{\eta ^2 n} \mathbb {E}\left( \sum _{i\ge \eta \sqrt{n}} i^2 C_i(n)\right) . \end{aligned}$$

It follows from Lemma 3.1 that the right-hand side converges to 0 as \(n\rightarrow \infty \), and the proof is now complete. \(\square \)

Theorem 1.2(i) now follows immediately from (4), Lemma 2.1(i), and Corollary 3.2, by setting \(\beta _{n,j}=N_j(n)/\sqrt{n}\) for every \(j\ge 1\).

3.2 Critical regime \(p=1/2\)

Throughout this section, we assume that the reinforcement parameter is \(p=1/2\). Recall from Lemma 2.6 that \(\mathbb {E}\left( {\mathcal {S}}^2(n) \right) \sim n \log n\) as \(n\rightarrow \infty \). We now establish a stronger version of this estimate.

Lemma 3.3

One has

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{{\mathcal {S}}^2(n)}{n\log n} =1\qquad \text {almost surely}. \end{aligned}$$


Proof

It has been observed in [7] that, in the study of the reinforcement induced by Simon’s algorithm, it may be convenient to perform a time substitution based on a Yule process. We shall use this idea again here, and introduce a standard Yule process \(Y=(Y_t)_{t\ge 0}\), which we further assume to be independent of the preceding variables. Recall that Y is a pure birth process in continuous time started from \(Y_0=1\) and with birth rate n from any state \(n\ge 1\); in particular, for every function \(f: \mathbb {N}\rightarrow \mathbb {R}\), say such that \(f(n)=O(n^r)\) for some \(r >0\), the process

$$\begin{aligned} f(Y_t)-\int _0^t \left( f(Y_s+1)-f(Y_s)\right) Y_s \hbox {d}s \end{aligned}$$

is a martingale.
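The Yule process lends itself to direct simulation via its exponential holding times. As a sketch (our own illustration), one can check the martingale property in the special case \(f(n)=n\), namely \(\mathbb {E}({\mathrm e}^{-t}Y_t)=1\), together with the geometric law of \(Y_t\) through \(\mathbb {P}(Y_t=1)={\mathrm e}^{-t}\):

```python
import random, math

def yule_at(t, rng):
    """State of a standard Yule process at time t: from state n, wait an
    Exponential(rate n) holding time before the jump to n + 1."""
    n, clock = 1, 0.0
    while True:
        clock += rng.expovariate(n)
        if clock > t:
            return n
        n += 1

rng = random.Random(5)
t = 1.0
samples = [yule_at(t, rng) for _ in range(20_000)]
mean_scaled = math.exp(-t) * sum(samples) / len(samples)  # near E(e^{-t} Y_t) = 1
frac_one = samples.count(1) / len(samples)                # near exp(-t)
```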

Consider the time changed process \({\mathcal {S}}^2\circ Y\). Applying the observation above to \(f= {\mathcal {S}}^2\) and then projecting on the natural filtration of \({\mathcal {S}}^2\circ Y\), the same calculation as in the proof of Lemma 2.6 shows that

$$\begin{aligned}&{\mathcal {S}}^2(Y_t) - \int _0^t \left( \frac{1}{2} + \sum _{i=1}^{\infty } \frac{N_i(Y_s)}{2Y_s} (2N_i(Y_s)+1) \right) Y_s \hbox {d}s \\&= {\mathcal {S}}^2(Y_t) -\int _0^t({\mathcal {S}}^2(Y_s)+Y_s)\hbox {d}s \end{aligned}$$

is a martingale. By elementary stochastic calculus, the same holds for

$$\begin{aligned} M_t= {\mathrm e}^{-t} {\mathcal {S}}^2(Y_t) -\int _0^t {\mathrm e}^{-s}Y_s \hbox {d}s. \end{aligned}$$

We shall now show that M is bounded in \(L^2(\mathbb {P})\) by checking that its quadratic variation \([M]_{\infty }\) has a finite expectation. Plainly, M is purely discontinuous; its jumps arise either from an innovation event (whose instantaneous rate at time t equals \(\frac{1}{2}Y_{t-}\)), in which case \(\Delta M_t=M_t-M_{t-}={\mathrm e}^{-t}\), or from a repetition of the j-th item for some \(j\ge 1\) (whose instantaneous rate at time t equals \(\frac{1}{2}N_j(Y_{t-})\)), in which case \(\Delta M_t={\mathrm e}^{-t}(2N_j(Y_{t-})+1)\). We thus find by a standard compensation calculation that

$$\begin{aligned}\mathbb {E}([M]_{\infty })&=\mathbb {E}\left( \sum _{t>0} (\Delta M_t)^2\right) \\&= \mathbb {E}\left( \int _0^{\infty } {\mathrm e}^{-2t} \left( \frac{1}{2} Y_t + \frac{1}{2}\sum _{j\ge 1} N_j(Y_t) (2N_j(Y_t)+1)^2\right) \hbox {d}t \right) \\&= \int _0^{\infty } \mathbb {E}\left( Y_t + 2 \sum _{j\ge 1} (N_j(Y_t)^3 + N_j(Y_t)^2) \right) {\mathrm e}^{-2t} \hbox {d}t. \end{aligned}$$

First, recall that \(Y_t\) has the geometric distribution with parameter \({\mathrm e}^{-t}\), in particular \(\int _0^{\infty } \mathbb {E}( Y_t ){\mathrm e}^{-2t}\hbox {d}t = 1\). Second, \(\sum _{j\ge 1} N_j(Y_t)^2 = {\mathcal {S}}^2(Y_t)\), and since \(\mathbb {E}({\mathcal {S}}^2(n))\sim n\log n\) (see Lemma 2.6) and the processes \({\mathcal {S}}^2\) and Y are independent, we have also

$$\begin{aligned} \int _0^{\infty } \mathbb {E}\left( \sum _{j\ge 1} N_j(Y_t)^2\right) {\mathrm e}^{-2t} \hbox {d}t <\infty . \end{aligned}$$

Third, consider \(T(Y_t)=\sum _{j\ge 1} N_j(Y_t)^3\). By calculations similar to those for \(M_t\), one sees that the process

$$\begin{aligned} {\mathrm e}^{-3t/2} T(Y_t) - \int _0^t {\mathrm e}^{-3s/2} \left( Y_s + \tfrac{3}{2}{\mathcal {S}}^2(Y_s)\right)  \hbox {d}s, \qquad t\ge 0 \end{aligned}$$

is a local martingale. Just as above, one readily checks that

$$\begin{aligned} \int _0^{\infty } {\mathrm e}^{-3s/2} \mathbb {E}\left( Y_s + \tfrac{3}{2}{\mathcal {S}}^2(Y_s)\right)  \hbox {d}s<\infty , \end{aligned}$$

and hence \(\mathbb {E}(T(Y_t))=O({\mathrm e}^{3t/2})\). As a consequence,

$$\begin{aligned} \int _0^{\infty } \mathbb {E}\left( \sum _{j\ge 1} N_j(Y_t)^3\right) {\mathrm e}^{-2t} \hbox {d}t <\infty , \end{aligned}$$

and putting the pieces together, we have checked that \(\mathbb {E}([M]_{\infty })<\infty \).

We now know that \(\lim _{t\rightarrow \infty } M_t=M_{\infty }\) a.s. and in \(L^2(\mathbb {P})\), and recall the classical feature that \(\lim _{t\rightarrow \infty } {\mathrm e}^{-t}Y_t =W\) a.s., where W has the standard exponential distribution. In particular \(\int _0^t {\mathrm e}^{-s} Y_s \hbox {d}s \sim tW\) as \(t\rightarrow \infty \), so that

$$\begin{aligned} {\mathcal {S}}^2(Y_t) = t{\mathrm e}^{t} W + O({\mathrm e}^t),\qquad \text {a.s.} \end{aligned}$$

Using again \(Y_t= {\mathrm e}^t W+ o({\mathrm e}^t)\), we conclude that \({\mathcal {S}}^2(n) = n\log n + O(n)\) a.s., which implies our claim. \(\square \)

Remark 3.4

The first part of Corollary 3.2 and Lemma 3.3 seem to be of the same nature. Actually, one can also establish the former by adapting the proof of the latter, thereby circumventing the appeal to Lemma 2.5. There is nonetheless a fundamental difference between these two results: while the microscopic clusters (i.e. those of size O(1)) determine the asymptotic behavior of \({\mathcal {S}}^2(n)\) in the subcritical case, they have no impact in the critical case, as is seen from Lemma 2.5.

Thanks to Lemmas 2.1(ii) and 3.3, the following statement is the final piece of the proof of Theorem 1.2(ii).

Lemma 3.5

One has

$$\begin{aligned} \lim _{n\rightarrow \infty } \sup _{j\ge 1} \frac{N_j(n)}{\sqrt{n\log n}}=0\qquad \text {in probability}.\end{aligned}$$


Proof

We shall show that there is some numerical constant b such that

$$\begin{aligned} \mathbb {E}(N_j(n)^3)\le b(n/j)^{3/2} \qquad \text {for all }j,n\ge 1. \end{aligned} \tag{12}$$

Then, by Markov’s inequality, we have that for any \(\eta >0\)

$$\begin{aligned} \mathbb {P}\left( N_j(n)>\sqrt{\eta n\log n}\right) \le b(\eta n\log n)^{-3/2} (n/j)^{3/2}, \end{aligned}$$

and by the union bound

$$\begin{aligned} \mathbb {P}\left( \exists j\le n: N_j(n)>\sqrt{\eta n\log n}\right) \le b(\eta \log n)^{-3/2}\sum _{j\ge 1}j^{-3/2}, \end{aligned}$$

which proves our claim.
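To get a feel for the rate of decay in this union bound, one can evaluate its right-hand side numerically. The sketch below is purely illustrative: it takes \(b=\eta =1\) for concreteness (placeholder values, not the actual constants of the proof) and truncates the convergent series \(\sum _{j\ge 1}j^{-3/2}\).

```python
import math

# truncation of sum_{j >= 1} j^(-3/2) = zeta(3/2) ~ 2.612
ZETA_32 = sum(j ** -1.5 for j in range(1, 200_000))

def union_bound(n, b=1.0, eta=1.0):
    # b * (eta * log n)^(-3/2) * zeta(3/2): bound on the probability that
    # some cluster exceeds sqrt(eta * n * log n) after n steps
    return b * (eta * math.log(n)) ** -1.5 * ZETA_32

# the bound tends to 0 as n grows, but only logarithmically in n
bounds = [union_bound(n) for n in (10**2, 10**4, 10**8, 10**16)]
assert all(x > y for x, y in zip(bounds, bounds[1:]))
```

The logarithmic rate of decay is consistent with the fact that Lemma 3.5 asserts convergence in probability only, rather than almost sure convergence.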

For \(i=1,2,3\), set \(a_i(n)=\Gamma (n+i/2)/\Gamma (n)\), so \(a_i(n) \sim n^{i/2}\) and actually \(a_2(n)= n\). Recall that \({\mathrm i}(n)\) denotes the number of innovations up to the n-th step of Simon’s algorithm. Take any \(j\ge 1\) and, just as in the proof of Lemma 2.6, observe that on the event \({\mathrm i}(n)\ge j\), one has

$$\begin{aligned} \mathbb {E}\left( \frac{N_{j}(n+1)}{ a_1(n+1)} \mid {\mathcal {F}}_n\right)&= \frac{N_{j}(n)}{ a_1(n)},\\ \mathbb {E}\left( \frac{N_{j}(n+1)^2}{ a_2(n+1) }\mid {\mathcal {F}}_n\right)&= \frac{N_{j}(n)^2}{ a_2(n)} +\frac{N_j(n)}{2na_2(n+1)},\\ \mathbb {E}\left( \frac{N_{j}(n+1)^3}{ a_3(n+1)} \mid {\mathcal {F}}_n\right)&= \frac{N_{j}(n)^3}{ a_3(n)} + \frac{3N_{j}(n)^2}{ 2na_3(n+1)}+ \frac{N_{j}(n)}{ 2na_3(n+1)}.\\ \end{aligned}$$

The trivial bound \({\mathrm i}(j)\le j\) then yields for any \(n\ge j\)

$$\begin{aligned}\mathbb {E}(N_j(n))\le a_1(n)/a_1(j) \le b_1\sqrt{n/j}.\end{aligned}$$

Then we have

$$\begin{aligned}\mathbb {E}(N_j(n)^2)\le \frac{a_2(n)}{a_2(j)} + \frac{b_1}{2na_2(n)} \sum _{k=j}^n \sqrt{k/j}\le b_2 n/j,\end{aligned}$$

and finally also

$$\begin{aligned}\mathbb {E}(N_j(n)^3)\le \frac{a_3(n)}{a_3(j)} + \frac{3b_2}{2na_3(n)} \sum _{k=j}^n k/j+ \frac{b_1}{2na_3(n)} \sum _{k=j}^n \sqrt{k/j}\le b_3(n/j)^{3/2}, \end{aligned}$$

where \(b_1, b_2\) and \(b_3\) are numerical constants. This establishes (12) and completes the proof. \(\square \)
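The Gamma ratios \(a_i(n)\) used above are straightforward to check numerically. The following sketch (an illustration, not part of the proof; the function name `a` is ours) confirms that \(a_2(n)=n\) exactly, that \(a_i(n)\sim n^{i/2}\), and the normalization \(a_1(n+1)/a_1(n)=1+1/(2n)\) behind the first martingale identity.

```python
import math

def a(i, n):
    # a_i(n) = Gamma(n + i/2) / Gamma(n), via log-gamma for numerical stability
    return math.exp(math.lgamma(n + i / 2) - math.lgamma(n))

# a_2(n) = Gamma(n+1)/Gamma(n) = n exactly
assert abs(a(2, 50) - 50) < 1e-6

# a_i(n) ~ n^{i/2} for large n
n = 10**6
for i in (1, 2, 3):
    assert abs(a(i, n) / n ** (i / 2) - 1) < 1e-3

# the normalization behind the first martingale identity:
# a_1(n+1) / a_1(n) = (n + 1/2) / n = 1 + 1/(2n)
assert abs(a(1, 101) / a(1, 100) - (1 + 1 / 200)) < 1e-9
```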

3.3 Supercritical regime \(p>1/2\)

Throughout this section, we assume that the reinforcement parameter satisfies \(p>1/2\). We first point out the following strengthening of Lemma 2.2 (in particular, recall the notation (5) there).

Corollary 3.6

We have

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbb {E}\left( \sum _{j=1}^{\infty } \left| \frac{ N_j(n)}{n^p} - X^{(p)}_j\right| ^2\right) =0. \end{aligned}$$

This result has already been observed by Businger; see Equation (6) in [10]. For the sake of completeness, we present here an alternative and shorter proof along the same lines as for Lemma 3.1.


Proof

We view \({\mathbf{X}}^{(p)}=(X^{(p)}_j)_{j\ge 1}\) and \({\mathbf{N}}(n)=(N_j(n))_{j\ge 1}\) for each \(n\ge 1\) as functions on the space \(\Omega \times \mathbb {N}\) endowed with the product measure \(\mathbb {P}\otimes \#\), where \( \#\) denotes the counting measure on \(\mathbb {N}\). Since we already know from Lemma 2.2 that \(n^{-p} {\mathbf{N}}(n)\) converges as \(n\rightarrow \infty \) to \({\mathbf{X}}^{(p)}\) almost everywhere, in order to establish our claim, it suffices to verify that

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbb {E}\left( \sum _{j=1}^{\infty } \frac{ N_j(n)^2}{n^{2p}} \right) = \mathbb {E}\left( \sum _{j=1}^{\infty } (X^{(p)}_j)^2 \right) ; \end{aligned}$$

see e.g. Proposition 4.7.30 in [9].

Recall from Lemma 2.6 that

$$\begin{aligned} \mathbb {E}\left( \sum _{j=1}^{\infty } \frac{ N_j(n)^2}{n^{2p}} \right) = \frac{\Gamma (n+2p)}{n^{2p}\Gamma (n)}\sum _{i=1}^n \frac{\Gamma (i)}{\Gamma (i+2p)} . \end{aligned}$$

On the one hand, we know that

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{\Gamma (n+2p)}{n^{2p}\Gamma (n)}=1, \end{aligned}$$

and on the other hand, we recall from (7) that

$$\begin{aligned} \sum _{i=1}^{\infty } \frac{\Gamma (i)}{\Gamma (i+2p)}= \frac{1}{(2p-1)\Gamma (2p)}. \end{aligned}$$

We conclude from Lemma 2.2 that indeed

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbb {E}\left( \sum _{j=1}^{\infty } \frac{ N_j(n)^2}{n^{2p}} \right) = \frac{1}{(2p-1)\Gamma (2p)}= \mathbb {E}\left( \sum _{j=1}^{\infty } (X^{(p)}_j)^2 \right) \end{aligned}$$

and the proof is complete. \(\square \)
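Both ingredients of this computation can be verified numerically. The sketch below is purely illustrative (the truncation level and the choice \(p=3/4\) are arbitrary): it checks the Gamma ratio limit and the series identity recalled from (7), summing the series via the recursion \(\mathrm{term}_{i+1}=\mathrm{term}_i\cdot i/(i+2p)\).

```python
import math

def gamma_ratio(n, p):
    # Gamma(n + 2p) / (n^{2p} Gamma(n)), which tends to 1 as n -> infinity
    return math.exp(math.lgamma(n + 2 * p) - 2 * p * math.log(n) - math.lgamma(n))

def partial_series(p, terms=10**6):
    # partial sum of sum_{i >= 1} Gamma(i) / Gamma(i + 2p);
    # successive terms satisfy term_{i+1} = term_i * i / (i + 2p)
    total, term = 0.0, 1.0 / math.gamma(1 + 2 * p)
    for i in range(1, terms + 1):
        total += term
        term *= i / (i + 2 * p)
    return total

p = 0.75
assert abs(gamma_ratio(10**6, p) - 1) < 1e-3
# closed form 1 / ((2p - 1) Gamma(2p)); the truncation error is ~ terms^(1/2 - p)
assert abs(partial_series(p) - 1 / ((2 * p - 1) * math.gamma(2 * p))) < 5e-3
```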

Theorem 1.2 can now be deduced from (4), Lemma 2.1(ii), and Corollary 3.6.

4 Relation to step reinforced random walks

It is interesting to combine Donsker’s Theorem with the continuous mapping theorem; notably, considering the overall supremum of paths yields the well-known Kolmogorov–Smirnov test. In this direction, linear mappings of the type \(\omega \mapsto \int _{[0,1]} \omega (x) m(\hbox {d}x)\), where m is some finite measure on [0, 1], are amongst the simplest functionals on \({\mathbb D}\). Writing \(\bar{m}(x) =m((x,1])\) for the tail distribution function and \(\mu =\int _{[0,1]} x m(\hbox {d}x)\) for the mean, this leads us to consider the variables

$$\begin{aligned} \xi _j= \bar{m}(U_j)-\mu \quad \text {and} \quad {\hat{\xi }}_j=\bar{m}({\hat{U}}_j)-\mu \qquad \text {for }j\ge 1. \end{aligned}$$

So \((\xi _j)_{j\ge 1}\) is an i.i.d. sequence, and \(({\hat{\xi }}_j)_{j\ge 1}\) can be viewed as the reinforced sequence resulting from Simon’s algorithm. All these variables have the same distribution; they are bounded and centered, with variance

$$\begin{aligned} \varsigma ^2=\int _{[0,1]}\int _{[0,1]} (x\wedge y -xy)m(\hbox {d}x) m(\hbox {d}y)= {\mathrm {Var}}\left( \int _{[0,1]} G(x) m(\hbox {d}x)\right) , \end{aligned}$$

where \(G=(G(x))_{0\le x \le 1}\) is a Brownian bridge. In this setting, we have

$$\begin{aligned} \int _{[0,1]} {\hat{G}}_n(x) m(\hbox {d}x) = \frac{{\hat{\xi }}_1+ \cdots + {\hat{\xi }}_n}{ \sqrt{n}}. \end{aligned}$$

The process of the partial sums

$$\begin{aligned} \hat{S}(n)= {\hat{\xi }}_1+ \cdots + {\hat{\xi }}_n, \qquad n\ge 0 \end{aligned}$$

is called a step reinforced random walk. We now immediately deduce from Theorem 1.2 and the continuous mapping theorem that its asymptotic behavior is given by:

(i) if \(p<1/2\), then

$$\begin{aligned} n^{-1/2}\hat{S}(n) \ \Longrightarrow \ {\mathcal {N}}(0,\varsigma ^2/(1-2p)); \end{aligned}$$

(ii) if \(p=1/2\), then

$$\begin{aligned} (n\log n)^{-1/2} \hat{S}(n) \ \Longrightarrow \ {\mathcal {N}}(0,\varsigma ^2); \end{aligned}$$

(iii) if \(p>1/2\), then

$$\begin{aligned} \lim _{n\rightarrow \infty } n^{-p}\hat{S}(n) = \sum _{j\ge 1} \xi _j X^{(p)}_j\qquad \text { in probability,} \end{aligned}$$

where \({\mathbf{X}}^{(p)}=(X^{(p)}_j)_{j\ge 1}\) has been defined in Lemma 2.2 and is independent of the \(\xi _j\).

Although this argument only enables us to deal with real bounded random variables \(\xi \), we stress that, more generally, assertions (i), (ii) and (iii) still hold when the generic step \(\xi \) is an arbitrary square-integrable and centered variable in \(\mathbb {R}^d\) (for \(d\ge 2\), \(\varsigma ^2\) is then of course the covariance matrix of \(\xi \)). Specifically, (i) follows from the invariance principle for step reinforced random walks (see Theorem 3.3 in [7]), whereas (iii) is Theorem 3.2 in the same work; see also [6]. In the critical case \(p=1/2\), (ii) can be deduced from the basic identity

$$\begin{aligned} \hat{S}(n) = \sum _{j=1}^{\infty } N_j(n) \xi _j, \end{aligned}$$

the Lévy–Lindeberg theorem (see e.g. Theorem 5.2 of Chapter VII in [17]), and Lemmas 3.3 and 3.5.
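This basic identity is straightforward to check by simulation. The sketch below runs Simon's algorithm with Rademacher steps \(\xi \) (a bounded, centered choice made only for illustration; the function and variable names are ours, not from the text) and verifies that the reinforced walk \(\hat S(n)\) coincides with \(\sum _j N_j(n)\xi _j\).

```python
import random

def simon(n, p, rng):
    """Run n steps of Simon's algorithm: each step copies a uniformly chosen
    earlier item with probability p (repetition), and is an innovation
    otherwise. Returns the labels of hat_U_1, ..., hat_U_n, where label j
    means the item is a copy of the j-th innovation."""
    labels = [1]  # the first step is always an innovation
    innovations = 1
    for k in range(2, n + 1):
        if rng.random() < p:                      # repetition
            labels.append(labels[rng.randrange(k - 1)])
        else:                                     # innovation
            innovations += 1
            labels.append(innovations)
    return labels

rng = random.Random(2024)
n, p = 10_000, 0.5                                # critical regime
labels = simon(n, p, rng)
xi = {j: rng.choice((-1, 1)) for j in set(labels)}  # Rademacher steps

# hat S(n) as the sum of the reinforced steps ...
S_hat = sum(xi[l] for l in labels)
# ... equals sum_j N_j(n) xi_j, where N_j(n) counts copies of innovation j
N = {}
for l in labels:
    N[l] = N.get(l, 0) + 1
assert S_hat == sum(N[j] * xi[j] for j in N)
```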

In this vein, we mention that when \(\xi \) has the Bernoulli distribution, (i–iii) are originally due to Heyde [16] in the setting of the so-called correlated Bernoulli processes; see also [18, 29, 30]. These results have also appeared more recently in the framework of the so-called elephant random walk, a random walk with memory introduced by Schütz and Trimper [27]. See notably [2, 3, 11, 12], and also [4, 5, 14, 15] and references therein for some further developments. We mention that Kürsten [23] first pointed out the role of Bernoulli bond percolation on random recursive trees in this framework; see also [10]. It is moreover interesting to recall that, for the elephant random walk, Kubota and Takei [22] have established that the fluctuations corresponding to (iii) are Gaussian. Whether or not the same holds for general step reinforced random walks is still open; this also suggests that for \(p>1/2\), Theorem 1.2 (iii) might be refined so as to yield a second-order weak limit theorem involving again a Brownian bridge in the limit.