How linear reinforcement affects Donsker’s theorem for empirical processes

<jats:p>A reinforcement algorithm introduced by Simon (Biometrika 42(3/4):425–440, 1955) produces a sequence of uniform random variables with long range memory as follows. At each step, with a fixed probability <jats:inline-formula><jats:alternatives><jats:tex-math>$$p\in (0,1)$$</jats:tex-math><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML">
                  <mml:mrow>
                    <mml:mi>p</mml:mi>
                    <mml:mo>∈</mml:mo>
                    <mml:mo>(</mml:mo>
                    <mml:mn>0</mml:mn>
                    <mml:mo>,</mml:mo>
                    <mml:mn>1</mml:mn>
                    <mml:mo>)</mml:mo>
                  </mml:mrow>
                </mml:math></jats:alternatives></jats:inline-formula>, <jats:inline-formula><jats:alternatives><jats:tex-math>$${\hat{U}}_{n+1}$$</jats:tex-math><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML">
                  <mml:msub>
                    <mml:mover>
                      <mml:mi>U</mml:mi>
                      <mml:mo>^</mml:mo>
                    </mml:mover>
                    <mml:mrow>
                      <mml:mi>n</mml:mi>
                      <mml:mo>+</mml:mo>
                      <mml:mn>1</mml:mn>
                    </mml:mrow>
                  </mml:msub>
                </mml:math></jats:alternatives></jats:inline-formula> is sampled uniformly from <jats:inline-formula><jats:alternatives><jats:tex-math>$${\hat{U}}_1, \ldots , {\hat{U}}_n$$</jats:tex-math><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML">
                  <mml:mrow>
                    <mml:msub>
                      <mml:mover>
                        <mml:mi>U</mml:mi>
                        <mml:mo>^</mml:mo>
                      </mml:mover>
                      <mml:mn>1</mml:mn>
                    </mml:msub>
                    <mml:mo>,</mml:mo>
                    <mml:mo>…</mml:mo>
                    <mml:mo>,</mml:mo>
                    <mml:msub>
                      <mml:mover>
                        <mml:mi>U</mml:mi>
                        <mml:mo>^</mml:mo>
                      </mml:mover>
                      <mml:mi>n</mml:mi>
                    </mml:msub>
                  </mml:mrow>
                </mml:math></jats:alternatives></jats:inline-formula>, and with complementary probability <jats:inline-formula><jats:alternatives><jats:tex-math>$$1-p$$</jats:tex-math><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML">
                  <mml:mrow>
                    <mml:mn>1</mml:mn>
                    <mml:mo>-</mml:mo>
                    <mml:mi>p</mml:mi>
                  </mml:mrow>
                </mml:math></jats:alternatives></jats:inline-formula>, <jats:inline-formula><jats:alternatives><jats:tex-math>$${\hat{U}}_{n+1}$$</jats:tex-math><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML">
                  <mml:msub>
                    <mml:mover>
                      <mml:mi>U</mml:mi>
                      <mml:mo>^</mml:mo>
                    </mml:mover>
                    <mml:mrow>
                      <mml:mi>n</mml:mi>
                      <mml:mo>+</mml:mo>
                      <mml:mn>1</mml:mn>
                    </mml:mrow>
                  </mml:msub>
                </mml:math></jats:alternatives></jats:inline-formula> is a new independent uniform variable. The Glivenko–Cantelli theorem remains valid for the reinforced empirical measure, but not the Donsker theorem. Specifically, we show that the sequence of empirical processes converges in law to a Brownian bridge only up to a constant factor when <jats:inline-formula><jats:alternatives><jats:tex-math>$$p<1/2$$</jats:tex-math><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML">
                  <mml:mrow>
                    <mml:mi>p</mml:mi>
                    <mml:mo><</mml:mo>
                    <mml:mn>1</mml:mn>
                    <mml:mo>/</mml:mo>
                    <mml:mn>2</mml:mn>
                  </mml:mrow>
                </mml:math></jats:alternatives></jats:inline-formula>, and that a further rescaling is needed when <jats:inline-formula><jats:alternatives><jats:tex-math>$$p>1/2$$</jats:tex-math><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML">
                  <mml:mrow>
                    <mml:mi>p</mml:mi>
                    <mml:mo>></mml:mo>
                    <mml:mn>1</mml:mn>
                    <mml:mo>/</mml:mo>
                    <mml:mn>2</mml:mn>
                  </mml:mrow>
                </mml:math></jats:alternatives></jats:inline-formula> and the limit is then a bridge with exchangeable increments and discontinuous paths. This is related to earlier limit theorems for correlated Bernoulli processes, the so-called elephant random walk, and more generally step reinforced random walks.</jats:p>


Introduction
A classical result of Glivenko and Cantelli in 1933 states that the sequence of empirical distribution functions associated with i.i.d. copies of some real random variable converges uniformly to its cumulative distribution function, almost surely. Nearly 20 years later, Donsker determined the asymptotic behavior of the fluctuations; let us recall his result. Let U_1, U_2, ... be i.i.d. uniform random variables on [0, 1]; then the sequence of (uniform) empirical processes

G_n(x) = n^{−1/2} Σ_{i=1}^{n} (1{U_i ≤ x} − x), 0 ≤ x ≤ 1,

converges in distribution as n → ∞ in the sense of Skorokhod towards a Brownian bridge (G(x))_{0≤x≤1}. The purpose of the present work is to analyze how Donsker's theorem is affected by an elementary random reinforcement algorithm that we shall now describe.

* Institute of Mathematics, University of Zurich, Switzerland, jean.bertoin@math.uzh.ch
Consider a sequence ε_2, ε_3, ... of i.i.d. Bernoulli variables with fixed parameter p ∈ (0, 1). These variables determine when repetitions occur, in the sense that the n-th step of the algorithm is a repetition if ε_n = 1, and an innovation if ε_n = 0. For every n ≥ 2, let also v(n) be a uniform random variable on {1, ..., n − 1} such that v(2), v(3), ... are independent; these variables specify which of the preceding items is copied when a repetition occurs. More precisely, we set ε_1 = 0 for definiteness and construct recursively a sequence of random variables Û_1, Û_2, ... by setting

Û_n = U_{i(n)} if ε_n = 0, and Û_n = Û_{v(n)} if ε_n = 1,

where i(n) = Σ_{j=1}^{n} (1 − ε_j) for n ≥ 1 denotes the total number of innovations after n steps. We always assume without further mention that the sequences (v(n))_{n≥2}, (ε_n)_{n≥2}, and (U_j)_{j≥1} are independent.
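To make the dynamics concrete, here is a short simulation of the algorithm just described. It is an illustrative sketch only: the function name and parameters are ours, not notation from the text.

```python
import random

def simon_sequence(n, p, rng):
    """Run n steps of Simon's algorithm with reinforcement parameter p.

    Step 1 is always an innovation (epsilon_1 = 0).  For later steps,
    with probability p the step is a repetition and copies U_hat_{v(k)}
    with v(k) uniform on {1, ..., k-1}; otherwise it draws a fresh
    uniform variable on [0, 1].
    """
    u_hat = [rng.random()]                 # U_hat_1 = U_1
    for _ in range(n - 1):
        if rng.random() < p:               # repetition
            u_hat.append(rng.choice(u_hat))
        else:                              # innovation
            u_hat.append(rng.random())
    return u_hat
```

Picking a uniformly random earlier item is exactly linear reinforcement: the chance that a given value U_j is copied is proportional to its current number of occurrences. With p = 0.3, roughly 70% of the steps are innovations.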
This random algorithm was introduced in 1955 by Herbert A. Simon [27], who singled out in this setting a remarkable one-parameter family of power-tail distributions on N that arise in a variety of data. Nowadays, Simon's algorithm should be viewed as a linear reinforcement procedure, in the sense that, provided that i(n) ≥ j (i.e. the variable U_j has already appeared by the n-th step of the algorithm), the probability that U_j is repeated at the (n + 1)-th step is proportional to the number of its previous occurrences. In this direction, we refer henceforth to the parameter p of the Bernoulli variables ε_n as the reinforcement parameter.
Obviously, each variable Û_n has the uniform distribution on [0, 1]; note however that the reinforced sequence (Û_n)_{n≥1} is clearly not stationary, and is not exchangeable or even partially exchangeable either. It is easy to show that, nonetheless, the conclusion of the Glivenko–Cantelli theorem is still valid in this framework:

Proposition 1.1. We have lim_{n→∞} sup_{0≤x≤1} |n^{−1} Σ_{i=1}^{n} 1{Û_i ≤ x} − x| = 0 almost surely.

We are chiefly interested in the empirical processes Ĝ_n associated with the reinforced sequence,

Ĝ_n(x) = n^{−1/2} Σ_{i=1}^{n} (1{Û_i ≤ x} − x), 0 ≤ x ≤ 1.

Our main result shows that their asymptotic behavior as n → ∞ exhibits a phase transition at the critical parameter p_c = 1/2. Roughly speaking, when the reinforcement parameter p is smaller than 1/2, the analog of Donsker's theorem holds for Ĝ_n, except that the limit is now only proportional to the Brownian bridge. At criticality, i.e. for p = 1/2, convergence in distribution to the Brownian bridge holds after an additional rescaling of Ĝ_n by a factor 1/√(log n). Finally, for p > 1/2, n^{1/2−p} Ĝ_n now converges in probability, and its limit is described in terms of some bridge with exchangeable increments and discontinuous sample paths.
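As a quick numerical sanity check on the Glivenko–Cantelli statement, one can simulate the reinforced sequence and evaluate the Kolmogorov–Smirnov distance sup_x |F̂_n(x) − x|. The snippet below is our own sketch (names are not from the text).

```python
import random

def reinforced_ks_distance(n, p, rng):
    """sup_x |empirical cdf of (U_hat_1, ..., U_hat_n) - x| under Simon's algorithm."""
    u_hat = [rng.random()]
    for _ in range(n - 1):
        u_hat.append(rng.choice(u_hat) if rng.random() < p else rng.random())
    u_hat.sort()
    # Standard KS computation against the uniform cdf F(x) = x.
    return max(max((i + 1) / n - u, u - i / n) for i, u in enumerate(u_hat))

rng = random.Random(7)
d_small = reinforced_ks_distance(500, 0.3, rng)
d_large = reinforced_ks_distance(50000, 0.3, rng)
```

The distance shrinks as n grows; the point of the results below is that its fluctuations are only of order 1/√n up to a constant when p < 1/2, and decay more slowly once p ≥ 1/2.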
Here is a precise statement, where the needed background on bridges with exchangeable increments in the supercritical case p > 1/2 is postponed to the next section. Recall that G = (G(x)) 0≤x≤1 denotes the standard Brownian bridge. We further write D for the space of càdlàg paths ω : [0, 1] → R endowed with the Skorokhod topology (see Chapter 3 in [8] or Chapter VI in [17]). The notation ⇒ is used to indicate convergence in distribution of a sequence of processes in D.
Theorem 1.2. The following convergences hold as n → ∞:
(i) if p < 1/2, then Ĝ_n ⇒ (1 − 2p)^{−1/2} G;
(ii) if p = 1/2, then (log n)^{−1/2} Ĝ_n ⇒ G;
(iii) if p > 1/2, then n^{1/2−p} Ĝ_n converges in probability to B^{(p)},
where B^{(p)} = (B^{(p)}(x))_{0≤x≤1} is the bridge with exchangeable increments described in the forthcoming Definition 2.3.
Our approach to Theorem 1.2 relies, at least in part, on a natural interpretation of Simon's algorithm in terms of Bernoulli bond percolation on random recursive trees. Specifically, we view {1, 2, ..., n} as a set of vertices and each pair (j, v(j)) for j = 2, ..., n as an edge; the resulting graph T_n is known as the random recursive tree of size n, see Section 1.3 and Chapter 7 in [13]. We next delete each edge (j, v(j)) if and only if ε_j = 0; in other words, we perform a Bernoulli bond percolation with parameter p on T_n. The percolation clusters are then given by the subsets of indices at which the same variable is repeated, namely {i ≤ n : Û_i = U_j} for j = 1, ..., i(n).
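The percolation picture can be coded directly: grow the recursive tree, keep each edge with probability p, and read off the cluster sizes, which are exactly the occurrence counts of the distinct values. This is our own sketch; variable names are ours.

```python
import random

def percolation_clusters(n, p, rng):
    """Cluster sizes of Bernoulli(p) bond percolation on a random recursive tree.

    Vertex j >= 2 attaches to a uniform earlier vertex v(j); the edge is kept
    with probability p (a repetition) and deleted otherwise (an innovation).
    Returns the list of cluster sizes [N_1(n), ..., N_{i(n)}(n)].
    """
    cluster_of = [0]                # vertex 1 starts cluster 0
    sizes = [1]
    for j in range(2, n + 1):
        if rng.random() < p:        # edge kept: join the cluster of v(j)
            c = cluster_of[rng.randrange(j - 1)]
            cluster_of.append(c)
            sizes[c] += 1
        else:                       # edge deleted: open a new cluster
            cluster_of.append(len(sizes))
            sizes.append(1)
    return sizes

rng = random.Random(3)
sizes = percolation_clusters(20000, 0.7, rng)
s2 = sum(k * k for k in sizes)      # the key quantity: sum of squared cluster sizes
```

For p = 0.7 > 1/2 the largest clusters are mesoscopic, of order n^p ≈ 1000 here, and the sum of squares is dominated by a few of them, in line with (1).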
The sum of the squares of the cluster sizes, S_2(n), lies at the heart of the analysis of the reinforced empirical process Ĝ_n. We shall see that its asymptotic behavior is given by

S_2(n) ∼ n/(1 − 2p) if p < 1/2,  S_2(n) ∼ n log n if p = 1/2,  S_2(n) ∼ R n^{2p} if p > 1/2,   (1)

where R is some non-degenerate random variable. A rough explanation for the phase transition in (1) is that the main contribution to S_2(n) is due to a large number of microscopic clusters in the subcritical case p < 1/2, and rather to a few mesoscopic clusters of size ≈ n^p in the supercritical case p > 1/2. Even though (1) is not quite sufficient to establish Theorem 1.2, it is nonetheless a major step of its proof. More precisely, we shall rely on general results due to Kallenberg [19] on the structure of processes with exchangeable increments and on explicit criteria for the weak convergence of sequences of the latter, and (1) appears as a key element in this setting.
The rest of this work is organized as follows. Section 2 is devoted to several preliminaries. We shall first present some key results due to Kallenberg on bridges with exchangeable increments and their canonical representations. We shall then recall a limit theorem for the numbers of occurrences N_j(n), which has been obtained in the framework of Bernoulli percolation on random recursive trees, as well as the fundamental result of H.A. Simon about the frequency of microscopic clusters. Lastly, we shall compute explicitly the average E(S_2(n)) using a simple recurrence identity, and establish Proposition 1.1 on our way. Theorem 1.2 is then proven in Section 3. Finally, in Section 4, we discuss some connections between Theorem 1.2 and closely related results in the literature on step-reinforced random walks, including correlated Bernoulli processes and the so-called elephant random walk.

Bridges with exchangeable increments
This section is adapted from Kallenberg [19], who rather uses the terminology interchangeable instead of exchangeable, and whose results are given in a more general setting. We also refer to [20] for many interesting properties of the sample paths of such processes.
Let B = (B(x))_{0≤x≤1} be a real valued process with càdlàg sample paths, which is continuous in probability. We say that B has exchangeable increments if for every n ≥ 2, the sequence of its increments B(k/n) − B((k − 1)/n), k = 1, ..., n, is exchangeable, i.e. its distribution is invariant under permutations. We further say that B is a bridge provided that B(0) = B(1) = 0 a.s.
According to Theorem 2.1 in [19], any bridge with exchangeable increments can be expressed in the form

B(x) = σ G(x) + Σ_{j=1}^{∞} β_j (1{U_j ≤ x} − x), 0 ≤ x ≤ 1,   (2)

where σ is a nonnegative random variable, G a Brownian bridge, β = (β_j)_{j≥1} a sequence of real random variables with Σ_{j=1}^{∞} β_j² < ∞ a.s., and U = (U_j)_{j≥1} a sequence of i.i.d. uniform random variables, such that σ, G, β and U are independent. More precisely, if we further assume that the sequence (|β_j|)_{j≥1} is nonincreasing, which induces no loss of generality, then the series in (2) converges a.s. uniformly on [0, 1].
One calls (σ, β) the canonical representation of B. Roughly speaking, (2) shows that the continuous part of B is a mixture of Brownian bridges (parametrized by the standard deviation), with the mixture directed by the random variable σ, while β describes the sequence of the jumps of B, each of them occurring uniformly at random on [0, 1] and independently of the others. The laws of σ and of β then entirely determine that of B.
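For intuition, one can sample an approximate path of the representation (2) on a grid. The sketch below takes a constant σ and a deterministic square-summable β; these choices, and all names, are ours and purely illustrative.

```python
import math
import random

def exchangeable_bridge_path(sigma, beta, num_points, rng):
    """Evaluate B(x) = sigma*G(x) + sum_j beta_j*(1{U_j <= x} - x) on a grid.

    G is a Brownian bridge built from a discretized Brownian motion W via
    G(x) = W(x) - x*W(1); the jump locations U_j are i.i.d. uniform.
    """
    xs = [i / num_points for i in range(num_points + 1)]
    # Brownian motion on the grid, then bridge by pinning at x = 1.
    w, walk = 0.0, [0.0]
    for _ in range(num_points):
        w += rng.gauss(0.0, math.sqrt(1.0 / num_points))
        walk.append(w)
    us = [rng.random() for _ in beta]
    path = []
    for x, wx in zip(xs, walk):
        jumps = sum(b * ((1.0 if u <= x else 0.0) - x) for b, u in zip(beta, us))
        path.append(sigma * (wx - x * walk[-1]) + jumps)
    return xs, path

rng = random.Random(5)
beta = [1.0 / j for j in range(1, 51)]        # square-summable jump sizes
xs, path = exchangeable_bridge_path(0.5, beta, 1000, rng)
```

By construction the sampled path vanishes at both endpoints, as any bridge must.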
The next lemma plays a key role in the proof of Theorem 1.2; it states two criteria, tailored for our purposes, for the convergence of a sequence of bridges with exchangeable increments in terms of their canonical representations. The first part is a special case of Theorem 2.3 in [19]. The second part can be seen as an immediate consequence of the first and of the well-known facts that Skorokhod's topology is metrizable and that convergence of a sequence of functions in D to a continuous limit is equivalent to convergence for the supremum distance (see, e.g., Section VI.1 in [17]); it can also be checked by a direct calculation.
Lemma 2.1. For each n ≥ 1, let B_n denote a bridge with exchangeable increments and canonical representation σ_n = 0 and β_n = (β_{n,j})_{j≥1}.
(i) If Σ_{j≥1} β_{n,j}² → σ² and sup_{j≥1} |β_{n,j}| → 0 in probability as n → ∞, for some constant σ ≥ 0, then B_n ⇒ σG, where G is a standard Brownian bridge.
(ii) If moreover (B'_n)_{n≥1} is a sequence of processes in D such that sup_{0≤x≤1} |B'_n(x) − B_n(x)| → 0 in probability, then also B'_n ⇒ σG.

Asymptotic behavior of occurrence numbers
Recall that the reinforcement parameter p ∈ (0, 1) in Simon's algorithm is fixed; for the sake of simplicity, it will be omitted from several notations even though it always plays an important role.
For every j ∈ N, we set

N_j(n) = #{1 ≤ k ≤ n : Û_k = U_j},

that is, N_j(n) is the number of occurrences of the variable U_j up to the n-th step of the algorithm. Plainly, N_j(n) = 0 if and only if the number of innovations up to the n-th step is less than j, i.e. i(n) < j.
The starting point of our analysis is that the reinforced empirical process can be expressed in the form

Ĝ_n(x) = n^{−1/2} Σ_{j=1}^{∞} N_j(n) (1{U_j ≤ x} − x), 0 ≤ x ≤ 1.   (3)

Hence Ĝ_n is a bridge with exchangeable increments, with canonical representation 0 and β_n = (β_{n,j})_{j≥1}, where β_{n,j} = N_j(n)/√n. We aim to determine its asymptotic behavior as n → ∞ by applying Lemma 2.1. In this direction, the interpretation of Simon's algorithm as a Bernoulli bond percolation on a random recursive tree, as sketched in the Introduction, enables us to lift from [1] the following result about the asymptotic behavior of mesoscopic clusters.
Proof. Simon's algorithm induces a natural partition N = ⋃_{j≥1} Π_j of the set of positive integers into blocks Π_j = {k ∈ N : Û_k = U_j}, which we can see as the result of a Bernoulli bond percolation with parameter p on the (infinite) random recursive tree. In this setting, we have N_j(n) = #(Π_j ∩ {1, ..., n}), and the first claim of the statement has been observed in Section 3.
Specializing this for q = 2 yields our second claim.
We write X^{(p)} = (X_j^{(p)})_{j≥1}. The variable X_1^{(p)} has the Mittag-Leffler distribution with parameter p (see Theorem 3.1 in [1] and also [24]); nonetheless, the law of the whole sequence X^{(p)} does not seem to have any simple expression (see Proposition 3.7 in [1]).
When p > 1/2, Lemma 2.2 enables us to view X^{(p)} as a random variable with values in the space ℓ²(N) of square summable sequences, and this leads us to the following definition of the process B^{(p)} that appears as the limit in Theorem 1.2(iii).

Definition 2.3. For p > 1/2, we define B^{(p)} = (B^{(p)}(x))_{0≤x≤1} as the bridge with exchangeable increments with canonical representation 0 and X^{(p)}, that is

B^{(p)}(x) = Σ_{j=1}^{∞} X_j^{(p)} (1{U_j ≤ x} − x), 0 ≤ x ≤ 1.

We conclude this section by recalling the key result of Simon [27] about the asymptotic frequency of microscopic percolation clusters. Note that the number of innovations up to the n-th step is approximately (1 − p)n for n ≫ 1.
Lemma 2.4. For every k ≥ 1, write

C_k(n) = (1/((1 − p)n)) #{1 ≤ j ≤ i(n) : N_j(n) = k}

for the normalized number of variables U_j which have occurred exactly k times at the n-th step of Simon's algorithm. Then

lim_{n→∞} C_k(n) = (1/p) B(k, 1 + 1/p) almost surely,   (5)

where B denotes the beta function.
The right-hand side of (5) is a probability measure on N which is known as the Yule–Simon distribution with parameter 1/p. Actually, only a weaker form of this convergence is proved in [27]; the stronger statement (5) holds nonetheless.
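Numerically, the Yule–Simon probabilities ρ B(k, ρ + 1) with ρ = 1/p are easy to tabulate with log-gamma functions. The snippet below checks the normalization and the mean ρ/(ρ − 1) = 1/(1 − p); these are standard facts about this distribution, and the code is our own verification sketch.

```python
import math

def yule_simon_pmf(k, rho):
    """P(K = k) = rho * B(k, rho + 1), the Yule-Simon distribution."""
    log_beta = math.lgamma(k) + math.lgamma(rho + 1.0) - math.lgamma(k + rho + 1.0)
    return rho * math.exp(log_beta)

p = 0.3
rho = 1.0 / p
ks = range(1, 200001)
probs = [yule_simon_pmf(k, rho) for k in ks]
total = sum(probs)                       # should be close to 1
mean = sum(k * q for k, q in zip(ks, probs))
```

Working with `math.lgamma` rather than `math.gamma` avoids overflow for large k, since Γ(k) itself exceeds floating-point range long before k = 200000.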

A first moment calculation
Recall that we want to apply Lemma 2.1 to investigate the asymptotic behavior of reinforced empirical processes. In this direction, (3) leads us to introduce, for every n ≥ 1,

S_2(n) = Σ_{j=1}^{∞} N_j(n)².

The proof of Theorem 1.2 will use the following explicit calculation for the expectation of this quantity, which already points in the same direction as (1).
Lemma 2.5. For every n ≥ 1, we have

E(S_2(n)) = (Γ(n + 2p)/Γ(n)) Σ_{k=1}^{n} Γ(k)/Γ(k + 2p).

As a consequence, we have as n → ∞ that

E(S_2(n)) ∼ n/(1 − 2p) if p < 1/2,  E(S_2(n)) ∼ n log n if p = 1/2,  E(S_2(n)) ∼ c_p n^{2p} if p > 1/2,   (6)

where c_p = Σ_{k=1}^{∞} Γ(k)/Γ(k + 2p) < ∞.

Proof. Write F_n for the sigma-field generated by ((ε_i, v(i)) : 2 ≤ i ≤ n). Plainly, Σ_{j≥1} N_j(n) = n, and we see from the very definition of Simon's algorithm that

E(S_2(n + 1) | F_n) = S_2(n) + p Σ_{j≥1} (N_j(n)/n)(2N_j(n) + 1) + (1 − p) = (1 + 2p/n)S_2(n) + 1.
Since S_2(1) = 1, we arrive by induction at

E(S_2(n)) = (Γ(n + 2p)/Γ(n)) Σ_{k=1}^{n} Γ(k)/Γ(k + 2p),

which is the identity of our claim.
In turn, the estimates as n → ∞ in the statement follow immediately from the facts that Γ(n + 2p)/Γ(n) ∼ n^{2p}, that Γ(k)/Γ(k + 2p) ∼ k^{−2p} as k → ∞, and that when p > 1/2, one has Σ_{k=1}^{∞} Γ(k)/Γ(k + 2p) < ∞.
The proof is now complete.
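The first-moment identity can be checked numerically: the recursion E(S_2(n + 1) | F_n) = (1 + 2p/n)S_2(n) + 1 from the proof determines the means, and iterating it must agree with the Gamma-function expression. The sketch below (function names ours) does this with log-gamma to avoid overflow.

```python
import math

def mean_s2_recursive(n, p):
    """Iterate a_{m+1} = (1 + 2p/m) a_m + 1 with a_1 = 1, giving E(S_2(n))."""
    a = 1.0
    for m in range(1, n):
        a = (1.0 + 2.0 * p / m) * a + 1.0
    return a

def mean_s2_closed(n, p):
    """(Gamma(n+2p)/Gamma(n)) * sum_{k=1}^n Gamma(k)/Gamma(k+2p), via lgamma."""
    s = sum(math.exp(math.lgamma(k) - math.lgamma(k + 2.0 * p)) for k in range(1, n + 1))
    return math.exp(math.lgamma(n + 2.0 * p) - math.lgamma(n)) * s
```

The two functions agree to floating-point accuracy in all three regimes p < 1/2, p = 1/2 and p > 1/2.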
As a first application, we establish the reinforced version of the Glivenko-Cantelli theorem.
Proof of Proposition 1.1. The proof classically reduces to establishing the following reinforced version of the strong law of large numbers: for every fixed x ∈ [0, 1],

lim_{n→∞} n^{−1} Σ_{i=1}^{n} 1{Û_i ≤ x} = x almost surely.   (7)

Indeed, the almost sure convergence in (7) holds simultaneously for all dyadic rational numbers, and the uniform convergence on [0, 1] can then be derived by a monotonicity argument à la Dini.
From Lemma 2.5 and Chebyshev's inequality, writing Σ(n) = Σ_{i=1}^{n} 1{Û_i ≤ x} (so that E(Σ(n)) = nx and Var(Σ(n)) ≤ E(S_2(n))), we now see that we can choose r > 1 sufficiently large such that Σ_{k=1}^{∞} P(|Σ(k^r) − k^r x| > k^{r−1}) < ∞.
One concludes from the Borel–Cantelli lemma that (7) holds along the subsequence n = k^r, and the general case follows by another monotonicity argument.

Proof of Theorem 1.2
As its title indicates, the purpose of this section is to establish Theorem 1.2 in each of the three regimes.

Subcritical regime p < 1/2
Throughout this section, we assume that the reinforcement parameter satisfies p < 1/2. Our approach in the subcritical regime relies on the following strengthening of Lemma 2.4 (recall the notation there).

Lemma 3.1. We have lim_{n→∞} C(n) = c in L¹(P ⊗ #_2), where c = (c_i)_{i≥1} is given by c_i = (1/p) B(i, 1 + 1/p).

Proof. For each n = 1, 2, ..., write C(n) = (C_i(n))_{i≥1} and view C(n) as a function on the space Ω × N endowed with the product measure P ⊗ #_2, where #_2 stands for the measure on N which assigns mass i² to every i ∈ N. Consider an arbitrary subsequence extracted from (C(n))_{n≥1}. From Lemma 2.4 and an argument of diagonal extraction, we can construct a further subsequence, say indexed by ℓ(n) for n = 1, 2, ..., such that

lim_{n→∞} C(ℓ(n)) = c, P ⊗ #_2-almost everywhere.   (8)

On the one hand, we observe that

∫ c d#_2 = Σ_{i≥1} i² c_i = 1/((1 − p)(1 − 2p)).   (9)

On the other hand, we note the basic identity

∫ C(n) d#_2 = Σ_{i≥1} i² C_i(n) = S_2(n)/((1 − p)n).   (10)

Since Γ(n + 2p)/Γ(n) ∼ n^{2p} and 2p < 1, we see from Lemma 2.5 and (10) that

lim_{n→∞} E(∫ C(n) d#_2) = 1/((1 − p)(1 − 2p)).

Thanks to (9), we deduce from the Vitali–Scheffé theorem (see, e.g., Theorem 2.8.9 in [9]) that the convergence (8) also holds in L¹(P ⊗ #_2), that is, lim_{n→∞} C(ℓ(n)) = c in L¹(P ⊗ #_2). Since the convergence above holds for any (initial) subsequence, our claim is proven.
We next point at the following consequence of Lemma 3.1.

Corollary 3.2. We have lim_{n→∞} S_2(n)/n = 1/(1 − 2p) in L¹(P), and lim_{n→∞} n^{−1/2} sup_{j≥1} N_j(n) = 0 in probability.
Proof. Observe from (9), (10), and the triangle inequality that

E(|S_2(n)/((1 − p)n) − 1/((1 − p)(1 − 2p))|) ≤ E(∫ |C(n) − c| d#_2).

Our first assertion thus follows from Lemma 3.1.
For the second assertion, observe that if sup_{j≥1} N_j(n) > η√n, then ∫ 1{i > η√n} C(n) d#_2 ≥ η²/(1 − p). We then have, for every η > 0 arbitrarily small,

P(sup_{j≥1} N_j(n) > η√n) ≤ ((1 − p)/η²) (E(∫ |C(n) − c| d#_2) + ∫ 1{i > η√n} c d#_2).

It follows from Lemma 3.1 that the right-hand side converges to 0 as n → ∞, and the proof is now complete.

Theorem 1.2(i) now derives immediately from (3), Lemma 2.1(i) and Corollary 3.2, by setting β_{n,j} = N_j(n)/√n for every j ≥ 1.

Critical regime p = 1/2
Throughout this section, we assume that the reinforcement parameter is p = 1/2. Recall from Lemma 2.5 that E(S_2(n)) ∼ n log n as n → ∞. We now establish a stronger version of this estimate.

Lemma 3.3. We have lim_{n→∞} S_2(n)/(n log n) = 1 almost surely.

Proof. It has been observed in [7] that, in the study of the reinforcement induced by Simon's algorithm, it may be convenient to perform a time-substitution based on a Yule process. We shall use this idea here again and introduce a standard Yule process Y = (Y_t)_{t≥0}, which we further assume to be independent of the preceding variables. Recall that Y is a pure birth process in continuous time started from Y_0 = 1 and with birth rate n from any state n ≥ 1; in particular, every function f : N → R, say such that f(n) = O(n^r) for some r > 0, gives rise to a martingale by compensation of f(Y_t).

Consider the time-changed process S_2 ∘ Y. Applying the observation above to f = S_2 and then projecting on the natural filtration of S_2 ∘ Y, the same calculation as in the proof of Lemma 2.5 shows that the resulting compensated process is a martingale. By elementary stochastic calculus, the same holds for a related process that we denote by M.

We shall now show that M is bounded in L²(P) by checking that its quadratic variation [M]_∞ has a finite expectation. Plainly, M is purely discontinuous; its jumps can arise either from an innovation event (whose instantaneous rate at time t equals ½Y_{t−}), and then ΔM_t = M_t − M_{t−} = e^{−t}, or from a repetition of the j-th item for some j ≥ 1 (whose instantaneous rate at time t equals ½N_j(Y_{t−})), and then ΔM_t = e^{−t}(2N_j(Y_{t−}) + 1). We can thus estimate E([M]_∞) by a standard compensation calculation. First, recall that Y_t has the geometric distribution with parameter e^{−t}, in particular E(Y_t) = e^t; since E(S_2(n)) ∼ n log n (see Lemma 2.5) and the processes S_2 and Y are independent, we have also E(S_2(Y_t)) = O(t e^t). By calculations similar to those for M, one sees that a process built from T(Y_t), where T(n) = Σ_{j≥1} N_j(n)³, is a local martingale. Just as above, one readily checks that E(T(Y_t)) = O(e^{3t/2}). As a consequence, and putting the pieces together, we have checked that E([M]_∞) < ∞.
We now know that lim_{t→∞} M_t = M_∞ a.s. and in L²(P), and we recall the classical fact that lim_{t→∞} e^{−t} Y_t = W a.s., where W has the standard exponential distribution. In particular, ∫_0^t e^{−s} Y_s ds ∼ tW as t → ∞, almost surely.
Using again Y_t = e^t W + o(e^t), we conclude that S_2(n) = n log n + O(n) a.s., which implies our claim.
Remark 3.4. The first part of Corollary 3.2 and Lemma 3.3 seem to be of the same nature. Actually, one can also establish the former by adapting the proof of the latter, thereby circumventing the appeal to Lemma 2.4. There is nonetheless a fundamental difference between these two results: although the microscopic clusters (i.e. those of size O(1)) determine the asymptotic behavior of S_2(n) in the subcritical case, they have no impact in the critical case, as seen from Lemma 2.4.
Thanks to Lemmas 2.1(ii) and 3.3, the following statement is the final piece of the proof of Theorem 1.2(ii).

Lemma 3.5. We have lim_{n→∞} (n log n)^{−1/2} sup_{j≥1} N_j(n) = 0 in probability.

Proof. We shall show that there is some numerical constant b such that

E(N_j(n)³) ≤ b (n/j)^{3/2} for all 1 ≤ j ≤ n.   (11)

Then, by Markov's inequality, we have for any η > 0

P(N_j(n) > η √(n log n)) ≤ b η^{−3} (n log n)^{−3/2} (n/j)^{3/2} = b η^{−3} (log n)^{−3/2} j^{−3/2},

and by the union bound

P(sup_{j≥1} N_j(n) > η √(n log n)) ≤ b η^{−3} (log n)^{−3/2} Σ_{j≥1} j^{−3/2},

which tends to 0 as n → ∞ and proves our claim.
For i = 1, 2, 3, set a_i(n) = Γ(n + i/2)/Γ(n), so that a_i(n) ∼ n^{i/2}; in fact a_2(n) = n. Recall that i(n) denotes the number of innovations up to the n-th step of Simon's algorithm. Take any j ≥ 1 and, just as in the proof of Lemma 2.5, observe that on the event i(n) ≥ j, the quantities E(N_j(n)), E(N_j(n)(N_j(n) + 1)) and E(N_j(n)(N_j(n) + 1)(N_j(n) + 2)) can be bounded in terms of a_1, a_2 and a_3, respectively.
The trivial bound i(j) ≤ j then yields, for any n ≥ j, E(N_j(n)) ≤ a_1(n)/a_1(j) ≤ b_1 n/j.

Then we have

E(N_j(n)³) ≤ E(N_j(n)(N_j(n) + 1)(N_j(n) + 2)) ≤ b_3 a_3(n)/a_3(j) ≤ b (n/j)^{3/2},

where b_1, b_2 and b_3 are numerical constants. This establishes (11) and completes the proof.

Supercritical regime p > 1/2
Throughout this section, we assume that the reinforcement parameter satisfies p > 1/2. We first point at the following strengthening of Lemma 2.2 (in particular, recall the notation (4) there).
Corollary 3.6. We have lim_{n→∞} Σ_{j≥1} E((n^{−p} N_j(n) − X_j^{(p)})²) = 0.

This result has already been observed by Businger, see Equation (6) in [10]. For the sake of completeness, we present here an alternative and shorter proof along the same lines as for Lemma 3.1.
Proof. We view X^{(p)} = (X_j^{(p)})_{j≥1} and N(n) = (N_j(n))_{j≥1}, for each n ≥ 1, as functions on the space Ω × N endowed with the product measure P ⊗ #, where # denotes the counting measure on N. Since we already know from Lemma 2.2 that n^{−p} N(n) converges as n → ∞ to X^{(p)} almost everywhere, in order to establish our claim, it suffices to verify the convergence of the second moments,

lim_{n→∞} ∫ (n^{−p} N(n))² d(P ⊗ #) = ∫ (X^{(p)})² d(P ⊗ #) < ∞;

see, e.g., Proposition 4.7.30 in [9].
Recall from Lemma 2.5 that

∫ (n^{−p} N(n))² d(P ⊗ #) = n^{−2p} E(S_2(n)) = n^{−2p} (Γ(n + 2p)/Γ(n)) Σ_{k=1}^{n} Γ(k)/Γ(k + 2p).
On the one hand, we know that lim_{n→∞} Γ(n + 2p)/(n^{2p} Γ(n)) = 1, and on the other hand, we recall from (6) that the series Σ_{k=1}^{∞} Γ(k)/Γ(k + 2p) converges since p > 1/2. We conclude from Lemma 2.2 that indeed lim_{n→∞} n^{−2p} E(S_2(n)) = ∫ (X^{(p)})² d(P ⊗ #), which completes the proof.

Relation to step reinforced random walks
It is interesting to combine Donsker's theorem with the continuous mapping theorem; notably, considering the overall supremum of paths yields the well-known Kolmogorov–Smirnov test. In this direction, linear mappings of the type ω ↦ ∫_{[0,1]} ω(x) m(dx), where m is some finite measure on [0, 1], are amongst the simplest functionals on D. Writing m̄(x) = m((x, 1]) for the tail distribution function and μ = ∫_{[0,1]} x m(dx) for the mean, this leads us to consider the variables

ξ_j = m̄(U_j) − μ and ξ̂_j = m̄(Û_j) − μ, j ≥ 1.

So (ξ_j)_{j≥1} is an i.i.d. sequence and (ξ̂_j)_{j≥1} can be viewed as the reinforced sequence resulting from Simon's algorithm. All these variables have the same distribution; they are bounded and centered, with variance

ς² = ∫_{[0,1]} ∫_{[0,1]} (x ∧ y − xy) m(dx) m(dy) = E((∫_{[0,1]} G(x) m(dx))²),

where G = (G(x))_{0≤x≤1} is a Brownian bridge. In this setting, we have

∫_{[0,1]} Ĝ_n(x) m(dx) = (ξ̂_1 + · · · + ξ̂_n)/√n.
The process of the partial sums

Ŝ(n) = ξ̂_1 + · · · + ξ̂_n, n ≥ 0,

is called a step reinforced random walk. We now immediately deduce from Theorem 1.2 and the continuous mapping theorem that its asymptotic behavior is given by:
(i) if p < 1/2, then n^{−1/2} Ŝ(n) converges in distribution to a centered Gaussian variable with variance ς²/(1 − 2p);
(ii) if p = 1/2, then (n log n)^{−1/2} Ŝ(n) converges in distribution to a centered Gaussian variable with variance ς²;
(iii) if p > 1/2, then n^{−p} Ŝ(n) converges in probability to Σ_{j=1}^{∞} X_j^{(p)} ξ_j,
where X^{(p)} = (X_j^{(p)})_{j≥1} has been defined in Lemma 2.2 and is independent of the ξ_j.
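As an illustration of the diffusive regime (i), one can take for the generic step a Rademacher variable (uniform on {−1, +1}), which is the correlated-Bernoulli setting recalled at the end of this section. The simulation below is our own sketch, not code from the references.

```python
import random

def step_reinforced_walk(n, p, rng):
    """Partial sums of a step reinforced random walk with +-1 steps.

    Each step repeats a uniformly chosen earlier step with probability p
    (Simon's algorithm applied to the steps themselves), and otherwise
    draws a fresh Rademacher variable.
    """
    steps = [rng.choice((-1, 1))]
    for _ in range(n - 1):
        steps.append(rng.choice(steps) if rng.random() < p else rng.choice((-1, 1)))
    total, partial = 0, []
    for s in steps:
        total += s
        partial.append(total)
    return partial

rng = random.Random(11)
walk = step_reinforced_walk(20000, 0.3, rng)
```

With p = 0.3 < 1/2, the terminal value is of order √(n/(1 − 2p)), i.e. a few hundred here, rather than of order n: the reinforcement inflates the variance by the constant factor 1/(1 − 2p) but does not break the √n scaling.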
Although this argument only enables us to deal with bounded real random variables ξ, we stress that, more generally, the assertions (i), (ii) and (iii) still hold when the generic step ξ is an arbitrary square integrable and centered variable in R^d (for d ≥ 2, ς² is then of course the covariance matrix of ξ). Specifically, (i) follows from the invariance principle for step reinforced random walks (see Theorem 3.3 in [7]), whereas (iii) is Theorem 3.2 in the same work; see also [6]. In the critical case p = 1/2, (ii) can be deduced from the basic identity Ŝ(n) = Σ_{j=1}^{∞} N_j(n) ξ_j, the Lévy–Lindeberg theorem (see, e.g., Theorem 5.2 of Chapter VII in [17]), and Lemmas 3.3 and 3.5.
In this vein, we mention that when ξ has the Bernoulli distribution, (i)–(iii) are due originally to Heyde [16] in the setting of the so-called correlated Bernoulli processes; see also [18,28,29]. These results have also appeared more recently in the framework of the so-called elephant random walk, a random walk with memory which was introduced by Schütz and Trimper [26]. See notably [2,3,11,12], and also [4,5,14,15] and references therein for some further developments. We mention that Kürsten [23] first pointed at the role of Bernoulli bond percolation on random recursive trees in this framework; see also [10]. It is moreover interesting to recall that, for the elephant random walk, Kubota and Takei [22] have established that the fluctuations corresponding to (iii) are Gaussian. Whether or not the same holds for general step reinforced random walks is still open; this also suggests that for p > 1/2, Theorem 1.2(iii) might be refined to yield a second order weak limit theorem involving again a Brownian bridge in the limit.