Scaling exponents of step-reinforced random walks

Let X_1, X_2, … be i.i.d. copies of some real random variable X. For any deterministic ε_2, ε_3, … in {0, 1}, a basic algorithm introduced by H. A. Simon yields a reinforced sequence X̂_1, X̂_2, … as follows.
If ε_n = 0, then X̂_n is a uniform random sample from X̂_1, …, X̂_{n−1}; otherwise X̂_n is a new independent copy of X. The purpose of this work is to compare the scaling exponent of the usual random walk S(n) = X_1 + ⋯ + X_n with that of its step-reinforced version Ŝ(n) = X̂_1 + ⋯ + X̂_n.
Depending on the tail of X and on the asymptotic behavior of the sequence (ε_n), we show that step reinforcement may speed up the walk, or on the contrary slow it down, or also not affect the scaling exponent at all. Our motivation partly stems from the study of random walks with memory, notably the so-called elephant random walk and its variations.


Introduction
In 1955, Herbert A. Simon [24] introduced a simple reinforcement algorithm that runs as follows. Consider a deterministic sequence (ε_n) in {0, 1} with ε_1 = 1. The n-th step of the algorithm corresponds to an innovation if ε_n = 1, and to a repetition if ε_n = 0. Specifically, denote the number of innovations after n steps by σ(n) = ε_1 + ⋯ + ε_n for n ≥ 1, and let also X_1, X_2, … denote a sequence of different items (in [24], these items are words). One constructs recursively a random sequence of items X̂_1, X̂_2, …: at the n-th step, if ε_n = 1, one sets X̂_n = X_{σ(n)}, the next item that has not been used before; if ε_n = 0, then X̂_n is a uniform random sample from the previously recorded items X̂_1, …, X̂_{n−1}. Simon was especially interested in regimes where either (ε_n) converges in Cesàro mean to some limit q ∈ (0, 1), which we will refer to as steady innovation with rate q, or σ(n) grows like n^ρ for some ρ ∈ (0, 1) as n → ∞, which we will call slow innovation with exponent ρ. By analyzing the frequencies of words with some fixed number of occurrences, he pointed out that these regimes yield a remarkable one-parameter family of power-tail distributions, known nowadays as the Yule–Simon laws, which arise in a variety of empirical data. This is also closely related to preferential attachment dynamics, see e.g. [10] for an application to the World Wide Web. Clearly, repetitions in Simon's algorithm should be viewed as a linear reinforcement, the probability that a given item is repeated being proportional to the number of its previous occurrences.
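Simon's algorithm as described above can be sketched in a few lines of code; this is only an illustrative simulation, and the function name is ours, not from [24].

```python
import random

def simon_sequence(items, eps, rng=random):
    """Run Simon's reinforcement algorithm.

    items: the distinct items X_1, X_2, ... (consumed one per innovation)
    eps:   sequence of 0/1 innovation indicators, with eps[0] = 1
    Returns the reinforced sequence hat_X_1, ..., hat_X_n.
    """
    it = iter(items)
    hat = []
    for e in eps:
        if e == 1:
            hat.append(next(it))          # innovation: a brand-new item
        else:
            hat.append(rng.choice(hat))   # repetition: uniform pick of the past
    return hat
```

Note that the uniform pick among the past entries automatically repeats each item with probability proportional to its number of previous occurrences, which is the linear-reinforcement property noted above.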
In the present work, the items X_1, X_2, … are i.i.d. copies of some real random variable X, which we further assume to be independent of the uniform variables U(2), U(3), … used for the sampling at repetition steps. Note that although the variable X̂_n has the same distribution as X for every n ≥ 1, the reinforced sequence (X̂_n) is not stationary; it can also be seen that its tail sigma-field is not even independent of X̂_1. Picking up on a key question in the general area of reinforced processes (see notably the survey [21] by Pemantle, and also some more recent works [2,14,15,18,22] and references therein), our purpose is to analyze how reinforcement affects the growth of partial sums. Specifically, we write S(n) = X_1 + ⋯ + X_n for the usual random walk with step distribution X, and Ŝ(n) = X̂_1 + ⋯ + X̂_n for its reinforced version, and we would like to compare Ŝ(n) and S(n) when n ≫ 1. The main situation of interest is when S has a scaling exponent α ∈ (0, 2], in the sense that

n^{−1/α} S(n) ⟹ Y in distribution as n → ∞, (1)

where Y denotes an α-stable variable. Recall that this holds if and only if the typical step X belongs to the domain of normal attraction (without centering) of a stable distribution, in the terminology of Gnedenko and Kolmogorov [16]. We shall refer to (1) as an instance of α-diffusive asymptotic behavior, the usual diffusive situation corresponding to α = 2. The asymptotic behavior of the step-reinforced random walk Ŝ has been considered previously in the literature when ε_2, ε_3, … are random and given by i.i.d. samples of the Bernoulli law with parameter q ∈ (0, 1); this is of course a most important case of a steady regime with innovation rate q a.s. It has been shown recently in [8] that when X ∈ L²(P), the asymptotic growth of Ŝ exhibits a phase transition at q_c = 1/2.
Specifically, assuming for simplicity that X is centered, then on the one hand, for q < 1/2, there is some non-degenerate random variable V such that

lim_{n→∞} n^{−(1−q)} Ŝ(n) = V a.s. (2)

In other words, (2) shows that for q < 1/2, Ŝ has scaling exponent α̂ = 1/(1 − q), or equivalently, grows with exponent 1/α̂ = 1 − q, and in particular is super-diffusive since 1/α̂ > 1/2. On the other hand, the step-reinforced random walk remains diffusive for q > 1/2, in the sense that n^{−1/2} Ŝ(n) converges in law to some Gaussian variable. This phase transition was first established when X is a Rademacher variable, i.e. P(X = 1) = P(X = −1) = 1/2. Indeed, Kürsten [20] observed that Ŝ is then a version of the so-called elephant random walk, a nearest-neighbor process with memory which was introduced by Schütz and Trimper [23] and has since raised much interest. The description of the asymptotic behavior of the elephant random walk has motivated many works, see notably [3,5,6,12,13,19]. Further, when the typical step X has a symmetric stable distribution with index α ∈ (0, 2], Ŝ is the so-called shark random swim, which has been studied in depth by Businger [11]. Its large-time asymptotic behavior exhibits a similar phase transition for α > 1, now for the critical parameter q_c = 1 − 1/α. When α ≤ 1, there is no such phase transition and Ŝ has the same scaling exponent α as S. See also [7] for related results in the setting of Lévy processes. The results that we just recalled suggest that, more generally, for any steady innovation regime and any typical step X belonging to the domain of normal attraction of an α-stable distribution (i.e. such that (1) is fulfilled), the following should hold. First, for α ∈ (0, 1), the random walk S and its step-reinforced version Ŝ should have the same scaling exponent α̂ = α, independently of the innovation rate.
Second, for α ∈ (1, 2], if the innovation rate q is larger than q_c = 1 − 1/α, then again the scaling exponents of S and Ŝ should coincide, whereas if q < 1 − 1/α, then the super-α-diffusive behavior (2) should hold and Ŝ should thus have scaling exponent α̂ = 1/(1 − q) < α. We shall see in Theorems 2 and 4 that this guess is indeed correct. In particular, the weaker the innovation (or equivalently, the stronger the reinforcement), the faster the step-reinforced random walk Ŝ grows.
An informal explanation for this phase transition is as follows. When α ∈ (1, 2], the α-diffusive behavior (1) of S relies on some kind of balance between its positive and negative steps (recall that X must be centered, i.e. E(X) = 0). For sufficiently small innovation rates q, the reinforcement effect of Simon's algorithm causes certain steps to be repeated much more often than the others, up to the point that this balance is disrupted. More precisely, we shall see that in a steady regime with innovation rate q, the maximal number of repetitions of a same item up to the n-th step of Simon's algorithm grows with exponent 1 − q. For q > q_c = 1 − 1/α, this is smaller than the growth exponent 1/α of S, and repetitions have only a rather limited impact on the asymptotic behavior of Ŝ. On the contrary, for q < q_c, some increments are repeated much more often, and the growth of Ŝ is then rather governed by the latter, yielding (3).
We now turn our attention to regimes with slow innovation. Extrapolating from the steady regime, one might expect that reducing the innovation should again speed up the step-reinforced random walk. This intuition turns out to be wrong, and we will see that, on the contrary, in slow regimes, diminishing the innovation actually slows down the walk. More precisely, there is another phase transition when α ∈ (0, 1), occurring now at the critical innovation exponent ρ_c = α. Specifically, if ρ < α, then we shall see in Theorem 1 that Ŝ always has scaling exponent α̂ = 1 (i.e. a ballistic asymptotic behavior, which contrasts with the growth with exponent 1/α > 1 for S), whereas for ρ > α, we will see in Theorem 3 that Ŝ rather has scaling exponent α̂ = α/ρ > α, that is, it now grows with exponent 1/α̂ = ρ/α > 1, but nonetheless still significantly slower than S. On the other hand, when α ≥ 1, there is no phase transition for slow innovation regimes and Ŝ always has scaling exponent α̂ = 1.
This apparently surprising feature can be explained informally as follows. As argued above, for α ∈ (1, 2], E(X) = 0 and the super-diffusive regime (2) results from the disruption of the balance between positive and negative steps when certain steps are repeated much more than others. By contrast, for α ∈ (0, 1), the typical step X has a heavy-tailed distribution with E(|X|) = ∞. In this situation, it is well known that for n ≫ 1, |S(n)| has roughly the same size as its largest step up to time n, max{|X_i| : 1 ≤ i ≤ n}. Regimes with slow innovation delay the occurrence of the rare events at which steps are exceptionally large. Therefore they induce a slow-down effect for the step-reinforced random walk, up to the point that when the innovation exponent drops below a critical value, Ŝ has merely ballistic growth. This aspect will be further discussed quantitatively in Sect. 5.
A somewhat simpler version of the main results of our work is summarized in Fig. 1. It expresses the scaling exponent α̂ of Ŝ in terms of the scaling exponent α ∈ (0, 2] of S and an innovation parameter ρ > 0. The slow regime corresponds to ρ ∈ (0, 1), and ρ is then the innovation exponent as usual. The steady regime corresponds to ρ > 1, and the rate of innovation is then given by q = 1 − 1/ρ. This new parametrization for steady regimes of innovation may seem artificial; nonetheless we stress that the same one is actually used for the definition of the one-parameter family of Yule–Simon distributions; see Lemma 3.
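The phase diagram just described can be condensed into a small piecewise formula; the helper below is ours and simply transcribes the statements above (ballistic phase for slow innovation with ρ < α or α ≥ 1, exponent α/ρ otherwise in the slow regime, and for steady regimes the exponent ρ = 1/(1 − q) in the super-diffusive phase ρ < α, and α otherwise).

```python
def scaling_exponent(alpha, rho):
    """Scaling exponent of the step-reinforced walk, per the summary above.

    alpha: scaling exponent of S, in (0, 2]
    rho:   innovation parameter; rho in (0, 1) is the slow-regime exponent,
           rho > 1 encodes the steady regime with rate q = 1 - 1/rho.
    (The critical cases rho = alpha and rho = 1 are left out.)
    """
    if rho < 1:
        # slow innovation: ballistic when rho < alpha (in particular
        # whenever alpha >= 1), and alpha/rho otherwise
        return 1.0 if rho < alpha else alpha / rho
    # steady innovation: super-diffusive phase when rho < alpha,
    # i.e. q < 1 - 1/alpha, with exponent 1/(1 - q) = rho
    return rho if rho < alpha else alpha
```

For instance, α = 2 and q = 0.2 (so ρ = 1.25 < α) give the super-diffusive exponent α̂ = 1.25, while q = 0.6 (ρ = 2.5 > α) leaves the diffusive exponent α̂ = 2.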
The cornerstone of our approach is provided by Lemma 2, where we observe that the process counting the number of occurrences of a given item in Simon's algorithm can be turned into a square-integrable martingale. The latter is a close relative of another martingale that occurs naturally in the setting of the elephant random walk; see [6,12,13,19], among others. The upshot of Lemma 2 is that it yields useful estimates for these numbers of occurrences and their asymptotic behavior, which hold uniformly for all items.
The plan for the rest of this article is as follows. Section 2 is devoted to preliminaries on the stable central limit theorem, on martingales induced by occurrence counting processes in Simon's algorithm, and on the Yule-Simon distributions. We state and prove our main results in Sects. 3 and 4. Finally, several comments are given in Sect. 5.
Fig. 1 Scaling exponent α̂ of a step-reinforced random walk in terms of the innovation parameter ρ and the scaling exponent α of the original random walk. Results above the diagonal ρ = α are strong (convergence in probability), those below the diagonal are weak (convergence in distribution)

Preliminaries
Given two sequences a(n) and b(n) of positive real numbers, it will be convenient to use the following notation throughout this work: a(n) ∼ b(n) means that lim a(n)/b(n) = 1; a(n) ≍ b(n) means that the ratio a(n)/b(n) remains bounded away from 0 and from ∞; and a(n) ≈ b(n) means that a(n) = b(n)^{1+o(1)} as n → ∞.

Background on the stable central limit theorem
We assume in this section that the step distribution belongs to the domain of normal attraction (without centering) of some stable distribution, i.e. that (1) holds for some α ∈ (0, 2]. The Cauchy case α = 1 has some peculiarities, and for the sake of simplicity it will be ruled out from time to time. We present some classical results in this framework that will be useful later on. We start by recalling that for α = 2, (1) holds if and only if X is centered with finite variance; see Theorem 4 on p. 181 in [16]. For α ∈ (0, 1), (1) is equivalent to

P(X > x) ∼ c_+ x^{−α} and P(X < −x) ∼ c_− x^{−α} as x → ∞,

for some nonnegative constants c_+ and c_− with c_+ + c_− > 0. Finally, for α ∈ (1, 2), (1) holds if and only if the same condition as above is fulfilled and furthermore X is centered. See Theorem 5 on pp. 181–182 in [16].
We denote the characteristic function of X by Φ(θ) = E(exp(iθX)), and the characteristic exponent of the stable variable Y by ϕ_α; that is, ϕ_α : R → C is the unique continuous function with ϕ_α(0) = 0 such that

E(exp(iθY)) = exp(−ϕ_α(θ)) for all θ ∈ R.

In particular, ϕ_α is homogeneous with degree α, in the sense that

ϕ_α(aθ) = a^α ϕ_α(θ) for all a > 0 and θ ∈ R.

In this setting, (1) can be expressed classically as

lim_{n→∞} Φ(n^{−1/α} θ)^n = exp(−ϕ_α(θ)) for all θ ∈ R, (3)

but we shall rather use a logarithmic version of (3). Pick r > 0 sufficiently small so that |1 − Φ(θ)| < 1 whenever |θ| ≤ r, and then define ϕ : [−r, r] → C as the continuous determination of the logarithm of Φ on [−r, r], i.e. the unique continuous function with ϕ(0) = 0 and such that Φ(θ) = exp(−ϕ(θ)) for all θ ∈ [−r, r]. Theorem 2.6.5 in Ibragimov and Linnik [17] entails that (3) can be rewritten in the form

lim_{t→∞} t ϕ(t^{−1/α} θ) = ϕ_α(θ) for all θ ∈ R. (4)

We stress that the parameter t in (4) is real, whereas n in (3) is an integer; as a consequence, we have also that ϕ(θ) ∼ ϕ_α(θ) as θ → 0.

Martingales in Simon's algorithm
Recall Simon's algorithm from the Introduction, and in particular that σ(n) stands for the number of innovations up to the n-th step. In this work, we will be mostly concerned with the cases where either the sequence σ(·) is regularly varying with exponent ρ ∈ (0, 1), that is

lim_{n→∞} σ(⌊cn⌋)/σ(n) = c^ρ for every c > 0, (5)

or

∑_{n≥1} n^{−2} |σ(n) − qn| < ∞ for some q ∈ (0, 1). (6)

It is easily checked that (6) implies σ(n) ∼ qn, and conversely, (6) holds whenever σ(n)/n = q + O(log^{−β} n) for some β > 1. We refer to (5) as the slow regime with innovation exponent ρ ∈ (0, 1), and to (6) as the steady regime with innovation rate q ∈ (0, 1). Often, it is convenient to set ρ = 1/(1 − q) for q ∈ (0, 1) and then view ρ ∈ (0, 1) ∪ (1, ∞) as a parameter for the innovation, with ρ > 1 corresponding to steady regimes. Several of our results however rely on much weaker assumptions; in any case, we shall always assume at least that the total number of innovations is infinite and that the number of repetitions is not sub-linear, i.e.

lim_{n→∞} σ(n) = ∞ and lim sup_{n→∞} σ(n)/n < 1. (7)
Simon's algorithm induces a natural partition of the set of indices N = {1, 2, …} into blocks

B_j := {n ∈ N : X̂_n = X_j}, j = 1, 2, ….

In words, B_j is the set of steps of Simon's algorithm at which the j-th item X_j is repeated. We consider for every n ∈ N the restriction of the preceding partition to {1, …, n}, that is, we set B_j(n) := B_j ∩ {1, …, n}; plainly B_j(n) is nonempty if and only if j ≤ σ(n). Last, we write |B_j(n)| for the number of elements of B_j(n), and arrive at the following basic expression for the step-reinforced random walk:

Ŝ(n) = ∑_{j=1}^{σ(n)} |B_j(n)| X_j, (8)

where the X_j are i.i.d. copies of X, and further independent of the random coefficients |B_j(n)|.
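The block representation (8) is straightforward to illustrate by simulation: running Simon's algorithm while tracking which block each step falls in, the reinforced sum and the weighted sum of distinct items agree exactly. The sketch below is ours (helper and variable names are illustrative only).

```python
import random

def reinforced_walk_blocks(steps, eps, rng):
    """Run Simon's algorithm on i.i.d. steps and track the block sizes |B_j(n)|.

    steps: i.i.d. items X_1, X_2, ...; eps: 0/1 innovation indicators.
    Returns (hat_S, block_sizes), with hat_S the reinforced partial sum and
    block_sizes[j] = |B_{j+1}(n)| for n = len(eps).
    """
    labels = []           # labels[k] = index j of the item used at step k+1
    block_sizes = []
    for e in eps:
        if e == 1:
            labels.append(len(block_sizes))   # new block for a new item
            block_sizes.append(1)
        else:
            j = rng.choice(labels)            # repeat with prob. prop. to |B_j|
            labels.append(j)
            block_sizes[j] += 1
    hat_S = sum(steps[j] for j in labels)
    return hat_S, block_sizes

rng = random.Random(1)
eps = [1] + [1 if rng.random() < 0.3 else 0 for _ in range(999)]  # rate ~ 0.3
xs = [rng.gauss(0.0, 1.0) for _ in range(1000)]
hat_S, sizes = reinforced_walk_blocks(xs, eps, rng)
# identity (8): hat_S = sum_j |B_j(n)| X_j, and the blocks partition {1,...,n}
assert abs(hat_S - sum(s * xs[j] for j, s in enumerate(sizes))) < 1e-9
assert sum(sizes) == len(eps)
assert len(sizes) == sum(eps)                 # one block per innovation
```

Choosing a uniform label among the past steps reproduces the linear reinforcement: block j is extended with probability |B_j(n)|/n at a repetition step.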
The identity (8) invites us to investigate the asymptotic behavior of the coefficients |B_j(n)|. In this direction, we introduce the quantities

π(1) := 1 and π(n) := ∏_{k=2}^{n} (1 + (1 − ε_k)/(k − 1)) for n ≥ 2,

and the times of innovation

τ(j) := inf{n ≥ 1 : σ(n) = j}, j ≥ 1.

We stress that these quantities are deterministic, since the sequence (ε_n) is deterministic.
We start with a simple lemma.

Lemma 1 (i) If (5) holds, then π(n) ≍ n. (ii) If (6) holds, then π(n) ∼ c n^{1−q} for some constant c > 0. (iii) If (7) holds, then ∑_{n≥1} 1/(nπ(n)) < ∞.
Proof We have from the definition of π(n) that

log π(n) = ∑_{k=2}^{n} log(1 + (1 − ε_k)/(k − 1)) = log n − ∑_{k=2}^{n} ε_k/(k − 1) + O(1).

Next we observe by summation by parts that

∑_{k=2}^{n} ε_k/(k − 1) = σ(n)/(n − 1) + ∑_{k=2}^{n−1} σ(k)/(k(k − 1)) + O(1),

and (i) follows, since in the slow regime σ(n)/n → 0 and the last series converges. Next, when (6) holds, we write

∑_{k=2}^{n−1} σ(k)/(k(k − 1)) = q ∑_{k=2}^{n−1} 1/(k − 1) + ∑_{k=2}^{n−1} (σ(k) − qk)/(k(k − 1)).

The second series in the right-hand side converges absolutely; as a consequence,

log π(n) = (1 − q) log n + c′ + o(1) for some c′ ∈ R,

and (ii) follows. Finally, assume (7). There is a < 1 such that σ(k) ≤ ak for all k sufficiently large. It follows that there is some b > 0 such that, for all n,

log π(n) ≥ (1 − a) log n − b.

We conclude that 1/(nπ(n)) = O(n^{a−2}), which entails the last claim.
The next result determines the asymptotic behavior of the sequences |B_j(·)| for all j ∈ N, and will therefore play a key role in our analysis.

Lemma 2 Assume (7). For every j ∈ N, the process

M_j(n) := |B_j(n)|/π(n), n ≥ τ(j),

started at time τ(j), is a square-integrable martingale. It converges a.s. and in L²(P) to some random variable Λ_j ≥ 0 with

E(Λ_j) = 1/π(τ(j)) and E(Λ_j²) ≤ 1/π(τ(j))² + (1/π(τ(j))) ∑_{n≥τ(j)} 1/((n+1)π(n)).
Proof The martingale property follows from the dynamics of Simon's algorithm: given the first n steps, if ε_{n+1} = 0, the (n+1)-th step repeats an item of the j-th block with conditional probability |B_j(n)|/n, so that E(|B_j(n+1)| | F_n) = |B_j(n)| π(n+1)/π(n). We next have to check that the mean of the quadratic variation of the martingale is finite, namely that

∑_{n≥τ(j)} E((M_j(n+1) − M_j(n))²) < ∞;

thanks to Lemma 1, the remaining assertions are then immediate.

In this direction, we first note that the terms in the sum above that correspond to an innovation (i.e. ε_{n+1} = 1) are zero and can thus be discarded. Let ε_{n+1} = 0, so that π(n+1) = π(n)(1 + 1/n). We then have

E((M_j(n+1) − M_j(n))² | F_n) = |B_j(n)|(n − |B_j(n)|)/((n+1)² π(n)²) ≤ M_j(n)/((n+1)π(n)). (10)

On the one hand, since E(M_j(n)) = 1/π(τ(j)), we deduce from (10) the bound

E((M_j(n+1) − M_j(n))²) ≤ 1/((n+1)π(n)π(τ(j))).

Summing over n ≥ τ(j) and invoking Lemma 1(iii), the proof of the statement is now complete.
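The martingale identity E(|B_j(n)|) = π(n)/π(τ(j)) can be checked numerically by Monte Carlo; the sketch below is ours and uses the definition π(n) = ∏_{k=2}^{n} (1 + (1 − ε_k)/(k − 1)) given above, for the first block (j = 1, τ(1) = 1, π(1) = 1) under an alternating innovation pattern.

```python
import random

rng = random.Random(7)
N = 200
# innovation pattern: eps_k = 1 iff k is odd (so eps_1 = 1, rate ~ 1/2)
eps = [1 if k % 2 == 1 else 0 for k in range(1, N + 1)]

# pi(n) = prod_{k=2}^{n} (1 + (1 - eps_k)/(k - 1)), with pi(1) = 1
pi = [1.0]
for k in range(2, N + 1):
    pi.append(pi[-1] * (1 + (1 - eps[k - 1]) / (k - 1)))

def sample_block_size():
    """Size |B_1(N)| of the first item's block in one run of Simon's algorithm."""
    b = 1
    for n in range(1, N):                      # deciding step n + 1
        if eps[n] == 0 and rng.random() < b / n:
            b += 1                             # repetition falls in block 1
    return b

runs = 20000
mean_M = sum(sample_block_size() for _ in range(runs)) / runs / pi[N - 1]
# |B_1(n)|/pi(n) is a martingale started from 1 at time tau(1) = 1,
# so its expectation stays equal to 1 (up to Monte Carlo error)
assert abs(mean_M - 1.0) < 0.1
```

The empirical mean of |B_1(N)|/π(N) concentrates around 1, as the martingale property predicts; the tolerance only accounts for Monte Carlo fluctuations.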
As an immediate consequence, we point at the following handier estimate for the second moment of Λ_j.

Corollary 1 Assume (7) and further that π(n) ≍ n^a for some a > 0. Then

E(Λ_j²) ≍ π(τ(j))^{−2}.
Proof On the one hand, there is the lower bound E(Λ_j²) ≥ E(Λ_j)² = π(τ(j))^{−2}. On the other hand, our assumption also entails that for some b, b′ > 0, we have

∑_{n≥τ(j)} 1/((n+1)π(n)) ≤ b τ(j)^{−a} ≤ b′/π(τ(j)),

and we conclude with Lemma 2.

Yule-Simon distributions
Recall that the slow and the steady regimes have been defined by (5) and (6), respectively. Simon [24] observed that in each regime, the empirical measure of the sizes of the blocks |B_j(n)| converges to a deterministic distribution.

Lemma 3 (Simon [24]) Let ρ > 0. For 0 < ρ < 1, consider the regime (5) of slow innovation with exponent ρ, whereas for ρ > 1, set q = 1 − 1/ρ ∈ (0, 1) and consider the regime (6) of steady innovation with rate q. In both regimes, for every k ∈ N, we have

lim_{n→∞} (1/σ(n)) Card{j ≤ σ(n) : |B_j(n)| = k} = ρ B(k, ρ + 1),

where B is the Beta function and the convergence holds in L^p for any p ≥ 1.
The limiting distribution in the statement is called the Yule–Simon distribution with parameter ρ. Strictly speaking, Simon only established the stated convergence in expectation. A classical argument of propagation of chaos yields the stronger convergence in probability, see e.g. Section 5 in [4]; and since the random variables in the statement are obviously bounded by 1, convergence in L^p also holds for any p ≥ 1.
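The Yule–Simon probability mass function k ↦ ρB(k, ρ + 1) is easy to tabulate; the sketch below (function name ours) computes the Beta function via log-Gamma for numerical stability, and checks the normalization and the value at k = 1, namely ρ/(ρ + 1).

```python
from math import lgamma, exp

def yule_simon_pmf(k, rho):
    """P(K = k) = rho * B(k, rho + 1) for the Yule-Simon law, k = 1, 2, ..."""
    # B(x, y) = Gamma(x) Gamma(y) / Gamma(x + y), via log-Gamma for stability
    return rho * exp(lgamma(k) + lgamma(rho + 1) - lgamma(k + rho + 1))

rho = 1.5
# P(K = 1) = rho * B(1, rho + 1) = rho / (rho + 1)
assert abs(yule_simon_pmf(1, rho) - rho / (rho + 1)) < 1e-12
# the pmf has a power tail ~ k^{-(1 + rho)} and sums to 1
total = sum(yule_simon_pmf(k, rho) for k in range(1, 200000))
assert abs(total - 1.0) < 1e-3
```

The slow decay of the tail (of order k^{−(1+ρ)}) is what makes moments of order β ≥ ρ infinite, a point used below for uniform integrability.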
The next lemma will be needed to check some uniform integrability properties.
Lemma 4 In both regimes of Lemma 3, we have

sup_{n≥1} (1/σ(n)) ∑_{j=1}^{σ(n)} E(|B_j(n)|^β) < ∞ for any β < ρ.

Note that the Yule–Simon distribution with parameter ρ has a finite moment of order β if and only if β < ρ, in agreement with Fatou's lemma and Lemmas 3 and 4.
Proof (i) Recall from Lemma 1 that in the slow regime, there are the bounds n/c ≤ π(n) ≤ cn for all n ∈ N, where c > 1 is some constant. Since, from Lemma 2, E(|B_j(n)|/π(n)) = 1/π(τ(j)), we get by Jensen's inequality that

E(|B_j(n)|^β) ≤ (π(n)/π(τ(j)))^β ≤ c^{2β} (n/τ(j))^β,

and hence

(1/σ(n)) ∑_{j=1}^{σ(n)} E(|B_j(n)|^β) ≤ c^{2β} n^β (1/σ(n)) ∑_{j=1}^{σ(n)} τ(j)^{−β} = O(n^β τ(σ(n))^{−β}),

where for the O upper bound, we used the fact that the inverse function τ of σ is regularly varying with exponent 1/ρ (Theorem 1.5.12 in [9]), and Proposition 1.5.8 in [9] since −β/ρ > −1. On the other hand, since τ is the right-inverse of σ, we have τ(σ(n)) ≤ n ≤ τ(σ(n) + 1), so again by regular variation, τ(σ(n)) ∼ n. Finally n^β τ(σ(n))^{−β} = O(1), as we wanted to verify.
(ii) The proof is similar to (i), using now that there exists c > 0 such that

E(|B_j(n)|^β) ≤ c (π(n)/π(τ(j)))^β,

as is readily seen from Corollary 1.

Strong limit theorems
In this section, we will establish two strong limit theorems for step-reinforced random walks, the first concerns slow innovation regimes, and the second steady ones.

Theorem 1 Suppose that (5) holds for some ρ ∈ (0, 1), and that E(|X|^β) < ∞ for some β > ρ. Then

lim_{n→∞} Ŝ(n)/π(n) = V a.s.,

where V is some non-degenerate random variable.
We will deduce Theorem 1 by specializing the following more general result.

Lemma 5 Assume (7) and set Λ*_j := sup_{n≥τ(j)} |B_j(n)|/π(n) for j ∈ N. If

∑_{j≥1} Λ*_j |X_j| < ∞ a.s., (11)

then we have

lim_{n→∞} Ŝ(n)/π(n) = ∑_{j≥1} Λ_j X_j a.s.

Proof Thanks to (11), the claim follows from (8) and Lemma 2 by dominated convergence.

Proof of Theorem 1
Recall from Lemma 1(i) that π(n) ≍ n. From Lemma 5, it thus suffices to check that

∑_{j≥1} E((Λ*_j |X_j|)^β) < ∞, (12)

since then the condition (11) follows, the space ℓ^β being contained in ℓ¹ for β ≤ 1. Without loss of generality, we may assume that β < 1. Pick a > 0 sufficiently large so that

σ(n) ≤ a n^ρ for all n ≥ 1.

Since X_j is a copy of X which is independent of Λ*_j, we have

E((Λ*_j |X_j|)^β) = E((Λ*_j)^β) E(|X|^β).

Recall from Lemma 2 that |B_j(·)|/π(·) is a closed martingale with terminal value Λ_j. Then by Doob's maximal inequality, there is some numerical constant c_β > 0 such that E((Λ*_j)^β) ≤ c_β E(Λ_j)^β, and hence, again from Lemma 2,

E((Λ*_j)^β) ≤ c_β π(τ(j))^{−β} ≍ τ(j)^{−β}.

Finally, since τ(j) ≥ (j/a)^{1/ρ}, we conclude that

∑_{j≥1} E((Λ*_j |X_j|)^β) = O(∑_{j≥1} j^{−β/ρ}) < ∞,

which ensures (12) since β > ρ.

Super-α-diffusive behavior
We next turn our attention to the steady regime.

Theorem 2 Suppose that (6) holds for some q ∈ (0, 1/2), and that X is centered with E(|X|^β) < ∞ for some β > 1/(1 − q). Then

lim_{n→∞} n^{q−1} Ŝ(n) = V a.s.,

where V is some non-degenerate random variable.
The proof of Theorem 2 relies on the following martingale convergence result.

Lemma 6 Assume (7) and let β ∈ (1, 2]. Suppose that X ∈ L^β(P) with E(X) = 0, and further that

∑_{j≥1} E(Λ_j^β) < ∞. (13)

The process

V_n := ∑_{j=1}^{n} Λ_j X_j, n ≥ 1,

is then a martingale bounded in L^β(P); we write V_∞ for its terminal value. We have lim_{n→∞} Ŝ(n)/π(n) = V_∞ in L^β(P) and a.s.

Proof
The assertion that the process V_n is a martingale is straightforward, since the variables X_j are i.i.d., centered, and independent of the Λ_j. The assertion of boundedness in L^β(P) then follows from the assumption (13), the Burkholder–Davis–Gundy inequality, and the fact that, for any sequence (y_j)_{j∈N} of nonnegative real numbers, since β ≤ 2,

(∑_{j≥1} y_j)^{β/2} ≤ ∑_{j≥1} y_j^{β/2}.

The convergence of Ŝ(n)/π(n) in L^β(P) is proven similarly. Specifically, we observe from (8) that

Ŝ(n)/π(n) − V_{σ(n)} = ∑_{j=1}^{σ(n)} (|B_j(n)|/π(n) − Λ_j) X_j,

and recall that the variables X_j are independent of those appearing in Simon's algorithm. By the Burkholder–Davis–Gundy inequality, there exists a constant c_β ∈ (0, ∞) such that

E(|Ŝ(n)/π(n) − V_{σ(n)}|^β) ≤ c_β E(|X|^β) ∑_{j=1}^{σ(n)} E(|Λ_j − |B_j(n)|/π(n)|^β).

We know from Lemma 2 that for each j ≥ 1, lim_{n→∞} E(|Λ_j − |B_j(n)|/π(n)|^β) = 0, and further, by Jensen's inequality, that

E(|Λ_j − |B_j(n)|/π(n)|^β) ≤ (E(|Λ_j − |B_j(n)|/π(n)|²))^{β/2}.

The assumption (13) enables us to complete the proof of convergence of the sequence (Ŝ(n)/π(n)) in L^β(P) by dominated convergence.
The almost sure convergence then follows from the observation that the process Ŝ(n)/π(n) is a martingale (in the setting of the elephant random walk, a similar property has been pointed at in [6,12,13,19]). Indeed, we see from Simon's algorithm and the assumption E(X) = 0 that

E(Ŝ(n+1) | F_n) = Ŝ(n) + (1 − ε_{n+1}) Ŝ(n)/n = Ŝ(n) π(n+1)/π(n).

This immediately entails our assertion.

Proof of Theorem 2
Recall that we assume that E(|X|^β) < ∞ for some β > 1/(1 − q). Since q < 1/2, we can further suppose without loss of generality that β ≤ 2. Then, by Jensen's inequality, we have

∑_{j≥1} E(Λ_j^β) ≤ ∑_{j≥1} (E(Λ_j²))^{β/2},

and we just need to check that the right-hand side is finite, as then an appeal to Lemma 6 completes the proof. It follows from (6) and Lemma 1(ii) that τ(n) ∼ n/q and π(n) ≍ n^{1−q}, and then from Corollary 1 that E(Λ_j²) ≍ j^{−2+2q}. Since β − qβ > 1, the series ∑_{j≥1} j^{−β+qβ} converges, and the proof is finished.

Weak limit theorems
In this section, we will establish two weak limit theorems for step-reinforced random walks, depending on the innovation regimes.
Theorem 3 Suppose that (5) holds for some ρ ∈ (0, 1), and that (1) holds for some α ∈ (0, 1) with α < ρ. Then σ(n)^{−1/α} Ŝ(n) converges in distribution as n → ∞ to some non-degenerate α-stable variable.

Proof Note first that, since ρ > α, nσ(n)^{−1/α} goes to 0 as n → ∞, and a fortiori so does |B_j(n)|σ(n)^{−1/α}, uniformly for all j ∈ N. We fix θ ∈ R and get from (8) that, for n sufficiently large,

E(exp(iθσ(n)^{−1/α} Ŝ(n))) = E(exp(−∑_{j=1}^{σ(n)} ϕ(θσ(n)^{−1/α} |B_j(n)|))).

We focus on the sum in the right-hand side, and first consider the terms with |B_j(n)| ≤ k for some fixed k ∈ N. Write

∑_{j: |B_j(n)| ≤ k} ϕ(θσ(n)^{−1/α} |B_j(n)|) = ∑_{ℓ=1}^{k} N_ℓ(n) ϕ(θσ(n)^{−1/α} ℓ),

where N_ℓ(n) = Card{j ≤ σ(n) : |B_j(n)| = ℓ}. Next, recall from (4) that as n → ∞,

σ(n) ϕ(θσ(n)^{−1/α} ℓ) → ϕ_α(θℓ) = ℓ^α ϕ_α(θ).

We now deduce from Lemma 3 that for any fixed k ∈ N, there is the convergence

∑_{ℓ=1}^{k} N_ℓ(n) ϕ(θσ(n)^{−1/α} ℓ) → ϕ_α(θ) ∑_{ℓ=1}^{k} ℓ^α ρB(ℓ, ρ + 1) in L^p, for every p ≥ 1.
We can next complete the proof by an argument of uniform integrability. Recall that ϕ(λ) = O(|λ|^α) as λ → 0 and pick β ∈ (α, ρ). There exists a > 0 such that for all n sufficiently large and all k ≥ 1, there is the upper bound

∑_{j: |B_j(n)| > k} |ϕ(θσ(n)^{−1/α} |B_j(n)|)| ≤ a |θ|^α k^{α−β} σ(n)^{−1} ∑_{j=1}^{σ(n)} |B_j(n)|^β,

and the same inequality holds with ϕ_α replacing ϕ. We can then deduce from the preceding paragraph, in combination with Lemma 4, that actually

∑_{j=1}^{σ(n)} ϕ(θσ(n)^{−1/α} |B_j(n)|) → ϕ_α(θ) ∑_{ℓ=1}^{∞} ℓ^α ρB(ℓ, ρ + 1) in probability.
It now suffices to recall that the real part of ϕ is nonnegative, so that the variables above are bounded, and by dominated convergence,

lim_{n→∞} E(exp(iθσ(n)^{−1/α} Ŝ(n))) = exp(−ϕ_α(θ) ∑_{ℓ=1}^{∞} ℓ^α ρB(ℓ, ρ + 1)),

which completes the proof.
Lemma 7 enables us to duplicate the argument for the proof of Theorem 3, as the reader will readily check.

Miscellaneous remarks
• Technically, the fact that the indices of the steps at which innovations occur are deterministic eases our approach by pointing right from the start at the relevant quantities. Although our statements are only given for deterministic sequences (ε_n), they also apply to random sequences (ε_n) independent of (X_n), provided of course that we can check that the requirements hold a.s. A basic example, which has chiefly been dealt with in the literature, is when the ε_j are i.i.d. samples of the Bernoulli distribution with parameter q ∈ (0, 1), as then (6) obviously holds a.s. Plainly, independence of the ε_j is not a necessary assumption, and much less restrictive correlation structures suffice. For instance, if we merely suppose that each ε_j has the Bernoulli law with parameter q_j such that ∑_{n≥2} n^{−2} |∑_{j=2}^{n} (q_j − q)| < ∞, and that |Cov(ε_j, ε_ℓ)| ≤ |j − ℓ|^{−a} for some a > 0, then one readily verifies that (6) is fulfilled a.s. Similar examples can be developed to get slow innovation regimes, for instance assuming that each variable ε_j has a Bernoulli law with parameter q(j) ≈ j^{ρ−1} and again a mild condition on the correlations.
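The i.i.d. Bernoulli example can be illustrated by simulation: the innovation counter σ(n) tracks qn, and the discrepancy series entering the steady-regime condition (6) remains small. This is only a sanity-check sketch; the variable names are ours.

```python
import random

rng = random.Random(42)
q, n_max = 0.3, 100000
# i.i.d. Bernoulli(q) innovation indicators, with eps_1 = 1
eps = [1] + [1 if rng.random() < q else 0 for _ in range(n_max - 1)]

sigma = 0
discrepancy_sum = 0.0            # partial sums of n^{-2} |sigma(n) - q n|
for n, e in enumerate(eps, start=1):
    sigma += e
    discrepancy_sum += abs(sigma - q * n) / n**2
# law of large numbers: sigma(n)/n should be close to q
assert abs(sigma / n_max - q) < 0.01
# the series in (6) is a.s. finite; here we merely check it stays moderate
assert discrepancy_sum < 10.0
```

By the law of the iterated logarithm, |σ(n) − qn| is of order √(n log log n) a.s., so the series ∑ n^{−2}|σ(n) − qn| indeed converges, in line with (6).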
• Dwelling on an informal comment made in the Introduction, it may be interesting to compare the step-reinforced random walk Ŝ(n) with its maximal step X̂*_n := max_{1≤j≤n} |X̂_j|. Assume α ∈ (0, 2), and that P(|X| > x) ≈ x^{−α} (recall Sect. 2.1 about the characterization of stable domains of normal attraction). Plainly, there is the identity X̂*_n = X*_{σ(n)}, where X*_n := max_{1≤j≤n} |X_j|, from which we deduce that σ(n)^{−1/α} X̂*_n converges in distribution as n → ∞ to some Fréchet variable. Comparing with the results in Sects. 3 and 4, we now see that in the slow regime with innovation exponent ρ ∈ (0, 1), Ŝ grows with the same exponent as X̂* when α > ρ, and with a strictly larger exponent if α < ρ. Similarly, in the steady regime with innovation rate q ∈ (0, 1), Ŝ grows with the same exponent as X̂* when α > ρ = 1/(1 − q), and with a strictly larger exponent if α < ρ. In other words, the maximal step X̂* has a sensible impact in the strong limit theorems of Sect. 3, but its role is negligible for the weak limit theorems of Sect. 4.
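The identity X̂*_n = X*_{σ(n)} only uses the fact that every repetition step reproduces an earlier item, so the set of values appearing among X̂_1, …, X̂_n is exactly {X_1, …, X_{σ(n)}}. A quick sketch (ours, with heavy-tailed Pareto steps standing in for |X|):

```python
import random

rng = random.Random(3)
n = 5000
eps = [1] + [1 if rng.random() < 0.2 else 0 for _ in range(n - 1)]
xs = [rng.paretovariate(0.7) for _ in range(n)]   # heavy tail, alpha = 0.7

hat, used = [], 0
for e in eps:
    if e == 1:
        hat.append(xs[used])
        used += 1                  # 'used' equals sigma(n) at the end
    else:
        hat.append(rng.choice(hat))
# maximal reinforced step = maximal step among the first sigma(n) originals
assert max(hat) == max(xs[:used])
assert used == sum(eps)
```

Since repetitions never introduce new values, the maximum over the reinforced sequence coincides exactly with the maximum over the σ(n) distinct items, whatever the realization.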
• We have worked in the real setting for the sake of simplicity only; the arguments work as well for random walks in R^d with d ≥ 2. In this direction, one notably needs a multidimensional version of (4), which can be found in Section 2 of Aaronson and Denker [1]. The same sake of simplicity (possibly combined with the author's laziness) motivated our choice of working with domains of normal attraction rather than with domains of attraction. Most likely, dealing with this more general setting would only require very minor modifications of the present arguments and results.
• It would be interesting to complete the strong limit results (Theorems 1 and 2) and investigate the fluctuations of n^{−1/α̂} Ŝ(n) − V as n → ∞. In the setting of the elephant random walk, Kubota and Takei [19] have recently established that these fluctuations are Gaussian.
• The case where the generic step X has the standard Cauchy distribution is remarkable, due to the feature that for any a, b > 0, aX_1 + bX_2 has the same distribution as (a + b)X, where X_1 and X_2 are two independent copies of X. It follows that n^{−1} Ŝ(n) has the standard Cauchy distribution for all n, independently of the choice of the sequence (ε_n). This agrees of course with Theorems 1 and 4.
Funding Open access funding provided by University of Zurich.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.