1 Introduction

The aim of this paper is to study a vertex reinforced random walk (VRRW) on the integer lattice \(\mathbb{Z }\) with weight sequence \((w(n),n\ge 0) \in (0,\infty )^\mathbb{N }\), that is, a stochastic process \(X\) with transition probabilities given by

$$\begin{aligned} \mathbb{P }\{X_{n+1}=X_n-1 \mid X_0,X_1,\ldots ,X_n\}&= 1-\mathbb{P }\{X_{n+1}=X_n+1 \mid X_0,X_1,\ldots ,X_n\}\\&= \frac{w(Z_n(X_n-1))}{w(Z_n(X_n-1))+w(Z_n(X_n+1))}, \end{aligned}$$

where \(Z_n(x)\) denotes the number of visits of \(X\) to site \(x\) up to time \(n\). Assuming that the sequence \(w\) is non-decreasing, the walk tends to favour sites it has already visited many times, which justifies the denomination “reinforced”.
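To make the transition rule concrete, here is a minimal simulation sketch (our own illustration, not part of the original analysis; the function names are ours), using the linear weight \(w(n)=n+1\) as an example:

```python
import random

def vrrw(w, steps, seed=0):
    """Vertex reinforced random walk on Z: from X_n, jump to X_n - 1 with
    probability w(Z_n(X_n-1)) / (w(Z_n(X_n-1)) + w(Z_n(X_n+1)))."""
    rng = random.Random(seed)
    x, visits, path = 0, {0: 1}, [0]   # Z_0(0) = 1: the start counts as a visit
    for _ in range(steps):
        wl = w(visits.get(x - 1, 0))   # weight of the left neighbour's local time
        wr = w(visits.get(x + 1, 0))   # weight of the right neighbour's local time
        x += -1 if rng.random() < wl / (wl + wr) else 1
        visits[x] = visits.get(x, 0) + 1
        path.append(x)
    return path, visits

# Linear reinforcement w(n) = n + 1: the case where the walk is known to
# localize almost surely on five consecutive sites.
path, visits = vrrw(lambda n: n + 1, 10_000)
```

Running this for long times with the linear weight shows the range stabilizing, in line with the five-site localization result recalled below.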

This process was introduced by Pemantle [7] in \(1992\) and subsequently studied by several authors (see for instance [9–13] as well as Pemantle’s survey [8] and the references therein). A particularly interesting feature of the model is that the walk may get stuck on a finite set provided that the weight sequence \(w\) grows sufficiently fast. For instance, in the linear case \(w(n) =n+1\), it was proved in [9, 11] that the walk ultimately localizes, almost surely, on five consecutive sites. Furthermore, if the weight sequence is non-decreasing and grows even faster (namely \(\sum 1/w(n) < \infty \)), then the walk localizes almost surely on two sites c.f. [12]. On the other hand, if the weight sequence is regularly varying at infinity with index strictly smaller than \(1\), Volkov [13] proved that the walk cannot get stuck on any finite set (see also [10] for refined results in this case).

These previous studies left open the critical case where the index of regular variation of \(w\) is equal to \(1\) (except for linear reinforcement). In a recent paper [1], the authors studied the VRRW with super-linear weights and showed that the walk may localize on \(4\) or \(5\) sites depending on a simple criterion on the weight sequence. In this paper, we consider the remaining case where the weight function grows sub-linearly. We are interested in finding whether the walk localizes and, if so, to estimate the size of the localization set. More precisely, in the rest of the paper, we will consider weight sequences which satisfy the following properties:

Assumption 1.1

  1. (i)

    The sequence \((w(n))_{n\ge 0}\) is positive, non-decreasing, sub-linear and regularly varying with index \(1\) at infinity. Therefore, it can be written in the form:

    $$\begin{aligned} w(n) \!:=\! \frac{n}{\ell (n)}\quad \text{ where } \text{ the } \text{ sequence }\, \ell (n)\, \text{ satisfies } \left\{ \! \begin{array}{l} \lim _{n\rightarrow \infty }\ell (cn)/\ell (n) \!=\! 1 \text{ for } \text{ all } c\!>\!0\\ \lim _{n\rightarrow \infty }\ell (n) = \infty . \end{array} \right. \end{aligned}$$
  2. (ii)

    The sequence \(\ell (n)\) is eventually non-decreasing.
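As a concrete instance of Assumption 1.1, one may take \(\ell (n)=\exp (\log ^\alpha n)\) with \(\alpha \in (0,1)\), the family appearing in Proposition 1.5 below. A quick numerical check (purely illustrative, our own) of the two defining limits:

```python
import math

ALPHA = 0.5  # any exponent in (0, 1)

def ell(x):
    # Slowly varying factor l(x) = exp(log^alpha x); then w(x) = x / l(x).
    return math.exp(math.log(x) ** ALPHA)

# l(cn)/l(n) -> 1 for every fixed c > 0, while l(n) -> infinity:
for n in (10.0**6, 10.0**9, 10.0**12):
    print(f"n = {n:.0e}:  l(2n)/l(n) = {ell(2 * n) / ell(n):.4f},  l(n) = {ell(n):.1f}")
```

The ratio \(\ell (2n)/\ell (n)\) drifts toward \(1\) as \(n\) grows, while \(\ell (n)\) itself diverges, so \(w(n)=n/\ell (n)\) is indeed sub-linear and regularly varying with index \(1\).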

Remark 1.2

Part (i) of the assumption is quite natural. It states that the reinforcement is sub-linear yet close enough to linear so that it is not covered by Volkov’s paper [13]. It would certainly be nice to relax the assumption of regular variation on \(w\), but the techniques used in this article crucially rely on it. By contrast, (ii) is of a technical nature and is only required for proving the technical (yet essential) Lemma 2.3. We believe that it does not play any significant role and that the results obtained in this paper should also hold without this assumption.

It is convenient to extend a weight sequence \(w\) into a function so that we may consider \(w(n)\) for non-integer values of \(n\). Thus, in the following, we will call weight function any continuous, non-decreasing function \(w:[0,\infty )\rightarrow (0,\infty )\). Given a weight function, we associate the weight sequence obtained by taking its restriction to the set \(\mathbb{N }\) of integers. Conversely, to any weight sequence \(w\), we associate the weight function, still denoted \(w\), obtained by linear interpolation. It is straightforward to check that, if a sequence \(w\) fulfills Assumption 1.1, then its associated weight function satisfies

  1. (i)

    \(w:[0,\infty )\rightarrow (0,\infty )\) is a continuous, non-decreasing, sub-linear function which is regularly varying with index \(1\) at infinity. In particular, we can write \(w\) in the form:

    $$\begin{aligned} w(x) := \frac{x}{\ell (x)}\qquad \text{ where }\quad \left\{ \begin{array}{l} \lim _{x\rightarrow \infty }\ell (cx)/\ell (x) = 1 \text{ for } \text{ all }\,\, c>0,\\ \lim _{x\rightarrow \infty }\ell (x) = \infty . \end{array} \right. \end{aligned}$$
  2. (ii)

    The function \(\ell \) is eventually non-decreasing.

Therefore, in the rest of the paper, we will say that a weight function satisfies Assumption 1.1 whenever it fulfills (i) and (ii) above. In order to state the main results of the paper, we need to introduce some notation. To a weight function \(w\), we associate \(W:[0,\infty )\rightarrow [0,\infty )\) defined by

$$\begin{aligned} W(x) := \int \limits _0^x \frac{1}{w(u)}\, du. \end{aligned}$$
(1)

Under Assumption 1.1, we have \(\lim _{x\rightarrow \infty } W(x) = \infty \) so that \(W\) is an increasing homeomorphism on \([0,\infty )\) whose inverse will be denoted by \(W^{-1}\). Consider the operator \(G\) which, to each measurable non-negative function \(f\) on \(\mathbb{R }_+\), associates the function \(G(f)\) defined by

$$\begin{aligned} G(f)(x):=\int \limits _0^x\frac{w(W^{-1}(f(u)))}{w(W^{-1}(u))}\, du. \end{aligned}$$
(2)

We denote by \(G^{(n)}\) the \(n\)-fold iterate of \(G\). For \(\eta \in (0,1)\), define the parameter:

$$\begin{aligned} i_\eta (w):=\inf \left\{ n\ge 2\; : \; G^{(n-1)}(\eta \text{ Id }) \ \mathrm{is\ bounded} \right\} , \end{aligned}$$
(3)

where \(\text{ Id }\) stands for the identity function with the convention \(\inf \emptyset = +\infty \). Since \(w\) is non-decreasing, the map \(\eta \mapsto i_\eta (w)\) is also non-decreasing. So we can define \(i_-(w)\) and \(i_+(w)\) respectively as the left and right limits at \(1/2\):

$$\begin{aligned} i_\pm (w):=\lim _{\eta \rightarrow {\frac{1}{2}}^\pm } i_\eta (w). \end{aligned}$$
(4)
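The objects \(W\), \(W^{-1}\) and \(G\) can be approximated numerically. The following sketch (our own illustration, with the index-\(1\) weight \(w(x)=(x+e)/\log (x+e)\) as an example) tabulates \(W\) on a grid, inverts it by linear interpolation, and evaluates \(G(\eta \,\text{Id})\). Since \(\eta <1\) and \(w\circ W^{-1}\) is non-decreasing, the integrand in (2) is at most \(1\):

```python
import math
import bisect

def w(x):
    # Example weight, regularly varying with index 1: w(x) = (x+e)/log(x+e)
    return (x + math.e) / math.log(x + math.e)

# Tabulate W(x) = \int_0^x du/w(u) on a uniform grid (trapezoid rule).
XMAX, N = 1e5, 200_000
h = XMAX / N
xs = [i * h for i in range(N + 1)]
Wv = [0.0]
for i in range(1, N + 1):
    Wv.append(Wv[-1] + 0.5 * h * (1.0 / w(xs[i - 1]) + 1.0 / w(xs[i])))

def W_inv(y):
    # Piecewise-linear inverse of the increasing table (xs, Wv).
    i = bisect.bisect_left(Wv, y)
    if i <= 0:
        return xs[0]
    if i > N:
        return xs[-1]
    t = (y - Wv[i - 1]) / (Wv[i] - Wv[i - 1])
    return xs[i - 1] + t * h

def G_eta_id(eta, x, m=2000):
    # G(eta Id)(x) = \int_0^x w(W^{-1}(eta u)) / w(W^{-1}(u)) du  (midpoint rule)
    du = x / m
    return sum(du * w(W_inv(eta * (k + 0.5) * du)) / w(W_inv((k + 0.5) * du))
               for k in range(m))

# Because W is slowly varying, W^{-1}(eta u) is eventually much smaller than
# W^{-1}(u), so G(eta Id)(x) grows much more slowly than x; whether the
# iterates of G eventually become bounded is what i_eta(w) records.
g = G_eta_id(0.5, 50.0)
```

The grid sizes and the particular weight are ad hoc choices for the sketch; deciding boundedness of the iterates, of course, cannot be settled by such a finite computation.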

As we shall see later, the numbers \(i_+(w)\) and \(i_-(w)\) are either both infinite or both finite and in the latter case, we have \(i_+(w)- i_-(w)\in \{0,1\}\). Let us also mention that, although there exist weight functions for which \(i_+(w)\ne i_-(w)\), those cases are somewhat exceptional and correspond to critical cases for the asymptotic behaviour of the VRRW (see Remark 2.8). We say that a walk localizes if its range, i.e. the set of sites which are visited, is finite. Our main theorem about localization of a VRRW on \(\mathbb{Z }\) is the following.

Theorem 1.3

Let \(X\) be a VRRW on \(\mathbb{Z }\) with weight \(w\) satisfying Assumption 1.1. We have the equivalence

$$\begin{aligned} i_{+}(w)<\infty&\Longleftrightarrow i_{-}(w)<\infty \Longleftrightarrow X \text{ localizes } \text{ with } \text{ positive } \text{ probability } \\&\Longleftrightarrow X \text{ localizes } \text{ a.s. } \end{aligned}$$

Let \(R\) be the random set of sites visited infinitely often by the walk and denote by \(|R|\) its cardinality. When localization occurs (i.e. \(i_\pm (w)<\infty \)) we have

$$\begin{aligned}&(i)\;|R| > i_-(w) + 1 \text{ almost } \text{ surely, }\\&(ii)\;\mathbb{P }\big \{ 2i_-(w)+1 \le |R|\le 2i_+(w)+1 \big \}>0. \end{aligned}$$

The lower bound on \(|R|\) given in (\(i\)) can be slightly improved for small values of \(i_-(w)\) using a different approach which relies on arguments similar to those introduced by Tarrès in [11, 12].

Proposition 1.4

Assume that \(w\) satisfies Assumption 1.1.

  1. (i)

    If \(i_-(w) = 2\) then \(|R| > 4\) almost surely.

  2. (ii)

    If \(i_-(w) = 3\) then \(|R| > 5\) almost surely.

Let us make some comments. The first part of the theorem identifies weight functions for which the walk localizes. However, although we can compute \(i_\pm (w)\) for several examples, deciding the finiteness of these indexes is usually rather challenging. Therefore, it would be interesting to find a simpler test concerning the operator \(G\) to check whether its iterates \(G^{(n)}(\eta \text{ Id })\) are ultimately bounded. For instance, does there exist a simple integral test on \(w\) characterizing the behaviour of \(G\)?

The second part of the theorem estimates the size of the localization interval. According to Proposition 1.5 stated below, (\(i\)) shows that there exist walks which localize only on arbitrarily large subsets, but this lower bound is not sharp, as Proposition 1.4 shows. In fact, we expect the correct lower bound to be the one given in (\(ii\)). More precisely, we conjecture that, when localization occurs,

$$\begin{aligned} 2i_-(w)+1\; \le \; |R|\; \le \; 2i_+(w)+1 \qquad \text{ almost } \text{ surely. } \end{aligned}$$

In particular, when \(i_+(w) = i_-(w)\), the walk should localize a.s. on exactly \(2i_\pm (w) +1\) sites. However, we have no guess as to whether the cardinality of \(R\) may be random when the indexes \(i_\pm (w)\) differ. Let us simply recall that, for super-linear reinforcement of the form \(w(n) \sim n\log \log n\), the walk localizes on \(4\) or \(5\) sites so that \(|R|\) is indeed random in that case, c.f. [1]. Yet, the localization pattern for super-linear weights is quite specific and may not apply in the sub-linear case considered here.

Let us also remark that the trapping of a self-interacting random walk on an arbitrarily large subset of \(\mathbb{Z }\) was previously observed by Erschler, Tóth and Werner [4, 5], who considered a model called stuck walks which mixes both repulsion and attraction mechanisms. Although stuck walks and VRRWs both localize on large sets, the asymptotic behaviours of these processes are very different. For instance, the local time profile of a stuck walk is such that it spends a positive fraction of time on every site visited infinitely often. On the contrary, the VRRW exhibits localization patterns where the walk spends most of its time on three consecutive sites and only a negligible fraction of time on the other sites of \(R\) (c.f. Sect. 8 for a more detailed discussion on this subject).

As we already mentioned, we can compute \(i_\pm (w)\) for particular classes of weight functions. The case where the slowly varying function \(\ell (x)\) is of order \(\exp (\log ^\alpha (x))\) turns out to be particularly interesting.

Proposition 1.5

Let \(w\) be a non-decreasing weight sequence such that

$$\begin{aligned} w(k)\; \underset{k\rightarrow \infty }{\sim } \; \frac{k}{\exp (\log ^\alpha k)}\qquad \text{ for } \text{ some } \,\alpha \in (0,1). \end{aligned}$$

Then \(i_-(w)\) and \(i_+(w)\) are both finite. Moreover, for \(n\in \mathbb{N }^*\), we have

$$\begin{aligned} \alpha \in \left( \frac{n-1}{n},\frac{n}{n+1}\right) \Longrightarrow i_-(w)=i_+(w)=n+1. \end{aligned}$$
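For instance, \(\alpha = 7/10\) lies in \((2/3,3/4)\), giving \(i_\pm (w)=4\) and hence, by Theorem 1.3 (\(ii\)), localization on \(2\cdot 4+1 = 9\) sites with positive probability. A tiny helper (ours, valid only for \(\alpha \) away from the excluded endpoints \(n/(n+1)\)) making this arithmetic explicit:

```python
from fractions import Fraction

def index_from_alpha(alpha):
    """Return i_-(w) = i_+(w) = n + 1 for w(k) ~ k / exp(log^alpha k), where n
    is determined by (n-1)/n < alpha < n/(n+1) as in Proposition 1.5.
    (Does not terminate if alpha equals an excluded endpoint n/(n+1).)"""
    n = 1
    while not (Fraction(n - 1, n) < alpha < Fraction(n, n + 1)):
        n += 1
    return n + 1

i = index_from_alpha(Fraction(7, 10))   # 7/10 lies in (2/3, 3/4), so n = 3, i = 4
localization_size = 2 * i + 1           # = 9 sites, by Theorem 1.3 (ii)
```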

The proposition implies that, for any odd number \(N\) larger than or equal to \(5\), there exists a VRRW which localizes on exactly \(N\) sites with positive probability. It is also known from previous results [1, 13] that a VRRW may localize on \(2\) or \(4\) sites (but it cannot localize on \(3\) sites). We wonder whether there exist any other admissible values for \(|R|\) apart from \(2,4,5,7,9,\ldots \) Let us also mention that, using monotonicity properties of \(i_\pm \), it is possible to construct a weight function \(w\), regularly varying with index \(1\) and growing more slowly than \(x/\exp (\log ^\alpha (x))\) for every \(\alpha <1\), such that \(i_\pm (w) =\infty \). For example, this is the case if

$$\begin{aligned} w(x) \sim \frac{x}{\exp \left( \frac{\log x}{\log \log x}\right) } \end{aligned}$$

c.f. Corollary 2.9. Hence, a walk with such reinforcement does not localize. However, we expect it to have a very unusual behaviour: we conjecture it is recurrent on \(\mathbb{Z }\) but spends asymptotically all of its time on only three sites.

Let us give a quick overview of the strategy for the proof of Theorem 1.3. The main part consists in establishing a similar result for a reflected VRRW \(\bar{X}\) on the half-line \(\{-1,0,\ldots \}\). In order to do so, we introduce two alternative self-interacting random walks \(\widetilde{X}\) and \(\widehat{X}\) which, in a way, surround the reflected walk \(\bar{X}\). The transition mechanisms of these two walks are such that, at each time step, they jump to their left neighbour with a probability proportional to a function of the site local time on their left, whereas they jump to the right with a probability proportional to a function of the edge local time on their right. It is well known that an edge reinforced random walk on \(\mathbb{Z }\) (more generally, on any acyclic graph) may be constructed from a sequence of i.i.d. urn processes, see for instance Pemantle [6]. Subsequently, in the case of vertex reinforced random walks, Tarrès [11] introduced martingales attached to each site, which play a role similar to that of urns, but a major difficulty is that they are, in that case, strongly correlated. Considering walks \(\widetilde{X},\widehat{X}\) with a mixed site/edge reinforcement somehow gives the best of both worlds: it enables us to simplify the study of these walks by creating additional structural independence (in one direction) while still preserving the flavor and complexity of the site reinforcement scheme. In particular, \(\widetilde{X}\),\(\widehat{X}\) have the nice restriction property that their laws on a finite set do not depend upon the path taken by the walks to the right of this set. Considering reflected walks, we can then work by induction and prove that when the critical indexes \(i_\pm \) are finite, \(\widetilde{X}\),\(\widehat{X}\) localize on roughly \(i_{\pm } +1\) sites. Then, using a coupling argument, we deduce a similar criterion for the reflected VRRW \(\bar{X}\). 
The last step consists in transferring these results to the non-reflected VRRW on \(\mathbb{Z }\). The key point here is that the localization pattern for \(\widetilde{X}\),\(\widehat{X}\) has a particular shape where the urn located at the origin is balanced, i.e. sites \(1\) and \(-1\) are visited about half as many times as the origin. This fact allows us to use symmetry arguments to construct a localization pattern of size of order \(2 i_{\pm } +1\) for the non-reflected walk.

The rest of the paper is organized as follows. In Sect. 2, we prove Proposition 1.5 and collect several results concerning the critical indexes which we will need later on during the proof of the theorem. In Sect. 3, we introduce the three walks \(\widetilde{X}\),\(\widehat{X}\) and \(\bar{X}\) mentioned above and we prove coupling properties between these processes. Sections 4 and 5 are respectively devoted to studying the walks \(\widetilde{X}\) and \(\widehat{X}\). In Sect. 6, we rely on the results obtained in the previous sections to describe the asymptotic behaviour of \(\bar{X}\). The proof of Theorem 1.3 is then carried out in Sect. 7 and followed in Sect. 8 by a discussion concerning the shape of the asymptotic local time profile. Finally, we provide in the appendix a proof of Proposition 1.4 which, as we already mentioned, uses fairly different techniques but is still included here for the sake of completeness.

2 Preliminaries: properties of \(W\) and \(i_\pm (w)\)

The purpose of this section is to study the operator \(G\) and collect technical results from real analysis concerning regularly varying functions. As such, this section is not directly related to the VRRW and does not involve probability theory. The reader interested in the main arguments used for proving Theorem 1.3 may wish to continue directly to Sect. 3 after simply reading the statements of the results of this section.

2.1 Some properties of the slowly varying function \(W\)

From now on, we assume that all the weight functions considered satisfy Assumption 1.1 (i).

Lemma 2.1

The function \(W\) defined by (1) is slowly varying, i.e.

$$\begin{aligned} W(cx)\underset{x\rightarrow \infty }{\sim } W(x) \qquad \text{ for } \text{ any }\, c>0. \end{aligned}$$

Moreover, given two positive functions \(f\) and \(g\) with \(\lim _{x\rightarrow \infty } f(x)=\lim _{x\rightarrow \infty } g(x)=+\infty \), we have

$$\begin{aligned} \limsup _{x\rightarrow \infty }\frac{W(f(x))}{W(g(x))} <1&\Longrightarrow&\lim _{x\rightarrow \infty } \frac{f(x)}{g(x)}= 0,\end{aligned}$$
(5)
$$\begin{aligned} \sup _{x\ge 0} \big ( W(f(x))-W(g(x)) \big ) <\infty&\Longrightarrow&\limsup _{x\rightarrow \infty } \frac{f(x)}{g(x)} \le 1,\end{aligned}$$
(6)
$$\begin{aligned} \sup _{x\ge 0} \big | W(f(x))-W(g(x)) \big | <\infty&\Longrightarrow&\lim _{x\rightarrow \infty } \frac{f(x)}{g(x)} = 1. \end{aligned}$$
(7)

Proof

The fact that \(W\) is slowly varying follows from Proposition \(1.5.9a\) of [2]. Assume now that \(\limsup f/g >\lambda >0\). Then, there exists an increasing sequence \((x_n)\) tending to infinity with \(f(x_n)\ge \lambda g(x_n)\), so that

$$\begin{aligned} \limsup _{x\rightarrow \infty }\frac{W(f(x))}{W(g(x))}\ \ge \ \lim _{n\rightarrow \infty }\frac{W(\lambda g(x_n))}{W(g(x_n))}\, =\, 1, \end{aligned}$$

which proves (5). Concerning the second assertion, the uniform convergence theorem for regularly varying functions shows that, for \(\lambda > 0\) (c.f. [2] p.\(127\) for details),

$$\begin{aligned} \lim _{x \rightarrow \infty }\, \frac{W(\lambda x)-W(x)}{\ell (x)}=\log \lambda , \end{aligned}$$

where \(\ell \) is the slowly varying function associated with \(w\). Therefore, if \(\limsup f/g > \lambda > 1\), there exist arbitrarily large \(x\)’s such that

$$\begin{aligned} W(f(x))-W(g(x))\ge W(\lambda g(x))-W(g(x))\ge \frac{1}{2}\log (\lambda )\ell (g(x)), \end{aligned}$$

which implies that \(W(f(\cdot ))-W(g(\cdot ))\) is unbounded from above. Finally, Assertion (7) follows from (6) by symmetry.\(\square \)

Given a measurable, non-negative function \(\psi :\mathbb{R }_+\rightarrow \mathbb{R }_+\), we introduce the notation \(W_\psi \) to denote the function

$$\begin{aligned} W_\psi (x):=\int \limits _0^x \frac{du}{w(u+\psi (u))}. \end{aligned}$$
(8)

In the linear case \(\psi (u)=\eta u\) with \(\eta >0\), we shall simply write \(W_\eta \) instead of \(W_\psi \) (note that \(W_0 = W\)). The next result is a slight refinement of (7).

Lemma 2.2

Let \(\psi \) be a measurable non-negative function such that

$$\begin{aligned} W(x)-W_{\psi }(x)=o(\ell (x)) \quad \text{ as }\, x\rightarrow \infty . \end{aligned}$$

Then, for any positive functions \(f\) and \(g\) with \(\lim _{x\rightarrow \infty } f(x) =\lim _{x\rightarrow \infty } g(x) =+\infty \), we have

$$\begin{aligned} \sup _{x\ge 0}\big | W(f(x))-W_{\psi }(g(x))\big | < \infty \ \Longrightarrow \ \lim _{x\rightarrow \infty }\, \frac{f(x)}{g(x)}= 1. \end{aligned}$$

Proof

Since \(\psi \) is non-negative, we have \(W_\psi \le W\) thus Lemma 2.1 yields \(\limsup f/g\le 1\). Fix \(0<\lambda <1\). We can write

$$\begin{aligned} W(\lambda g(x))-W(f(x))&= W(\lambda g(x))-W(g(x)) + W(g(x))-W_\psi (g(x)) \\&+ W_\psi (g(x))-W(f(x)). \end{aligned}$$

Using the facts that

$$\begin{aligned} W(x)-W_{\psi }(x)=o(\ell (x))\qquad \text{ and } \qquad W(\lambda x)-W(x)\sim \log (\lambda )\ell (x), \end{aligned}$$

we deduce that, if \(W_\psi (g(\cdot ))-W(f(\cdot ))\) is bounded from above, then \(W(\lambda g(\cdot ))-W(f(\cdot ))\) is also bounded from above. In view of Lemma 2.1, this yields \(\limsup g/f \le 1/\lambda \) and we conclude the proof of the lemma letting \(\lambda \) tend to \(1\).\(\square \)

We conclude this subsection by showing that the function

$$\begin{aligned} \Phi _{\eta ,2}(x):= W^{-1}(\eta W(x/\eta )) \end{aligned}$$
(9)

satisfies the hypothesis of the previous lemma for any \(\eta \in (0,1)\). As we have already mentioned in the introduction, the following lemma is the only place in the paper where we require \(\ell \) to be eventually non-decreasing.

Lemma 2.3

Assume that \(w\) also satisfies (ii) of Assumption 1.1. Let \(\eta \in (0,1)\), we have

$$\begin{aligned} W(x)-W_{\Phi _{\eta ,2}}(x)=o(\ell (x)) \quad \text{ as }\, x\rightarrow \infty . \end{aligned}$$
(10)

Furthermore, there exists a non-decreasing function \(f_\eta :[0,\infty )\rightarrow [0,\infty )\) such that

$$\begin{aligned} (a)&f_\eta \ge \Phi _{\eta ,2}\\ (b)&f_\eta = o(x) \\ (c)&W(x) - W_{f_\eta }(x) = o(\ell (x))\\ (d)&\lim _{x\rightarrow +\infty } W(x) - W_{f_\eta }(x) = +\infty . \end{aligned}$$

Proof

Choose \(x_0\) large enough such that \(\ell \) is non-decreasing on \([x_0,\infty )\). Let \(C:= W(x_0)-W_{\Phi _{\eta ,2}}(x_0)\). For \(x\ge x_0\), we get

$$\begin{aligned} W(x)-W_{\Phi _{\eta ,2}}(x)&= C+\int \limits _{x_0}^x \left( \frac{\ell (u)}{u}- \frac{\ell (u+\Phi _{\eta ,2}(u))}{u+\Phi _{\eta ,2}(u)}\right) \, du\\&\le C+ \int \limits _{x_0}^x \frac{\ell (u)\Phi _{\eta ,2}(u)}{u^2} \, du \\&= C+ \int \limits _{x_0}^x \frac{W^{-1}(\eta W(u/\eta ))}{ w(u) u}\, du\\&\le C^{\prime } + \frac{2}{\eta } \int \limits _{x_0}^x \frac{W^{-1}(\eta W(u/\eta ))}{ w(u/\eta ) u}\,du, \end{aligned}$$

where we used \(\eta w(u/\eta )\sim w(u)\) as \(u\rightarrow \infty \) and where \(C^{\prime }\) is a finite constant. From the change of variable \(t = W(u/\eta )\), it follows that

$$\begin{aligned} W(x)-W_{\Phi _{\eta ,2}}(x)\le C^{\prime } + \frac{2}{\eta }\int \limits _{W(x_0/\eta )}^{W(x/\eta )}\frac{W^{-1}(\eta t)}{W^{-1}(t)}\,dt. \end{aligned}$$

Now let

$$\begin{aligned} J_\eta (x) := \int \limits _0^x \frac{W^{-1}(\eta u)}{W^{-1}(u)}\, du, \end{aligned}$$

which is well-defined since \(\lim _{u\rightarrow 0} W^{-1}(\eta u)/W^{-1}(u) = \eta \). It remains to prove that

$$\begin{aligned} J_\eta (x) = o(\ell (W^{-1}(x)))\quad \text{ when } x\rightarrow \infty \text{, } \end{aligned}$$
(11)

as this will entail

$$\begin{aligned} W(x)-W_{\Phi _{\eta ,2}}(x) \le C^{\prime } + \frac{2}{\eta }J_\eta (W(x/\eta )) = o(\ell (x/\eta )) = o(\ell (x)). \end{aligned}$$

In order to establish (11), we consider the function \(h(x) := \log W^{-1}(x) \). This function is non-decreasing and

$$\begin{aligned} h^{\prime }(x)=\frac{w(W^{-1}(x))}{W^{-1}(x)}=\frac{1}{\ell (W^{-1}(x))}. \end{aligned}$$

Thus, we need to prove that

$$\begin{aligned} \lim _{x\rightarrow \infty }\, h^{\prime }(x) J_\eta (x)\, =\, \lim _{x\rightarrow \infty }\, h^{\prime }(x)\int \limits _0^x e^{h(\eta u)-h(u)}\, du\, =\, 0. \end{aligned}$$

Choosing \(x_1\) large enough such that \(h^{\prime }\) is non-increasing on \([\eta x_1,\infty )\), we get, for any \(x\ge x_1\) and any \(A\in [x_1,x]\),

$$\begin{aligned} J_\eta (x)&\le J_\eta (x_1)+\int \limits _{x_1}^x e^{-(1-\eta )uh^{\prime }(u)}\, du \\&\le J_\eta (x_1)+\int \limits _0^A e^{-(1-\eta )uh^{\prime }(A)}\, du+ \int \limits _A^\infty e^{-(1-\eta )uh^{\prime }(x)}\, du \\&= J_\eta (x_1)+\frac{1}{(1-\eta )h^{\prime }(A)}+ \frac{e^{-(1-\eta )Ah^{\prime }(x)}}{(1-\eta )h^{\prime }(x)}. \end{aligned}$$

According to Equation \(1.5.8\) of [2] p.\(27\), we have \(\ell (x)=o(W(x))\) hence

$$\begin{aligned} 1/h^{\prime }(x)=\ell (W^{-1}(x))=o(x)\quad \text{ as } x\rightarrow \infty \text{. } \end{aligned}$$

Fix \(\varepsilon >0\) and set \(A:= A(x) = 1/(\sqrt{\varepsilon }h^{\prime }(x))\). Then, for all \(x\) large enough such that \(1/h^{\prime }(A)\le \varepsilon A\), we get

$$\begin{aligned} (1-\eta )h^{\prime }(x) J_\eta (x)\le (1-\eta )h^{\prime }(x) J_\eta (x_1)+ \sqrt{\varepsilon }+e^{-(1-\eta )/\sqrt{\varepsilon }}, \end{aligned}$$

which completes the proof of (10).

Concerning the second part of the lemma, it follows from Lemma 2.1 that \(\Phi _{\eta ,2}(x) = o(x)\) for any \(0<\eta <1\) (see also Lemma 2.5). Hence, if \(\lim _{x\rightarrow \infty } W(x) -W_{\Phi _{\eta ,2}}(x) = \infty \), then we can simply choose \(f_\eta = \Phi _{\eta ,2}\). Otherwise, we can always construct a positive non-decreasing function \(h\) such that \(f_\eta := \Phi _{\eta ,2} + h\) is a solution (for instance, one can construct \(h\) continuous with \(h(0)=0\), piecewise linear, flat on intervals \([x_{2n},x_{2n+1}]\) and with slope \(1/n\) on the intervals \([x_{2n+1},x_{2n+2}]\) where \((x_i)_{i\ge 0}\) is a suitably chosen increasing sequence). The technical details are left to the reader.\(\square \)

2.2 Properties of the indexes \(i_\pm (w)\)

Recall the construction of the family \((i_\eta (w),\eta \in (0,1) )\) from the operator \(G\) defined in (2). In this subsection, we collect some useful results concerning this family. We show in particular that the map \(\eta \mapsto i_\eta (w)\) can take at most two different (consecutive) values. In order to do so, we provide an alternative description of these parameters in terms of another family \((j_\eta ,\eta \in (0,1) )\) defined using another operator \(H\) whose probabilistic interpretation will become clear in the next sections. More precisely, let \(H\) be the operator which, to each homeomorphism \(f: [0,\infty ) \rightarrow [0,\infty )\), associates the function \(H(f): [0,\infty ) \rightarrow [0,\infty )\) defined by

$$\begin{aligned} H(f)(x):=W^{-1}\left( \int \limits _0^x\frac{du}{w(f^{-1}(u))}\right) \qquad \text{ for }\quad x\ge 0, \end{aligned}$$
(12)

where \(f^{-1}\) stands for the inverse of \(f\). If \(H(f)\) is unbounded, then it is itself a homeomorphism. Thus, for each \(\eta \in (0,1)\), we can define by induction the (possibly finite) sequence of functions \((\Phi _{\eta ,j}, 1 \le j\le j_\eta (w) )\) by

$$\begin{aligned} \left\{ \begin{array}{ll} \Phi _{\eta ,1}:= \eta \text{ Id }&{} \\ \Phi _{\eta ,j+1}:= H(\Phi _{\eta ,j}) &{} \text{ if } \Phi _{\eta ,j}\; \text{ is } \text{ unbounded, } \end{array}\right. \end{aligned}$$
(13)

where

$$\begin{aligned} j_\eta (w):=\inf \{j\ge 1\; :\; \Phi _{\eta ,j}\ \; \mathrm{is~bounded}\}. \end{aligned}$$

We use the convention \(\Phi _{\eta ,j} = 0\) for \(j > j_\eta (w)\). Let us remark that this definition of \(\Phi _{\eta ,2}\) coincides with the previous definition given in (9). In particular, \(\Phi _{\eta ,2}\) is always unbounded, which implies

$$\begin{aligned} j_\eta (w) \in [\![3,+\infty ]\!]\end{aligned}$$

(throughout the paper we use the notation \([\![a,b]\!]= [a,b]\cap (\mathbb{Z }\cup \{\pm \infty \})\)).

Lemma 2.4

The operator \(H\) is monotone in the following sense:

  1. (i)

    If \(f\le g\), then \(H(f) \le H(g)\).

  2. (ii)

    If \(f(x)\le g(x)\) for all \(x\) large enough and \(H(f)\) is unbounded, then \(\limsup H(f)/H(g) \le 1\).

The proof of the lemma is straightforward, so we omit it. The following technical results will be used in many places throughout the paper.

Lemma 2.5

Let \(0<\eta <\eta ^{\prime }<1\) and \(\lambda >0\). For all \(j\in [\![2, j_\eta (w)-1]\!]\), we have, as \(x\rightarrow \infty \),

$$\begin{aligned}&\text{(i) } \quad \Phi _{\eta ,j}(x)=o(x),\\&\text{(ii) }\quad \Phi _{\eta ,j}(\lambda x)= o(\Phi _{\eta ^{\prime },j}(x)). \end{aligned}$$

Proof

As we already mentioned, we have \(W(\Phi _{\eta ,2}(x))=\eta W(x/\eta )\) hence Lemma 2.1 implies that \(\Phi _{\eta ,2}(x)=o(x)\) and (i) follows from Lemma 2.4. We prove (ii) by induction on \(j\). Recalling that \(W\) is slowly varying, we have

$$\begin{aligned} \limsup _{x\rightarrow \infty } \frac{W(\Phi _{\eta ,2}(\lambda x))}{W(\Phi _{\eta ^{\prime },2}(x))}=\frac{\eta }{\eta ^{\prime }}\, \limsup _{x\rightarrow \infty } \frac{W(\lambda x/\eta )}{W(x/\eta ^{\prime })}=\frac{\eta }{\eta ^{\prime }}<1, \end{aligned}$$

which, by using Lemma 2.1, yields \(\Phi _{\eta ,2}(\lambda x)=o(\Phi _{\eta ^{\prime },2}(x))\). Let us now assume that for some \(j < j_\eta (w)-1\), \(\Phi _{\eta ,j}(x)=o(\Phi _{\eta ^{\prime },j}(x/\lambda ))\) for all \(\lambda >0\). Fix \(\delta >0\). Using again the monotonicity property of \(H\), we deduce that

$$\begin{aligned} \limsup _{x\rightarrow \infty } \frac{\Phi _{\eta ,j+1}(x)}{H\left( \delta \Phi _{\eta ^{\prime },j}\left( \frac{\cdot }{\lambda }\right) \right) (x)} \le 1. \end{aligned}$$

Notice that

$$\begin{aligned} H\left( \delta \Phi _{\eta ^{\prime },j}\left( \frac{\cdot }{\lambda }\right) \right) (x)&= W^{-1}\left( \int \limits _0^{x/\delta }\frac{\delta dt}{w(\lambda (\Phi _{\eta ^{\prime },j}^{-1}(t)))}\right) \\&\le W^{-1}\left( C+\frac{2\delta }{\lambda }W\left( \Phi _{\eta ^{\prime },j+1}\left( \frac{x}{\delta }\right) \right) \right) , \end{aligned}$$

where we used that \(w(\lambda x)\le 2\lambda w(x)\), for \(x\) large enough and where \(C\) is some positive constant. Moreover, Lemma 2.1 shows that, for \(C>0\), \(\varepsilon \in (0,1)\) and any positive unbounded function \(f\), we have

$$\begin{aligned} W^{-1}(\varepsilon f(x)+C)=o(W^{-1}(f(x))). \end{aligned}$$

Hence, choosing \(\lambda \) such that \(2\delta < \lambda \), we find that

$$\begin{aligned} H\left( \delta \Phi _{\eta ^{\prime },j}\left( \frac{\cdot }{\lambda }\right) \right) (x)=o\left( \Phi _{\eta ^{\prime },j+1}(\frac{x}{\delta })\right) , \end{aligned}$$

which concludes the proof of the lemma. \(\square \)

We can now prove the main result of this section which relates \(j_\eta (w)\) and \(i_\eta (w)\).

Proposition 2.6

The maps \(\eta \mapsto i_{\eta }(w)\) and \(\eta \mapsto j_{\eta }(w)\) are non-decreasing and take at most two consecutive values. Moreover, at each continuity point \(\eta \) of the map \(\eta \mapsto j_{\eta }(w)\), we have

$$\begin{aligned} j_{\eta }(w)=i_{\eta }(w)+1. \end{aligned}$$
(14)

Proof

It is clear that the monotonicity result of Lemma 2.4 also holds for the operator \(G\) defined by (2). Thus, both functions \(\eta \mapsto j_\eta (w)\) and \(\eta \mapsto i_\eta (w)\) are non-decreasing. Moreover, according to (i) of the previous lemma, we have \(\Phi _{\eta ^{\prime },2}=o(\Phi _{\eta ,1})\) for any \(\eta ,\eta ^{\prime } \in (0,1)\). Combining (ii) of Lemma 2.4 with (ii) of the previous lemma, we deduce that \(\Phi _{\eta ^{\prime },3}=o(\Phi _{\eta ,2})\) for any \(\eta ,\eta ^{\prime } \in (0,1)\). Repeating this argument, we conclude by induction that \(j_{\eta ^{\prime }}(w) \le j_{\eta }(w) +1\) which proves that \(\eta \mapsto j_{\eta }(w)\) takes at most two different values. The same property will also hold for \(i_\eta (w)\) as soon as we establish (14).

Define \(\varphi _{\eta ,j}:=W\circ \Phi _{\eta ,j}\circ W^{-1}\). Using the change of variable \(z=W(u)\) in (12), we find that, for \(j< j_\eta (w)\),

$$\begin{aligned} \varphi _{\eta ,j+1}(x)=\int \limits _0^x \frac{w\circ W^{-1}(z)}{w\circ W^{-1} \circ \varphi _{\eta ,j}^{-1}(z)}\, dz. \end{aligned}$$
(15)

Define by induction

$$\begin{aligned} \left\{ \begin{array}{ll} h_{\eta ,1} := \varphi _{\eta ,1},&{} \\ h_{\eta ,j+1}:=\varphi _{\eta ,j+1} \circ h_{\eta ,j} &{} \text{ for } j\ge 1\text{. }\\ \end{array} \right. \end{aligned}$$

We have \(h_{\eta ,j}=W\circ \Phi _{\eta ,j}\circ \ldots \circ \Phi _{\eta ,1}\circ W^{-1}\) thus

$$\begin{aligned} j_\eta (w)=\inf \{j\ge 3\; :\; h_{\eta ,j}\ \text{ is } \text{ bounded }\}. \end{aligned}$$

Note that \(h_{\eta ,2}(x)=\Phi _{\eta ,1}(x)=\eta x\). Furthermore, using the change of variable \(z=h_{\eta ,j}(u)\) in (15), it follows by induction that, for \(j < j_\eta (w)\),

$$\begin{aligned} h_{\eta ,j+1}(x)&= \eta \, \int \limits _0^x \frac{w\circ W^{-1}\circ h_{\eta ,j}(u)}{w(\eta W^{-1}(u))}\, du. \end{aligned}$$
(16)

Define also the sequence \((g_{\eta ,j})_{j\ge 1}\), by

$$\begin{aligned} g_{\eta ,j}:=G^{(j-1)}(\Phi _{\eta ,1}). \end{aligned}$$

Recall that, by definition,

$$\begin{aligned} i_\eta (w)=\inf \{j\ge 2 \; :\; g_{\eta ,j} \ \text{ is } \text{ bounded }\}. \end{aligned}$$

Using Lemma 2.1, it now follows by induction from (2) and (16) that for \(\alpha <\eta <\beta \) and \(j\ge 2\),

$$\begin{aligned} \left\{ \begin{array}{ll} g_{\alpha ,j}(x)=o(h_{\eta ,j+1}(x)) &{} \text{ as } \text{ long } \text{ as }\,\, h_{\eta ,j+1}\,\, \text{ is } \text{ unbounded },\\ h_{\eta ,j+1}(x)=o(g_{\beta ,j}(x)) &{} \text{ as } \text{ long } \text{ as }\,\, g_{\beta ,j}\,\, \text{ is } \text{ unbounded }. \end{array} \right. \end{aligned}$$
(17)

Therefore,

$$\begin{aligned} i_\alpha (w)+1\le j_\eta (w) \le i_\beta (w)+1\qquad \text{ for } \text{ all }\, \alpha <\eta <\beta , \end{aligned}$$

which proves that \(j_\eta (w)=i_\eta (w)+1\) if the map \(j_\eta (w)\) is continuous at point \(\eta \). \(\square \)

2.3 Proof of Proposition 1.5

For \(\eta \in (0,1)\), define

$$\begin{aligned} i_{\eta ,\pm }(w):=\lim _{\delta \rightarrow \eta ^\pm } i_\delta (w). \end{aligned}$$

In accordance with (4), we have \(i_{\pm }(w) = i_{1/2,\pm }(w)\). Given another weight function \(\tilde{w}\), we will use the notation \(\tilde{W},\tilde{\Phi },\ldots \) to denote the quantities \(W,\Phi ,\ldots \) constructed from \(\tilde{w}\) instead of \(w\). The following result compares the critical indexes \(i_{\eta ,\pm }\) of two weight functions.

Proposition 2.7

Let \(w,\tilde{w}\) denote two weight functions and let \(\eta \in (0,1)\).

  1. (i)

    If \(w(x)\sim \tilde{w}(x)\), then \(i_{\eta ,\pm }(w)= i_{\eta ,\pm }(\tilde{w})\).

  2. (ii)

    If the function \((w\circ W^{-1})/(\tilde{w}\circ \tilde{W}^{-1})\) is eventually non-decreasing, then \(i_{\eta ,\pm }(w) \le i_{\eta ,\pm }(\tilde{w})\).

Proof

Let us first establish (i). We prove by induction on \(j\) that, for all \(\beta \in (\eta ,1)\) and \(x\) large enough,

$$\begin{aligned} \Phi _{\eta ,j}(x)\le \tilde{\Phi }_{\beta ,j}(x)\qquad \text{ for } \text{ any } j< j_{\eta ,+}(\tilde{w}). \end{aligned}$$
(18)

The assumption that \(w(x)\sim \tilde{w}(x)\) implies that, for all \(\varepsilon >0\) and for \(x\) large enough,

$$\begin{aligned} \frac{1-\varepsilon }{\tilde{w}(x)} \le \frac{1}{ w(x)}\le \frac{1+\varepsilon }{\tilde{w}(x)} \quad \text{ and }\quad W^{-1}(x)\le \tilde{W}^{-1}((1+\varepsilon )x). \end{aligned}$$

Assume now that (18) holds for some \(j<j_{\eta ,+}(\tilde{w})-1\) and all \(\beta >\eta \). Then, for \(x\) large enough

$$\begin{aligned} \frac{1}{w(\Phi ^{-1}_{\eta ,j}(x))}\le \frac{1+\varepsilon }{\tilde{w}(\tilde{\Phi }^{-1}_{\beta ,j}(x))}, \end{aligned}$$

which yields, for \(x\) large enough,

$$\begin{aligned} \Phi _{\eta ,j+1}(x)=W^{-1}\left( \int \limits _{0}^x \frac{dt}{w(\Phi ^{-1}_{\eta ,j}(t))}\right) \le \tilde{W}^{-1}\left( (1+\varepsilon )^2\int \limits _{0}^x \frac{dt}{\tilde{w}(\tilde{\Phi }^{-1}_{\beta ,j}(t))} + C\right) , \end{aligned}$$

for some constant \(C >0\). On the other hand, thanks to Lemma 2.5, setting \(\beta ^{\prime }:=(1+\varepsilon )^3\beta \), we have,

$$\begin{aligned} \tilde{\Phi }^{-1}_{\beta ,j}(x)\ge (1+\varepsilon )^3 \tilde{\Phi }^{-1}_{\beta ^{\prime },j}(x). \end{aligned}$$

The regular variation of \(\tilde{w}\) now implies,

$$\begin{aligned} (1+\varepsilon )^2\int \limits _{0}^x \frac{dt}{\tilde{w}(\tilde{\Phi }^{-1}_{\beta ,j}(t))} + C \le \int \limits _{0}^x \frac{dt}{\tilde{w}(\tilde{\Phi }^{-1}_{\beta ^{\prime },j}(t))} \end{aligned}$$

(where we used the divergence at infinity of the integral on the r.h.s.) and therefore, for \(x\) large enough,

$$\begin{aligned} \Phi _{\eta ,j+1}(x)\le \tilde{\Phi }_{\beta ^{\prime },j+1}(x). \end{aligned}$$

This proves (18) by taking \(\varepsilon \) small enough. Applying (18) with \(j=j_{\eta ,+}(\tilde{w})-1\) and \(\beta >\eta \) such that \(j_{\eta ,+}(\tilde{w})=j_{\beta }(\tilde{w})\), we get, by arguments similar to those above,

$$\begin{aligned} \Phi _{\eta ,j_{\eta ,+}(\tilde{w})}(x)\le \tilde{W}^{-1} \left( (1+\varepsilon )^2\int \limits _{0}^\infty \frac{dt}{\tilde{w}(\tilde{\Phi }^{-1}_{\beta ,j_{\eta ,+}(\tilde{w})-1}(t))}+C\right) <\infty , \end{aligned}$$

which implies \(j_{\eta }(w)\le j_{\eta ,+}(\tilde{w})\) and therefore \(j_{\eta ,+}(w)\le j_{\eta ,+}(\tilde{w})\). By symmetry, it follows that \(j_{\eta ,+}(w) = j_{\eta ,+}(\tilde{w})\). The same result also holds for \(j_{\eta ,-}\) using similar arguments. This completes the proof of (i).

We now prove (ii). To this end, we show by induction on \(n\) that, for any \(\eta <\eta ^{\prime }\), \(n< i_{\eta ^{\prime }}(\tilde{w})\) and \(x\) large enough:

$$\begin{aligned} G^{(n-1)}(\Phi _{\eta ,1})(x)\le \tilde{G}^{(n-1)}(\Phi _{\eta ^{\prime },1})(x), \end{aligned}$$
(19)

which, in view of (3), will imply \(i_\eta (w) \le i_{\eta ^{\prime }}(\tilde{w})\) and therefore \(i_{\eta ,\pm }(w) \le i_{\eta ,\pm }(\tilde{w})\). It is easy to check that

$$\begin{aligned} G^{(n-1)}(\Phi _{\eta ,1})(x)&\le x \\ \tilde{G}^{(n-1)}(\Phi _{\eta ,1})(x)&= o(\tilde{G}^{(n-1)}(\Phi _{\eta ^{\prime },1})(x))\quad \text{ for } \eta <\eta ^{\prime } \text{ and } n<i_{\eta ^{\prime }}(w). \end{aligned}$$

Thus, assuming that (19) holds for some \(n < i_{\eta ^{\prime }}(\tilde{w})-1\), we find that, for \(x\) large,

$$\begin{aligned} \frac{w\circ W^{-1}(G^{(n-1)}(\Phi _{\eta ,1})(x))}{w\circ W^{-1}(x)}&\le \frac{\tilde{w}\circ \tilde{W}^{-1}(G^{(n-1)}(\Phi _{\eta ,1})(x))}{\tilde{w}\circ \tilde{W}^{-1}(x)}\\&\le \frac{\tilde{w}\circ \tilde{W}^{-1}(\tilde{G}^{(n-1)}(\Phi _{\eta ^{\prime },1})(x))}{\tilde{w}\circ \tilde{W}^{-1}(x)}. \end{aligned}$$

By integrating, we get, for any \(\eta ^{\prime \prime }>\eta ^{\prime }\),

$$\begin{aligned} G^{(n)}(\Phi _{\eta ,1})(x)\ \le \ \tilde{G}^{(n)}(\Phi _{\eta ^{\prime },1})(x)+C \ \le \ \tilde{G}^{(n)}(\Phi _{\eta ^{\prime \prime },1})(x), \end{aligned}$$

which shows that (19) holds for \(n+1\), as wanted. \(\square \)

We now have all the tools needed for proving Proposition 1.5 which provides examples of weight sequences \(w\) with arbitrarily large critical indexes.

Proof of Proposition 1.5

Fix \(\alpha \in (0,1)\) and consider a weight function \(w\) such that

$$\begin{aligned} w(x):=x \exp (-(\log x)^\alpha ) \qquad \text{ for }\, x\ge e. \end{aligned}$$
(20)

An integration by parts yields, for any \(\gamma \in (0,1)\) and \(x\) large enough,

$$\begin{aligned} \gamma V(x)\le W(x)\le V(x)\quad \text{ where } \quad V(x):=\frac{1}{\alpha } (\log x)^{1-\alpha } \exp ((\log x)^\alpha ). \end{aligned}$$

Set \(\beta :=1/\alpha \) and define for \(\delta >0\),

$$\begin{aligned} U_\delta (x)=\exp \left( (\log x -(\beta -1)\log \log x +\log { \alpha \delta } )^{\beta }\right) . \end{aligned}$$

It is easily checked that, for \(x\) large enough, \(V\circ U_1(x)\le x\) and \(V\circ U_\delta (x)\sim \delta x\). This implies that, for \(x\) large enough,

$$\begin{aligned} U_1(x)\le W^{-1}(x)\le U_2(x). \end{aligned}$$
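As a numerical sanity check (an illustration of ours, not part of the proof), one can compute \(W\) directly for a sample value of \(\alpha \) and compare it with \(V\); below, \(W\) is normalized to vanish at \(e\), which only shifts it by an additive constant.

```python
import math

ALPHA = 0.5  # sample exponent; any alpha in (0,1) works

def w(x):
    # weight function of (20): w(x) = x * exp(-(log x)^alpha) for x >= e
    return x * math.exp(-math.log(x) ** ALPHA)

def W(logx, steps=200_000):
    # W(x) = int_e^x dt/w(t); substituting s = log t turns the integrand into exp(s^alpha)
    a, b = 1.0, logx
    h = (b - a) / steps
    f = [math.exp((a + k * h) ** ALPHA) for k in range(steps + 1)]
    return h * (sum(f) - 0.5 * (f[0] + f[-1]))  # trapezoidal rule

def V(logx):
    # comparison function: V(x) = (1/alpha) (log x)^(1-alpha) exp((log x)^alpha)
    return (1.0 / ALPHA) * logx ** (1 - ALPHA) * math.exp(logx ** ALPHA)

# the ratio W/V lies in (gamma, 1) for large x and increases towards 1
for logx in (20, 100):
    print(logx, W(logx) / V(logx))
```

For \(\alpha =1/2\) the ratio \(W/V\) is about \(0.78\) at \(\log x=20\) and about \(0.90\) at \(\log x=100\), increasing towards \(1\), in agreement with the bounds \(\gamma V\le W\le V\).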

Let \(\eta \in (0,1)\) and define the sequence of functions \((g_{\eta ,k})_{k\ge 1}\) by

$$\begin{aligned} g_{\eta ,k} :=G^{(k-1)}(\eta \text{ Id }), \end{aligned}$$

where \(G\) is the operator defined by (2). We prove by induction that, if \(k\ge 1\) is such that \((k-1)(\beta -1)<1\), then there exist two positive constants \(c_1\) and \(c_2\) (depending on \(k\) and \(\eta \)), such that, for \(x\) large enough,

$$\begin{aligned} x\exp (-c_1(\log x)^{(k-1)(\beta -1)})\le g_{\eta ,k}(x) \le x\exp (-c_2(\log x)^{(k-1)(\beta -1)}), \end{aligned}$$
(21)

and that if \((k-1)(\beta -1)>1\), then \(g_{\eta ,k}\) is bounded. This result holds for \(k=1\). Assume now that (21) holds for some \(k\) such that \((k-1)(\beta -1)<1\). We have, for \(x\) large,

$$\begin{aligned} \log \left( \frac{ w\circ W^{-1}\circ g_{\eta ,k}(x)}{ w\circ W^{-1}(x)}\right)&\le \log \left( \frac{ w\circ U_2\circ g_{\eta ,k}(x)}{ w\circ U_1(x)}\right) \\&= \log \left( 2\left( \frac{\log g_{\eta ,k}(x)}{\log x}\right) ^{\beta -1} \frac{x}{g_{\eta ,k}(x)}\right) \\&+\log (U_2\circ g_{\eta ,k}(x))-\log U_1(x)\\&\le c_1(\log x)^{(k-1)(\beta -1)}+(\log x-c_2(\log x)^{(k-1)(\beta -1)})^{\beta }\\&-(\log x-\beta \log \log x)^\beta \\&\le \frac{-\beta c_2}{2}(\log x)^{(k-1)(\beta -1)-1+\beta }:=-c_2^{\prime }(\log x)^{\gamma }, \end{aligned}$$

with \(\gamma :=k(\beta -1)\). On the one hand, if \(\gamma >1\), then \(g_{\eta ,k+1}\) is bounded. On the other hand, if \(\gamma <1\), an integration by parts yields

$$\begin{aligned} \int \limits _0^x \exp (-c_2^{\prime }(\log u)^{\gamma })du\sim x\exp (-c_2^{\prime }(\log x)^\gamma ), \end{aligned}$$

giving the desired upper bound for \(g_{\eta ,k+1}\) (if \(\gamma =1\), we easily check that either \(g_{\eta ,k+1}\) or \(g_{\eta ,k+2}\) is bounded). The lower bound is obtained by similar arguments. In particular, we have proved that if \(1/(\beta -1)\) is not an integer, then for any \(\eta \in (0,1)\), we have

$$\begin{aligned} i_\eta (w)&= \inf \{k\ge 2\; : \; g_{\eta ,k} \text{ is } \text{ bounded }\}\\&= \inf \{k\ge 2\; : \; (k-1)(\beta -1)>1\}, \end{aligned}$$

which implies Proposition 1.5. \(\square \)
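The formula obtained for the critical index is explicit. The short sketch below (an illustration of ours; it simply evaluates \(\inf \{k\ge 2 \; : \; (k-1)(\beta -1)>1\}\) with \(\beta =1/\alpha \)) shows how the index grows as \(\alpha \) approaches \(1\), for values of \(\alpha \) such that \(1/(\beta -1)\) is not an integer.

```python
def critical_index(alpha):
    # i(w) = inf{k >= 2 : (k-1)(beta-1) > 1} with beta = 1/alpha,
    # i.e. the smallest integer k >= 2 with k - 1 > alpha/(1-alpha)
    beta = 1.0 / alpha
    k = 2
    while (k - 1) * (beta - 1) <= 1:
        k += 1
    return k

# the index diverges as alpha -> 1, i.e. as the weight becomes "less sub-linear"
for alpha in (0.3, 0.6, 0.7, 0.85, 0.97):
    print(alpha, critical_index(alpha))
```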

Remark 2.8

Using arguments similar to those developed above, one can construct examples of weight functions \(w\) with \(i_-(w)\ne i_+(w)\). For instance, choosing \(w(k)\sim k\exp (-\sqrt{2\log 2\log k})\), it is not difficult to check that \(i_-(w)=2\) whereas \(i_+(w)=3\).

We conclude this section by providing an example of a weight sequence whose indexes \(i_\pm (w)\) are infinite.

Corollary 2.9

Let \(\tilde{w}\) be a weight function such that \(\tilde{w}(x):=x\exp (-\frac{\log x}{\log \log x})\) for \(x\) large enough. Then \(i_\pm (\tilde{w}) = +\infty \).

Proof

In view of Propositions 1.5 and 2.7, we just need to show that, for any \(\alpha \in (0,1)\), the function \(F:=(w\circ W^{-1})/(\tilde{w}\circ \tilde{W}^{-1})\) is eventually non-decreasing, where \(w\) is defined by (20). Computing the derivative of \(F\), we see that this property holds as soon as

$$\begin{aligned} \tilde{w}^{\prime }(x)\le w^{\prime }\circ W^{-1}\circ \tilde{W}(x)\quad \text{ for }\,\, x\,\, \text{ large } \text{ enough }. \end{aligned}$$

Using \(W^{-1}(x)\le U_2(x)\) and \(w^{\prime }\) non-increasing, we get

$$\begin{aligned} w^{\prime }\circ W^{-1}\circ \tilde{W}(x)\ge w^{\prime }\circ U_2\circ \tilde{W}(x)\ge \frac{\beta (\log \tilde{W}(x))^{\beta -1}}{4\tilde{W}(x)}\qquad \text{ with }\, \beta :=1/\alpha . \end{aligned}$$

Moreover, integrating by parts, we get

$$\begin{aligned} \tilde{W}(x)\sim \exp \left( \frac{\log x}{\log \log x}\right) \log \log x. \end{aligned}$$

It follows that

$$\begin{aligned} \tilde{w}^{\prime }(x)\sim \exp \left( -\frac{\log x}{\log \log x}\right) \sim \frac{\log \log x}{\tilde{W}(x)}\le w^{\prime }\circ W^{-1}\circ \tilde{W}(x), \end{aligned}$$

which concludes the proof of the corollary. \(\square \)

3 Coupling of three walks on the half-line

In the rest of the paper, we assume that the weight function \(w\) satisfies Assumption 1.1 (i) and (ii) so we can use all the results of the previous section. In order to study the VRRW \(X\) on \(\mathbb{Z }\), we first look at the reflected VRRW \(\bar{X}\) on the positive half-line \([\![-1,\infty [\![\). The main idea is to compare this walk with two simpler self-interacting processes \(\tilde{X}\) and \(\widehat{X}\), which, in a way, “surround” the process we are interested in. The study of \(\tilde{X}\) and \(\widehat{X}\) is undertaken in Sects. 4 and 5. The estimates obtained concerning these two walks are then used in Sect. 6 to study the reflected VRRW \(\bar{X}\).

3.1 A general coupling result

During the proof of Theorem 1.3, we shall need to consider processes whose transition probabilities depend not only on the local times of the adjacent sites but also on those of the adjacent oriented edges. Furthermore, it will also be convenient to define processes starting from arbitrary initial configurations of their edge/site local times. To make this rigorous, we introduce the notion of state.

Definition 3.1

We call state any sequence \(\mathcal{C }=(z(x),n(x,x+1))_{x\in \mathbb{Z }}\) of non-negative integers such that

$$\begin{aligned} n(x,x+1)\le z(x+1) \quad \text{ for } \text{ all }\,\,x\in \mathbb{Z }. \end{aligned}$$

Given \(\mathcal{C }\) and some nearest neighbour path \(X=(X_n,n\ge 0)\) on \(\mathbb{Z }\), we define its state \(\mathcal{C }_n:=(Z_n(x),N_n(x,x+1))_{x\in \mathbb{Z }}\) at time \(n\) by

$$\begin{aligned}&Z_n(x):=z(x) +\sum _{i=0}^n \mathbf{1}_{\{X_i=x\}} \quad \text{ and }\quad N_n(x,x+1):=n(x,x+1)\nonumber \\&\qquad \qquad \qquad +\sum _{i=0}^{n-1} \mathbf{1}_{\{X_{i}=x \text{ and } X_{i+1}=x+1\}}, \end{aligned}$$
(22)

and we say that \(\mathcal{C }\) is the initial state of \(X\). Thus \(Z_n(x)\) is the local time of \(X\) at site \(x\) and time \(n\) whereas \(N_n(x,x+1)\) corresponds to the local time on the oriented edge \((x,x+1)\) when we start from \(\mathcal{C }\) (notice that \(\mathcal{C }_0 \ne \mathcal{C }\) since the site local time differs at \(X_0\)). We say that \(\mathcal{C }\) is trivial (resp. finite) when all (resp. all but a finite number of) the local times are \(0\). Finally, we say that the state \(\mathcal C =(z(x),n(x,x+1))_{x\in \mathbb{Z }}\) is reachable if

$$\begin{aligned}&(1)&\{x\in \mathbb{Z }\; : \; n(x,x+1)>0\} = [\![a,b-1]\!] \text{ for } \text{ some } a\le 0 \le b\text{, }\\&(2)&z(x)= n(x,x+1)+n(x-1,x) \text{ for } \text{ all } x\in \mathbb{Z }\text{. } \end{aligned}$$

The terminology reachable is justified by the following elementary result, whose proof is left to the reader:

Lemma 3.2

A state \(\mathcal C \) is reachable if and only if it can be created from the trivial initial state by a finite path starting and ending at zero (not counting the last visit at the origin for the local time at site \(0\)).
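As an illustration of Lemma 3.2 (a sketch of ours, not part of the paper), the following code builds the state generated from the trivial one by a finite path starting and ending at \(0\), and checks conditions (1) and (2) of reachability.

```python
from collections import defaultdict

def state_of_path(path):
    # path: finite nearest neighbour path with path[0] == path[-1] == 0;
    # by convention the last visit to 0 is not counted in the site local time
    z = defaultdict(int)
    n = defaultdict(int)  # n[x] stands for n(x, x+1)
    for x in path[:-1]:
        z[x] += 1
    for a, b in zip(path, path[1:]):
        if b == a + 1:
            n[a] += 1
    return z, n

def is_reachable(z, n):
    support = sorted(x for x in list(n) if n[x] > 0)
    # (1) edges with positive local time form an interval [[a, b-1]] with a <= 0 <= b
    interval = (not support) or support == list(range(support[0], support[-1] + 1))
    sign = (not support) or (support[0] <= 0 <= support[-1] + 1)
    # (2) z(x) = n(x, x+1) + n(x-1, x) for every site x
    sites = set(z) | set(n) | {x + 1 for x in n}
    balance = all(z[x] == n[x] + n[x - 1] for x in sites)
    return interval and sign and balance

z, n = state_of_path([0, 1, 2, 1, 0, -1, 0, 1, 0])
print(is_reachable(z, n))
```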

In order to compare walks with different transition mechanisms it is convenient to construct them on the same probability space. To do so, we always use the same generic construction which we now describe. Consider a sequence \((U_i^x,x \in \mathbb{Z },i\ge 1)\) of i.i.d. uniform random variables on \([0,1]\) defined on some probability space \((\Omega ,\mathcal F ,\mathbb{P })\). Let \(\mathcal{C }\) be some fixed initial state. Let \(\mathbb Q \) be a probability measure on infinite nearest neighbour paths on \(\mathbb{Z }\) starting from \(0\) (which may depend on \(\mathcal{C }\)) and write \(\mathbb Q (x_0,\ldots ,x_n)\) for the probability that a path starts with \(x_0,\ldots ,x_n\). We construct on \((\Omega ,\mathcal F ,\mathbb{P })\) a random walk \(X\) with image law \(\mathbb Q \) by induction in the following way:

  • Set \(X_0=0\).

  • \(X_0,\ldots ,X_{n}\) being constructed, if \(Z_n(X_n)=i\), set

    $$\begin{aligned} X_{n+1}= \left\{ \begin{array}{ll} X_n-1&{}\quad \text{ if }\,\, U_i^{X_n} \le \mathbb Q (X_0,\ldots ,X_n,X_n-1\;|\;X_0,\ldots ,X_n),\\ X_n+1&{}\quad \text{ otherwise, } \end{array} \right. \end{aligned}$$

where \(Z_n\) stands for the local time of \(X\) with initial state \(\mathcal{C }\) as in Definition 3.1.

This construction depends on the choice of \(\mathcal{C }= (z(x),n(x,x+1))_{x\in \mathbb{Z }}\). In particular, if \(z(x)>0\) for some \(x\in \mathbb{Z }\), then the random variables \(U_1^{x},\ldots ,U^x_{z(x)}\) are not used in the construction.

In the rest of the paper, all the walks considered are constructed from the same sequence \((U_i^x)\) and with the same initial state \(\mathcal{C }\). Hence, with a slight abuse of notation, we will write \(\mathbb{P }_{\mathcal{C }}\) to indicate that the walks are constructed using the initial state \(\mathcal{C }\). Furthermore, if \(\mathcal{C }\) is the trivial state, we simply use the notation \(\mathbb{P }_0\). Finally, since all the walks considered in the paper start from \(0\), we do not indicate the starting point in the notation for the probability measure.
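As an illustration of this generic construction, the following sketch draws the uniform variables \(U_i^x\) lazily and builds a walk path from the trivial initial state; the linear weight \(w(n)=n+1\) is a placeholder chosen only to make the example concrete. Feeding two walks the same dictionary of uniforms is precisely the coupling device used throughout this section.

```python
import random
from collections import defaultdict

def vrrw_path(w, steps, uniforms):
    # uniforms: dict (x, i) -> U_i^x, shared between coupled walks
    Z = defaultdict(int)  # site local times, trivial initial state
    path = [0]
    Z[0] += 1
    for _ in range(steps):
        x = path[-1]
        i = Z[x]  # the i-th visit to x uses the variable U_i^x
        p_left = w(Z[x - 1]) / (w(Z[x - 1]) + w(Z[x + 1]))
        path.append(x - 1 if uniforms[(x, i)] <= p_left else x + 1)
        Z[path[-1]] += 1
    return path

rng = random.Random(0)
uniforms = defaultdict(rng.random)  # each U_i^x is drawn once, on first use
walk = vrrw_path(lambda n: n + 1, 1000, uniforms)
```

Rerunning `vrrw_path` with the same `uniforms` dictionary reproduces the same path: the randomness lives entirely in the family \((U_i^x)\).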

Given a walk \(X\), we denote its natural filtration by \(\mathcal F _n := \sigma (X_0,\ldots ,X_n)\). For \(i,j,n\ge 0\) and \(x\in \mathbb{Z }\), we define the sets

$$\begin{aligned} \mathcal A _{i,j}(n,x)&:= \{X_n=x,\ Z_n(x-1)\ge i,\ Z_n(x+1)\le j \}\nonumber \\ \mathcal B _{i,j}(n,x)&:= \{X_n=x,\ Z_n(x-1)\le i,\ Z_n(x+1)\ge j \}. \end{aligned}$$
(23)

We also consider the stopping time

$$\begin{aligned} \sigma (x,k):=\inf \{n\ge 0\; : \; Z_n(x)=k\}. \end{aligned}$$

The following technical yet natural result, which is essentially equivalent to Lemma 4.1 of [11], enables us to compare walks with different transition probabilities.

Lemma 3.3

Let \(\mathcal{C }\) be some initial state and let \(X,X^{\prime }\) be two nearest neighbour random walks (with possibly distinct mechanisms which may depend on \(\mathcal{C })\) constructed on \((\Omega ,\mathcal F ,\mathbb{P }_{\mathcal{C }})\). Assume that the laws of \(X\) and \(X^{\prime }\) are such that, for all \(i,j,n,m\ge 0\) and all \(x\in \mathbb{Z }\), we have, \(\mathbb{P }_{\mathcal{C }}\)-a.s.

$$\begin{aligned} \mathbb{P }_{\mathcal{C }}\{X_{n+1}=x+1 \mid \mathcal F _n,\ \mathcal A _{i,j}(n,x)\}\ \le \ \mathbb{P }_{\mathcal{C }}\{X^{\prime }_{m+1}=x+1 \mid \mathcal F ^{\prime }_m,\ \mathcal B ^{\prime }_{i,j}(m,x)\}\nonumber \\ \end{aligned}$$
(24)

(with the obvious \(^{\prime }\) notation for quantities related to \(X^{\prime }).\) Then, for all \(x\in \mathbb{Z }\) and all \(k\ge 0\) such that the stopping times \(\sigma (x,k)\) and \(\sigma ^{\prime }(x,k)\) are both finite, we have

$$\begin{aligned} Z_{\sigma (x,k)}(x-1)\ge Z^{\prime }_{\sigma ^{\prime }(x,k)}(x-1) \quad \mathrm{and} \quad Z_{\sigma (x,k)}(x+1)\le Z^{\prime }_{\sigma ^{\prime }(x,k)}(x+1), \end{aligned}$$
(25)

and

$$\begin{aligned} X_{\sigma (x,k)+1}=x+1 \Longrightarrow X^{\prime }_{\sigma ^{\prime }(x,k)+1}=x+1. \end{aligned}$$
(26)

In the sequel, when (25) and (26) hold, we will say that \(X\) is at the left of \(X^{\prime }\) and write \(X \prec X^{\prime }\).

Proof

In view of (24), if (25) holds for some \((x,k)\), then so does (26). Hence, it suffices to prove, by induction on \(n\ge 0\), the assertion

$$\begin{aligned} \text{`` } \forall x,k \text{ such } \text{ that } \sigma (x,k)\le n,\; (25)\text{ holds }. \text{'' } \end{aligned}$$
(27)

This assertion is trivial for \(n=0\) since both walks start with the same initial state. Let us now assume that (27) holds for some \(n\ge 0\). Let \((k_0,x_0)\) be such that \(\sigma (x_0,k_0)=n+1\) and assume that \(\sigma ^{\prime }(x_0,k_0)=m+1<\infty \). There are two cases. If this is the first visit to \(x_0\) (i.e. \(k_0=Z_0(x_0)+1\)), then \(X_{n}=X^{\prime }_{m}\) since both walks have the same starting point. Otherwise, we are dealing with a subsequent visit to \(x_0\). Applying the induction hypothesis to \((k_0-1,x_0)\), it follows from (26) that

$$\begin{aligned} X_{\sigma (x_0,k_0-1)+1}=x_0+1 \Longrightarrow X^{\prime }_{\sigma ^{\prime }(x_0,k_0-1)+1}=x_0+1. \end{aligned}$$

Thus, in any case, we have

$$\begin{aligned} X_{n} \le X^{\prime }_{m}\in \{x_0\pm 1\}. \end{aligned}$$

If \(X_{n} <X^{\prime }_{m},\) then (25) clearly holds for \((x_0,k_0)\) since \(Z^{\prime }_{\sigma ^{\prime }(x_0,k_0)}(x_0-1)=Z^{\prime }_{\sigma ^{\prime }(x_0,k_0-1)}(x_0-1)\) and \(Z_{\sigma (x_0,k_0)}(x_0+1)=Z_{\sigma (x_0,k_0-1)}(x_0+1)\). Assume now that \(X_{n}= X^{\prime }_{m}=x_0-1\) (the case \(x_0+1\) being similar). Clearly, we have \(Z^{\prime }_{\sigma ^{\prime }(x_0,k_0)}(x_0+1)\ge Z_{\sigma (x_0,k_0)}(x_0+1)\). It remains to prove the reverse inequality at site \(x_0 - 1\). Denoting \(i:=Z_n(x_0-1)\) and applying (25) with \((x_0-1,i)\), we find that, when \(\sigma ^{\prime }(x_0-1,i)<\infty \),

$$\begin{aligned} k_0-1= Z_{\sigma (x_0-1,i)}(x_0) \le Z^{\prime }_{\sigma ^{\prime }(x_0-1,i)}(x_0). \end{aligned}$$

Hence

$$\begin{aligned} \sigma ^{\prime }(x_0,k_0-1)=m \le \sigma ^{\prime }(x_0-1,i). \end{aligned}$$

This inequality trivially holds when \(\sigma ^{\prime }(x_0-1,i)=\infty \) thus

$$\begin{aligned} Z^{\prime }_{\sigma ^{\prime }(x_0,k_0)}(x_0-1)=Z^{\prime }_{m}(x_0-1)\le i = Z_n(x_0-1)= Z_{\sigma (x_0,k_0)}(x_0-1). \end{aligned}$$

This completes the proof of the lemma. \(\square \)

Corollary 3.4

Let \(X,X^{\prime }\) be two random walks such that \(X \prec X^{\prime }\).

  1. (i)

    Let \(x_0:=\inf \{x \in \mathbb{Z }\; : \; Z^{\prime }_\infty (x)=\infty \}\). Then,

    $$\begin{aligned} Z_\infty (x)\le Z^{\prime }_\infty (x)\qquad { \text{ for } \text{ all } }\,x\ge x_0. \end{aligned}$$

    In particular, if \(X^{\prime }\) localizes on a finite subset \([\![a,b ]\!]\), then \(\limsup X\le b\).

  2. (ii)

    On the event \(\{\lim _{n\rightarrow \infty } X_n=+\infty \}\), we have

    $$\begin{aligned} Z^{\prime }_\infty (x)\le Z_\infty (x)\qquad {\text{ for } \text{ all }}\,\,x\in \mathbb{Z }. \end{aligned}$$

    In particular, if \(X^{\prime }\) is recurrent, then \(X\) cannot diverge to \(+\infty \).

Proof

(i) We prove the result by induction on \(x\ge x_0\). There is nothing to prove for \(x = x_0\) since \(Z^{\prime }_\infty (x_0)=\infty \). Let us now assume that the result holds for some \(x-1\ge x_0\). Letting \(k:=Z^{\prime }_\infty (x)\), we just need to prove that, on \(\{k < \infty \}\cap \{ \sigma (x,k)<\infty \}\), the walk \(X\) never visits site \(x\) after time \(\sigma (x,k)\). First, since \(x_0\) is visited infinitely often by \(X^{\prime }\), in view of (26), we find that \(X_{\sigma (x,k)+1} = X^{\prime }_{\sigma ^{\prime }(x,k)+1}=x-1\). Moreover, if \(n > \sigma (x,k)\) is such that \(X_n=x-1\) then \(n =\sigma (x-1,j)\) for some \(j\in [\![Z_{\sigma (x,k)}(x-1), Z_\infty (x-1)]\!]\subset [\![Z^{\prime }_{\sigma ^{\prime }(x,k)}(x-1), Z^{\prime }_\infty (x-1)]\!]\), where we used (25) and the induction hypothesis for the inclusion. Recalling that \(X^{\prime }\) does not visit site \(x\) after time \(\sigma ^{\prime }(x,k)\), we conclude, using (26) again, that \(X_{n+1}=X^{\prime }_{\sigma ^{\prime }(x-1,j)+1}=x-2\). This entails that \(X\) never visits site \(x\) after time \(\sigma (x,k)\).

(ii) By contradiction, assume that

$$\begin{aligned} n:=\inf \{i\ge 0\; : \; Z^{\prime }_i(x)>Z_\infty (x) \text{ for } \text{ some }\, x \}<\infty \end{aligned}$$

and let \(x_0 = X^{\prime }_n\). Two cases may occur:

  • \(X^{\prime }_{n-1}=x_0-1\). This means that \(X^{\prime }\) jumped from \(x_0\) to \(x_0-1\) at its previous visit to \(x_0\) (i.e. its \(Z_\infty (x_0)\)-th visit). On the other hand, since \(X\) is transient to the right, it jumps from \(x_0\) to \(x_0+1\) at its \(Z_\infty (x_0)\)-th visit to \(x_0\). This contradicts (26).

  • \(X^{\prime }_{n-1}=x_0+1\). By definition of \(n\) we have \(k:=Z^{\prime }_{n-1}(x_0+1)\le Z_\infty (x_0+1)\) hence \(\sigma (x_0+1,k)<\infty \). Using (25) we get \(Z_{\sigma (x_0+1,k)}(x_0)\ge Z^{\prime }_{\sigma ^{\prime }(x_0+1,k)}(x_0)=Z_\infty (x_0)\) whereas (26) gives \(X_{\sigma (x_0+1,k)+1}=X^{\prime }_{n}=x_0\). This yields \(Z_{\sigma (x_0+1,k)+1}(x_0) > Z_\infty (x_0)\) which is absurd.\(\square \)

3.2 The three walks \(\widetilde{X}\),\(\bar{X}\) and \(\widehat{X}\)

We define three nearest neighbour random walks on \([\![-1,\infty [\![\), starting from some initial state \(\mathcal{C }\), which are denoted respectively by \(\widetilde{X}, \bar{X}\) and \(\widehat{X}\). All the quantities referring to \(\widetilde{X}\) (resp. \(\bar{X}\), \(\widehat{X}\)) are denoted with a tilde (resp. bar, hat). The three walks are reflected at \(-1\) i.e.,

$$\begin{aligned}&\mathbb{P }_\mathcal C \{\bar{X}_{n+1}=0 \mid \bar{\mathcal{F }}_n, \bar{X}_n=-1\} =\mathbb{P }_\mathcal C \{\widetilde{X}_{n+1}\!=\!0 \mid \widetilde{\mathcal{F }}_n,\widetilde{X}_n\!=\!-\!1\} \\&\qquad =\mathbb{P }_\mathcal C \{\widehat{X}_{n+1}\!=\!0 \mid \widehat{\mathcal{F }}_n,\widehat{X}_n\!=\!-\!1\}=1 \end{aligned}$$

and the transition probabilities are given by the following rules:

  • The walk \(\bar{X}\) is a vertex reinforced random walk with weight \(w\) reflected at \(-1\), i.e. for all \(x\ge 0\),

    $$\begin{aligned} \mathbb{P }_{\mathcal{C }}\{\bar{X}_{n+1}=x-1 \mid {\mathcal{\bar{F} }}_n, \bar{X}_n =x\}=\frac{w(\bar{Z}_n(x-1))}{w(\bar{Z}_n(x-1))+w(\bar{Z}_n(x+1))}. \end{aligned}$$
    (28)
  • The walk \(\widetilde{X}\) is a “mix” between an oriented edge-reinforced and a vertex-reinforced random walk: when at site \(x\), the walk makes a jump to the left with a probability proportional to a function of the local time at the site \(x-1\) whereas it jumps to the right with a probability proportional to a function of the local time on the oriented edge \((x,x+1)\). More precisely, for \(x\ge 0\),

    $$\begin{aligned} \mathbb{P }_\mathcal C \{\widetilde{X}_{n+1}=x-1 \mid \widetilde{\mathcal{F }}_n, \widetilde{X}_n = x\}=\frac{w(\widetilde{Z}_n(x-1))}{w(\widetilde{Z}_n(x-1))+w(\widetilde{N}_n(x,x+1))}.\qquad \end{aligned}$$
    (29)
  • The transition mechanism of the third walk \(\widehat{X}\) is a bit more complicated. Similarly to the previous walk, \(\widehat{X}\) jumps to the left with a probability proportional to a function of the local time at the site on its left whereas it jumps to the right with a probability proportional to a (different) function of the local time on the oriented edge on its right. However, we do not directly use the weight function \(w\) because we want to increase the reinforcement induced by the local time of the right edge. In order to do so, we fix \(\varepsilon >0\) small enough such that \(i_+(w)=i_{1/2+3\varepsilon }(w)\). Next, we consider a function \(f := f_{1/2+2\varepsilon }\) as in Lemma 2.3 (i.e. a function satisfying (a)-(d) of Lemma 2.3 with \(\eta = 1/2+2\varepsilon \)). Given these two parameters, the transition probabilities of \(\widehat{X}\) are defined by

    $$\begin{aligned} \mathbb{P }_\mathcal C \{\widehat{X}_{n+1}=x-1 \mid \widehat{\mathcal{F }}_n, \widehat{X}_n = x\} = \left\{ \begin{array}{ll} \frac{w(\widehat{Z}_n(-1))}{w (\widehat{Z}_n(-1))+w\left( \widehat{N}_n(0,1)+f (\widehat{N}_n(0,1))\right) }&{} \text{ if } x = 0\text{, }\\ \frac{w(\widehat{Z}_n(x-1))}{w(\widehat{Z}_n(x-1))+w\left( (1+\varepsilon ) \widehat{N}_n(x,x+1)\right) }&{} \text{ if } x > 0\text{. }\\ \end{array} \right. \nonumber \\ \end{aligned}$$
    (30)

    In comparison with the transition probabilities of \(\widetilde{X}\), the edge local time \(N(0,1)\) is slightly increased by \(f(N(0,1)) = o(N(0,1))\), whereas the edge local times \(N(x,x+1)\) are multiplied by \(1+\varepsilon \) for \(x\ge 1\).
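The three transition rules can be placed side by side as follows (a sketch of ours; the placeholder weight \(w(n)=n+1\) only needs to be non-decreasing, and \(f\) is taken to be the square root for concreteness). Since \(w\) is non-decreasing, the inflated edge local times in (30) make the left-jump probability of \(\widehat{X}\) at most that of \(\widetilde{X}\) for identical local time configurations, which is the direction of the coupling.

```python
import math

def p_left_bar(w, z_left, z_right):
    # vertex reinforced rule (28): site local times on both sides
    return w(z_left) / (w(z_left) + w(z_right))

def p_left_tilde(w, z_left, n_right):
    # mixed rule (29): site local time on the left, oriented edge local time on the right
    return w(z_left) / (w(z_left) + w(n_right))

def p_left_hat(w, z_left, n_right, x, eps=0.1, f=math.sqrt):
    # boosted rule (30): the right edge local time is inflated, differently at x = 0
    boosted = n_right + f(n_right) if x == 0 else (1 + eps) * n_right
    return w(z_left) / (w(z_left) + w(boosted))

w = lambda n: n + 1  # placeholder non-decreasing weight
print(p_left_bar(w, 3, 3))                                # balanced counts: 1/2
print(p_left_hat(w, 5, 7, x=1) <= p_left_tilde(w, 5, 7))  # hat pushed harder to the right
```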

Remark 3.5

  1. (a)

    Let us emphasize the fact that the laws of the three walks depend on the initial state \(\mathcal{C }\) since the local times \(Z_n(x)\) and \(N_n(x,x+1)\) depend upon it.

  2. (b)

    We should rigorously write \(\widehat{X}^{\varepsilon ,f}\) instead of \(\widehat{X}\) since the law of the walk depends on the choice of \((\varepsilon ,f)\). However, these two parameters depend, in turn, only on the weight function \(w\), which is fixed throughout the paper. For the sake of clarity, we keep the notation without any superscript.

3.3 Coupling between \(\widetilde{X}\), \(\bar{X}\) and \(\widehat{X}\)

For any random walk, the local time at site \(x\) is equal (up to an initial constant) to the sum of the local times of the ingoing edges adjacent to \(x\), since the walk always reaches \(x\) through one of these edges. Hence, looking at the definitions of \(\widetilde{X}\) and \(\bar{X}\), we see that the reinforcement schemes give a stronger “push to the right” for \(\bar{X}\) than for \(\widetilde{X}\), so it is reasonable to expect \(\widetilde{X}\) to be at the left of \(\bar{X}\). This is indeed the case:

Lemma 3.6

For any initial state \(\mathcal C \), under \(\mathbb{P }_\mathcal C \), we have \(\widetilde{X} \prec \bar{X}\).

Proof

We just need to show that (24) holds with \(\widetilde{X}\) and \(\bar{X}\). Define \(\widetilde{\mathcal{A }}_{i,j}(n,x)\) and \(\bar{\mathcal{B }}_{i,j}(n,x)\) as in (23). On the one hand, for \(x\ge 0\), we have

$$\begin{aligned}&\mathbb{P }_\mathcal C \{\bar{X}_{n+1}=x-1 \ | \ \bar{\mathcal{F }}_n,\bar{\mathcal{B }}_{i,j}(n,x) \} = \frac{w(\bar{Z}_n(x-1))}{w(\bar{Z}_n(x-1))+w(\bar{Z}_n(x+1))} \mathbf{1}_{\{\bar{\mathcal{B }}_{i,j}(n,x)\}} \\&\qquad \qquad \qquad \le \frac{w(i)}{w(i)+w(j)}. \end{aligned}$$

On the other hand, since we have by definition of a state that \(\widetilde{N}_0(x,x+1)\le \widetilde{Z}_0(x+1)\) for all \(x\), we also have \(\widetilde{N}_n(x,x+1)\le \widetilde{Z}_n(x+1)\) for any \(x,n\) and thus

$$\begin{aligned}&\mathbb{P }_\mathcal C \{\widetilde{X}_{n+1}=x-1 \ | \ \widetilde{\mathcal{F }}_n,\widetilde{\mathcal{A }}_{i,j}(n,x) \} = \frac{w(\widetilde{Z}_n(x-1))}{w(\widetilde{Z}_n(x-1))+w(\widetilde{N}_n(x,x+1))} \mathbf{1}_{\{\widetilde{\mathcal{A }}_{i,j}(n,x)\}} \\&\qquad \qquad \qquad \ge \frac{w(i)}{w(i)+w(j)}, \end{aligned}$$

which proves (24). \(\square \)

Unfortunately, since we cannot a priori compare the quantity \((1+\varepsilon )N_n(x,x+1)\) with \(Z_n(x+1)\), or \(N_n(0,1)+f(N_n(0,1))\) with \(Z_n(1)\), there is no direct coupling between \(\bar{X}\) and \(\widehat{X}\). However, we can still define a “good event” depending only on \(\widehat{X}\) on which \(\bar{X}\) is indeed at the left of \(\widehat{X}\) with positive probability. For \(L,M\ge 0\), set

$$\begin{aligned} \widehat{\mathcal{E }}(L,M)= \left\{ \exists K\le L,\; \forall n\ge M,\; \begin{array}{l} \widehat{Z}_n(1)\le \widehat{N}_n(0,1)+f(\widehat{N}_n(0,1)) \\ \forall x \in [\![2, K ]\!], \; \widehat{Z}_n(x)\le (1+\varepsilon )\widehat{N}_n(x-1,x) \\ \forall x \ge K, \; \widehat{Z}_n(x)=\widehat{Z}_{M}(x)\\ \end{array}\right\} .\nonumber \\ \end{aligned}$$
(31)

Lemma 3.7

Let \(\mathcal C \) be any initial state.

  1. (i)

    Under \(\mathbb{P }_{\mathcal{C }}\), we have \(\bar{X}\prec \widehat{X}\) on \(\widehat{\mathcal{E }}(L,0)\) (meaning that (25) and (26) hold on this event) and

    $$\begin{aligned} \widehat{\mathcal{E }}(L,0) \subset \{\bar{X} \text{ never } \text{ visits } \text{ site }\, L \}. \end{aligned}$$
    (32)
  2. (ii)

    Assume that \(\mathbb{P }_\mathcal C \{\widehat{\mathcal{E }}(L,M)\}>0\) for some \(L,M\ge 0\). Then, under \(\mathbb{P }_\mathcal C \), with positive probability, the walk \(\bar{X}\) ultimately stays confined in the interval \([\![-1, L-1 ]\!]\).

Proof

Concerning the first part of the lemma, the fact that \(\bar{X}\prec \widehat{X}\) on \(\widehat{\mathcal{E }}(L,0)\) follows from the definition of \(\widehat{\mathcal{E }}(L,0)\) combined with (28), (30) using the same argument as in the previous lemma. Moreover, we have \(\widehat{\mathcal{E }}(L,0) \subset \{\widehat{X} \text{ never } \text{ visits } \text{ site } L\}\). Hence (32) is a consequence of Corollary 3.4.

We now prove (ii). We introduce an auxiliary walk \(X^*\) on \([\![-1,\infty [\![\) such that \(\bar{X}\prec X^*\) and which coincides with \(\widehat{X}\) on an event of positive probability. The walk \(X^*\) is reflected at \(-1\), with transition probabilities given, for \(x\ge 0\), by

$$\begin{aligned} \mathbb{P }_\mathcal C \{X^{*}_{n+1}=x-1 \; | \; \mathcal F ^{*}_n,X^{*}_n=x\}=\frac{w(Z^{*}_n(x-1))}{w(Z^{*}_n(x-1))+w(V^{*}_n(x+1))}, \end{aligned}$$

where the functional \(V^*\) is defined by

$$\begin{aligned} V^*_n(x) := \left\{ \begin{array}{ll} \max (Z^*_n(1),N^*_n(0,1)+f(N^*_n(0,1))) &{} \text{ for } x= 1\\ \max (Z^*_n(x),(1+\varepsilon )N^*_n(x-1,x)) &{} \text{ for } x\ne 1. \end{array} \right. \end{aligned}$$

Since \(V^*_n\ge Z^*_n\), it follows clearly that \(\bar{X}\prec X^{*}\). Now set

$$\begin{aligned} \mathcal G :=\widehat{\mathcal{E }}(L,M)\cap \{\forall n\ge 0, X^{*}_n=\widehat{X}_n\}. \end{aligned}$$

On \(\widehat{\mathcal{E }}(L,M)\), there exists some \(K\le L\) such that, for all \(n> M\),

$$\begin{aligned} \widehat{X}_n \in [\![-1, K-1]\!]\quad \text{ and }\quad \widehat{V}_n(x)= \left\{ \begin{array}{ll} \widehat{N}_n(0,1)+f(\widehat{N}_n(0,1)) &{} \text{ for }\,\,x= 1,\\ (1+\varepsilon )\widehat{N}_n(x-1,x) &{} \text{ for }\,\,x\in [\![2, K ]\!]. \end{array}\right. \end{aligned}$$

Therefore

$$\begin{aligned} \mathcal G = \widehat{\mathcal{E }}(L,M)\cap \{\forall n\le M,\, X^{*}_n=\widehat{X}_n\}. \end{aligned}$$

By ellipticity, we have a.s. \(\mathbb{P }_{\mathcal{C }}\{\forall n\le M,\, X^{*}_{n}=\widehat{X}_n\ | \ {\mathcal{\widehat{F} }}_{M}\}>0\). Conditionally on \(\mathcal{\widehat{F} }_{M}\), the events \(\{\forall n\le M,\; X^{*}_n=\widehat{X}_n\}\) and \(\widehat{\mathcal{E }}(L,M)\) are independent. Assuming that \(\mathbb{P }_\mathcal{C }\{\widehat{\mathcal{E }}(L,M)\}>0\), we deduce that \(\mathbb{P }_\mathcal{C }\{\mathcal{G }\}>0\). Moreover, on \(\mathcal G \), we have \(Z^{*}_\infty (x)= \widehat{Z}_\infty (x)=\widehat{Z}_{M}(x)\) for all \(x\ge L\) (i.e. \(X^{*}\) stays in the interval \([\![-1, L-1 ]\!]\) after time \(M\)). Using \(\bar{X}\prec X^*\), Corollary 3.4 gives

$$\begin{aligned} \mathcal G \subset \{\forall x\ge L, \; \bar{Z}_\infty (x)\le \widehat{Z}_{M}(x)\}, \end{aligned}$$

which implies

$$\begin{aligned} \mathbb{P }_\mathcal{C }\{\bar{X} \text{ eventually } \text{ remains } \text{ in } \text{ the } \text{ interval }\, [\![-1, L-1 ]\!]\}\ge \mathbb{P }_\mathcal{C }\{\mathcal{G }\}>0. \end{aligned}$$

\(\square \)

4 The walk \(\widetilde{X}\)

We now study the asymptotic behaviour of \(\widetilde{X}\). This walk is the easiest to analyse among those defined in the previous section, and we can obtain a precise description of its localization set. In fact, we can even show recurrence when the walk does not localize.

We introduce some notation to make the proof more readable by removing unimportant constants. Given two (random) processes \(A_n,B_n\), we write \(A_n\equiv B_n\) when \(A_n-B_n\) converges a.s. to some (random) finite constant. Similarly, we write \(A_n\lesssim B_n\) when \(\limsup (A_n-B_n)\) is finite a.s.

Proposition 4.1

Let \(\mathcal C \) be a finite state. Recall that \(\widetilde{R}\) denotes the set of sites visited i.o. by \(\widetilde{X}\). We have

$$\begin{aligned}{}[\![-1,j_-(w)-1]\!]\subset \widetilde{R} \subset [\![-1,j_+(w)-1]\!]\qquad \mathbb{P }_\mathcal{C }\text{-a.s. } \end{aligned}$$

In particular, the walk is either recurrent or localizes a.s. depending on the finiteness of \(j_\pm (w)\).

Proof

First, it is easy to check that the walk \(\widetilde{X}\) is at the left (in the sense of Proposition 3.3) of an oriented edge reinforced random walk with weight \(w\) reflected at \(-1\), that is, a random walk which jumps from \(x\) to \(x+ 1\) with probability proportional to \(w(N_n(x,x+1))\) [where \(N_n(x,x+1)\) is defined by (22)] and from \(x\) to \(x-1\) with probability proportional to \(w(N_n(x,x-1))\), where \(N_n(x,x-1)\) is simply the number of jumps from \(x\) to \(x-1\) before time \(n\) (without any additional initial constant). Such a walk can be constructed from a family \((\mathcal U _x,x\ge 0)\) of independent generalized Pólya \(w\)-urns, where the sequence of draws in the urn \(\mathcal U _x\) corresponds to the sequence of jumps to \(x-1\) or \(x+1\) when the walk is at site \(x\). Using this representation, Davis [3] showed that, if \(\mathcal{C }\) is finite, the oriented edge reinforced random walk is recurrent as soon as \(\sum 1/w(k)=\infty \) (more precisely, recurrence is established in [3] for the non-oriented version of the edge reinforced walk, but the same proof applies to the oriented version and is even easier in that case).
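The urn representation just described is easy to make concrete. The following is a minimal simulation sketch of the oriented edge reinforced random walk reflected at \(-1\); the weight \(w(n)=\sqrt{n+1}\) is a hypothetical choice satisfying \(\sum 1/w(k)=\infty \), so Davis' criterion suggests the walk should return to \(-1\) repeatedly. Note that the sequence of jump decisions made at a fixed site is exactly a draw sequence from an independent generalized Pólya \(w\)-urn.

```python
import random

def oriented_errw(w, steps, seed=0):
    """Oriented edge reinforced random walk on [-1, oo), reflected at -1:
    from x >= 0, jump to x+1 with probability proportional to w(N(x, x+1))
    and to x-1 with probability proportional to w(N(x, x-1)), where N counts
    prior traversals of each oriented edge (no initial constants).
    Returns the number of visits to -1."""
    rng = random.Random(seed)
    right, left = {}, {}          # oriented edge counts N(x, x+1), N(x, x-1)
    x, visits_minus_one = 0, 0
    for _ in range(steps):
        if x == -1:               # reflection at -1
            visits_minus_one += 1
            x = 0
            continue
        wr = w(right.get(x, 0))
        wl = w(left.get(x, 0))
        if rng.random() < wr / (wr + wl):
            right[x] = right.get(x, 0) + 1
            x += 1
        else:
            left[x] = left.get(x, 0) + 1
            x -= 1
    return visits_minus_one

# Hypothetical sub-linear weight with sum 1/w(k) = infinity.
w = lambda n: (n + 1) ** 0.5
print(oriented_errw(w, 20000))
```

The visit count to \(-1\) is typically large, in line with the recurrence criterion; this is only an illustration, not part of the proof.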

In view of Corollary 3.4, it follows from the recurrence of the oriented edge reinforced random walk that \(\widetilde{X}\) cannot tend to infinity, hence there exists at least one site which is visited infinitely often. Next, noticing that

$$\begin{aligned} \sum _{n=0}^\infty \mathbb{P }_\mathcal{C }\{\widetilde{X}_{n+1}=x-1 \; | \; \widetilde{\mathcal{F }}_n\} \ge \sum _{n=0}^\infty \frac{w(0)\mathbf{1}_{\{\widetilde{X}_n=x\}}}{w(0)+w(\widetilde{N}_n(x,x+1))} \ge \sum _{n=n_0}^{\widetilde{Z}_{\infty }(x)} \frac{w(0)}{w(0)+w(n)} \end{aligned}$$

the conditional Borel–Cantelli lemma implies that if \(x\) is visited i.o., then so is \(x-1\). By induction, we deduce that \(-1\) is visited i.o. a.s. Now, we have to prove that any site \(x \le j_-(w)-1\) is visited i.o. but that \(j_+(w)\) is not. More precisely, we show by induction that for each \(j\ge 1\):

$$\begin{aligned} \forall \alpha \in (0,1/2),\quad \Phi _{1/2-\alpha ,j}(\widetilde{Z}_k(j-1))&\lesssim \widetilde{N}_k(j-1,j)\nonumber \\&\lesssim \Phi _{1/2+\alpha ,j}(\widetilde{Z}_k(j-1)) \quad \text{ a.s. } \end{aligned}$$
(33)

where \((\Phi _{\eta ,j})_{\eta \in (0,1), j\ge 1}\) is the sequence of functions defined in (13). For \(x \ge 0\), define

$$\begin{aligned} \widetilde{M}_n(x) \; := \;\sum _{k=0}^{n-1} \frac{\mathbf{1}_{\{\widetilde{X}_k=x\;\mathrm{and}\;\widetilde{X}_{k+1}=x+ 1\}}}{w(\widetilde{N}_k(x,x+1))}-\sum _{k=0}^{n-1} \frac{\mathbf{1}_{\{\widetilde{X}_k=x\;\mathrm{and}\;\widetilde{X}_{k+1}=x- 1\}}}{w(\widetilde{Z}_k(x- 1))}. \end{aligned}$$

It is well known and easy to check that \((\widetilde{M}_n(x),n\ge 0)\) is a martingale bounded in \(L^2\) which converges a.s. to a finite random variable, cf. for instance [1, 12]. Recalling the definition of \(W\) given in (1), we also have

$$\begin{aligned} W(n)\equiv \sum _{i=1}^{n-1} \frac{1}{w(i)}. \end{aligned}$$

Hence, we get

$$\begin{aligned} \widetilde{M}_{n}(0)\equiv W(\widetilde{N}_n(0,1))-W(\widetilde{Z}_n(-1)) \end{aligned}$$

and the convergence of the martingale \(\widetilde{M}_{n}(0)\) combined with Lemma 2.1 yields

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{\widetilde{N}_n(0,1)}{\widetilde{Z}_n(-1)}= 1 \quad \mathbb{P }_{\mathcal{C }}\text{- }\mathrm{a.s}. \end{aligned}$$

Noticing that \(\widetilde{Z}_n(0)\sim \widetilde{N}_n(0,1)+\widetilde{Z}_n(-1)\) and recalling that \(\Phi _{\eta ,1}(x)=\eta x\), we conclude that (33) holds for \(j=1\).
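The base case \(j=1\) can be illustrated numerically. The transition mechanism of \(\widetilde{X}\) is not restated in this section, so the sketch below assumes, consistently with the martingale \(\widetilde{M}_n(x)\) above, that from \(x\ge 0\) the walk jumps left with probability proportional to \(w(\widetilde{Z}_n(x-1))\) and right with probability proportional to \(w(\widetilde{N}_n(x,x+1))\), with reflection at \(-1\); the weight \(w(n)=(n+1)/\log ^2(n+2)\) is a hypothetical choice in the sub-linear regime.

```python
import math
import random

def simulate_tilde(w, steps, seed=1):
    """Sketch of the walk X~ reflected at -1.  Assumed transition rule
    (matching the weights in the martingale M~_n(x)): from x >= 0 jump left
    with probability proportional to w(Z(x-1)) (site local time of the left
    neighbour) and right proportional to w(N(x, x+1)) (oriented edge count).
    Returns (N(0,1), Z(-1))."""
    rng = random.Random(seed)
    Z, N = {0: 1}, {}
    x = 0
    for _ in range(steps):
        if x == -1:
            x = 0                                  # reflection at -1
        else:
            wl = w(Z.get(x - 1, 0))
            wr = w(N.get(x, 0))
            if rng.random() < wl / (wl + wr):
                x -= 1
            else:
                N[x] = N.get(x, 0) + 1
                x += 1
        Z[x] = Z.get(x, 0) + 1
    return N.get(0, 0), Z.get(-1, 0)

# Hypothetical weight in the critical, sub-linear regime.
w = lambda n: (n + 1) / math.log(n + 2) ** 2
n01, zm1 = simulate_tilde(w, 300000)
print(n01 / zm1)   # the martingale argument predicts a limit of 1
```

Because \(W(\widetilde{N}_n(0,1))-W(\widetilde{Z}_n(-1))\) stays bounded while \(W\) is unbounded, the printed ratio should be close to \(1\) for long runs.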

Fix \(j\ge 1\) and assume that (33) holds for \(j\). If \(\widetilde{N}_\infty (j-1,j)\) is finite, then \(\widetilde{Z}_\infty (j)\) and \(\widetilde{N}_\infty (j,j+1)\) are necessarily also finite so (33) holds for \(j+1\). Now assume that \(\widetilde{N}_\infty (j-1,j)\) is infinite which, in view of (33), implies that \(\widetilde{Z}_\infty (j-1)\) is also infinite and that

$$\begin{aligned} \lim _{t\rightarrow \infty } \Phi _{1/2+\alpha ,j}(t)=\infty \quad \text{ for } \text{ any }\, \alpha \in (0,1/2). \end{aligned}$$

Besides, the convergence of the martingale \(\widetilde{M}_n(j)\) yields

$$\begin{aligned} W(\widetilde{N}_n(j,j+1))\equiv \sum _{k=0}^{n-1} \frac{\mathbf{1}_{\{\widetilde{X}_k=j\;\mathrm{and}\;\widetilde{X}_{k+1}=j- 1\}}}{w(\widetilde{Z}_k(j- 1))}. \end{aligned}$$
(34)

According to Lemma 2.5, we have

$$\begin{aligned} \lim _{t\rightarrow \infty } \left( \Phi _{1/2+\alpha ^{\prime },j}(t)-\Phi _{1/2+\alpha ,j}(t)\right) =\infty \quad \text{ for } \text{ any }\, 0<\alpha <\alpha ^{\prime }<1/2, \end{aligned}$$

hence we get from (33) that for \(k\) large enough \(\widetilde{Z}_k(j-1)\ge \Phi _{1/2+\alpha ,j}^{-1}(\widetilde{N}_k(j-1,j))\). Combining this with (34) yields

$$\begin{aligned} W(\widetilde{N}_n(j,j+1))\lesssim \sum _{k=0}^{\widetilde{N}_n(j-1,j)} \frac{1}{w(\Phi _{1/2+\alpha ,j}^{-1}(k))}. \end{aligned}$$

Recalling the definition of the sequence \((\Phi _{\eta ,j})_{j\ge 1}\) we obtain

$$\begin{aligned} W(\widetilde{N}_n(j,j+1))\lesssim W(\Phi _{1/2+\alpha ,j+1}(\widetilde{N}_n(j-1,j))). \end{aligned}$$
(35)

Thus, for \(\alpha ^{\prime }>\alpha \) and for \(k\) large enough, using Lemmas 2.1 and 2.5, we get

$$\begin{aligned}&\widetilde{N}_k(j,j+1)\le 2\Phi _{1/2+\alpha ,j+1}(\widetilde{N}_k(j-1,j))\le \Phi _{1/2+\alpha ^{\prime },j+1}(\widetilde{N}_k(j-1,j))\\&\qquad \qquad \qquad \qquad \le \Phi _{1/2+\alpha ^{\prime },j+1}(\widetilde{Z}_k(j)) \end{aligned}$$

provided that \(\lim _{t\rightarrow \infty }\Phi _{1/2+\alpha ,j+1}(t) = \infty \). When the previous limit is finite, it follows readily from (35) that \(\widetilde{N}_\infty (j,j+1) < \infty \). Thus, in any case, we obtain the required upper bound

$$\begin{aligned} \widetilde{N}_k(j,j+1) \lesssim \Phi _{1/2+\alpha ,j+1}(\widetilde{Z}_k(j)). \end{aligned}$$
(36)

Concerning the lower bound, there is nothing to prove if \(\lim _{t\rightarrow \infty }\Phi _{1/2-\alpha ,j+1}(t) < +\infty \). Otherwise, it follows from (36) and Lemma 2.5 that \(\widetilde{N}_k(j,j+1)=o(\widetilde{Z}_k(j))\). Moreover, using exactly the same argument as before, we find that for \(k\) large enough

$$\begin{aligned} \widetilde{N}_k(j,j+1)\ge \Phi _{1/2-\alpha ,j+1}(\widetilde{N}_k(j-1,j)). \end{aligned}$$

Noticing that \(\widetilde{N}_k(j-1,j) \sim (\widetilde{Z}_k(j)-\widetilde{N}_k(j,j+1)) \sim \widetilde{Z}_k(j)\), we conclude using again Lemma 2.5 that for \(\alpha ^{\prime } > \alpha \) and for \(k\) large enough,

$$\begin{aligned} \widetilde{N}_k(j,j+1)\ge \Phi _{1/2-\alpha ^{\prime },j+1}(\widetilde{Z}_k(j)), \end{aligned}$$

which yields the lower bound of (33).

Finally, choosing \(\alpha >0\) small enough such that \(\lim _{t\rightarrow \infty } \Phi _{1/2+\alpha ,j_+(w)}(t)<\infty \), we deduce that \(\widetilde{N}_\infty (j_+(w)-1,j_+(w))\) is finite, hence \(\widetilde{Z}_\infty (j_+(w))\) is also finite. Conversely, (33) entails, by a straightforward induction, that \(\widetilde{Z}_\infty (j)=\infty \) for \(j< j_-(w)\).\(\square \)

5 The walk \(\widehat{X}\)

We now turn our attention to the walk \(\widehat{X}\), which is more delicate to analyse than the previous process, so we only obtain partial results concerning its asymptotic behaviour. In view of Lemma 3.7, we are mainly interested in finding the smallest integer \(L\) such that \(\mathbb{P }_\mathcal{C }\{\widehat{\mathcal{E }}(L,M)\}>0\) for some \(M\). The purpose of this section is to prove the proposition below, which provides an upper bound for \(L\) that is optimal when \(j_-(w)=j_+(w)\).

Proposition 5.1

Assume that \(j_+(w)<\infty \). Then, for any initial state \(\mathcal C \), there exists \(M\ge 0\) such that

$$\begin{aligned} \mathbb{P }_\mathcal{C }\{\widehat{\mathcal{E }}(j_+(w),M) \}>0. \end{aligned}$$
(37)

Moreover, there exists a reachable initial state \(\mathcal C ^{\prime }=(z^{\prime }(x),n^{\prime }(x,x+1))_{x\in \mathbb{Z }}\) which is zero outside of the interval \([\![-1, j_+(w) ]\!]\) and with \(n^{\prime }(0,1)\ge n^{\prime }(-1,0)\) such that

$$\begin{aligned} \mathbb{P }_\mathcal{C ^{\prime }}\{\widehat{\mathcal{E }}(j_+(w),0)\}>3/4. \end{aligned}$$
(38)

One annoying difficulty in studying \(\widehat{X}\) is that we cannot easily exclude the possibility that the walk diverges to \(+\infty \) on a set of non-zero probability. In order to bypass this problem, we first study the walk on a bounded interval. More precisely, for \(L> 1\), we define the walk \(\widehat{X}^{\scriptscriptstyle {\! L}}\) on \([\![-1,L ]\!]\) which is reflected at the boundary sites \(-1\) and \(L\), with the same transition probabilities as \(\widehat{X}\) in the interior of the interval:

$$\begin{aligned} \mathbb{P }_\mathcal C \{\widehat{X}^{\scriptscriptstyle {L}}_{n+1}\!=\!x\!-\!1 \mid \widehat{\mathcal{F }}^{\scriptscriptstyle {L}}_n,\; \widehat{X}^{\scriptscriptstyle {L}}_{n}=x\} \!=\! \left\{ \!\begin{array}{ll} 0 &{} \text{ if }\,\, x \!=\! -1,\\ \frac{w(\widehat{Z}^{\scriptscriptstyle {\!L}}_n(-1))}{w(\widehat{Z}^{\scriptscriptstyle {\!L}}_n(-1))+w\left( \!\widehat{N}^{\scriptscriptstyle {\! L}}_n(0,1)+f(\widehat{N}^{\scriptscriptstyle {\! L}}_n(0,1))\right) }&{} \text{ if }\,\,x \!=\! 0,\\ \!\frac{w(\widehat{Z}^{\scriptscriptstyle {\! L}}_n(x-1))}{w(\widehat{Z}^{\scriptscriptstyle {\! L}}_n(x-1))+w\left( (1+\varepsilon )\widehat{N}^{\scriptscriptstyle {\! L}}_n(x,x+1)\right) }&{} \text{ if }\,\, x \!\in \! [\![1,L\!-\!1]\!],\\ \!1 &{} \text{ if }\,\, x \!=\! L. \end{array} \right. \end{aligned}$$
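The displayed transition mechanism can be simulated directly. In the sketch below, the weight \(w\) and the function \(f\) are hypothetical stand-ins (only \(f(x)=o(x)\), from Lemma 2.3, is used here, so we take \(f(x)=x^{0.6}\)); the code merely illustrates the reflections at \(-1\) and \(L\) and the modified comparison weights.

```python
import math
import random

def reflected_hat_walk(w, f, eps, L, steps, seed=0):
    """Sketch of the reflected walk X^L on [-1, L] following the displayed
    transition probabilities: forced moves at -1 and L, and at x in [0, L-1]
    a left jump with probability w(Z(x-1)) / (w(Z(x-1)) + w(V)), where
    V = N(0,1) + f(N(0,1)) at x = 0 and V = (1+eps)*N(x, x+1) at x >= 1."""
    rng = random.Random(seed)
    Z, N = {0: 1}, {}          # site local times / oriented edge counts
    x = 0
    for _ in range(steps):
        if x == -1:
            nxt = 0            # reflection at -1
        elif x == L:
            nxt = L - 1        # reflection at L
        else:
            v = N.get(0, 0) + f(N.get(0, 0)) if x == 0 else (1 + eps) * N.get(x, 0)
            wl = w(Z.get(x - 1, 0))
            nxt = x - 1 if rng.random() < wl / (wl + w(v)) else x + 1
        if nxt == x + 1:
            N[x] = N.get(x, 0) + 1
        x = nxt
        Z[x] = Z.get(x, 0) + 1
    return Z

w = lambda n: (n + 1) / math.log(n + 2) ** 2   # hypothetical critical weight
f = lambda n: n ** 0.6                          # placeholder with f(x) = o(x)
Z = reflected_hat_walk(w, f, eps=0.1, L=6, steps=50000)
print(sorted(Z.items()))
```

Consistently with Lemma 5.2 below, in long runs the sites \(-1\) and \(0\) accumulate many visits while the local times decay towards the right end of the interval.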

The proof of Proposition 5.1 relies on the following lemma which estimates the edge/site local times of \(\widehat{X}^{\scriptscriptstyle {L}}\).

Lemma 5.2

Let \(\mathcal{C }\) be an initial state and \(L>1\). For \(n\) large enough, we have

$$\begin{aligned} \widehat{N}^{\scriptscriptstyle {\! L}}_n(-1,0) \;\le \; \widehat{N}^{\scriptscriptstyle {\! L}}_n(0,1)\qquad \mathbb{P }_\mathcal C \text{-a.s. } \end{aligned}$$
(39)

Moreover, for \(\eta \in (1/2+\varepsilon ,1)\) and \(j \in [\![0,L-1]\!]\),

$$\begin{aligned} \widehat{N}^{\scriptscriptstyle {\! L}}_n(j,j+1) \;\lesssim \; \Phi _{\eta ,j+1}\big (\widehat{Z}^{\scriptscriptstyle {\! L}}_n(j)\big ). \end{aligned}$$
(40)

Proof

The proof is fairly similar to that of Proposition 4.1. First, since \(\widehat{X}^{\scriptscriptstyle {\! L}}\) lives on a finite interval, the set \(\widehat{R}^{\scriptscriptstyle {\! L}}\) of sites visited infinitely often by the walk is necessarily not empty. Furthermore, noticing that \(\sum 1/w((1+\varepsilon )n)\) is infinite since \(w\) is regularly varying, the same arguments as those used for dealing with \(\widetilde{X}\) show that \(\widehat{X}^{\scriptscriptstyle {\! L}}\) visits site \(0\) infinitely often a.s.
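To spell out why the divergence of \(\sum 1/w((1+\varepsilon )n)\) follows from regular variation alone: if \(w\) is regularly varying with index \(\rho \), then

$$\begin{aligned} \frac{w(n)}{w((1+\varepsilon )n)} \;\underset{n\rightarrow \infty }{\longrightarrow }\; (1+\varepsilon )^{-\rho }>0, \end{aligned}$$

so the series \(\sum 1/w((1+\varepsilon )n)\) and \(\sum 1/w(n)\) diverge together, and the latter is infinite in the sub-linear regime considered here.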

We first prove (39) together with (40) for \(j=0\). As before, it is easily checked that

$$\begin{aligned} \widehat{M}^{\scriptscriptstyle {\! L}}_n(0)&:= \sum _{k=0}^{n-1} \frac{\mathbf{1}_{\{\widehat{X}^{\scriptscriptstyle {\! L}}_k=0\;\mathrm{and }\;\widehat{X}^{\scriptscriptstyle {\! L}}_{k+1}= 1\}}}{w(\widehat{N}^{\scriptscriptstyle {\! L}}_k(0,1)+f(\widehat{N}^{\scriptscriptstyle {\! L}}_k(0,1)))}-\sum _{k=0}^{n-1} \frac{\mathbf{1}_{\{\widehat{X}^{\scriptscriptstyle {\! L}}_k=0\;\mathrm{and}\;\widehat{X}^{\scriptscriptstyle {\! L}}_{k+1}=- 1\}}}{w(\widehat{Z}^{\scriptscriptstyle {\! L}}_k(-1))}\\ \end{aligned}$$

is a martingale bounded in \(L^2\) which converges to some finite constant. Besides, recalling the definitions of \(W\) and \(W_f\), we have

$$\begin{aligned} \widehat{M}^{\scriptscriptstyle {\! L}}_n(0)\equiv W_f(\widehat{N}^{\scriptscriptstyle {\! L}}_n(0,1))-W(\widehat{Z}^{\scriptscriptstyle {\! L}}_n(-1)). \end{aligned}$$
(41)

Since \(0\) is visited infinitely often and since \(W\) and \(W_f\) are unbounded, Equation (41) implies that \(-1\) and \(1\) are also visited infinitely often a.s. Recalling that \(f\) satisfies (c) of Lemma 2.3, Lemma 2.2 entails

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{\widehat{N}^{\scriptscriptstyle {\! L}}_n(0,1)}{\widehat{Z}^{\scriptscriptstyle {\! L}}_n(-1)}=1 \qquad \mathbb{P }_\mathcal C \text{-a.s. } \end{aligned}$$
(42)

Using \(\widehat{Z}^{\scriptscriptstyle {\! L}}_n(0)\sim \widehat{Z}^{\scriptscriptstyle {\! L}}_n(-1)+\widehat{N}^{\scriptscriptstyle {\! L}}_n(0,1)\), we find for \(\delta >1/2\) and for \(n\) large enough,

$$\begin{aligned} \widehat{N}^{\scriptscriptstyle {\! L}}_n(0,1) \;\le \; \delta \widehat{Z}^{\scriptscriptstyle {\! L}}_n(0) \; = \; \Phi _{\delta ,1}(\widehat{Z}^{\scriptscriptstyle {\! L}}_n(0)), \end{aligned}$$
(43)

which, in particular, proves (40) for \(j=0\). Moreover, using \(\widehat{N}^{\scriptscriptstyle {\! L}}_n(-1,0)\le \widehat{Z}^{\scriptscriptstyle {\! L}}_n(-1)+c\) for some constant \(c\) depending only on \(\mathcal{C }\), the fact that \(W(x+c)-W(x)\) tends to \(0\) at infinity, and the fact that \(f\) satisfies (d) of Lemma 2.3, we deduce from (41) that

$$\begin{aligned} \lim _{n\rightarrow \infty } W(\widehat{N}^{\scriptscriptstyle {\! L}}_n(0,1))-W(\widehat{N}^{\scriptscriptstyle {\! L}}_n(-1,0))=\infty \qquad \mathbb{P }_\mathcal C \text{-a.s. } \end{aligned}$$

Since \(W\) is non-decreasing, this shows that (39) holds.

We now prove (40) by induction on \(j\). The same martingale argument as before shows that

$$\begin{aligned} W_{\varepsilon }(\widehat{N}^{\scriptscriptstyle { L}}_n(x,x+1))\equiv \sum _{k=0}^{n-1} \frac{\mathbf{1}_{\{\widehat{X}^{\scriptscriptstyle {\! L}}_k=x\;\mathrm{and }\;\widehat{X}^{\scriptscriptstyle {\! L}}_{k+1}=x- 1\}}}{w(\widehat{Z}^{\scriptscriptstyle {\! L}}_k(x- 1))}\qquad \text{ for }\, x\in [\![1,L-1]\!], \end{aligned}$$
(44)

where we recall the notation \(W_{\varepsilon } := W_\psi \) for \(\psi (x):=\varepsilon x\). Assume that (40) holds for \(j-1\in [\![0, L-2 ]\!]\) and fix \(\eta \in (1/2+\varepsilon ,1)\). If \(\widehat{N}^{\scriptscriptstyle {\! L}}_\infty (j-1,j)\) is finite, then \(\widehat{N}^{\scriptscriptstyle {\! L}}_\infty (j,j+1)\) is also finite and (40) holds for \(j\). Hence, we assume that \(\widehat{N}^{\scriptscriptstyle {\! L}}_\infty (j-1,j) \) and \(\widehat{N}^{\scriptscriptstyle {\! L}}_\infty (j,j+1)\) are both infinite. If \(j=1\), we get, using (43), that for \(n\) large enough,

$$\begin{aligned} \widehat{Z}^{\scriptscriptstyle { L}}_n(0)\;\ge \; \frac{(1+2\varepsilon )}{\eta }\widehat{N}^{\scriptscriptstyle {\! L}}_n(0,1) = (1+2\varepsilon )\Phi _{\eta ,1}^{-1}\big (\widehat{N}^{\scriptscriptstyle {\! L}}_n(0,1)\big ). \end{aligned}$$

On the other hand, if \(j>1\), recalling that \(\Phi _{\beta ,j}(\lambda t) \;\lesssim \; \Phi _{\alpha ,j}(t)\) for \(\alpha >\beta \) and \(\lambda >0\), we get, using the induction hypothesis with \(\eta ^{\prime }\in (1/2+\varepsilon , \eta )\),

$$\begin{aligned} \widehat{Z}^{\scriptscriptstyle { L}}_k(j- 1)\;\gtrsim \; \Phi _{\eta ^{\prime },j}^{-1}\big (\widehat{N}^{\scriptscriptstyle {\! L}}_k(j-1,j)\big )\;\gtrsim \; (1+2\varepsilon )\Phi _{\eta ,j}^{-1}\big (\widehat{N}^{\scriptscriptstyle {\! L}}_k(j-1,j)\big ). \end{aligned}$$

In any case, (44) gives, for any \(j\ge 1\),

$$\begin{aligned} W_{\varepsilon }(\widehat{N}^{\scriptscriptstyle {\! L}}_n(j,j+1))&\lesssim \sum _{k=0}^{n-1} \frac{\mathbf{1}_{\{\widehat{X}^{\scriptscriptstyle {\! L}}_k=j\;\mathrm{and }\;\widehat{X}^{\scriptscriptstyle {\! L}}_{k+1}=j- 1\}}}{w\big ((1+2\varepsilon )\Phi _{\eta ,j}^{-1}(\widehat{N}^{\scriptscriptstyle {\! L}}_k(j-1,j))\big )}\\&\lesssim \sum _{k=0}^{\widehat{N}^{\scriptscriptstyle {\! L}}_n(j-1,j)} \frac{1}{w\big ((1+2\varepsilon )\Phi _{\eta ,j}^{-1}(k)\big )}\\&\lesssim \frac{1}{1+\frac{3\varepsilon }{2}} W(\Phi _{\eta ,j+1}(\widehat{N}^{\scriptscriptstyle {\! L}}_n(j-1,j))), \end{aligned}$$

where we used the regular variation of \(w\) for the last inequality. Noticing also that \((1+\varepsilon )W_{\varepsilon }(x)\sim W(x)\) we get, for \(n\) large enough,

$$\begin{aligned} W(\widehat{N}^{\scriptscriptstyle {\! L}}_n(j,j+1))\le W(\Phi _{\eta ,j+1}(\widehat{N}^{\scriptscriptstyle {\! L}}_n(j-1,j)))\le W(\Phi _{\eta ,j+1}(\widehat{Z}^{\scriptscriptstyle {\! L}}_n(j))), \end{aligned}$$

which concludes the proof of the lemma. \(\square \)

Proof of Proposition 5.1

Before proving the proposition, we prove a similar statement for the reflected random walk \(\widehat{X}^{\scriptscriptstyle {\! L}}\). On the one hand, recalling that \(\varepsilon \) is chosen small enough such that \(\Phi _{1/2+2\varepsilon ,j_+(w)}\) is bounded, the previous lemma ensures that, for any \(L\), the reflected random walk \(\widehat{X}^{\scriptscriptstyle {\! L}}\) visits site \(j_+(w)\) only finitely many times a.s. On the other hand, denoting by \(\widetilde{X}^{\scriptscriptstyle {\! L}}\) the walk \(\widetilde{X}\) restricted to \([\![-1,L]\!]\) (reflected at \(L\)), it is straightforward that \(\widetilde{X}^{\scriptscriptstyle {\! L}}\prec \widehat{X}^{\scriptscriptstyle {\! L}}\). Copying the proof of Proposition 4.1, we find that, for \(L\ge j_-(w)-1\), \(\widetilde{X}^{\scriptscriptstyle {\! L}}\) visits a.s. all sites of the interval \([\![-1, j_-(w)-1]\!]\) infinitely often. Thus, according to Corollary 3.4, the walk \(\widehat{X}^{\scriptscriptstyle {\! L}}\) also visits a.s. all sites of the interval \([\![-1,j_-(w)-1]\!]\) infinitely often.

Now fix \(L\) to be the largest integer such that the walk \(\widehat{X}^{\scriptscriptstyle {\! L}}\) satisfies

$$\begin{aligned} \mathbb{P }_\mathcal{C }\{\widehat{Z}^{\scriptscriptstyle {\! L}}_\infty (L-1)=\infty \}>0 \qquad \text{ and } \qquad \mathbb{P }_\mathcal{C }\{\widehat{Z}^{\scriptscriptstyle {\! L}}_\infty (L)=\infty \}=0. \end{aligned}$$
(45)

Noticing that \(\widehat{X}^{\scriptscriptstyle {\!L-1}}\prec \widehat{X}^{\scriptscriptstyle {\! L}}\), it follows from the previous observations that \(L\) is well defined with \(L\in \{j_-(w), j_+(w)\}\) (the index \(L\) can, a priori, depend on \(\mathcal{C }\)). We prove that, if the initial state \(\mathcal{C }= (z(x),n(x,x+1))_{x\in \mathbb{Z }}\) satisfies

$$\begin{aligned} z(x)\le (1+\varepsilon )n(x-1,x) \quad \text{ for }\, 1\le x\le j_+(w), \end{aligned}$$
(46)

then

$$\begin{aligned} \lim _{M\rightarrow \infty } \mathbb{P }_\mathcal{C }\big \{\widehat{\mathcal{E }}^{\scriptscriptstyle {\! L}}(L,M)\cap \{\forall m&\ge M,\; \widehat{N}^{\scriptscriptstyle {\! L}}_{m}(0,1)\ge \widehat{N}^{\scriptscriptstyle {\! L}}_{m}(-1,0)\}\big \}\nonumber \\&\ge \mathbb{P }_\mathcal{C }\{\widehat{Z}^{\scriptscriptstyle {\! L}}_\infty (L-1)=\infty \}>0, \end{aligned}$$
(47)

where the event \(\widehat{\mathcal{E }}^{\scriptscriptstyle {\! L}}(L,M)\) is defined in the same way as \(\widehat{\mathcal{E }}(L,M)\) with \(\widehat{X}^{\scriptscriptstyle {\! L}}\) in place of \(\widehat{X}\). Indeed, the previous lemma yields

$$\begin{aligned} \lim _{M\rightarrow \infty } \mathbb{P }_\mathcal{C }\{ \forall m\ge M,\; \widehat{N}^{\scriptscriptstyle {\! L}}_{m}(0,1)\ge \widehat{N}^{\scriptscriptstyle {\! L}}_{m}(-1,0)\}=1. \end{aligned}$$
(48)

Moreover, in view of (46), for any \(n\ge 0\) we have

$$\begin{aligned} \widehat{Z}^{\scriptscriptstyle {\! L}}_n(L)\le (1+\varepsilon )\widehat{N}^{\scriptscriptstyle {\! L}}_n(L-1,L). \end{aligned}$$
(49)

Notice also that, for \(j\ge 1\) and \(\gamma > 1/2 + \varepsilon \),

$$\begin{aligned} \widehat{Z}^{\scriptscriptstyle {\! L}}_n(j)\;\lesssim \; \widehat{N}^{\scriptscriptstyle {\! L}}_n(j-1,j)+\widehat{N}^{\scriptscriptstyle {\! L}}_n(j,j+1)\;\lesssim \; \widehat{N}^{\scriptscriptstyle {\! L}}_n(j-1,j)+\Phi _{\gamma ,j+1}(\widehat{Z}^{\scriptscriptstyle {\! L}}_n(j)), \end{aligned}$$

where we used Lemma 5.2 for the upper bound. Since \(\Phi _{\gamma ,j+1}(x)=o(x)\), it follows that, on the event \(\{ \widehat{Z}^{\scriptscriptstyle {\! L}}_\infty (j)=\infty \}\),

$$\begin{aligned} \widehat{Z}^{\scriptscriptstyle {\! L}}_n(j)\le (1+\varepsilon )\widehat{N}^{\scriptscriptstyle {\! L}}_n(j-1,j) \quad \text{ for } \,n \,\text{ large } \text{ enough }. \end{aligned}$$
(50)

This bound can be improved for \(j=1\). More precisely, for \(\gamma \in (1/2+\varepsilon ,1/2+2\varepsilon )\) and \(n\) large enough, we have

$$\begin{aligned} \widehat{Z}^{\scriptscriptstyle {\! L}}_n(1)&\le \widehat{N}^{\scriptscriptstyle {\! L}}_n(0,1)+\Phi _{\gamma ,2}(\widehat{Z}^{\scriptscriptstyle {\! L}}_n(1))\nonumber \\&\le \widehat{N}^{\scriptscriptstyle {\! L}}_n(0,1)+\Phi _{\gamma ,2}((1+\varepsilon )\widehat{N}^{\scriptscriptstyle {\! L}}_n(0,1))\nonumber \\&\le \widehat{N}^{\scriptscriptstyle {\! L}}_n(0,1)+\Phi _{1/2+2\varepsilon ,2}(\widehat{N}^{\scriptscriptstyle {\! L}}_n(0,1)) \nonumber \\&\le \widehat{N}^{\scriptscriptstyle {\!L}}_n(0,1)+f(\widehat{N}^{\scriptscriptstyle {\!L}}_n(0,1)), \end{aligned}$$
(51)

where we used Lemma 2.5 for the third inequality and the fact that \(f\) satisfies (a) of Lemma 2.3 with \(\eta = 1/2 + 2\varepsilon \) for the last inequality. Putting (45), (49), (50) and (51) together, we conclude that

$$\begin{aligned} \{\widehat{Z}^{\scriptscriptstyle {\! L}}_\infty (L-1)=\infty \}\subset \bigcup _{M\ge 0}\widehat{\mathcal{E }}^{\scriptscriptstyle {\! L}}(L,M). \end{aligned}$$

This, combined with (48), proves (47).

Still assuming that the initial state \(\mathcal{C }\) satisfies (46), it follows from (47) that there exists \(M\) such that \(\widehat{\mathcal{E }}^{\scriptscriptstyle {\! L}}(L,M)\) has positive probability under \(\mathbb{P }_{\mathcal{C }}\). On this event, the reflected walk \(\widehat{X}^{\scriptscriptstyle {\! L}}\) visits site \(L\) finitely many times and thus

$$\begin{aligned} \mathbb{P }_\mathcal{C }\{\widehat{\mathcal{E }}^{\scriptscriptstyle {\! L}}(L,M)\cap \{\widehat{X}^{\scriptscriptstyle {\! L}} \text{ coincides } \text{ with } \widehat{X} \text{ forever }\}\}>0, \end{aligned}$$

which yields

$$\begin{aligned} \mathbb{P }_\mathcal{C }\{\widehat{\mathcal{E }}(j_+(w),M)\}\ge \mathbb{P }_\mathcal{C }\{\widehat{\mathcal{E }}(L,M)\}>0. \end{aligned}$$

This proves the first part of the proposition under Assumption (46). In order to treat the general case, we simply notice that, from any initial state, the walk has a positive probability of reaching a state satisfying (46).

It remains to prove the second part of the proposition. Let \(L_0\) be the index \(L\) defined in (45) associated with the trivial initial state. Recalling that a state is reachable if and only if it can be created from the trivial state by an excursion of a walk away from \(0\), we deduce from (47) that there exists a reachable state \(\mathcal{C }\) equal to zero outside the interval \([\![-1, L_0]\!]\) such that

$$\begin{aligned} \mathbb{P }_\mathcal{C }\big \{\widehat{\mathcal{E }}^{\scriptscriptstyle {\! L_0}}(L_0,0)\cap \{\forall m\ge 0,\; \widehat{N}^{\scriptscriptstyle {\! L_0}}_{m}(0,1)\ge \widehat{N}^{\scriptscriptstyle {\! L_0}}_{m}(-1,0)\}\big \}>0. \end{aligned}$$
(52)

Moreover, we have

$$\begin{aligned}&\lim _{n\rightarrow \infty }\mathbb{P }_\mathcal{C }\big \{\widehat{\mathcal{E }}^{\scriptscriptstyle {\! L_0}}(L_0,0)\cap \{\forall m\ge 0,\; \widehat{N}^{\scriptscriptstyle {\! L_0}}_{m}(0,1)\ge \widehat{N}^{\scriptscriptstyle {\! L_0}}_{m}(-1,0)\}\;|\; \widehat{\mathcal{F }}^{\scriptscriptstyle {\! L_0}}_n\big \}\\&\quad =\mathbf{1}_{\widehat{\mathcal{E }}^{\scriptscriptstyle {\! L_0}}(L_0,0)\cap \{\forall m\ge 0,\; \widehat{N}^{\scriptscriptstyle {\! L_0}}_{m}(0,1)\ge \widehat{N}^{\scriptscriptstyle {\! L_0}}_{m}(-1,0)\}} \quad \mathbb{P }_{\mathcal{C }}\text{-a.s. } \end{aligned}$$

Hence, there exists a reachable state \(\mathcal{C }^{\prime }=(z^{\prime }(x),n^{\prime }(x,x+1))_{x\in \mathbb{Z }}\) equal to zero outside the interval \([\![-1, L_0]\!]\) such that

$$\begin{aligned} \mathbb{P }_\mathcal{C ^{\prime }}\big \{\widehat{\mathcal{E }}^{\scriptscriptstyle {\! L_0}}(L_0,0)\cap \{\forall m\ge 0,\; \widehat{N}^{\scriptscriptstyle {\! L_0}}_{m}(0,1)\ge \widehat{N}^{\scriptscriptstyle {\! L_0}}_{m}(-1,0)\}\big \}>\frac{3}{4}. \end{aligned}$$
(53)

In particular, \(\mathcal{C }^{\prime }\) satisfies the hypotheses of the proposition. Finally, on the event \(\widehat{\mathcal{E }}^{\scriptscriptstyle {\! L_0}}(L_0,0)\), the reflected walk \(\widehat{X}^{\scriptscriptstyle {\! L_0}}\) and \(\widehat{X}\) coincide forever since they never visit site \(L_0\). We conclude that

$$\begin{aligned} \mathbb{P }_\mathcal{C ^{\prime }}\{\widehat{\mathcal{E }}(j_+(w),0)\}\ge \mathbb{P }_\mathcal{C ^{\prime }}\{\widehat{\mathcal{E }}(L_0,0)\}>3/4. \end{aligned}$$

\(\square \)

6 The walk \(\bar{X}\)

Gathering the results concerning \(\widetilde{X}\) and \(\widehat{X}\) obtained in Sects. 4 and 5, we can now describe the asymptotic behaviour of the reflected VRRW \(\bar{X}\) on the half line. The following proposition is the counterpart of Theorem 1.3 for \(\bar{X}\) instead of \(X\).

Proposition 6.1

Let \(\mathcal{C }\) be a finite state. Under \(\mathbb{P }_\mathcal{C }\), the following equivalences hold

$$\begin{aligned} j_{\pm }(w)<\infty \Longleftrightarrow \bar{X} \text{ localizes } \text{ with } \text{ positive } \text{ probability } \Longleftrightarrow \bar{X} \text{ localizes } \text{ a.s. } \end{aligned}$$

Moreover, if the indexes \(j_{\pm }(w)\) are finite, we have

$$\begin{aligned}&(i)&\mathbb{P }_\mathcal C \{|\bar{R}| \le j_-(w)\}=0, \\&(ii)&\mathbb{P }_\mathcal C \{|\bar{R}| \le j_+(w)+1\}>0. \end{aligned}$$

Proof

The combination of Lemma 3.7 and Proposition 5.1 implies that, with positive \(\mathbb{P }_{\mathcal{C }}\)-probability, the walk \(\bar{X}\) ultimately stays confined in the interval \([\![-1, j_+(w)-1 ]\!]\). In particular, (ii) holds. Let \(j\ge 1\) be such that

$$\begin{aligned} \mathbb{P }_\mathcal C \{0<|\bar{R}| \le j\}>0. \end{aligned}$$

This means that we can find a finite state \(\mathcal{C }^{\prime }\) such that

$$\begin{aligned} \mathbb{P }_\mathcal{C ^{\prime }}\big \{\{-1\} \subset \bar{R}\subset [\![-1, j-2]\!]\big \}>0. \end{aligned}$$

The combination of Corollary 3.4, Proposition 3.6 and Proposition 4.1 now implies that \(j\ge j_-(w)+1\). Therefore (i) holds. Furthermore, the same argument shows that, if \(j_-(w)=\infty \), then necessarily \(j=\infty \), which means that the walk does not localize. Hence, we have shown that

$$\begin{aligned} j_{\pm }(w)<\infty \Longleftrightarrow \bar{X} \text{ localizes } \text{ with } \text{ positive } \text{ probability. } \end{aligned}$$

It remains to prove that localization is, in fact, an almost sure property. Assume that \(j_\pm (w) < \infty \) and pick \(M\ge 0\) large enough such that, starting from the trivial environment, the reflected VRRW has a positive probability of never visiting \(M\). Given the finite state \(\mathcal{C }\), we choose \(x_0\ge -1\) such that all the local times of \(\mathcal{C }\) are zero on \([\![x_0,+\infty [\![\). Furthermore, for \(m\ge 1\), set \(x_m:=M m +x_0\) and

$$\begin{aligned} \tau _m:=\inf \{n\ge 0\; : \; \bar{X}_n=x_m\}. \end{aligned}$$

Conditionally on \(\tau _m < \infty \), the process \((\bar{X}_{\tau _m + n} - x_m)_{n\ge 0}\) is a reflected VRRW on \([\![-x_m-1,\infty [\![\) starting from a (random) finite initial state whose local times are zero for \(x\ge 0\). Comparing this walk with the reflected VRRW \(\bar{X}\) on \([\![-1,\infty [\![\) starting from the trivial state, it follows from Corollary 3.4 that

$$\begin{aligned} \mathbb{P }_\mathcal C \{\tau _{m+1}=\infty \, | \;\tau _m<\infty \}\ge \mathbb{P }_0\{\bar{X} \text{ never } \text{ visits } M\}>0, \end{aligned}$$

which proves that \(\bar{X}\) localizes a.s. \(\square \)
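To make the last step explicit: writing \(p:=\mathbb{P }_0\{\bar{X} \text{ never visits } M\}>0\), the displayed bound gives, by induction on \(m\),

$$\begin{aligned} \mathbb{P }_\mathcal{C }\{\tau _m<\infty \}\le (1-p)^{m-1}\;\underset{m\rightarrow \infty }{\longrightarrow }\;0, \end{aligned}$$

hence, almost surely, some \(\tau _m\) is infinite and \(\bar{X}\) ultimately remains in a bounded interval, i.e. localizes.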

The following technical lemma will be useful later to show that the non-reflected VRRW localizes with positive probability on a set of cardinality at least \(2j_-(w) - 1\).

Lemma 6.2

Assume that \(j_+(w)<\infty \). Then, there exists a reachable initial state \(\mathcal C \) which is symmetric, i.e. satisfies \(z(x)= z(-x)\) and \(n(x,x+1)=n(-x-1,-x)\) for all \(x\ge 0\), such that

$$\begin{aligned} \mathbb{P }_\mathcal{C }\left\{ \{\bar{R}\subset [\![-1, j_+(w)-1 ]\!]\}\cap \Big \{\limsup _{n\rightarrow \infty } \frac{\bar{Z}_{\bar{\sigma }(0,n)}(1)}{\bar{Z}_{\bar{\sigma }(0,n)}(-1)}\le 1\Big \}\right\} >3/4, \end{aligned}$$

recalling the notation \(\bar{\sigma }(0,n):=\inf \{k\ge 0\; :\; \bar{Z}_k(0)=n\}.\)

Proof

Since we are dealing with the reflected random walk \(\bar{X}\), the value of the state on \(]\!]-\infty ,-2]\!]\) is irrelevant, so the symmetry assumption is not really restrictive apart from the edge/site local times at \(-1\) and \(1\). Moreover, according to the previous proposition and the fact that \(j_+(w)\le j_-(w)+1\), it follows that, on the event \(\{\bar{R}\subset [\![-1, j_+(w)-1 ]\!]\}\), the walk \(\bar{X}\) returns to \(0\) infinitely often. Hence all the hitting times \(\bar{\sigma }(0,n)\) are finite and, in particular, the \(\limsup \) in the lemma is well-defined.

According to Proposition 5.1, there exists a reachable state \(\mathcal C ^{\prime }=(z^{\prime }(x),n^{\prime }(x,x+1))_{x\in \mathbb{Z }}\) which is zero outside of the interval \([\![-1, j_+(w) ]\!]\) such that \(n^{\prime }(0,1)\ge n^{\prime }(-1,0)\) and for which (38) holds, namely

$$\begin{aligned} \mathbb{P }_\mathcal{C ^{\prime }}\{\widehat{\mathcal{E }}(j_+(w),0)\}>3/4. \end{aligned}$$

Recall that \(\widehat{\mathcal{E }}\) is the “good event” for the modified reinforced walk \(\widehat{X}\) defined by (31). On \(\widehat{\mathcal{E }}(j_+(w),0)\), by definition, we have \(\widehat{Z}_n(1)\le \widehat{N}_n(0,1)+f(\widehat{N}_n(0,1))\). Recalling that \(f(x)=o(x)\) (cf. (b) of Lemma 2.3), we get \(\widehat{Z}_n(1)\sim \widehat{N}_n(0,1)\). Moreover, on this event, the walk \(\widehat{X}\) coincides with the reflected walk \(\widehat{X}^{\scriptscriptstyle {\! j_+(w)}}\) on \([\![-1, j_+(w)]\!]\). In particular, it follows from (42) that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{\widehat{Z}_n(1)}{\widehat{Z}_n(-1)}=1 \qquad \mathbb{P }_\mathcal{C ^{\prime }}\text{-a.s. } \text{ on } \text{ the } \text{ event }\, \widehat{\mathcal{E }}(j_+(w),0). \end{aligned}$$
(54)

Since \(\bar{X} \prec \widehat{X}\) on \(\widehat{\mathcal{E }}(j_+(w),0)\), Lemma 3.7 combined with (54) and Proposition 4.1 yields

$$\begin{aligned} \widehat{\mathcal{E }}(j_+(w),0)\subset \left\{ \{\bar{R}\subset [\![-1, j_+(w)-1 ]\!]\} \cap \Big \{\limsup _{n\rightarrow \infty } \frac{\bar{Z}_{\bar{\sigma }(0,n)}(1)}{\bar{Z}_{\bar{\sigma }(0,n)}(-1)}\le 1\Big \}\right\} . \end{aligned}$$
(55)

Consider now the reachable state \(\mathcal C =(z(x),n(x,x+1),x\in \mathbb{Z })\) obtained by symmetrizing \(\mathcal{C }^{\prime }\), i.e.

$$\begin{aligned} n(x,x+1)&= \left\{ \begin{array}{ll} n^{\prime }(x,x+1) &{} \text{ if } x\ge 0 \\ n^{\prime }(-x-1,-x) &{} \text{ if } x< 0 \\ \end{array}\right. \\ z(x)&= n(x,x+1) + n(x-1,x). \end{aligned}$$

With this definition, we have \(z(x)=z^{\prime }(x)\) for \(x\ge 1\) (recall that \(\mathcal{C }^{\prime }\) is reachable) and since \(n^{\prime }(0,1)\ge n^{\prime }(-1,0)\), we also have \(z(0)\ge z^{\prime }(0)\) and \(z(-1)\ge z^{\prime }(-1)\). Now set \(v(x):=z(x)-z^{\prime }(x)\) for \(x\ge -1\). Defining a reflected walk \(\check{X}\) on \([\![-1,\infty [\![\) with transition probabilities given for \(x\ge 0\) by

$$\begin{aligned} \mathbb{P }_\mathcal{C^{\prime } }\{\check{X}_{n+1}=x-1\; |\; \mathcal{\check{F} }_n,\check{X}_n=x\}= \frac{w(\check{Z}_n(x-1)+v(x-1))}{w(\check{Z}_n(x-1)+v(x-1)) +w(\check{Z}_n(x+1))}, \end{aligned}$$

it is clear that \(\check{X}\) under \(\mathbb{P }_\mathcal{C ^{\prime }}\) has the same law as \(\bar{X}\) under \(\mathbb{P }_\mathcal C \). Besides, since \(v(-1),v(0) \ge 0\) and \(v(x) = 0\) for \(x\ge 1\), a comparison of the transition probabilities shows that \(\check{X}\prec \bar{X}\) under \(\mathbb{P }_\mathcal{C ^{\prime }}\). Using Lemma 3.3, Corollary 3.4 and (55), we conclude that

$$\begin{aligned}&\mathbb{P }_\mathcal{C }\left\{ \{\bar{R}\subset [\![-1, j_+(w)-1 ]\!]\}\cap \Big \{\limsup _{n\rightarrow \infty } \frac{\bar{Z}_{\bar{\sigma }(0,n)}(1)}{\bar{Z}_{\bar{\sigma }(0,n)}(-1)}\le 1\Big \}\right\} \\&\quad = \mathbb{P }_\mathcal{C ^{\prime }}\left\{ \{\check{R}\subset [\![-1, j_+(w)-1 ]\!]\}\cap \Big \{\limsup _{n\rightarrow \infty } \frac{\check{Z}_{\check{\sigma }(0,n)}(1)}{\check{Z}_{\check{\sigma }(0,n)}(-1)}\le 1\Big \} \right\} \\&\quad \ge \mathbb{P }_\mathcal{C ^{\prime }}\left\{ \{\bar{R}\subset [\![-1, j_+(w)-1 ]\!]\}\cap \Big \{\limsup _{n\rightarrow \infty } \frac{\bar{Z}_{\bar{\sigma }(0,n)}(1)}{\bar{Z}_{\bar{\sigma }(0,n)}(-1)}\le 1\Big \}\right\} \\&\quad \ge \mathbb{P }_\mathcal{C ^{\prime }}\{\widehat{\mathcal{E }}(j_+(w),0)\}>3/4. \end{aligned}$$

\(\square \)
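As a purely illustrative aside (not part of the proof), the dynamics of the reflected walk \(\bar{X}\) are straightforward to simulate from the transition rule recalled in the introduction. In the sketch below, the function name and the sublinear weight \(w(n)=(n+1)^{0.8}\) are our own choices for illustration; the code tracks the site local times of a walk reflected at \(-1\):

```python
import random

def simulate_reflected_vrrw(w, steps, seed=0):
    """Vertex reinforced random walk reflected at -1: from x >= 0, jump to
    x-1 with probability w(Z(x-1)) / (w(Z(x-1)) + w(Z(x+1))), otherwise to
    x+1; from -1, always jump back to 0."""
    rng = random.Random(seed)
    x = 0
    Z = {0: 1}                      # site local times, counting the start
    for _ in range(steps):
        if x == -1:
            x = 0                   # deterministic reflection at -1
        else:
            left = w(Z.get(x - 1, 0))
            right = w(Z.get(x + 1, 0))
            x += -1 if rng.random() < left / (left + right) else 1
        Z[x] = Z.get(x, 0) + 1
    return Z

# Sublinear weight, regularly varying with index 0.8 (our choice).
Z = simulate_reflected_vrrw(lambda n: (n + 1) ** 0.8, 100_000)
print("visited range:", min(Z), "..", max(Z))
```

Such an experiment only illustrates the mechanics of the model; the localization phenomena studied here concern weights of index \(1\) and require the quantitative estimates developed in the previous sections.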

7 The VRRW \(X\): proof of Theorem 1.3

We now have all the ingredients needed to prove Theorem 1.3, whose statement we recall below (remember that \(i_\pm (w) = j_\pm (w) -1\) according to Proposition 2.6).

Theorem 7.1

Let \(X\) be a VRRW on \(\mathbb{Z }\) with weight \(w\) satisfying Assumption 1.1. We have

$$\begin{aligned} j_{\pm }(w)<\infty \Longleftrightarrow \ X \text{ localizes } \text{ with } \text{ positive } \text{ probability } \Longleftrightarrow \ X \text{ localizes } \text{ a.s. }\qquad \end{aligned}$$
(56)

Moreover, when localization occurs (i.e. \(j_\pm (w)<\infty \)) we have

$$\begin{aligned}&(i)&\mathbb{P }_0\{ j_-(w) <|R| < \infty \}=1\end{aligned}$$
(57)
$$\begin{aligned}&(ii)&\mathbb{P }_0\big \{ 2j_-(w)-1 \le |R|\le 2j_+(w)-1 \big \}>0. \end{aligned}$$
(58)

Proof

It follows directly from the definitions of the VRRW and its reflected counterpart that \(X \prec \bar{X}\). On the other hand, when \(j_{\pm }(w)<\infty \), Proposition 6.1 states that \(\bar{X}\) localizes a.s. which, in view of Corollary 3.4, implies \(\sup _{n}X_n \le \sup _n\bar{X}_n < \infty \) a.s. By symmetry, we conclude that \(X\) localizes a.s. Conversely, if \(X\) localizes with positive probability, then there exists a finite state \(\mathcal C \) such that

$$\begin{aligned} \mathbb{P }_\mathcal{C }\{X \text{ localizes } \text{ and } \text{ never } \text{ visits } \text{ site } \text{-1 }\}>0. \end{aligned}$$

On this event, \(\bar{X}\) coincides with \(X\), thus \(\mathbb{P }_\mathcal{C }\{\bar{X} \text{ localizes }\}>0\). Proposition 6.1 now implies that \(j_{\pm }(w)<\infty \) which concludes the proof of (56).

We now prove (57). Assume \(j_\pm (w)<\infty \) so that \(R\) is finite and non-empty. Suppose, by contradiction, that \(\mathbb{P }_0\{1\le |R| \le j_-(w) \}>0\). Then there exists a finite state \(\mathcal C \) such that

$$\begin{aligned} \mathbb{P }_\mathcal C \{ X \text{ never } \text{ exits } \text{ the } \text{ interval } [\![-1,j_-(w)-2]\!]\}>0. \end{aligned}$$

On this event, the walks \(X\) and \(\bar{X}\) coincide. In particular, we get \(\mathbb{P }_\mathcal{C }\{ |\bar{R}| \le j_-(w) \}>0\) which contradicts Proposition 6.1.

It remains to establish (58). According to Lemma 6.2, we can find a symmetric reachable initial state \(\mathcal C \) such that

$$\begin{aligned} \mathbb{P }_\mathcal{C }\left\{ \left\{ \bar{R}\subset [\![-1, j_+(w)-1 ]\!]\right\} \cap \left\{ \limsup _{n\rightarrow \infty } \frac{\bar{Z}_{\bar{\sigma }(0,n)}(1)}{\bar{Z}_{\bar{\sigma }(0,n)}(-1)}\le 1\right\} \right\} >3/4. \end{aligned}$$

Using again \(X\prec \bar{X}\) together with Proposition 3.3 and Corollary 3.4, we get

$$\begin{aligned} \mathbb{P }_\mathcal{C }\left\{ \left\{ R\subset \,]\!]-\infty , j_+(w)-1 ]\!]\right\} \cap \left\{ \left\{ \limsup _{n\rightarrow \infty } \frac{Z_{\sigma (0,n)}(1)}{Z_{\sigma (0,n)}(-1)}\le 1\right\} \cup \left\{ Z_\infty (0)<\infty \right\} \right\} \right\} >3/4. \end{aligned}$$

The state \(\mathcal C \) being symmetric, we also have

$$\begin{aligned} \mathbb{P }_\mathcal{C }\left\{ \left\{ R\subset [\![-j_+(w)+1,\infty [\![\right\} \cap \left\{ \left\{ \limsup _{n\rightarrow \infty } \frac{Z_{\sigma (0,n)}(-1)}{Z_{\sigma (0,n)}(1)}\le 1\right\} \cup \left\{ Z_\infty (0)<\infty \right\} \right\} \right\} >3/4. \end{aligned}$$

Hence

$$\begin{aligned} \mathbb{P }_\mathcal{C }\left\{ \left\{ R\subset [\![-j_+(w)+1, j_+(w)-1 ]\!]\right\} \cap \left\{ \lim _{n\rightarrow \infty } \frac{Z_{\sigma (0,n)}(-1)}{Z_{\sigma (0,n)}(1)}= 1\right\} \right\} >1/2, \end{aligned}$$
(59)

where we used that, on the event \(\{R\subset [\![-j_+(w)+1, j_+(w)-1 ]\!]\} \), the walk \(X\) visits the origin infinitely often since it cannot localize on fewer than \(j_-(w) + 1 \ge j_+(w)\) sites. The state \(\mathcal{C }\) being reachable, we immediately deduce that

$$\begin{aligned} \mathbb{P }_0\{ 1\le |R| \le 2j_+(w)-1 \}>0. \end{aligned}$$

Next, for \(\gamma \in (0,1/2)\), define

$$\begin{aligned}&\mathcal G _\gamma :=\left\{ R\subset [\![-j_+(w)+1, j_+(w)-1 ]\!]\right\} \\&\quad \cap \left\{ \forall n\ge 0,\; \gamma \le \frac{w\left( Z_{\sigma (0,n)}(1)\right) }{ w\left( Z_{\sigma (0,n)}(-1)\right) + w\left( Z_{\sigma (0,n)}(1)\right) }\le 1-\gamma \right\} . \end{aligned}$$

Since the weight function \(w\) is regularly varying, it follows from (59) that, for any given \(\gamma \), there exists a reachable configuration \(\mathcal C ^{\prime }\) such that \(\mathbb{P }_{\mathcal{C }^{\prime }}\{\mathcal{G }_\gamma \}>0\). Thus, it suffices to prove that, for \(\gamma \) close enough to \(1/2\), we have

$$\begin{aligned} \mathcal G _\gamma \subset \{ 2j_-(w)-1\le |R| \le 2j_+(w)-1\}\quad \mathbb{P }_{\mathcal{C }^{\prime }}\text{-a.s. } \end{aligned}$$
(60)

To this end, we introduce the walk \(\breve{X}\) on \([\![0,\infty [\![\) with the same transition probabilities as the walk \(\widetilde{X}\) studied in Sect. 4 except at site \(x=0\) where we define

$$\begin{aligned} \mathbb{P }_{\mathcal{C }^{\prime }}\{\breve{X}_{n+1}=1 \;| \;\breve{\mathcal{F }}_n, \breve{X}_n=0\}=1-\mathbb{P }_{\mathcal{C }^{\prime }}\{\breve{X}_{n+1}=0 \;|\; \breve{\mathcal{F }}_n, \breve{X}_n=0\}=\gamma \end{aligned}$$

(i.e. when this walk visits 0, it has a positive probability of staying at the origin at the next step). Using exactly the same arguments as in Proposition 4.1, we see that \(\breve{X}\) localizes a.s. under \(\mathbb{P }_{\mathcal{C }^{\prime }}\) and that the bounds (33) obtained for \(\widetilde{X}\) give similar estimates for \(\breve{X}\): for \(j\ge 1\) and \(\alpha \in (0,\gamma )\),

$$\begin{aligned} \Phi _{\gamma -\alpha ,j}(\breve{Z}_k(j-1)) \lesssim \breve{N}_k(j-1,j) \lesssim \Phi _{\gamma +\alpha ,j}(\breve{Z}_k(j-1)) \qquad \mathbb{P }_{\mathcal{C }^{\prime }}\text{-a.s. } \end{aligned}$$

Thus, we can now choose \(\gamma \) close enough to \(1/2\) such that \(j_{\gamma -\alpha }(w)=j_-(w)\) for some \(\alpha >0\). The previous estimate implies, by induction, that the localization set of \(\breve{X}\) satisfies

$$\begin{aligned}{}[\![0,j_-(w)-1]\!]\subset \breve{R}\qquad \mathbb{P }_{\mathcal{C }^{\prime }}\text{-a.s. } \end{aligned}$$
(61)

Finally, consider the walk \(X^+\) on \([\![0,\infty [\![\) obtained from \(X\) by keeping only its excursions on the half-line \([\![0,+\infty [\![\) i.e.

$$\begin{aligned} X^+_n := X_{\zeta _n}, \end{aligned}$$

where \(\zeta _0 := 0\) and \(\zeta _{n+1} := \inf \{k>\zeta _n\; : \; X_k\ge 0\}\). On the event \(\mathcal G _\gamma \), the random variables \(\zeta _n\) are all finite. Recalling the construction, described in Sect. 3, of the VRRW \(X\) from a sequence \((U_i^x,x \in \mathbb{Z },i\ge 1)\) of i.i.d. uniform random variables, we see that, on \(\mathcal G _\gamma \), we have

$$\begin{aligned} U_n^0\ge 1-\gamma \quad \Longrightarrow \quad X^+_{\sigma ^+(0,n)+1}=X^+_{\sigma ^+(0,n)}+1=1 \end{aligned}$$

(for \(n\) larger than the initial local time at \(0\)).

We also construct \(\breve{X}\) from the same random variables \((U_i^x)\) (this walk is not nearest-neighbour at \(0\), so we set \(\breve{X}_{\breve{\sigma }(0,n)+1}=1\) if \(U_n^0\ge 1-\gamma \) and \(\breve{X}_{\breve{\sigma }(0,n)+1}=0\) otherwise). It then follows from the previous remark that \(\breve{X}\prec X^+\) on \(\mathcal G _\gamma \). Using Corollary 3.4 one last time together with (61), we deduce that

$$\begin{aligned} \mathcal G _\gamma \subset \{X \text{ visits } j_-(w)-1 \text{ i.o. }\}\qquad \mathbb{P }_{\mathcal{C }^{\prime }}\text{-a.s. } \end{aligned}$$

By invariance of the event \(\mathcal G _\gamma \) under the space reversal \(x \mapsto -x\), we conclude that

$$\begin{aligned} \mathcal G _\gamma \subset \{X \text{ visits } j_-(w)-1 \text{ and } -(j_-(w)-1) \text{ i.o. }\}\qquad \mathbb{P }_{\mathcal{C }^{\prime }}\text{-a.s. }, \end{aligned}$$

hence (60) holds. \(\square \)
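The monotone couplings invoked throughout this proof (such as \(\breve{X}\prec X^+\)) all rest on driving two walks with the same uniform random variables: when one walk's probability of stepping right dominates the other's at every decision, the shared uniform forces the dominated walk to step right only when the dominating one does. A toy version with constant step probabilities (our simplification; the construction of Sect. 3 indexes the uniforms by site and visit number) can be sketched as follows:

```python
import random

def coupled_walks(p, q, steps, seed=1):
    """Drive two nearest-neighbour walks on Z with the SAME uniforms:
    walk P steps right with probability p, walk Q with probability q >= p.
    Then Q stays (weakly) above P at every time."""
    assert p <= q
    rng = random.Random(seed)
    xp = xq = 0
    for _ in range(steps):
        u = rng.random()            # one shared uniform per step
        xp += 1 if u < p else -1
        xq += 1 if u < q else -1
        assert xq >= xp             # pathwise domination
    return xp, xq

xp, xq = coupled_walks(0.4, 0.6, 10_000)
```

In the paper the right-step probabilities are random (they depend on the local times), but the same shared-uniform argument is what yields the order \(\breve{X}\prec X^+\) on \(\mathcal G _\gamma \).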

8 Asymptotic local time profile

Although Theorem 1.3 is only concerned with the size of the localization set, looking back at the proof, we see that, with little additional work, we can also describe an asymptotic local time profile of the walk (although we cannot prove that other asymptotics do not occur). Let us give a rough idea of how to proceed, leaving out the cumbersome details. To simplify the discussion, assume that \(i_\pm (w)\) are finite and that both indices are equal. Hence, the VRRW \(X\) localizes with positive probability on the interval \([\![-i_\pm (w),i_\pm (w)]\!]\). Looking at the proof of (58), we see that, with positive probability, the urn at the center of the interval is balanced, i.e.

$$\begin{aligned} Z_n(-1)\sim Z_n(1)\sim \frac{Z_n(0)}{2}. \end{aligned}$$
(62)

This tells us that, with positive probability, as \(n\) tends to infinity, the local times \(Z_n(1), Z_n(2),\ldots ,Z_n(i_\pm )\) and \(Z_n(-1), Z_n(-2),\ldots ,Z_n(-i_\pm )\) are of the same order of magnitude as \(\bar{Z}_n(1),\bar{Z}_n(2),\ldots ,\bar{Z}_n(i_\pm )\) for the reflected random walk \(\bar{X}\) on \([\![-1,\infty [\![\). Furthermore, recalling that \(\widetilde{X} \prec \bar{X} \prec \widehat{X}\) on \(\widehat{\mathcal{E }}(i_\pm (w)+1,0)\), we can use (33) and (40) to estimate the local times of \(\bar{X}\), which therefore also provides asymptotics for the local times of the non-reflected walk \(X\). More precisely, given a family of functions \((\chi _\eta (x),\eta \in (0,1))\), introduce the notation

$$\begin{aligned} f(x)\asymp \chi _{\eta _0}(x) \quad \text{ if } \quad \chi _{\eta _0-\varepsilon }(x) \le f(x) \le \chi _{\eta _0+\varepsilon }(x)\quad \text{ for } \text{ all } \varepsilon >0 \text{ and } x \text{ large } \text{ enough }. \end{aligned}$$

Then, one can prove that, with positive probability, the VRRW localizes on \([\![-i_\pm (w), i_\pm (w)]\!]\) in such a way that (62) holds and that, for every \(1\le i\le i_\pm (w)\),

$$\begin{aligned} \left\{ \begin{array}{c} Z_n(i)\asymp \Phi _{1/2,i}(Z_n(i-1))\\ Z_n(-i)\asymp \Phi _{1/2,i}(Z_n(-i+1))\end{array}\right. \quad \text{ as }\, n\, \text{ goes } \text{ to } \text{ infinity }, \end{aligned}$$

where \((\Phi _{\eta ,i}, \eta \in (0,1))\) is the family of functions defined in (13). Recalling that \(\Phi _{\eta ,i}(x)=o(x)\) for any \(i\ge 2\), we deduce in particular

$$\begin{aligned} Z_n(-1)\sim Z_n(1)\sim \frac{Z_n(0)}{2}\sim \frac{n}{4}, \end{aligned}$$

i.e. the walk spends almost all its time on the three center sites \(\{-1,0,1\}\). Furthermore, setting

$$\begin{aligned} \Psi _{\eta ,i}(x):= \Phi _{1/2,i}\circ \Phi _{1/2,i-1} \circ \ldots \circ \Phi _{\eta ,1}(x/2), \end{aligned}$$
(63)

we get, for any \(i\in [\![1, i_\pm (w) ]\!]\)

$$\begin{aligned} \left\{ \begin{array}{c} Z_{n}(i)\asymp \Psi _{1/2,i}(n)\\ Z_{n}(-i)\asymp \Psi _{1/2,i}(n) \end{array}\right. \quad \text{ as }\, n\, \text{ goes } \text{ to } \text{ infinity }, \end{aligned}$$
(64)

(cf. Fig. 1). The computation of this family of functions can be carried out explicitly in some cases. For example, for a weight sequence of the form \(w(k)\sim k \exp (-\log ^\alpha k)\) with \(\alpha \in (0,1)\), arguments similar to those used in the proof of Proposition 1.5 allow us to estimate the functions \(\Psi _{\eta ,i}(n)\), and a few lines of calculation then show that, in this case,

$$\begin{aligned} Z_n(i) \; = \; \frac{n}{\exp ( (\log n)^{(1-\alpha )(i-1)+ o(1)})}\qquad \text{ for } i\in [\![1, i_\pm (w) ]\!]. \end{aligned}$$
Fig. 1 Local time profile at time \(n\)
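To get a rough feeling for how sharply this profile decays away from the centre, one can evaluate the leading term \(n/\exp ((\log n)^{(1-\alpha )(i-1)})\) of the last display, ignoring the \(o(1)\) correction in the exponent (so the values produced by the sketch below are only indicative of orders of magnitude):

```python
import math

def profile(n, alpha, i):
    """Leading-order local time at site i for w(k) ~ k*exp(-log^alpha k),
    ignoring the o(1) correction in the exponent."""
    return n / math.exp(math.log(n) ** ((1 - alpha) * (i - 1)))

n, alpha = 1e12, 0.5
for i in range(1, 5):
    print(f"site {i}: Z_n(i) is roughly {profile(n, alpha, i):.3e}")
```

Each additional step away from the centre multiplies the exponent by \((\log n)^{1-\alpha }\), which is the stretched-exponential flavour of the decay visible in Fig. 1.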