1 Introduction

The Ant Mill is a phenomenon in which a group of blind army ants gets separated from its main group and, guided by pheromones, starts to walk behind one another, in this way forming a circuit that is followed until the ants die of exhaustion. We refer the interested reader to the paper [4] for a discussion of that phenomenon, and to the video [9] for an illustration.

In this work we investigate a model that probabilistically encodes the above phenomenon in the case of a single ant on connected non-tree finite graphs and on \({\mathbb Z}^d\), \(d\ge 2\). We interpret the ant as a random walk with a bias towards already visited directed edges. The bias increases with each crossing of a directed edge, and it decreases whenever an edge is crossed in the opposite direction. Put differently, what counts is the “net” number of crossings.

Our model can be placed into the world of reinforced random walks. To the best of our knowledge this notion goes back to [2, 5, 14]. Since then, a large literature has developed and reinforced random walks have become an active and challenging area of research. Among the most prominent models are the vertex reinforced random walk [14, 15] and the edge reinforced random walk [2, 5], where the bias is proportional to the number of times a certain vertex, respectively edge, has been visited. One of the questions of interest in these models concerns localisation, i.e., will the random walk eventually be trapped in a finite region? For the vertex reinforced random walk this is indeed the case, as has been shown in numerous works with various degrees of refinement [1, 12, 16,17,18]. Here, depending on the strength of the reinforcement and the underlying graph, the walk may localise on two or more vertices. For the edge reinforced walk, similar results have been obtained.

The article [11] considers path formation for vertex reinforced random walks that are non-backtracking. In the case of strong reinforcement, the authors show that, with positive probability, the walker localizes in a path. More specifically, they show that for the reinforcement function \(W(k)=k^\alpha \), with \(\alpha > 1\) and \(3\le m<\frac{3\alpha -1}{\alpha -1}\), the walk localizes on m vertices with positive probability. Their methods rely on stochastic approximation techniques.

In [12] it was shown that if the sum of the inverses of the weights is finite, then under some further technical assumptions the walk eventually gets stuck on a single edge. We also mention a model with a similar flavour and name to ours, namely the directionally reinforced random walk, which was investigated in [10, 13]. However, in that model the walker loses its memory after each change of direction, which makes it fundamentally different from ours.

In the present work we aim at showing localisation as in the vertex or edge reinforced models. However, since the reinforcement acts on directed edges, localisation on a single edge is not possible: jumping back and forth over the same edge neutralises the reinforcement. Instead, in our main result, Theorem 2.2, we show localisation on circuits, which justifies the name Ant RW (Ant Random Walk): the theorem states that on non-tree finite graphs and on \({\mathbb Z}^d\) for \(d\ge 2\), the Ant RW with probability one eventually gets trapped in a directed circuit which it follows forever, similarly to the Ant Mill phenomenon mentioned at the beginning of this introduction.

What makes the Ant RW so challenging is that it is heavily non-Markovian: at each step the behaviour of the walker depends on its entire past. In the two previously described models a feature that partially compensates for this difficulty is monotonicity, i.e., the more often a vertex, respectively edge, is visited, the more attractive it becomes in the future.

In our model this is not the case. Indeed, if an edge (x,y) has been crossed as many times as the edge (y,x), it is as if neither of the two were ever crossed, i.e., it is possible to “kill” a bias by crossing an edge in the reverse direction. Consequently, classical tools such as Pólya urn techniques, e.g., the Rubin construction in [3], are not directly available.

To partially compensate for this difficulty we work, at least for the moment, with a strong, i.e., exponential, reinforcement. This enables us to analyse the model in two steps. The first is completely deterministic and investigates the evolution of the environment, i.e., the field of crossing numbers induced by a fixed path. Having gained sufficient information on the environment, we then use a renewal property of the dynamics to conclude the analysis. A relevant feature of the paper is the understanding of the environment; we believe that this comprehension will also be useful for weaker reinforcement versions of the model. It is worth commenting that ant-inspired algorithms are being actively developed in computer science (see for instance [6, 7, 19] and references therein). Finally, we point out that while revising this article we have been working on a new preprint dealing with the case of super-linear reinforcement, see [8]. We believe that the results stated in this paper do not change in that case. However, we are not sure what to conjecture in the case of linear or sub-linear reinforcement.

Organization of the paper. In Sect. 2, we define the model, state results and discuss the ideas of the proof. In Sect. 3 we give the proof of Theorem 2.2 in the finite graph case and in Sect. 4 we show Theorem 2.2 in the case of \({\mathbb Z}^d\) with \(d\ge 2\). In Appendix A we provide the proof of Proposition 2.1.

2 Statements

We define here the directed edge reinforced random walk, referred to as the Ant RW in the sequel, as a discrete-time stochastic process on some locally finite, connected, undirected graph G with vertex set \(V=V(G)\) and edge set \(E=E(G)\). Given two vertices v and w we write \(v\sim w\) if the pair (v,w) forms an edge. We then define the stochastic process \((X_n)_{n\in {\mathbb N}}\) with state space V by the following transition rule. Fix a vertex v, and set \( X_0=v\). For \(n \ge 0\), we define

$$\begin{aligned} {\mathbb P}(X_{n+1} = x | {\mathcal G}_n)\; =\; \frac{a_n(X_n, x)}{\displaystyle \sum _{y\sim X_n} a_n(X_n, y)}\,, \end{aligned}$$
(2.1)

where \({\mathcal G}_n = \sigma (X_0, X_1, \ldots , X_n)\) is the \(\sigma \)-algebra generated by the walk up to time n. Here the weights \(a_n\) are given by

$$\begin{aligned} a_n(X_n,x)\;=\;\exp \big \{\beta \,c_n(X_n, x)\big \}\,\,, \end{aligned}$$

where \(\beta \in (0,\infty )\) and the crossing numbers \(c_n(x,y)\) above are defined via

$$\begin{aligned} c_n (x,y) \;=\; \sum _{k=0}^{n-1}\Big ( \mathbbm {1}\big [(X_k,X_{k+1})=(x,y)\big ]-\mathbbm {1}\big [(X_k,X_{k+1})=(y,x)\big ]\Big )\,. \end{aligned}$$
(2.2)

In plain words, \(c_n (x,y)\) is the number of times that, up to time n, the walk has jumped from x to y minus the number of times it has jumped from y to x. The parameter \(\beta \in (0,\infty )\) represents the strength of the reinforcement. In the limiting case \(\beta =0\) we recover the usual symmetric random walk, whereas in the other limiting case \(\beta = \infty \), once the walk has crossed a certain edge (x,y) from x to y, it will always choose the same edge in the same direction once it returns to x.
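As a concrete illustration, the transition rule (2.1) can be sketched in a few lines of code. The function below is our own illustrative construction (names and the example values are ours, not part of the formal development):

```python
import math

def step_distribution(c_row, beta):
    """Transition probabilities (2.1) out of the current vertex X_n.

    c_row maps each neighbour y of X_n to the crossing number c_n(X_n, y);
    the weight of y is a_n(X_n, y) = exp(beta * c_n(X_n, y)).
    """
    weights = {y: math.exp(beta * c) for y, c in c_row.items()}
    total = sum(weights.values())
    return {y: w / total for y, w in weights.items()}

# A vertex with three neighbours; the edge towards 'a' has been crossed
# twice more (net) in the forward direction than backwards.
probs = step_distribution({'a': 2, 'b': 0, 'c': 0}, beta=1.0)
# probs['a'] = e^2 / (e^2 + 2): the reinforced direction dominates
```

Note how only the differences of crossing numbers matter, which is the property exploited throughout the proofs.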

Our first observation reads as follows and shows that the behaviour of X on \(G={\mathbb Z}\) is particularly simple. The proof follows from elementary observations and is provided in Appendix A.

Proposition 2.1

Let \(G={\mathbb Z}\) and assume \(X_0=0\). Then the Ant RW \((X_n)_{n\ge 0}\) is a Markov chain with transition probabilities given by

$$\begin{aligned} \begin{aligned} {\mathbb P}\big (X_{n+1}\;=\;\pm 1|X_n=0\big )&\;=\;\frac{1}{2}\,,\\ {\mathbb P}\big (X_{n+1}=x+1|X_n=x\big )&\;=\; {\left\{ \begin{array}{ll} \dfrac{1}{1+e^{-\beta }},\quad \text{ if } x\ge 1,\\ \dfrac{e^{-\beta }}{1+e^{-\beta }},\quad \text{ if } x\le -1, \end{array}\right. }\\ {\mathbb P}\big (X_{n+1}=x-1|X_n=x\big )&\;=\;{\left\{ \begin{array}{ll} \dfrac{e^{-\beta }}{1+e^{-\beta }},\quad \text{ if } x\ge 1,\\ \dfrac{1}{1+e^{-\beta }},\quad \text{ if } x\le -1. \end{array}\right. } \end{aligned} \end{aligned}$$
(2.3)

In particular, the Ant RW on \({\mathbb Z}\) is transient and satisfies the following law of large numbers:

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{X_n}{n}\;=\; Y\quad \text {a.s.} \end{aligned}$$
(2.4)

where \({\mathbb P}\bigg [Y=\pm \Big (\displaystyle \frac{1-e^{-\beta }}{1+e^{-\beta }}\Big )\bigg ]=1/2\).
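The value of Y can be read off from (2.3) by a one-line computation (the full proof is deferred to Appendix A): on the positive half-line the mean increment is constant, namely

$$\begin{aligned} {\mathbb E}\big [X_{n+1}-X_n\,\big |\,X_n=x\ge 1\big ]\;=\;\frac{1}{1+e^{-\beta }}\;-\;\frac{e^{-\beta }}{1+e^{-\beta }}\;=\;\frac{1-e^{-\beta }}{1+e^{-\beta }}\,, \end{aligned}$$

with the opposite sign on the negative half-line; by symmetry the walk eventually settles on either half-line with probability 1/2, which explains the two values of Y.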

The fact that on \(G={\mathbb Z}\) the Ant RW is a Markov chain is due to the specific structure of \({\mathbb Z}\). In general the process \((X_n)_{n\in {\mathbb N}}\) itself is not a Markov chain; however, the pair \(\xi _n=(X_n,a_n)\), \(n\in {\mathbb N}\), does define one. We denote by \({\mathbb P}_{\xi }\) the law of this joint process when started from a given configuration \(\xi _0=\xi \).

We introduce more notation. Assume for the moment that G is not a tree, so that in particular it possesses at least one circuit. Here, a circuit C denotes a closed path of distinct directed edges and distinct vertices. We will often write \(C=(u_0,\ldots ,u_{\ell -1})\) to denote a generic circuit C of length \(\ell \) with starting point (or root) \(u_0\), where \(u_i\ne u_j\) if \(i\ne j\) and \(u_{\ell -1}\sim u_0\). We denote by \({\mathscr {C}}\) the set of all circuits on G.

For any \(i\in {\mathbb N}=\{0,1,2,\ldots \}\), abbreviate \(i(\ell )=i \text{ mod } \ell \). We define the trapping event associated to the circuit \(C=(u_0,\ldots ,u_{\ell -1})\) and the time \(m\ge 0\) by

$$\begin{aligned} T^{C}_m\;=\;\big \{X_{m+i}\;=\;u_{i (\ell )}\,, \forall \; i\ge 0\big \}\,. \end{aligned}$$

In plain words, \(T_m^{C}\) is the event that the Ant RW is trapped in C at time m, and afterwards spins around C forever. We then define

$$\begin{aligned} T^{C}\;=\; \bigcup _{m\ge 0} T^C_m\,, \end{aligned}$$
(2.5)

as the event that the Ant RW eventually gets trapped in C. The main result of this paper is the following:

Theorem 2.2

Consider the Ant RW \((X_n)_{n\in {\mathbb N}}\) with strength of reinforcement \(\beta \in (0,\infty )\) on an undirected graph G such that

  (a) G is connected, finite and not a tree, or

  (b) \(G={\mathbb Z}^d\) with \(d\ge 2\).

Then,

$$\begin{aligned} {\mathbb P} \Big (\bigcup _{C\in {\mathscr {C}}} T^C\Big )\;=\;1\,. \end{aligned}$$

In words, under the above assumptions, the Ant RW will almost surely be eventually trapped in some circuit C. Observe that, in contrast to random polymers or the Ising model, there is no phase transition in the parameter \(\beta \in (0,\infty )\) and, in contrast to the usual symmetric random walk, the phase transition in the dimension occurs from \(d=1\) to \(d=2\). To keep notation light, we simply assume that \(\beta =1\) throughout the proofs, except in the proof of Proposition 2.1. Going carefully over our proof it is, however, not hard to show that all results hold for any \(\beta \in (0,\infty )\).
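For intuition, the trapping behaviour is easy to observe numerically. The following simulation is our own sketch (the graph, seed and step count are arbitrary choices): it runs the Ant RW with \(\beta =1\) on the complete graph on four vertices and inspects the tail of the trajectory for periodicity, which signals trapping in a circuit.

```python
import math
import random
from collections import defaultdict

def simulate(graph, x0, beta, steps, seed):
    """Run the Ant RW of (2.1)-(2.2) and return its trajectory."""
    c = defaultdict(int)                 # crossing numbers c_n(x, y)
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        nb = graph[x]
        w = [math.exp(beta * c[(x, y)]) for y in nb]
        y = rng.choices(nb, weights=w)[0]
        c[(x, y)] += 1                   # reinforce the travelled direction
        c[(y, x)] -= 1                   # ... and weaken the reverse one
        x = y
        path.append(x)
    return path

# complete graph on 4 vertices: finite, connected, not a tree
K4 = {u: [v for v in range(4) if v != u] for u in range(4)}
path = simulate(K4, 0, beta=1.0, steps=2000, seed=3)

# a trapped walk has an eventually periodic trajectory; look for a short
# period in the tail (None if no trapping is visible yet)
tail = path[-60:]
period = next((p for p in range(2, 30)
               if all(tail[i] == tail[i + p] for i in range(len(tail) - p))),
              None)
```

In typical runs the tail is periodic with a short period, in line with Theorem 2.2, although of course the theorem makes no claim about how fast trapping occurs.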

Moreover, the proof of item (b) of Theorem 2.2 can easily be adapted to different lattices. This is explained in Remark 4.2, where we point out which property a lattice must have in order to exhibit the same behaviour as \({\mathbb Z}^d\), \(d\ge 2\), with respect to the Ant RW. When G is an infinite tree the walk is transient, as in the case \(G={\mathbb Z}\) considered in Proposition 2.1. Indeed, it is possible to show that if the random walk is at depth n, i.e., at distance n from the root, then its bias to go in the next step to depth \(n+1\) is at least as big as the bias to go from n to \(n+1\) in the case \(G={\mathbb Z}\). This yields the conclusion.

2.1 Idea of the Proof for the Finite Case

The central novelty of this article is Theorem 2.2, whose proof idea we now briefly explain in the finite graph case.

To explain the idea of the proof we introduce our key concept, the good edge. Let \(v\in G\) and assume that \(X_n=v.\) The probability that the walker follows the edge (v,w) is given by (2.1), which can be rewritten as

$$\begin{aligned} \frac{1}{1+\!\!\!\displaystyle \sum _{\begin{array}{c} u\,:\,u\sim v,\\ u\ne w \end{array}}\exp \big \{c_{n}(v,u)- c_{n}(v,w)\big \}}\,. \end{aligned}$$
(2.6)

The main observation is that if (v,w) is good, in the sense that \(c_n(v,w)\) maximises \(c_n(v,u)\) over all \(u\sim v,\) then (2.6) is bounded from below by \(1/(1+D)\), where D is the maximal degree of the graph G. In particular this bound is uniform in the environment. The major work in the proof of Theorem 2.2 is then to show that the probability that the walker follows forever only good edges is likewise bounded from below uniformly in the environment. A renewal argument then shows that eventually this event happens with probability one. Since the graph is finite, a path consisting solely of good edges will eventually close a circuit, which can be shown to be followed forever.
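The uniform bound can be checked directly: if \(c_n(v,w)\) is maximal, every exponent in (2.6) is non-positive, so the sum has at most \(D-1\) terms, each at most 1. A small numerical check (illustrative only; names and the sampled crossing numbers are ours):

```python
import math
import random

def follow_prob(c_row, w):
    """The probability (2.6) of following the edge (v, w), where c_row[u]
    stores the crossing number c_n(v, u) for each neighbour u of v."""
    return 1.0 / (1.0 + sum(math.exp(c_row[u] - c_row[w])
                            for u in c_row if u != w))

rng = random.Random(0)
D = 5                                     # degree of v in this experiment
for _ in range(1000):
    c_row = {u: rng.randint(-10, 10) for u in range(D)}
    good = max(c_row, key=c_row.get)      # (v, good) is a good edge
    # the bound 1/(1+D) holds uniformly in the environment
    assert follow_prob(c_row, good) >= 1.0 / (1.0 + D)
```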

2.2 Open Problems

Theorem 2.2 gives a quite in-depth description of the Ant RW on finite graphs. However, there are still many challenges left open, some of which we plan to address in future works. We mention some of them:

  • The weights in this article depend exponentially on the crossing numbers. It would be interesting to investigate the case where the dependence is only polynomial. That is, for some \(\gamma >0\), the environment \(a_n\) is given via

    $$\begin{aligned} a_n(x,y)\;=\;{\left\{ \begin{array}{ll} c_n(x,y)^\gamma , &{} \text{ if } \; c_n(x,y)>0\,,\\ 1,&{} \text{ if } \; c_n(x,y)=0\,,\\ (-c_n(x,y))^{-\gamma }, &{} \text{ if } \; c_n(x,y)<0\,. \end{array}\right. } \end{aligned}$$

    Does Theorem 2.2 still hold true? Is there perhaps a phase transition in \(\gamma \), in the sense that there exists \(\gamma _*\) such that for \(\gamma <\gamma _*\) the random walk does not necessarily get stuck in a circuit, but for \(\gamma >\gamma _*\) it does? If this is the case, does \(\gamma _*\) depend on the choice of the graph, or is it perhaps universal? We expect that to answer these questions the understanding of the role of the environment in the dynamics developed in this article could be used. This, however, will not be enough. Indeed, one feature that is crucial to our analysis, and to which the notion of good edge is well adjusted, is that for any pair of edges \((x_1,y_1), (x_2,y_2)\) one has the relation

    $$\begin{aligned} \frac{a_n(x_1,y_1)}{a_n(x_2,y_2)}\;=\; \exp \big \{\beta [c_n(x_1,y_1)-c_n(x_2,y_2)]\big \}\,, \end{aligned}$$

    which fails to be true in the polynomial case. In particular it is no longer enough to follow only good edges: in the polynomial case the lower bound fails to be uniform and our arguments cannot be applied directly.

  • This work is mainly concerned with the Ant RW on finite graphs and on \({\mathbb Z}^d\). However, the behaviour of the walk on general infinite graphs can be very different, and can depend in a sensitive manner on the structure of the underlying graph. For instance, for the graph of Fig. 1, composed of a circuit connected to a copy of the infinite half line (we will call an infinite half line an infinite leaf), the statement of Theorem 2.2 is not true.

Fig. 1 Graph G given by a triangle connected to an infinite leaf

In fact, using the same reasoning as in the proof of Proposition 2.1, one can show that when walking over the infinite leaf the Ant RW behaves as an asymmetric random walk. In particular it has positive probability of never returning to the root of the infinite leaf. Hence, the probability of not getting trapped in any circuit is positive. More generally, any connected graph which is not a tree and possesses an infinite leaf serves as an example as well. The presence of an infinite leaf is thus sufficient to ensure that, with positive probability, the Ant RW is not trapped in any circuit, but we believe that it should not be necessary. Heuristic calculations lead us to guess that “an infinite tree whose nodes at even generations are replaced by circuits with a sufficiently large number of branches leaving from them” should be such an example (see Fig. 2 for an illustration).

Fig. 2 Infinite graph G which is not a tree, has no infinite leaf, and for which we believe the Ant RW has positive probability of not getting trapped in any circuit. Since the number of branches as well as the circuit lengths increase exponentially from one generation to the next, we believe that the probability of never going back to a previous generation and also never closing a single circuit is positive

In light of the above discussion and Theorem 2.2, we also conjecture:

Conjecture 2.3

Let G be an infinite graph. Then, denoting by \(d(\cdot ,\cdot )\) the shortest path distance on G, one has the following dichotomy

$$\begin{aligned} {\mathbb P} \Big (\bigcup _{C\in {\mathscr {C}}} T^C \cup \big \{\lim _{n\rightarrow \infty }d(X_n, X_0)= \infty \big \}\Big )\;=\;1\,, \end{aligned}$$

i.e., either the walk gets trapped in a circuit or it escapes to infinity.

  • The Ant Mill phenomenon alluded to above is observed in a group of army ants. Thus, it is actually natural to study the behaviour of a large number of Ant RWs. In this case two competing effects are present. On the one hand, if an ant follows an edge already crossed by another ant it further reinforces that edge, so that it should be easier for a large group of ants to be trapped in a circuit. On the other hand, as long as the reinforcement is not yet strong enough, an ant may also simply cross a directed edge in the opposite direction and in this way kill the reinforcement effect and “neutralize” the edge. The preprint [8], which we have been working on while revising the current article, suggests that as long as the reinforcement is super-linear all ants will get trapped, albeit possibly in different circuits.

3 Proof of Theorem 2.2 in the Finite Graph Case

To prove Theorem 2.2 in the finite graph case we will show that eventually the walk only follows good edges, i.e., edges that maximise the crossing numbers among adjacent edges. By the explanation given in Sect. 2.1 it is tempting to impose that the path only follows good edges. However, it may happen that in this way the path ends up in a leaf and is then stuck forever on the edge connecting to that leaf. In other words, one must guarantee that the walker never backtracks, i.e., never returns to the vertex visited immediately before. To that end a certain structure on the good edges needs to be required. We now formalise these ideas.

Definition 3.1

Let \((X_n)_{n\in {\mathbb N}}\) be the Ant Random Walk on G. Given a vertex \(u\in G\) we say that the edge \((u,v)\in E\) is a good edge for u at time n if

$$\begin{aligned} c_n(u,v)\;=\;\max _{w\,:\,w\sim u}c_n(u,w)\,. \end{aligned}$$

Since G is connected, for any non-empty subset \(S\subset G\) and any vertex \(v\in G\) there is a path connecting v to some vertex of S. We denote by \(v\rightarrow S\) a shortest such path. We further fix a circuit \(C^*\) of G. For any finite stopping time \(\tau \) we define the following random set

$$\begin{aligned} S_\tau \;=\;\big \{u\in V:\exists \; v\sim u \text{ such } \text{ that } |c_\tau (u,v)|\ge 2\big \}\,. \end{aligned}$$
(3.1)

Now we define an auxiliary random path \((Y^\tau _n)_{n\ge 0}\) that is a deterministic function of the pair \((S_\tau ,X_\tau ).\) We start with \(Y^\tau _0=X_\tau \) and \((c^\tau _0(\cdot ,\cdot ))=(c_\tau (\cdot ,\cdot )).\) The evolution of the field \((c^\tau _n(\cdot ,\cdot ))_n\) will obey the same rules as in (2.2) with Y instead of X. We distinguish the following cases.

  (1) \(S_\tau \ne \varnothing \) and \(X_\tau \in S_\tau \). The construction of \((Y^\tau _n)_n\) goes as follows. Assume that for some \(j\ge 1\), \(Y_0^\tau , Y_1^\tau ,\ldots , Y_{j-1}^\tau \) have already been constructed. We then choose \(Y_j^\tau \) such that the edge \((Y_{j-1}^\tau , Y_j^\tau )\) is good. We remark already at this point that we will show in Sect. 3.1 that \(Y_j^\tau \ne Y_{j-2}^\tau \). Moreover, since G is finite, \((Y_n^\tau )_{n}\) eventually visits a vertex for the second time and thereafter follows forever a circuit composed of good edges.

  (2) \(S_\tau \ne \varnothing \) and \(X_\tau \notin S_\tau \). In that case \((Y^\tau _n)_n\) first follows the path \(X_\tau \rightarrow S_\tau \). Having reached \(S_\tau \) it copies the strategy from the first case, and therefore eventually follows a circuit of good edges forever.

  (3) \(S_\tau =\varnothing \) and \(X_\tau \notin C^*\). In that case \((Y^\tau _n)_n\) follows \(X_\tau \rightarrow C^*\) and afterwards makes infinitely many turns around \(C^*.\)

  (4) \(S_\tau =\varnothing \) and \(X_\tau \in C^*\). Then \((Y^\tau _n)_n\) just makes infinitely many turns around \(C^*.\)

Note that in all four cases above, the path \((Y_n^\tau )_n\) will eventually follow a circuit of good edges.

Given the above construction we then define recursively the following sequence of stopping times: \(\tau _0=0\), and

$$\begin{aligned} \tau _{k+1}=\inf \{n>\tau _k \,:\, X_n\ne Y^{\tau _k}_{n-\tau _k}\}\,, \end{aligned}$$

where we use the convention that \(\tau _{k+1}=\infty \) if \(\tau _k=\infty \).

The crucial observation is that on the event \(\{\exists \, k\ge 1\,:\,\tau _k=\infty \}\) the walker \((X_n)_{n\ge 1}\) will eventually be trapped in a circuit.

Lemma 3.2

There exists a constant \(\delta =\delta (G)>0\) such that almost surely for any \(k\in {\mathbb N}\)

$$\begin{aligned} {\mathbb P}_{\xi _{\tau _k}}(\tau _{k+1}=\infty )\ge \delta \quad \text {on the event } \{\tau _k<\infty \}. \end{aligned}$$

We prove Lemma 3.2 in Sect. 3.2. We now show how to deduce the first item in Theorem 2.2 from Lemma 3.2.

Our goal is to prove that for an initial state \(\xi _0=(X_0,c_0(\cdot ,\cdot )),\) with \(c_0(\cdot ,\cdot )\equiv 0,\) one has \({\mathbb P}_{\xi _0}(\exists \, k\ge 1\,:\,\tau _k=\infty )=1.\) By the Borel–Cantelli Lemma it is enough to show that

$$\begin{aligned} \sum _{k\ge 1}{\mathbb P}_{\xi _0}(\tau _k<\infty )<\infty . \end{aligned}$$
(3.2)

Note that by Lemma 3.2 applied with \(k=0\),

$$\begin{aligned} {\mathbb P}_{\xi _0}(\tau _1<\infty )=1-{\mathbb P}_{\xi _0}(\tau _1=\infty )\le 1-\delta . \end{aligned}$$

Assume, by induction, that

$$\begin{aligned} {\mathbb P}_{\xi _0}(\tau _k<\infty )\le (1-\delta )^k. \end{aligned}$$

Using the Markov property and again Lemma 3.2 it follows that

$$\begin{aligned} {\mathbb P}_{\xi _0}(\tau _{k+1}<\infty )&={\mathbb P}_{\xi _0}(\tau _{k+1}<\infty ,\,\tau _k<\infty )\\&={\mathbb E}_{\xi _0}\big [\mathbbm {1}\{\tau _k<\infty \} {\mathbb P}_{\xi _{\tau _k}}(\tau _{k+1}<\infty )\big ]\\&\le (1-\delta )^{k+1} \end{aligned}$$

and this concludes the proof of (3.2).

3.1 Non Backtracking Property

The goal of this section is to prove that, with certain control on the environment, the strategy of following only good edges generates a non-backtracking path. To be more precise, we want to show that the path \((Y_n^\tau )_n\) constructed in the first case of the previous section never steps back to the vertex visited one time unit before. It will be expedient to study some flow properties of the crossing numbers.

Definition 3.3

The total flow at time n through the vertex \(u\in G\) is defined by the quantity

$$\begin{aligned} F_n(u)\;=\;\sum _{v\,:\,v\sim u}c_n(u,v)\,. \end{aligned}$$

The next result is a simple fact about the flow of the random walk on a graph, which is proved by induction, and so we omit its proof.

Lemma 3.4

In the previous setting, fix a vertex \(u\in G.\) Then \(F_n(u)=\delta _{X_0}(u)-\delta _{X_n}(u)\) and in particular \(F_n(u)\in \{-1,0, +1\}\).
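The telescoping behind Lemma 3.4 is easy to verify mechanically: each step contributes \(+1\) to the flow at its starting vertex and \(-1\) at its endpoint, so all intermediate contributions cancel. A quick numerical sanity check (our own, on an arbitrary walk):

```python
import random
from collections import defaultdict

def flows(path, vertices, nbrs):
    """Total flow F_n(u) = sum over v ~ u of c_n(u, v) induced by a walk."""
    c = defaultdict(int)
    for x, y in zip(path, path[1:]):
        c[(x, y)] += 1
        c[(y, x)] -= 1
    return {u: sum(c[(u, v)] for v in nbrs[u]) for u in vertices}

# arbitrary random walk on the cycle of length 5
nbrs = {u: [(u - 1) % 5, (u + 1) % 5] for u in range(5)}
rng = random.Random(7)
path = [0]
for _ in range(200):
    path.append(rng.choice(nbrs[path[-1]]))

F = flows(path, range(5), nbrs)
# Lemma 3.4: F_n(u) = delta_{X_0}(u) - delta_{X_n}(u)
assert all(F[u] == (u == path[0]) - (u == path[-1]) for u in range(5))
```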

A consequence of the previous result is the following proposition.

Proposition 3.5

If the trajectory \(X_0^n=\{X_0,\ldots , X_n\}\) is closed, i.e., \(X_0=X_n\), and \(c_n(u,w)\ne 0\) for some vertex \(w\sim u\), then, for any good edge (u,v) of u, one has that \(c_n(u,v)>0.\)

Proof

We first note that since \(X_0=X_n\), Lemma 3.4 implies that \(F_n(u)= 0\) for all \(u\in V\). If \(c_n(u,w)>0\), then the good edge has a positive crossing number since it maximises the crossing numbers among the neighbours of u. If \(c_n(u,w)<0\), then we can conclude from \(F_n(u)=0\) that there exists a vertex \(w^*\) such that \(c_n(u,w^*)>0\). Hence, the claim follows. \(\square \)

Of course there is no guarantee that the trajectory of the Ant RW up to time n is closed. However, in any case we have the following result.

Proposition 3.6

Let u be a given vertex, and fix a realisation \(X_0^n=\{X_0,\ldots ,X_n\}\) of the Ant RW until time n. If there exists an edge (u,w) such that \(c_n(u,w)\le -2\), then any good edge (u,v) of u satisfies \(c_n(u,v)\ge 1.\)

Proof

Since \(c_n(u,w)\le -2\) and \(F_n(u)=\sum _{z:z\sim u}c_n(u,z)\ge -1\) (cf. Lemma 3.4), it is impossible that \(c_n(u,z)\le 0\) for all \(z\sim u\). Therefore, any good edge (u,v) of u satisfies \(c_n(u,v)\ge 1\). \(\square \)

Now we show that if \(S_\tau \ne \varnothing \) and \(X_\tau =v_0 \in S_\tau \), then \((Y^\tau _n)_n\) is non-backtracking. Recall that \(Y^\tau _0=X_\tau =v_0.\) By Proposition 3.6, any neighbour \(v_1\) of \(Y^\tau _0\) such that \((Y^\tau _0,v_1)\) is a good edge satisfies \(c^\tau _0(Y^\tau _0,v_1)\ge 1\). To proceed, assume \(Y^\tau _1=v_1\). As a consequence, \(c^\tau _{1}(Y^\tau _0,Y^\tau _{1})\ge 2,\) which implies \(c^\tau _{1}(Y^\tau _{1},Y^\tau _0)=-c^\tau _{1}(Y^\tau _0,Y^\tau _1)\le -2.\) Thus, we can again apply Proposition 3.6. Consequently, for all neighbours \(v_2\) of \(Y^\tau _{1}\) such that \((Y^\tau _{1},v_2)\) is a good edge one has that \(c_{1}^\tau (Y^\tau _{1},v_2)\ge 1\). Note that \(v_2\ne Y^\tau _0\) since \(c_{1}^\tau (Y^\tau _{1},Y_0^\tau )\le -2.\) In particular, \(Y^\tau _{1}=v_1\) has degree at least 2. The key observation is that for any \(v_2\) as above the path \(\{Y^\tau _{0}=v_0,\;Y^\tau _{1}=v_1,\; Y^\tau _{2}=v_2\}\) is non-backtracking and \(c^\tau _{2}(Y^\tau _{1},Y^\tau _{2})\ge 2\).

Repeatedly applying the above arguments shows that \((Y_n^\tau )_n\) never backtracks. Moreover, since the graph G is finite and \((Y_n^\tau )_n\) walks along vertices of degree at least 2, it eventually follows a circuit consisting of good edges with strictly positive crossing numbers.

3.2 Proof of Lemma 3.2

We work on the event \(\{\tau _k<\infty \}\) and want to prove that \({\mathbb P}_{\xi _{\tau _k}}(\tau _{k+1}=\infty )\ge \delta .\) This is implied by

$$\begin{aligned} {\mathbb P}_{\xi _{\tau _k}}\big (\forall \, n\ge \tau _k,\,X_n=Y^{\tau _k}_{n-\tau _k}\big )\ge \delta . \end{aligned}$$

To prove the above statement we need to estimate the probability of making turns around a circuit. If \(C=(u_0,\ldots , u_{\ell -1})\) is a circuit, with the convention \(u_\ell =u_0\), and \(X_n=u_0\), then the probability of making one turn around C is given by

$$\begin{aligned} \prod _{j=0}^{\ell -1}\frac{1}{1+\!\!\!\displaystyle \sum _{\begin{array}{c} w\,:\,w\sim u_{j},\\ w\ne u_{j+1} \end{array}}\exp \big \{c_{n+j}(u_{j},w)- c_{n+j}(u_{j},u_{j+1})\big \}}\,. \end{aligned}$$
(3.3)

To analyse (3.3) it comes in handy to introduce the quantity \(R_n^C\) defined via

$$\begin{aligned} R_n^C\;=\; \min _{0\le j\le \ell -1}\min _{\begin{array}{c} y\,:\,y\sim u_j,\\ y\ne u_{j+1} \end{array}} \big [c_n(u_j, u_{j+1})- c_n(u_j, y)\big ]. \end{aligned}$$
(3.4)

Recall the definition of the event \(T^C\) in (2.5).

Lemma 3.7

Assume that \(R_0^C\ge -2\), let |V| be the number of vertices of G and let D be the maximum degree of G. Then for any such configuration \(\xi _0\)

$$\begin{aligned} {\mathbb P}_{\xi _0}(T^C)\ge \exp \bigg (\frac{-|V|De^2}{1-e^{-1}}\bigg ). \end{aligned}$$

We prove this lemma in Sect. 3.3.

Now we proceed to prove Lemma 3.2. We distinguish between several cases.

(1) \(S_{\tau _k}=\varnothing \). In this case, the absolute values of all crossing numbers are bounded by one. Observe that on the event \(\{S_{\tau _k}=\varnothing \}\) one has \(R_{\tau _k}^{C^*}\ge -2\). Define the stopping time \(\sigma =\inf \{m\ge 0\,:\,Y^{\tau _k}_m \in C^*\}\). We have the equality

$$\begin{aligned} \{\forall \, n\ge \tau _k,\,X_n=Y^{\tau _k}_{n-\tau _k}\}=\{X_{\tau _k}\rightarrow C^*\}\cap T^{C^*}_\sigma . \end{aligned}$$

Therefore, we need to estimate the probability

$$\begin{aligned} {\mathbb P}_{\xi _{\tau _k}} \bigg [\{X_0 \rightarrow C^*\}\cap T_{\sigma }^{C^*}\bigg ]\,. \end{aligned}$$

In the following we use the Markov property at time \(\sigma .\) Notice that on the event \(\{X_0\rightarrow C^*\}\) one still has \(R_{\sigma }^{C^*}\ge -2\), and it always holds that \(|C^*|\le |V|\). Then, with the help of Lemma 3.7, we can estimate, almost surely, the above probability from below by

$$\begin{aligned} {\mathbb E}_{\xi _{\tau _k}}\Big [\mathbbm {1}_{\{X_0 \rightarrow C^*\}}{\mathbb P}_{\xi _{\sigma }}\big [T^{C^*}\big ]\Big ] \;\ge \; {\mathbb P}_{\xi _{\tau _k}}\big [X_0\rightarrow C^*\big ] \exp \bigg (\frac{-|V|De^2}{1-e^{-1}}\bigg )\,. \end{aligned}$$

It only remains to bound the probability on the right-hand side above. To that end note that along the path \(X_{\tau _k}\rightarrow C^*\) all edges have crossing numbers bounded in modulus by one and that \(\sigma \le |V|.\) Hence, for any fixed vertex \(v\in V,\) on the event \(\{X_{\tau _k}=v, S_{\tau _k}=\varnothing \}\),

$$\begin{aligned} {\mathbb P}_{\xi _{\tau _k}}\big [X_0\rightarrow C^*\big ]\;=\;{\mathbb P}_{v}\big [X_0\rightarrow C^*\big ] \;\ge \; \frac{1}{(1+D e^{2})^{|V|}}\,. \end{aligned}$$

Thus, we can conclude that on \(\{S_{\tau _k}=\varnothing \}\)

$$\begin{aligned} {\mathbb P}_{\xi _{\tau _k}}\big (\forall \, n\ge \tau _k,\,X_n=Y^{\tau _k}_{n-\tau _k}\big ) \ge \; \frac{1}{(1+D e^{2})^{|V|}}\,\exp \bigg (\frac{-|V|De^2}{1-e^{-1}}\bigg )\;:= \;\delta ^{(1)}\,. \end{aligned}$$

(2) \(S_{\tau _k}\ne \varnothing \), and \(X_{\tau _k}=v_0 \in S_{\tau _k}\). By definition, on the event

$$\begin{aligned} \{\forall \, n\ge \tau _k,\,X_n=Y^{\tau _k}_{n-\tau _k}\} \end{aligned}$$

the walker follows only good edges and eventually closes a good circuit C, around which it then makes infinitely many turns. Let \(\sigma \) be the first time the walker meets C. Since C is a good circuit, \(R_\sigma ^C\ge 0> -2\). Again, the path joining \(X_{\tau _k}\) with C has length at most |V|. Hence, similarly to the case \(S_{\tau _k}=\varnothing \), we see that on \(\{S_{\tau _k}\ne \varnothing \}\cap \{X_{\tau _k}\notin C\}\)

$$\begin{aligned} {\mathbb P}_{\xi _{\tau _k}}\big (\forall \, n\ge \tau _k,\,X_n=Y^{\tau _k}_{n-\tau _k}\big ) \ge \; \frac{1}{(1+D)^{|V|}}\,\exp \bigg (\frac{-|V|De^2}{1-e^{-1}}\bigg )\;:= \;\delta ^{(2)}\,. \end{aligned}$$

where the factor \(e^2\) is absent since the path connecting \(X_{\tau _k}\) with C consists only of good edges.

(3) \(S_{\tau _k}\ne \varnothing \), and \(X_{\tau _k} \notin S_{\tau _k}.\) Define \(\sigma =\inf \{m\ge 0 \,:\,Y^{\tau _k}_m \in S_{\tau _k} \}.\) Denote by \(v_0\) a vertex in \(S_{\tau _k}\) that minimises the graph distance to \(X_{\tau _k}=Y^{\tau _k}_0\) among all vertices in \(S_{\tau _k}\). In this case the path \(Y^{\tau _k}_{0}\rightarrow v_0\) lies completely in \(S_{\tau _k}^\complement ,\) i.e., all edges on this path have crossing numbers bounded in absolute value by one. Hence, as in case (1) we obtain, almost surely,

$$\begin{aligned} {\mathbb P}_{\xi _{\tau _k}}\big [X_0\rightarrow S_{\tau _k}\big ]\;\ge \; \frac{1}{(1+De^2)^{|V|}}\,. \end{aligned}$$

On the event \(\{Y^{\tau _k}_0\rightarrow S_{\tau _k}\}\) we then have that \(Y^{\tau _k}_{\sigma }\in S_{\tau _k}\subset S_{\tau _k+\sigma }\). In particular, on the event \(\{\forall \,n\ge \tau _k,\,X_n=Y^{\tau _k}_{n-\tau _k}\}\) we have that \(X_{\tau _k+\sigma }\in S_{\tau _k+\sigma }\) and we are in the setting of the previous case. Therefore, on the event \(\{S_{\tau _k}\ne \varnothing \}\cap \{X_{\tau _k}\notin S_{\tau _k}\}\) we can bound

$$\begin{aligned}&{\mathbb P}_{\xi _{\tau _k}}\big (\forall \, n\ge \tau _k,\,X_n=Y^{\tau _k}_{n-\tau _k}\big )\\&\quad \ge \; \frac{1}{(1+De^2)^{|V|}}\,\frac{1}{(1+D)^{|V|}}\exp \bigg (\frac{-|V|De^2}{1-e^{-1}}\bigg )\;:=\; \delta ^{(3)}\,. \end{aligned}$$

Collecting all estimates obtained we see that the proof is concluded with the choice \(\delta =\min \big \{\delta ^{(1)}, \delta ^{(2)}, \delta ^{(3)}\big \}=\delta ^{(3)}\).

3.3 Proof of Lemma 3.7

Fix a circuit \(C=(u_0,\ldots ,u_{\ell -1})\) and recall our notation \(i(\ell )=i \text{ mod } \ell \). For \(k\in {\mathbb N}\) we define the truncated trapping event \(T_m^{C,k}\) by

$$\begin{aligned} T_m^{C,k} \;= \; \{X_{m+i}\;=\;u_{i (\ell )}, \forall \; 0\le i\le k\ell \}\,. \end{aligned}$$
(3.5)

In plain words, \(T_m^{C,k}\) is the event in which the walk makes k consecutive turns around C starting at time m.

To continue bounding the above trapping event we adopt notation in this section that slightly differs from the one used in the rest of the article. For a field of integers \(\{c_0(x,y):\, (x,y)\in E\}\) we denote, with a slight abuse of notation,

$$\begin{aligned} c_n (x,y) \;=\;c_0(x,y)+ \sum _{k=0}^{n-1}\Big ( \mathbbm {1}\big [(X_k,X_{k+1})=(x,y)\big ]-\mathbbm {1}\big [(X_k,X_{k+1})=(y,x)\big ]\Big )\,. \end{aligned}$$
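As an illustration, this net-crossing bookkeeping, together with the transition rule implicit in the factors of Lemma 3.8 below (a step from x to a neighbour y is taken with probability proportional to \(e^{c_n(x,y)}\)), can be sketched in code. This is a minimal sketch of our own; the function names and the choice of \({\mathbb Z}^2\) are illustrative only.

```python
import math
import random
from collections import defaultdict

def ant_step(pos, counts, neighbours, rng):
    """One step of the Ant RW: from `pos`, choose a neighbour y with
    probability proportional to exp(c_n(pos, y)), then update the net
    crossing numbers: c(pos, y) increases by 1, c(y, pos) decreases by 1."""
    nbrs = neighbours(pos)
    weights = [math.exp(counts[(pos, y)]) for y in nbrs]
    nxt = rng.choices(nbrs, weights=weights, k=1)[0]
    counts[(pos, nxt)] += 1
    counts[(nxt, pos)] -= 1
    return nxt

# usage on Z^2: nearest-neighbour graph, zero initial crossing numbers
def z2_neighbours(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
```

By construction the field stays antisymmetric, \(c_n(x,y)=-c_n(y,x)\) for every directed edge, mirroring the definition above.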

Recall that we write \(\xi _n=(X_n,a_n)\) for the pair consisting of the position of the Ant RW and its induced environment at time n. Let \(\xi _0=(u_0,a_0)\) be the initial state of this Markov chain. We then have the following:

Lemma 3.8

Let G be any locally finite graph. If \(X_0=u_0\), then

$$\begin{aligned} {\mathbb P}_{\xi _{0}}(T_0^{C,1}) \;\ge \; \prod _{j=0}^{\ell -1}\frac{1}{1+\displaystyle \sum _{\begin{array}{c} w\,:\,w\sim u_{j}\\ w\ne u_{j+1} \end{array}}\exp \big \{c_{0}(u_{j},w)- c_{0}(u_{j},u_{j+1})\big \}}\,. \end{aligned}$$

Proof

Observe that on the event \(T_0^{C,1}\) the walker makes one turn around C. Therefore, on that event we have, for all \(0\le j\le \ell -1\),

$$\begin{aligned} {\left\{ \begin{array}{ll} c_0(u,v)=c_{j}(u,v),&{} \text{ if } (u,v)\notin C,\\ c_{j}(u,v)\ge c_{0}(u,v),&{} \text{ if } (u,v)\in C. \end{array}\right. } \end{aligned}$$

It is now plain to see that for all \(j\in \{0,\ldots , \ell -1\}\) and for all \(w\sim u_j\) with \(w\ne u_{j+1}\),

$$\begin{aligned} c_{j}(u_{j},w)- c_{j}(u_{j},u_{j+1}) \;\le \; c_{0}(u_{j},w)- c_{0}(u_{j},u_{j+1})\,. \end{aligned}$$

Hence, the claim follows from Eq. (3.3). \(\square \)
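To make the bound concrete, consider the simplest possible setting: a plain cycle of length \(\ell \) with zero initial crossing numbers. The following sketch (a worked example of our own, not part of the proof) compares the exact probability of one full turn with the lower bound of Lemma 3.8.

```python
import math

def one_turn_bound(ell):
    """Lemma 3.8 bound on a plain cycle of length ell with c_0 = 0:
    each vertex u_j has a single neighbour w != u_{j+1}, contributing
    exp(0) = 1, so every factor equals 1 / (1 + 1)."""
    return 0.5 ** ell

def one_turn_exact(ell):
    """Exact probability of one full turn in the same setting: the first
    step is uniform over the two neighbours; after that, the backward
    edge just crossed has net crossing number -1, so each of the
    remaining ell - 1 steps succeeds with probability 1 / (1 + e^{-1})."""
    return 0.5 * (1.0 / (1.0 + math.exp(-1.0))) ** (ell - 1)
```

For every \(\ell \ge 3\) the exact value indeed dominates the bound, with the gap growing in \(\ell \).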

Lemma 3.9

Let G be a locally finite graph with maximum degree \(D<\infty .\) For all \(k\in {\mathbb N}\) and all \(M\in {\mathbb R}\), if \(R_0^C\ge M\) and \(X_0=u_0\), then

$$\begin{aligned} {\mathbb P}_{\xi _0}(T_0^{C,k}) \;\ge \; \frac{1}{\prod _{j=0}^{k-1}\big (1+ D\exp \{-M-j\}\big )^\ell }\,. \end{aligned}$$

Proof

We prove the result by induction. The case \(k=1\) is an immediate consequence of Lemma 3.8 using that D is a bound for the degree of any vertex of G.

Assume that the result is true for \(1, \ldots , k-1\) and for all \(M\in {\mathbb R}.\) Using the Markov property and the observation \(T_{0}^{C,k} = T_{0}^{C,k-1}\cap T_{(k-1)\ell }^{C,1}\), we obtain that

$$\begin{aligned} {\mathbb P}_{\xi _0}\big [T_{0}^{C,k}\big ]&={\mathbb E}_{\xi _0}\big [{\mathbb P}_{\xi _0}(T_{0}^{C,k} \,\vert \, {\mathcal G}_{(k-1)\ell })\big ]\\&={\mathbb E}_{\xi _0}\big [{\mathbb P}_{\xi _0}\big (T_{0}^{C,k-1}\cap T_{(k-1)\ell }^{C,1} \,\vert \, {\mathcal G}_{(k-1)\ell }\big )\big ]\\&={\mathbb E}_{\xi _0}\big [\mathbbm {1}_{T_{0}^{C,k-1}}{\mathbb P}_{\xi _{(k-1)\ell }}\big (T_{0}^{C,1}\big )\big ]\,. \end{aligned}$$

Now observe that on the event \(T_{0}^{C,k-1}\), we have that \(R^C_{(k-1)\ell }\ge M+k-1.\) Therefore, using the base case \(k=1\), we can write

$$\begin{aligned} {\mathbb E}_{\xi _0}\big [\mathbbm {1}_{T_0^{C,k-1}}{\mathbb P}_{\xi _{(k-1)\ell }}(T_0^{C,1})\big ]&\;\ge \; {\mathbb E}_{\xi _0}\Big [\mathbbm {1}_{T_0^{C,k-1}}\frac{1}{(1+ D\exp \{-M-(k-1)\})^\ell }\Big ]\\&\;=\; \frac{1}{(1+ D\exp \{-M-(k-1)\})^\ell }\,{\mathbb P}_{\xi _0}\big (T_0^{C,k-1}\big ) \end{aligned}$$

and using the induction hypothesis we finish the proof. \(\square \)
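The product in Lemma 3.9 is decreasing in k but stays bounded away from zero. The following numerical sketch (the parameter values are arbitrary) checks this against the closed-form positive limit obtained from \(1+x\le e^x\) and the geometric series.

```python
import math

def trapping_lower_bound(k, ell, D, M):
    """Lemma 3.9: lower bound for k turns around a circuit of length ell,
    for maximum degree D and initial net count R_0^C >= M."""
    p = 1.0
    for j in range(k):
        p /= (1.0 + D * math.exp(-M - j)) ** ell
    return p

def limiting_bound(ell, D, M):
    """Positive lower bound on the limit k -> infinity, obtained from
    1 + x <= e^x and the geometric series:
    exp(-ell * D * e^{-M} / (1 - e^{-1}))."""
    return math.exp(-ell * D * math.exp(-M) / (1.0 - math.exp(-1.0)))
```

With, e.g., \(D=4\), \(\ell =4\), \(M=-2\) (the values relevant for the \({\mathbb Z}^d\) argument of Sect. 4), the product decreases strictly in k yet never drops below the limiting bound.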

To finish the proof of Lemma 3.7 we just use the following facts:

  1. \({\mathbb P}_{\xi _0}\big (T_0^C\big )=\lim _{k\rightarrow \infty } {\mathbb P}_{\xi _0}\big (T_0^{C,k}\big )\),

  2. \(1+x\le e^x\),

  3. \(\sum _{i\ge 0}x^i=1/(1-x)\) for \(x\in [0,1)\).
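For completeness, here is how these three facts combine (a short computation of our own): under the hypotheses of Lemma 3.9,

$$\begin{aligned} {\mathbb P}_{\xi _0}\big (T_0^C\big )\;=\;\lim _{k\rightarrow \infty }{\mathbb P}_{\xi _0}\big (T_0^{C,k}\big )\;\ge \;\prod _{j= 0}^{\infty }\frac{1}{\big (1+D\exp \{-M-j\}\big )^\ell }\;\ge \;\exp \bigg (-\ell D\sum _{j\ge 0}e^{-M-j}\bigg )\;=\;\exp \bigg (\frac{-\ell D e^{-M}}{1-e^{-1}}\bigg )\;>\;0\,. \end{aligned}$$

In particular, for \(M=-2\) and \(\ell \le |V|\) this recovers the factors \(\exp \big (\frac{-|V|De^2}{1-e^{-1}}\big )\) appearing in the estimates above.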

4 Proof of Theorem 2.2 in the \({\mathbb Z}^d\) Case

We now turn to the proof of Theorem 2.2 in the case \(G={\mathbb Z}^d\) with \(d\ge 2\). A sketch of the proof goes as follows. First, we argue that the probability for the Ant RW to be trapped in a circuit directly after escaping certain well-chosen increasing balls is uniformly bounded from below in the environment. This shows that the walk is almost surely bounded. Then, we construct a simultaneous coupling between the Ant RW on \({\mathbb Z}^d\) and the Ant RWs on all of those balls. Under that coupling, and by the previous boundedness result, we conclude that the Ant RW on \({\mathbb Z}^d\) almost surely coincides with the Ant RW on some (random) ball. Together with the finite case of Theorem 2.2, this concludes the proof.

Proposition 4.1

The Ant RW in \(G={\mathbb Z}^d\) with \(d\ge 2\) is almost surely bounded.

Proof

Denote by \(B_k=B[0,k]\) the closed ball of center 0 and radius \(k\in {\mathbb N}\) in the graph \({\mathbb Z}^d\) with respect to the \(\ell ^1\)-distance and denote by \(\partial B_{k}\) its inner boundary. Recall that we are assuming \(X_0=0\). For each \(k\in {\mathbb N}\) we define the stopping time

$$\begin{aligned} \tau _k\;=\;\inf \{n>0: X_n \in B_{3k}^\complement \} \end{aligned}$$
(4.1)

and let

$$\begin{aligned} E_k\;\overset{\text {def}}{=}\; \big \{\tau _k<\infty \big \}\,. \end{aligned}$$

That is, \(E_k\) is the event on which the Ant RW escapes the ball of radius 3k. Let V(k) be the set of vertices \(v\in {\mathbb Z}^d\) such that \(d(v,B_{3k})=1\). It is elementary to check that, for each \(v \in V(k)\), there exists a circuit \(C_{v}\) of length 4 such that \(C_{v}\subset B_{3(k+1)}\backslash B_{3k}\), see Fig. 3 for an illustration. These circuits are not unique; however, in the sequel, for ease of notation, \(C_{v}\subset B_{3(k+1)}\backslash B_{3k}\) denotes a fixed but arbitrarily chosen such circuit for each v as above.
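One concrete choice of such a circuit can be written down explicitly. The sketch below is our own construction (it uses \(d\ge 2\)): given a vertex v with \(\Vert v\Vert _1=3k+1\), it returns a unit square spanned by two suitable coordinate directions, all of whose vertices have \(\ell ^1\)-norm in \(\{3k+1,3k+2,3k+3\}\).

```python
def circuit_of_length_four(v):
    """Given v in Z^d (d >= 2) with |v|_1 = 3k + 1, return a directed
    circuit (v, v + s e_i, v + s e_i + t e_j, v + t e_j) of length 4
    contained in B_{3(k+1)} \\ B_{3k}: each move keeps or increases the
    l^1-distance to the origin, so all four vertices have l^1-norm in
    {3k + 1, 3k + 2, 3k + 3}."""
    d = len(v)
    i = next(a for a in range(d) if v[a] != 0)   # exists since |v|_1 >= 1
    s = 1 if v[i] > 0 else -1                    # step increasing |v_i|
    j = next(a for a in range(d) if a != i)      # any other coordinate
    t = 1 if v[j] >= 0 else -1                   # step increasing |v_j|

    def shifted(u, axis, delta):
        w = list(u)
        w[axis] += delta
        return tuple(w)

    return (v, shifted(v, i, s),
            shifted(shifted(v, i, s), j, t), shifted(v, j, t))
```

The returned tuple lists the four distinct vertices in cyclic order, so consecutive entries (including the wrap-around) are nearest neighbours.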

Recall the trapping event \(T^C_m\) defined in (2.5) and let

$$\begin{aligned} F_k\;\overset{\text {def}}{=}\; E_k\bigcap \Big (\bigcup _{v\in V(k)} T^{C_{v}}_{\tau _k} \Big )\,. \end{aligned}$$

In other words, \(F_k\) is the event in which the Ant RW eventually escapes \(B_{3k}\) and, immediately after that, is trapped in a directed circuit of length four contained in \(B_{3(k+1)}\backslash B_{3k}\), see Fig. 3.

We now claim that there exists some \(\delta =\delta (d)>0\) such that

$$\begin{aligned} {\mathbb P}\big (F_k\,|\,E_k\big )\;>\;\delta \,,\quad \forall \,k\in {\mathbb N}\,. \end{aligned}$$
(4.2)

To prove the claim we first apply the strong Markov property at time \(\tau _k\), which yields that

$$\begin{aligned} {\mathbb P}\big (F_k|E_k\big )&={\mathbb P}\bigg (\tau _k<\infty ,\bigcup _{v\in V(k)} T^{C_{v}}_{\tau _k}\bigg |\tau _k<\infty \bigg )= {\mathbb E}\bigg [{\mathbb P}_{\xi _{\tau _k}}\bigg (\bigcup _{v\in V(k)} T^{C_{v}}_{0}\bigg )\,\bigg |\tau _k<\infty \bigg ]\,. \end{aligned}$$

Note now that

$$\begin{aligned} \{\tau _k<\infty \}\;=\;\bigcup _{v\in V(k)}\big \{\tau _k<\infty \,,\, X_{\tau _k}=v\big \} \end{aligned}$$

and

$$\begin{aligned} {\mathbb P}_{\xi _{\tau _k}}\bigg (\bigcup _{v\in V(k)} T^{C_{v}}_{0}\bigg )\;\ge \; {\mathbb P}_{\xi _{\tau _k}}\big (T^{C_{u_0}}_{0}\big )\; \text { on the event }\; \{\tau _k<\infty ,X_{\tau _k}=u_0\}\,. \end{aligned}$$
(4.3)

Therefore, to obtain (4.2) it is enough to get a uniform lower bound for the random variable on the right hand side of (4.3) on the event \(\{\tau _k<\infty ,X_{\tau _k}=u_0\}\).

Immediately after exiting \(B_{3k}\), the Ant RW has not crossed any edge in \(B_{3(k+1)}\backslash B_{3k}\) except the edge \(\{X_{\tau _k-1}, X_{\tau _k}\}\) connecting the vertex \(X_{\tau _k-1}\in \partial B_{3k}\) to the root \(u_0=X_{\tau _k}\in B_{3(k+1)}\) of the circuit \(C_{u_0}=(u_0,u_1,u_2,u_3)\). Therefore, at time \(\tau _k\), all edges contained in \(B_{3(k+1)}\backslash B_{3k}\) have crossing number zero, except the two directed edges \((X_{\tau _k-1},u_0)\) and \((u_0, X_{\tau _k-1})\), which have crossing numbers 1 and \(-1\), respectively. Thus, it follows that \(R_{\tau _k}^{C_{u_0}}\ge -1>-2\). Lemma 3.7 then yields the desired \(\delta >0\), which is independent of k, and the claim follows.

Since \(F_k\subset E_{k+1}^\complement \), the previous claim yields \({\mathbb P}\big (E_{k+1}^\complement \cap E_k\big )>\delta \, {\mathbb P}\big (E_k\big )\) for all \(k\in {\mathbb N}\). Since \(E_{k+1}\subset E_{k}\), this implies \({\mathbb P}\big (E_{k}\big )-{\mathbb P}\big (E_{k+1}\big )> \delta \, {\mathbb P}\big (E_k\big )\) for all \(k\in {\mathbb N}\), which leads to \({\mathbb P}\big (E_{k+1}\big )<(1-\delta )^{k+1}\, {\mathbb P}\big (E_0\big )=(1-\delta )^{k+1}\) for all \(k\in {\mathbb N}\). This yields \({\mathbb P} \big (\bigcap _{k=1}^\infty E_k\big )=0\) and finishes the proof. \(\square \)
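The final iteration is elementary but can be sketched for concreteness (the value \(\delta =0.1\) below is an arbitrary stand-in for the \(\delta \) of the claim):

```python
def escape_bounds(delta, kmax):
    """Iterate P(E_{k+1}) <= (1 - delta) * P(E_k) starting from
    P(E_0) = 1; this reproduces the geometric bound (1 - delta)^k,
    which tends to 0 and hence forces P(intersection of all E_k) = 0."""
    bounds = [1.0]
    for _ in range(kmax):
        bounds.append((1.0 - delta) * bounds[-1])
    return bounds
```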

Fig. 3

Event \(F_k\). After exiting the ball \(B_{3k}\), the Ant RW spins forever around a circuit \(C\subset B_{3(k+1)}\backslash B_{3k}\) of length 4, which is indicated by arrows. The gray ball represents the root \(u_0\) of the circuit \(C_{u_0}=(u_0,u_1,u_2,u_3)\). The vertex x is the last visited vertex of \(\partial B_{3k}\) before exiting \(B_{3k}\)

Fig. 4

Coupling between the Ant RW on \({\mathbb Z}^d\) and the Ant RW’s on \(B_k\subset {\mathbb Z}^d\), \(k\in {\mathbb N}\). After the hitting time \(\sigma _k\) of \(\partial B_k\), the Ant RW \(X_n^{B_k}\) evolves independently of the Ant RW \(X_n\). Above, the dashed path represents \(X_n^{B_k}\) for times greater than \(\sigma _k\). Note that, immediately after \(\sigma _k\), the Ant RW \(X_n\) may or may not exit \(B_k\). The gray ball represents the (final) position of \(X_n^{B_k}\) and the black ball the (final) position of \(X_n\)

Proof of Theorem 2.2 in the case \(G={\mathbb Z}^d\) with \(d\ge 2\)

For any \(k\in {\mathbb N}\) we denote by \((X_n^{B_k})_{n\ge 0}\) the Ant RW on \(B_k\). We will construct a coupling

$$\begin{aligned} \Big ((X_n)_{n\ge 0}, (X_n^{B_1})_{n\ge 0}, (X_n^{B_2})_{n\ge 0}, (X_n^{B_3})_{n\ge 0},\ldots \Big ) \end{aligned}$$

of all these stochastic processes. To do so, we first assume that \((X_n)_{n\ge 0}\) has been constructed on some probability space, which will be enriched in the sequel. On this probability space, we define the stopping times \(\sigma _k\) via

$$\begin{aligned} \sigma _k\;=\;\min \big \{n\ge 0: X_n \in \partial B_k\big \} \,. \end{aligned}$$

To construct \((X_n^{B_k})_{n\ge 0}\) from \((X_n)_{n\ge 0}\), we let \(X_n^{B_k}\overset{\text {def}}{=}X_n\) for \(n< \sigma _k\). If \(\sigma _k<\infty \), then for \(n\ge \sigma _k\) we let \((X_n^{B_k})_{n \ge \sigma _k}\) evolve independently of \((X_n)_{n\ge \sigma _k}\) on \(B_k\), see Fig. 4 for an illustration. One then readily checks that \((X_n^{B_k})_{n\ge 0}\) indeed has the law of the Ant RW on \(B_k\). Moreover, for \(n< \sigma _k\), one has that

$$\begin{aligned} X_n\;=\; X_n^{B_{k}}\;=\;X_n^{B_{k+1}}\;=\;X_n^{B_{k+2}}\;=\;\cdots \end{aligned}$$
(4.4)
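The prefix agreement (4.4) is the only feature of the coupling used below, and it can be illustrated with a toy implementation. In the sketch (entirely our own), both dynamics are replaced by simple random walks on \({\mathbb Z}^2\), and the reflected continuation inside \(B_k\) is an arbitrary stand-in for the independent evolution after \(\sigma _k\); the actual Ant RW transition rule plays no role in the agreement of the prefixes.

```python
import random

def simulate_walk(n_steps, rng):
    """Nearest-neighbour walk on Z^2 started at 0 (a stand-in for X_n)."""
    pos, path = (0, 0), [(0, 0)]
    for _ in range(n_steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        pos = (pos[0] + dx, pos[1] + dy)
        path.append(pos)
    return path

def hitting_time_of_boundary(path, k):
    """sigma_k: first time the path has l^1-norm exactly k (None if never)."""
    for n, p in enumerate(path):
        if abs(p[0]) + abs(p[1]) == k:
            return n
    return None

def couple_with_ball(path, k, rng):
    """Coupled walk on B_k: copy `path` up to sigma_k, then evolve
    independently inside B_k (moves that would leave B_k are suppressed)."""
    sigma = hitting_time_of_boundary(path, k)
    if sigma is None:
        return list(path)               # sigma_k = infinity: full agreement
    coupled = path[:sigma + 1]          # identical prefix, cf. (4.4)
    pos = coupled[-1]
    for _ in range(len(path) - 1 - sigma):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        nxt = (pos[0] + dx, pos[1] + dy)
        if abs(nxt[0]) + abs(nxt[1]) <= k:
            pos = nxt
        coupled.append(pos)
    return coupled
```

Whatever the driving randomness, the coupled walk agrees with the original one strictly before \(\sigma _k\) and never leaves \(B_k\).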

By Theorem 2.2 for finite graphs, we know that for any \(k\in {\mathbb N}\) there exists a random circuit \(C=C(k)\) such that \((X_n^{B_k})_{n\ge 0}\) is almost surely eventually trapped in C. By Proposition 4.1, the Ant RW X on \({\mathbb Z}^d\) is bounded. Hence, almost surely there exists a random index \(k\ge 1\) such that \(\sigma _k=\infty \) and hence (4.4) holds for any \(n\in {\mathbb N}\). Therefore, the Ant RW on \({\mathbb Z}^d\) is almost surely trapped in some (random) circuit C, thus concluding the proof. \(\square \)

Remark 4.2

The key property of the lattice \({\mathbb Z}^d\), \(d\ge 2\), in the proof of Theorem 2.2, item b), is the presence of circuits of fixed length starting from any vertex (outside of any given large set), which leads to the conditional probability bound (4.2). Keeping this in mind, the proof of Theorem 2.2, item b), can easily be adapted to different lattices such as the slab \(\{1,\ldots ,N\}\times {\mathbb Z}^d\), regular non-square lattices, etc.