A property of random walks on a cycle graph

Open Access
Original Article

Abstract

We analyze the Hunter vs. Rabbit game on a graph, a model of communication in ad hoc mobile networks. Let G be a cycle graph with N nodes. The hunter can move from vertex to vertex along an edge, whereas the rabbit can jump from any vertex to any other vertex of the graph. We formalize the game in the random walk framework: the strategy of the rabbit is given by a one-dimensional random walk over $\mathbb{Z}$, and strategies are classified by the order $O(k^{\beta-1})$ of their Fourier transform. We investigate lower and upper bounds on the probability that the hunter catches the rabbit. For β∈(0,1) we obtain a constant lower bound that does not depend on the size N of the graph; for β=1 we show the probability is of order $O(1/\log N)$; and for β∈(1,2] a lower bound of order $N^{-(\beta-1)/\beta}$ holds. These results let us choose the parameter β of a rabbit strategy according to the size N of the given graph. We present the random-walk formalization of strategies, theoretical bounds on the probability that the hunter catches the rabbit, and computer simulation results.

Keywords

Graph theory · Random walk · Combinatorial probability · Ad hoc network

Introduction

We consider a game played by two players: the hunter and the rabbit. The game is described using a graph G(V,E), where V is a set of vertices and E is a set of edges. Both players may use a randomized strategy. The hunter can move from vertex to vertex along edges, whereas the rabbit can move to any vertex in a single step. The hunter's purpose is to catch the rabbit in as few steps as possible; the rabbit, on the other hand, seeks a strategy that maximizes the time until the hunter catches it. If the hunter moves to the vertex the rabbit occupies, the game ends and we say that the hunter catches the rabbit.

The Hunter vs. Rabbit game model is used for analyzing transmission procedures in mobile ad hoc networks [5,6]. The model helps to deliver electronic messages efficiently using mobile phones: the expected time until the hunter catches the rabbit equals the expected time until the recipient receives the message. One of our goals is to improve these procedures.

We introduce some games resembling the Hunter vs. Rabbit game. The first is the Princess vs. Monster game, in which the Monster tries to catch the Princess in a region D. The difference from the Hunter vs. Rabbit game is that the Monster catches the Princess when the distance between the two players becomes smaller than a chosen value; moreover, the Monster moves at a constant speed whereas the Princess may move at any speed. This game, played on a cycle graph, was introduced by Isaacs. The Princess vs. Monster game has been investigated by Alpern, Zelikin, and others. Gal analyzed the Princess vs. Monster game on a convex multidimensional domain.

The next is the deterministic pursuit-evasion game, in which a runaway hides in a dark spot, for example a tunnel. Parsons introduced the search number of a graph [16,17]: the least number of searchers required to catch a runaway that hides in dark spots and moves at any speed. LaPaugh showed that if the runaway is known not to be in edge e at some point of time, then the runaway cannot enter edge e without being caught in the remainder of the game. Megiddo showed that computing the search number of a graph is NP-hard. If an edge can be cleared without moving along it, and it suffices to 'look into' an edge from a vertex, then the minimum number of guards needed to catch the fugitive is called the node search number of the graph. The pursuit-evasion problem in the plane was introduced by Suzuki and Yamashita, who gave necessary and sufficient conditions for a simple polygon to be searchable by a single pursuer. Later, Guibas et al. presented a complete algorithm and showed that determining the minimal number of pursuers needed to clear a polygonal region with holes is NP-hard. Park et al. gave three necessary and sufficient conditions for a polygon to be searchable and showed that there is an O(n^2) time algorithm for constructing a search path for an n-sided polygon. Efrat et al. gave a polynomial-time algorithm for clearing a simple polygon with a chain of k pursuers when the first and last pursuers can only move on the boundary of the polygon.

A first study of the Hunter vs. Rabbit game can be found in . The hunter strategy presented there is based on a random walk on a graph, and it is shown that the hunter catches an unrestricted rabbit within O(nm^2) rounds, where n and m denote the number of nodes and edges, respectively. Adler et al. showed that if the hunter chooses a good strategy, the expected time until the hunter catches the rabbit is at most O(n log(diam(G))), where diam(G) is the diameter of the graph G, and that if the rabbit chooses a good strategy, the expected time until the hunter catches the rabbit is at least Ω(n log(diam(G))). Babichenko et al. showed that Adler's strategies yield a Kakeya set consisting of 4n triangles with minimal area.

In this paper, we propose three assumptions on the strategy of the rabbit and derive a general lower bound formula for the probability that the hunter catches the rabbit. The strategy of the rabbit is formalized using a one-dimensional random walk over $\mathbb{Z}$. We classify strategies using the order $O(k^{\beta-1})$ of their Fourier transform. If β=1, the lower bound of the probability that the hunter catches the rabbit is $((c_{*}\pi)^{-1}\log N + c_{2})^{-1}$, where $c_{2}$ and $c_{*}$ are constants determined by the given strategy. If β∈(1,2], the lower bound of the probability that the hunter catches the rabbit is $c_{4} N^{-(\beta-1)/\beta}$, where $c_{4}>0$ is a constant determined by the given strategy.

We show experimental results for three examples of the rabbit strategy.
1.
$$P\left\{X_{1}=k\right\} =\left\{ \begin{array}{ll} \frac{1}{2a(\vert k\vert+1)(\vert k\vert+2)} &\quad (k\in \mathbb{Z} \setminus\left\{0\right\})\\ 1-\frac{1}{2a} &\quad (k=0) \end{array} \right.$$

2.
$$P\left\{X_{1}=k\right\}=\left\{ \begin{array}{ll} \frac{1}{2a\vert k\vert^{\beta +1}} &\quad (k\in\mathbb{Z}\setminus\left\{0\right\})\\ 1-\frac{1}{a}\sum\limits_{k=1}^{\infty}\frac{1}{k^{\beta +1}} &\quad (k=0) \end{array} \right.$$

3.
$$P\left\{X_{1}=k\right\}=\left\{ \begin{array}{ll} \frac{1}{3} &\quad (k\in\left\{-1,0,1\right\})\\ 0 &\quad (k\not\in\left\{-1,0,1\right\}). \end{array} \right.$$

The simulation results confirm our bound formulas and the asymptotic behavior of those bounds.

Statements of results

We consider the Hunter vs. Rabbit game on a cycle graph. To explain the game, we introduce some notation. Let $X_{1},X_{2},\ldots$ be independent, identically distributed random variables defined on a probability space $(\Omega, \mathcal{F}, P)$ taking values in the integer lattice $\mathbb{Z}$. A one-dimensional random walk $$\{ S_{n} \}_{n=1}^{\infty }$$ is defined by
$$S_{n}= \sum_{j=1}^{n} X_{j}.$$
Let $Y_{1},Y_{2},\ldots$ be independent, identically distributed random variables defined on a probability space $$(\Omega _{\mathcal {H}}, \mathcal{F}_{\mathcal {H}}, P_{\mathcal {H}})$$ taking values in the integer lattice $\mathbb{Z}$ with
$$P_{\mathcal{H}} \{ \vert Y_{1} \vert \leq 1 \} =1.$$
Let $$N \in {\mathbb {N}}$$ be fixed. We denote by $$X_{0}^{(N)}$$ a random variable defined on a probability space $$(\Omega _{N}, {\mathcal F}_{N}, \mu _{N})$$ taking values in V N :={0,1,2,…,N−1} with
$$\mu_{N} \left\{ X_{0}^{(N)}=l \right\} = \frac{1}{N} \quad (l \in V_{N}).$$

For $$b \in {\mathbb {Z}}$$, we denote by (b mod N) the remainder of b divided by N.
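In an implementation, note that the C++ `%` operator may return a negative value for negative b; the following small helper (our own implementation detail, not part of the paper) computes the nonnegative remainder (b mod N) used throughout:

```cpp
#include <cassert>

// Nonnegative remainder (b mod N) for any integer b and N > 0,
// matching the paper's convention; C++ '%' can be negative when b < 0.
long long mod_n(long long b, long long n) {
    long long r = b % n;
    return r < 0 ? r + n : r;
}
```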

A rabbit’s strategy $$\left \{\mathcal {R}_{n}^{(N)} \right \}_{n=0}^{\infty }$$ is defined by
$$\mathcal{R}_{0}^{(N)} = X_{0}^{(N)} \ ~~\text{and} ~~\ \mathcal{R}_{n}^{(N)} =\left(X_{0}^{(N)} + S_{n} \mod N\right).$$
$$\mathcal {R}_{n}^{(N)}$$ indicates the position of the rabbit at time n on V N . Hunter’s strategy $$\left \{ \mathcal {H}_{n}^{(N)} \right \}_{n=0}^{\infty }$$ is defined by
$$\mathcal{H}_{0}^{(N)} =0 \ ~~\text{and}~~ \mathcal{H}_{n}^{(N)} =\left(\sum_{j=1}^{n} Y_{j} \mod N\right).$$
$$\mathcal {H}_{n}^{(N)}$$ indicates the position of the hunter at time n on V N . Put
$$\mathbb{P}_{\mathcal{R}}^{(N)} = \mu_{N} \times P ~~ \text{and}~~ \ \tilde{\mathbb{P} }^{(N)}= P_{\mathcal{H}} \times \mathbb{P}_{\mathcal{R}}^{(N)}.$$

The hunter catches the rabbit when the hunter and the rabbit are located at the same vertex.

We will discuss the probability that the hunter catches the rabbit by time N on V N , that is,
$$\tilde{\mathbb{P} }^{(N)} \left (\bigcup_{n=1}^{N} \left\{ \mathcal{H}_{n}^{(N)} = \mathcal{R}_{n}^{(N)} \right\} \right).$$
We investigate the asymptotic estimate of this probability as $N \to \infty$.

Definition 1.

We define conditions (A1), (A2) and (A3) as follows.
• (A1) The random walk $$\{S_{n} \}_{n=1}^{\infty }$$ is strongly aperiodic, i.e., for each $$y \in \mathbb {Z}$$, the smallest subgroup of $\mathbb{Z}$ containing the set
$$\begin{array}{@{}rcl@{}} \left\{y+k \in {\mathbb{Z}} \ \vert \ P\left\{X_{1} = k \right\} > 0\right\} \end{array}$$
is $\mathbb{Z}$ itself.
• (A2) $$P\left \{X_{1}= k \right \} = P\left \{X_{1}=- k \right \} \quad (k \in {\mathbb {Z}})$$.

• (A3) There exist β∈(0,2], $c_{*}>0$ and ε>0 such that
$$\begin{array}{@{}rcl@{}} {}\phi(\theta) := \sum_{k \in \mathbb{Z}}e^{i\theta k}P\left\{X_{1} = k \right\} = 1 - c_{*} \vert \theta \vert^{\beta} + O\left(\vert \theta \vert^{\beta + \varepsilon}\right). \end{array}$$

We denote the β in (A3) as $$\beta _{\mathcal {R}}$$.

Theorem 1.

Assume that X 1 satisfies (A1)−(A3).
1. (I)
If $$\beta _{\mathcal {R}} \in (0,1)$$, then there exists a constant $c_{1}>0$ such that for $$N \in \mathbb {N} \setminus \{ 1 \}$$ and $$y_{1},y_{2}, \ldots, y_{N} \in {\mathbb {Z}}$$ with $\vert y_{n} - y_{n+1} \vert \leq 1$ (n=1,2,…,N−1),
$$c_{1} \leq \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N} \left\{ \mathcal{R}_{n}^{(N)} = (y_{n} \mod N) \right\} \right).$$
(1)

2. (II)
If $$\beta _{\mathcal {R}}=1$$, then there exist constants $c_{2}>0$ and $c_{3}>0$ such that for $$N \in \mathbb {N} \setminus \{ 1 \}$$ and $$y_{1}, y_{2}, \ldots, y_{N} \in {\mathbb {Z}}$$ with $\vert y_{n} - y_{n+1} \vert \leq 1$ (n=1,2,…,N−1),
$$\begin{array}{@{}rcl@{}} {}\frac{1}{ \frac{1}{ c_{*} \pi} \log N +c_{2}} &\leq & \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = (y_{n} \mod N) \right\} \right)\\ &\leq & \frac{c_{3}}{ \log N}. \end{array}$$
(2)

3. (III)
If $$\beta _{\mathcal {R}} \in (1,2]$$, then there exists a constant $c_{4}>0$ such that for $$N \in \mathbb {N} \setminus \{ 1 \}$$ and $$y_{1}, y_{2}, \ldots, y_{N} \in {\mathbb {Z}}$$ with $\vert y_{n} - y_{n+1} \vert \leq 1$ (n=1,2,…,N−1),
$$\begin{array}{@{}rcl@{}} \frac{c_{4}}{ N^{(\beta-1)/ \beta}} \leq \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = (y_{n} \mod N) \right\} \right). \end{array}$$
(3)

The following bounds are obtained as a corollary of Theorem 1.

Corollary 1.

Assume (A1)−(A3).

If $$\beta _{\mathcal {R}} \in (0,1)$$, then there exists a constant c 1>0 such that for $$N \in \mathbb {N} \setminus \{ 1 \}$$,
$$c_{1} \leq \tilde{\mathbb{P} }^{(N)} \left (\bigcup_{n=1}^{N} \{\mathcal{H}_{n}^{(N)} = \mathcal{R}_{n}^{(N)} \} \right).$$
If $$\beta _{\mathcal {R}}=1$$, then there exist constants c 2>0 and c 3>0 such that for $$N \in \mathbb {N} \setminus \{ 1 \}$$,
$$\begin{array}{@{}rcl@{}} \frac{1}{ \frac{1}{c_{*} \pi} \log N + c_{2}} &\leq & \tilde{\mathbb{P} }^{(N)} \left (\bigcup_{n=1}^{N} \left\{ \mathcal{H}_{n}^{(N)} = \mathcal{R}_{n}^{(N)}\right\} \right)\\ &\leq & \frac{c_{3}}{ \log N}. \end{array}$$
(4)
If $$\beta _{\mathcal {R}} \in (1,2]$$, then there exists a constant c 4>0 such that for $$N \in \mathbb {N} \setminus \{ 1 \}$$,
$$\frac{c_{4}}{ N^{(\beta-1)/ \beta}} \leq \tilde{\mathbb{P} }^{(N)} \left (\bigcup_{n=1}^{N} \left\{ \mathcal{H}_{n}^{(N)} = \mathcal{R}_{n}^{(N)} \right\} \right).$$

Remark 1.

Adler, Räcke, Sivadasan, Sohler and Vöcking considered $$\tilde {\mathbb {P} }^{(N)} \left (\cup _{n=1}^{N} \left \{\mathcal {H}_{n}^{(N)} = \mathcal {R}_{n}^{(N)} \right \}\right)$$ in the case of
$$\begin{array}{@{}rcl@{}} P\left\{X_{1}=k\right\} =\left\{ \begin{array}{ll} \displaystyle\frac{1}{2(\vert k\vert+1)(\vert k\vert+2)} &\quad \left(k \in \mathbb{Z} \setminus \left\{0\right\}\right) \\ \displaystyle \frac{1}{2} &\quad (k=0). \end{array} \right. \end{array}$$
In this case, X 1 satisfies (A1), (A2) and
$$\phi (\theta)= 1 - \frac{\pi }{2} \vert \theta \vert + O(\vert \theta \vert^{3/2})$$
((A3) with β=1), and we have (4) in Corollary 1, which coincides with the result of Lemma 3 in .
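As a consistency check (our computation), the off-origin mass of the distribution above telescopes:
$$\sum_{k \in \mathbb{Z} \setminus \{0\}} \frac{1}{2(\vert k\vert+1)(\vert k\vert+2)} = \sum_{k=1}^{\infty}\frac{1}{(k+1)(k+2)} = \sum_{k=1}^{\infty}\left(\frac{1}{k+1}-\frac{1}{k+2}\right) = \frac{1}{2},$$
so together with $P\{X_{1}=0\}=1/2$ the total mass is indeed 1.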

Remark 2.

For β∈(0,2), let
$$\begin{array}{@{}rcl@{}} P \left\{X_{1}=k\right\}=\left\{ \begin{array}{ll} \displaystyle\frac{1}{2a\vert k\vert^{\beta +1}} &\quad (k\in {\mathbb{Z}} \setminus \left\{0\right\}) \\ \displaystyle 1-\frac{1}{a}\sum_{k=1}^{\infty}\frac{1}{k^{\beta +1}} & \quad (k=0) \end{array} \right. \end{array}$$
with a constant a satisfying $$a > \sum _{k=1}^{\infty } (1/ k^{\beta +1}).$$ Then ϕ(θ) in (A3) is
$$\phi (\theta)= 1 - \frac{\pi }{2a} \frac{\vert \theta \vert^{\beta }}{ \Gamma (\beta +1) \sin (\beta \pi /2)} + O\left(\vert \theta \vert^{\beta +(2- \beta)/2}\right),$$
(5)

where Γ is the gamma function (see Proof of Proposition 2). X 1 satisfies (A1), (A2) and (5).

If X 1 takes three values −1,0,1 with equal probability, then X 1 satisfies (A1), (A2) and
$$\phi (\theta)= 1- \frac{1}{3} \vert \theta \vert^{2} + O(\vert \theta \vert^{4})$$
((A3) with β=2).
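The last expansion can be verified directly (our computation): the characteristic function of the three-point walk is
$$\phi(\theta) = \frac{1}{3}\left(e^{-i\theta} + 1 + e^{i\theta}\right) = \frac{1+2\cos\theta}{3} = 1 - \frac{1}{3}\vert\theta\vert^{2} + \frac{\vert\theta\vert^{4}}{36} - \cdots,$$
using $\cos\theta = 1 - \theta^{2}/2 + \theta^{4}/24 - \cdots$, which gives (A3) with β=2 and $c_{*}=1/3$.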

The inequality (3) seems to be sharp: the power of N appearing in (3) cannot be improved. Indeed, we have the following estimates.

Proposition 1.

Let $$\mathcal {H}_{i}^{(N)} = 0$$ for any i and assume (A1)−(A3). If $$\beta _{\mathcal {R}} \in (1,2]$$, then there exist constants c 5,c 6>0 such that for $$N \in \mathbb {N}$$,
$$\frac{c_{5}}{ N^{(\beta-1)/ \beta}} \leq \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N} \left\{ \mathcal{R}_{n}^{(N)} = 0 \right\} \right) \leq \frac{c_{6}}{ N^{(\beta-1)/ \beta}}.$$
(6)

Proposition 2.

Let $$\mathcal {H}_{i}^{(N)} = i$$ for any i. If X 1 takes three values −1,0,1 with equal probability, then there exists a constant c 7>0 such that for $$N \in \mathbb {N}$$,
$$c_{7} \leq \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N} \left\{ \mathcal{R}_{n}^{(N)} = (n \mod N) \right\} \right).$$
(7)

The proofs of Proposition 1 and Proposition 2 are given in Proof of Proposition 1.

Remark 3.

Assume (A1) and (A2). If there exist c >0 and ε>0 such that
$$\phi(\theta) = 1 - c_{*} \vert \theta \vert + O\left(\vert \theta \vert^{1 + \varepsilon}\right)$$
((A3) with β=1), then
$${\lim}_{N \rightarrow \infty} \left(\frac{1}{ c_{*} \pi } \log N \right) \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N} \left\{ \mathcal{R}_{n}^{(N)} = 0 \right\} \right) =1.$$
(8)

The proof of (8) is given in Proof of (8).

Computer simulation

In this section, we show some experimental results for the Hunter vs. Rabbit game on a cycle graph. We compute P{S_n mod N = k} using the gamma function and the class std::discrete_distribution in C++. With this program we can compute the probability that the rabbit is caught and the expected time until the rabbit is caught.
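As a sketch of how such sampling can be set up (our own illustration, not the authors' code), the step distribution of Example 1 with a=1 can be truncated at a large cutoff K and passed to std::discrete_distribution, which normalizes the weights automatically:

```cpp
#include <random>
#include <vector>
#include <cmath>

// Empirical frequency of X = 0 under the Example 1 step law with a = 1:
// P{X=k} = 1/(2(|k|+1)(|k|+2)) for k != 0, and P{X=0} = 1/2.
// The support is truncated to [-K, K]; discrete_distribution renormalizes,
// so the tiny truncated tail mass (about 1/K) is spread proportionally.
double empirical_p0(int K, int samples, unsigned seed) {
    std::vector<double> w(2 * K + 1);
    for (int k = -K; k <= K; ++k) {
        int ak = (k < 0) ? -k : k;
        w[k + K] = (k == 0) ? 0.5 : 1.0 / (2.0 * (ak + 1.0) * (ak + 2.0));
    }
    std::mt19937 gen(seed);
    std::discrete_distribution<int> dist(w.begin(), w.end());
    int zeros = 0;
    for (int i = 0; i < samples; ++i) {
        int x = dist(gen) - K;  // shift the index back to the offset k
        if (x == 0) ++zeros;
    }
    return static_cast<double>(zeros) / samples;
}
```

With a large sample size the empirical frequency of X=0 should be close to 1/2, matching the distribution.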

In this section, we consider a lower bound L(N,a) of the probability that the hunter catches the rabbit. According to Proposition 3 and Proposition 6, we define L(N,a) as follows:
$$\begin{array}{@{}rcl@{}} L(N,a) = \frac{1}{1 + A_{N} + B_{N} + \frac{1}{1-\rho_{*}}} \end{array}$$
where
$$\begin{array}{@{}rcl@{}} A_{N} = \left\{ \begin{array}{ll} \frac{2^{2+\varepsilon - \beta}\pi^{\varepsilon - \beta}C_{*}}{c_{*}^{2}} &\quad (\beta \in (0,1]),\\ 2N^{(\beta -1)/\beta} &\quad (\beta \in (1,2)) \end{array} \right. \end{array}$$
and
$$\begin{array}{@{}rcl@{}} B_{N} = \left\{ \begin{array}{ll} \frac{2^{1-\beta}}{\pi^{\beta}c_{*}(1-\beta)} &\quad (\beta \in (0,1)),\\ \frac{1}{\pi c_{*}}\log N + \frac{1}{\pi c_{*}} &\quad (\beta = 1),\\ \frac{2^{2-\beta}}{c_{*} \pi}\left(1+\frac{1}{\beta -1}\right)N^{(\beta -1)/\beta} &\quad (\beta \in (1,2)). \end{array} \right. \end{array}$$

We note that β and $c_{*}$ are determined by the given P{X_t = k} in each example. We choose appropriate constants ε, $\rho_{*}$ and $C_{*}$ for each example.

Example 1.

We consider the generalization of the case of . Let
$$\begin{array}{@{}rcl@{}} P\left\{X_{t}=k\right\} =\left\{ \begin{array}{ll} \displaystyle\frac{1}{2a(\vert k\vert+1)(\vert k\vert+2)} &\quad (k\in \mathbb{Z} \setminus\left\{0\right\})\\ \displaystyle 1-\frac{1}{2a} &\quad (k=0) \end{array} \right. \end{array}$$

where $$a \geq \frac {1}{2}$$. We note that β=1, $c_{*}=\pi/(2a)$ and ε=1/2 as in Remark 1. If a=1, then this is the case considered in .

We can determine $C_{*}$ and $\rho_{*}$ in this case, and obtain
$$\begin{array}{@{}rcl@{}} \frac{1}{\sum_{i=0}^{N-1}p_{i}^{(N)}} \geq L(N,1) = \frac{1}{\frac{2}{\pi^{2}}\log N + 7.45574}. \end{array}$$
(9)

The proof of (9) is given in Proof of (9).
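As a quick arithmetic check of the constant in (9) (ours, not from the paper), evaluating the right-hand side at N=100 reproduces the value 1/L(100,1) quoted below:

```cpp
#include <cmath>

// Evaluate 1/L(N,1) = (2/pi^2) log N + 7.45574 from (9).
double inv_L_example1(double N) {
    const double pi = std::acos(-1.0);
    return (2.0 / (pi * pi)) * std::log(N) + 7.45574;
}
```

At N=100 this gives approximately 8.38898, matching the value 8.38894 reported for 1/L(100,1) up to rounding.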

Figure 1 shows an experimental result of the probabilities for all initial positions of the rabbit with N=100 and a=1. The horizontal axis is the initial position of the rabbit, and the vertical axis shows the probability that the rabbit is caught. The red line in the figure is the probability that the hunter catches the rabbit for each initial position, the blue line is the average of these probabilities, and the green line is L(N,a). In this case, the hunter does not move from its initial position 0. As the figure shows, the average probability that the hunter catches the rabbit is bounded below by L(N,a).

Figure 1: An experimental result of Example 1 with a=1. The hunter does not move from its initial position 0.
In this case, the average over the initial positions of the rabbit of the probability that the hunter catches the rabbit is approximately 0.4258, so we have
$$\begin{array}{@{}rcl@{}} \frac{1}{L(100,1)} \fallingdotseq 8.38894, \end{array}$$
and
$$\begin{array}{@{}rcl@{}} \frac{1}{L(100,1)} \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = 0 \right\} \right) \fallingdotseq 3.57201. \end{array}$$
Table 1 shows the experimental results of Example 1 with a=1 and N=100, 500, and 1000. The table illustrates the asymptotic behavior (8).
Table 1
Experimental results of Example 1 with a=1 and N=100, 500, 1000

  N      1/L(N,a)    A         A/L(N,a)
  100    8.38894     0.4258    3.57201
  500    8.71508     0.39048   3.40307
  1000   8.85554     0.37555   3.3257

A is the average of the probability that the hunter catches the rabbit.

Example 2.

We consider the case of β∈(0,2). We put
$$\begin{array}{@{}rcl@{}} P\left\{X_{t}=k\right\}=\left\{ \begin{array}{ll} \displaystyle\frac{1}{2a\vert k\vert^{\beta +1}} &\quad (k\in\mathbb{Z}\setminus\left\{0\right\})\\ \displaystyle 1-\frac{1}{a}\sum_{k=1}^{\infty}\frac{1}{k^{\beta +1}} &\quad (k=0) \end{array} \right. \end{array}$$
where $$a > \sum _{k=1}^{\infty }\frac {1}{k^{\beta +1}}$$. By Remark 2, $$c_{*} = \frac {\pi }{2a \Gamma (\beta +1) \sin (\beta \pi /2)}$$ and $$\varepsilon = \frac {2- \beta }{2}$$. Then the lower bound L(N,a) of the probability that the hunter catches the rabbit is
$$\begin{array}{@{}rcl@{}} & L(N,a)\\ &{\kern-7.5pt}= \left\{ \begin{array}{l} \displaystyle\frac{1}{1 + \frac{2^{1-\beta}}{(1-\beta)\pi^{\beta}c_{*}} + 2^{4-3\beta /2}\pi^{1-3\beta /2}c_{*}^{-1}C_{*} + (1-\rho_{*})^{-1}}\\ \hspace{5cm} (\beta \in (0,1))\\ \displaystyle\frac{1}{1 + \frac{1+\log N}{\pi c_{*}}+2^{7/2}\pi^{-1/2}c_{*}^{-1}C_{*} + (1-\rho_{*})^{-1} }\\ \hspace{5cm} (\beta =1)\\ \displaystyle\frac{1}{1 + 2N^{(\beta-1)/\beta} + \frac{2^{2-\beta}\left(1+(\beta-1)^{-1} \right)N^{(\beta-1)/\beta}}{c_{*}\pi^{\beta}}+(1\,-\,\rho_{*})^{-1}}\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad(\beta \in (1,2)) \end{array} \right. \end{array}$$
where $\rho_{*}$ and $C_{*}$ are appropriate constants for each example. When a=2.5 and β=1, we set $$C_{*} \fallingdotseq 0.177245$$ and $$\rho _{*} \fallingdotseq 0.694811$$. So we have
$$\begin{array}{@{}rcl@{}} L(N,2.5) = \frac{1}{\frac{5}{\pi^{2}}\log N + 4.65936}. \end{array}$$
Figure 2 shows an experimental result with β=1, N=100 and a=2.5. In this case, the average probability that the hunter catches the rabbit is approximately 0.318, so we have
$$\begin{array}{@{}rcl@{}} \frac{1}{L(100,2.5)} \fallingdotseq 6.99237, \end{array}$$
and
$$\begin{array}{@{}rcl@{}} \frac{1}{L(100,2.5)} \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = 0 \right\} \right) \fallingdotseq 2.22357. \end{array}$$

Figure 2: An experimental result of Example 2 with a=2.5. The hunter does not move from its initial position 0.
Table 2 shows the experimental results of Example 2 with β=1, a=2.5 and N=100, 500, and 1000. The table shows that the value of A/L(N,a) (>1) is decreasing.
Table 2
Experimental results of Example 2 with β=1, a=2.5 and N=100, 500, 1000

  N      1/L(N,a)    A         A/L(N,a)
  100    6.99237     0.318     2.22357
  500    7.80772     0.25924   2.02407
  1000   8.15887     0.24015   1.95935

A is the average of the probability that the hunter catches the rabbit.

Example 3.

We put
$$\begin{array}{@{}rcl@{}} P\left\{X_{t}=k\right\}=\left\{ \begin{array}{ll} \displaystyle\frac{1}{3} &\quad (k\in\left\{-1,0,1\right\})\\ \displaystyle 0 &\quad (k\not\in\left\{-1,0,1\right\}). \end{array} \right. \end{array}$$
By Remark 2, β=2, $$c_{*} = \frac {1}{3}$$ and ε=2. In this case, the lower bound $L'(N)$ of the probability that the hunter catches the rabbit is
$$\begin{array}{@{}rcl@{}} L'(N) = \frac{1}{\left(1+\frac{6}{\pi^{2}}\right)N^{1/2} + 4.26301}. \end{array}$$
(This can be proved in the same way as the proof of (9).) Figure 3 shows an experimental result of Example 3. The green line in Fig. 3 is $L'(N)$.

Figure 3: An experimental result of Example 3. The hunter does not move from its initial position 0.

We thus obtain concrete lower bounds for the average probability that the hunter catches the rabbit in these examples.

Upper bounds and lower bounds

In this section, we give a relation between
$$\mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = (y_{n} \mod N) \right\} \right)$$
and one-dimensional random walk $$\{ S_{n} \}_{n=1}^{\infty }$$.

Proposition 3.

For $$N \in {\mathbb {N}} \setminus \{ 1 \}$$ and $$y_{1},y_{2}, \ldots, y_{N} \in {\mathbb {Z}}$$ with $\vert y_{n} - y_{n+1} \vert \leq 1$ (n=1,2,…,N−1),
$$\begin{array}{@{}rcl@{}} \frac{1}{ \sum_{i=0}^{N-1} p_{i}^{(N)}} &\leq & \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = (y_{n} \mod N) \right\} \right)\\ &\leq & \frac{2}{ \sum_{i=0}^{N-1} q_{i}^{(N)} }, \end{array}$$
(10)
where
$$[y]_{N}= \left\{ y+kN \ | \ k \in {\mathbb{Z}} \right\},$$
$$\begin{array}{@{}rcl@{}} p_{i}^{(N)} = \left\{ \begin{array}{ll} 1 &\quad (i=0) \\ \displaystyle \max_{\vert y \vert \leq i, \ y \in {\mathbb{Z}}} P\left\{S_{i} \in [y]_{N} \right\} &\quad (i \in \mathbb{N}) \\ \end{array} \right. \end{array}$$
and
$$\begin{array}{@{}rcl@{}} q_{i}^{(N)} = \left\{ \begin{array}{ll} 1 &\quad (i=0) \\ \displaystyle \min_{\vert y \vert \leq i, \ y \in {\mathbb{Z}}} P\left\{S_{i} \in [y]_{N} \right\} &\quad (i \in \mathbb{N}). \end{array} \right. \end{array}$$

Proof.

We note that
$$\begin{array}{@{}rcl@{}} &&\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = (y_{n} \mod N) \right\}\\ &&= \bigcup_{l=0}^{N-1} \bigcup_{n=1}^{N} \left\{ X_{0}^{(N)}=l, \ l+S_{n} \in [y_{n}]_{N} \right\}\\ &&= \bigcup_{l=0}^{N-1} \bigcup_{n=1}^{N} \left\{ \begin{array}{ll} X_{0}^{(N)}=l, & l+S_{n} \in [y_{n}]_{N},\\ l+S_{i} \notin [y_{i}]_{N}, &1 \leq i \leq n-1 \end{array} \right\} \end{array}$$
by the definition of $$\left \{ \mathcal {R}_{n}^{(N)} \right \}_{n=0}^{\infty }$$. By $$\mathbb {P}_{\mathcal {R}}^{(N)} = \mu _{N} \times P$$, the above relation implies
$$\begin{array}{@{}rcl@{}} &&\mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = (y_{n} \mod N) \right\} \right)\\ &&= \sum_{l=0}^{N-1} \sum_{n=1}^{N} \frac{1}{N} P \left\{ \begin{array}{ll} l+S_{i} \notin [y_{i}]_{N}, & 1 \leq i \leq n-1, \\ l+S_{n} \in [y_{n}]_{N} & \\ \end{array} \right\}.\\ \end{array}$$
(11)
For l∈{0,1,…,N−1} and n∈{2,3,…,N}, we decompose the event {l+S n ∈[y n ] N } according to the value of the first hitting time for [y 1] N ,[y 2] N ,…,[y n ] N and the hitting place to obtain
$$\begin{array}{@{}rcl@{}} &&P \{ l+ S_{n} \in [y_{n}]_{N} \}\\ &&\quad = \sum_{j=1}^{n} \sum_{m \in {\mathbb{Z}}} P \left\{ \begin{array}{ll} l+ S_{i} \notin [y_{i}]_{N}, & \ 1 \leq i \leq j-1, \\ \ l+ S_{j}= y_{j}+mN, & \\ y_{j}+mN+ X_{j+1} + & \cdots + X_{n} \in [y_{n}]_{N} \\ \end{array} \right\}\!. \end{array}$$
The probability in the double summation on the right-hand side above is equal to
$$\begin{array}{@{}rcl@{}} &&P \left\{ \begin{array}{ll} l+ S_{i} \notin [y_{i}]_{N}, & \ 1 \leq i \leq j-1, \\ \ l+ S_{j}= y_{j}+mN, & \\ \end{array} \right\}\\ &&\quad\times P \left\{ y_{j}+mN+S_{n-j} \in [y_{n}]_{N} \right\} \end{array}$$
by the Markov property. It is easy to verify that for any $$m \in {\mathbb {Z}}$$,
$$\begin{array}{@{}rcl@{}} &&P \left\{ y_{j}+mN+S_{n-j} \in [y_{n}]_{N} \right\}\\ &&= P \left\{ S_{n-j} \in [y_{n}-y_{j}]_{N} \right\} \leq p_{n-j}^{(N)} \end{array}$$
by $\vert y_{n} - y_{j} \vert \leq n-j$. Therefore
$$\begin{array}{@{}rcl@{}} &&P \left\{l+ S_{n} \in [y_{n}]_{N} \right\}\\ &&\leq \sum_{j=1}^{n} P \left\{ \begin{array}{ll} l+ S_{i} \notin [y_{i}]_{N}, & \ 1 \leq i \leq j-1, \\ \ l+ S_{j} \in [ y_{j}]_{N} & \\ \end{array} \right\} p_{n-j}^{(N)},\\ \end{array}$$
(12)
for l∈{0,1,…,N−1} and n∈{1,2,…,N}. By multiplying (12) by 1/N and summing (l,n) over {0,1,…,N−1}×{1,2,…,N}, we have
$$\begin{array}{@{}rcl@{}} &&{}\sum_{l=0}^{N-1} \sum_{n=1}^{N} \frac{1}{N} P \left\{ l+ S_{n} \in [y_{n}]_{N} \right\}\\ &&{}\leq \sum_{l=0}^{N-1} \sum_{j=1}^{N} \frac{1}{N} P \left\{ \begin{array}{ll} l+ S_{i} \notin [y_{i}]_{N}, & \ 1 \leq i \leq j-1, \\ l+ S_{j} \in [ y_{j}]_{N} & \\ \end{array} \right\}\\ &&{}\quad\times \left(\sum_{i=0}^{N-j} p_{i}^{(N)} \right)\\ &&{}\leq \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = (y_{n} \mod N) \right\} \right)\left(\sum_{i=0}^{N-1} p_{i}^{(N)} \right)\!\!. \end{array}$$
(13)

Here we used (11).

By $$\sum _{l=0}^{N-1} P \{ l+S_{n} \in [y]_{N} \}= P \{ S_{n} \in {\mathbb {Z}} \} = 1 \ (n \in {\mathbb {N}}, y \in {\mathbb {Z}})$$,
$$\sum_{l=0}^{N-1} \sum_{n=1}^{N} \frac{1}{N} P \{ l+ S_{n} \in [y_{n}]_{N} \} =1.$$
(14)
(13) and (14) imply
$$1 \leq \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = (y_{n} \mod N) \right\} \right)\left(\sum_{i=0}^{N-1} p_{i}^{(N)} \right)$$
(15)

that is the first inequality in (10).

For the last inequality in (10), let y N+j =y N (j=1,2,…,N). The same argument as showing (15) (we use $$q_{i}^{(N)}$$ instead of $$p_{i}^{(N)}$$) gives
$$\begin{array}{@{}rcl@{}} 2 &=& \sum_{l=0}^{N-1} \sum_{n=1}^{2N} \frac{1}{N} P \{ l+ S_{n} \in [y_{n}]_{N} \}\\ &\geq & \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = (y_{n} \mod N) \right\} \right)\left(\sum_{i=0}^{N-1} q_{i}^{(N)} \right). \end{array}$$

Corollary 2.

For $$N \in {\mathbb {N}} \setminus \{ 1 \}$$,
$$\begin{array}{@{}rcl@{}} &&\frac{1}{ 1+ \sum_{i=1}^{N-1} P \{ S_{i} \in [0]_{N} \}} \leq \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = 0 \right\} \right)\\ &&\leq \frac{2}{ 1+ \sum_{i=1}^{N-1} P \{ S_{i} \in [0]_{N} \}}. \end{array}$$
(16)

Proof.

Put y 1=y 2=⋯=y 2N =0 in the proof of Proposition 3. Then the same argument as showing (10) gives (16).
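Corollary 2 can be checked numerically for the three-point walk of Example 3 (a sketch of our own; the paper's simulations are more general). We compute the distribution of S_i mod N by cyclic convolution, the catch probability $\mathbb{P}_{\mathcal{R}}^{(N)}(\bigcup_{n=1}^{N}\{\mathcal{R}_{n}^{(N)}=0\})$ by an absorbing dynamic program over the rabbit's position, and then verify the two-sided bound (16):

```cpp
#include <vector>

// One step of the walk mod N with X uniform on {-1, 0, 1}:
// cyclic convolution of the current distribution with (1/3, 1/3, 1/3).
std::vector<double> step_uniform3(const std::vector<double>& d) {
    int N = static_cast<int>(d.size());
    std::vector<double> e(N, 0.0);
    for (int j = 0; j < N; ++j)
        e[j] = (d[(j + 1) % N] + d[j] + d[(j + N - 1) % N]) / 3.0;
    return e;
}

// 1 + sum_{i=1}^{N-1} P{S_i in [0]_N}, the denominator in (16).
double sum_p0(int N) {
    std::vector<double> d(N, 0.0);
    d[0] = 1.0;           // S_0 = 0
    double s = 1.0;       // i = 0 term
    for (int i = 1; i <= N - 1; ++i) {
        d = step_uniform3(d);
        s += d[0];
    }
    return s;
}

// P_R^{(N)}( union_{n=1}^{N} {R_n = 0} ): the rabbit starts uniformly on
// V_N, the hunter stays at 0; absorb the mass reaching 0 at steps 1..N.
double catch_prob(int N) {
    std::vector<double> d(N, 1.0 / N);  // not-yet-caught mass
    double caught = 0.0;
    for (int n = 1; n <= N; ++n) {
        d = step_uniform3(d);
        caught += d[0];
        d[0] = 0.0;
    }
    return caught;
}
```

For instance, with N=10 one finds 1/sum_p0(10) ≤ catch_prob(10) ≤ 2/sum_p0(10), as (16) requires.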

Corollary 3.

For $$N \in {\mathbb {N}} \setminus \{ 1 \}$$,
$$\begin{array}{@{}rcl@{}} &&\frac{1}{ 1+ \sum_{i=1}^{N-1} P \{ S_{i} \in [i]_{N} \} }\\ &&\leq \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = (n \mod N) \right\} \right)\\ &&\leq \frac{2}{ 1+ \sum_{i=1}^{N-1} P \{ S_{i} \in [i]_{N} \} }. \end{array}$$
(17)

Proof.

Put y j =j (j=1,2,…,2N) in the proof of Proposition 3. Then the same argument as showing (10) gives (17).

Remark 4.

By the same argument as showing (16), we obtain that for $$\tilde {\epsilon } >0$$ and $$N \geq 1/ \tilde {\epsilon }$$,
$$\mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = 0 \right\} \right) \leq \frac{1+ \tilde{\epsilon} }{ 1+ \sum_{i=1}^{\tilde{\epsilon} N} P \{ S_{i} \in [0]_{N} \}}.$$

Fourier transform

In this section, we introduce some results concerning one-dimensional random walk.

Proposition 4.

If a one-dimensional random walk satisfies (A1) and (A3), then there exist C 1>0 and $$N_{1} \in {\mathbb {N}}$$ such that for nN 1,
$$\begin{array}{@{}rcl@{}} &&{}\sup_{l \in {\mathbb{Z}}} \left\vert n^{1/ \beta} P \{ S_{n}=l \} - \frac{1}{2 \pi} \int_{- \infty}^{+ \infty} e^{-c_{*} \vert x \vert^{\beta} } \exp \left(-i \frac{ xl }{n^{1/ \beta }} \right) dx \right\vert\\ &&{}\leq C_{1} n^{- \delta }, \end{array}$$

where δ= min{ε/(2β),1/2}.

Proof.

Proposition 4 can be proved by the same procedure as in Theorem 1.2.1 of .

The Fourier inversion formula for $\phi^{n}(\theta)$ is
$$n^{1/ \beta} P \{ S_{n}=l \}= \frac{n^{1/ \beta}}{2 \pi} \int_{- \pi }^{\pi } \phi^{n}(\theta) e^{-i \theta l} \ d \theta.$$
(18)
By (A3), there exist C >0 and r∈(0,π) such that for |θ|<r,
$$\vert \phi (\theta) -\left(1- c_{*} \vert \theta \vert^{\beta} \right) \vert \leq C_{*} \vert \theta \vert^{\beta + \varepsilon }$$
(19)
and
$$\vert \phi (\theta) \vert \leq 1- \frac{c_{*}}{2} \vert \theta \vert^{\beta}.$$
(20)
With r, we decompose the right-hand side of (18) to obtain
$$n^{1/ \beta} P \{ S_{n}=l \} = I(n,l)+ J(n,l),$$
where
$$\begin{array}{@{}rcl@{}} && I(n,l)= \frac{n^{1/ \beta}}{2 \pi} \int_{\vert \theta \vert <r} \phi^{n}(\theta) e^{-i \theta l} \ d \theta, \\ && J(n,l)= \frac{n^{1/ \beta}}{2 \pi} \int_{r \leq \vert \theta \vert \leq \pi} \phi^{n}(\theta) e^{-i \theta l} \ d \theta. \end{array}$$
A strongly aperiodic random walk (A1) has the property that |ϕ(θ)|=1 only when θ is a multiple of 2π (see § 7 Proposition 8 of ). By the definition of ϕ(θ), |ϕ(θ)| is a continuous function on the bounded closed set [−π,−r]∪[r,π], and |ϕ(θ)|≤1 (θ∈[−π,π]). Hence, there exists a ρ<1, depending on r∈(0,π], such that
$$\max_{r \leq \vert \theta \vert \leq \pi } \vert \phi (\theta) \vert \leq \rho.$$
(21)
By using the above inequality,
$$\vert J(n,l) \vert \leq \frac{n^{1/ \beta }}{ 2 \pi} \int_{r \leq \vert \theta \vert \leq \pi } \vert \phi (\theta)\vert^{n} \ d \theta \leq n^{1/ \beta } \rho^{n}.$$
We perform the change of variables θ=x/n 1/β , so that
$$I(n,l)= \frac{1}{2 \pi} \int_{\vert x \vert <r n^{1/ \beta }} \phi^{n} \left(\frac{x}{ n^{1/ \beta }} \right) \exp \left(-i \frac{ xl }{n^{1/ \beta }} \right) \ dx.$$
Put
$$\gamma = \min \left\{ \frac{\varepsilon } { 2 \beta (\beta + \varepsilon +1)}, \ \frac{1}{ 2(2\beta +1)} \right\}.$$
We decompose I(n,l) as follows:
$$\begin{array}{@{}rcl@{}} I(n,l)&=& \frac{1}{2 \pi} \int_{- \infty}^{+ \infty} e^{-c_{*} \vert x \vert^{\beta} } \exp \left(-i \frac{ xl }{n^{1/ \beta }} \right) \ dx\\ && + I_{1}(n,l)+I_{2}(n,l)+I_{3}(n,l), \end{array}$$
where
$$I_{1}(n,l)= \frac{1}{2 \pi} \int_{\vert x \vert \leq n^{\gamma} } \left \{ \phi^{n} \left(\frac{x}{ n^{1/ \beta }} \right) - e^{-c_{*} \vert x \vert^{\beta} } \right\} \exp \left(-i \frac{ xl }{n^{1/ \beta }} \right) \ dx,$$
$$I_{2}(n,l)= - \frac{1}{2 \pi} \int_{n^{\gamma} < \vert x \vert} e^{-c_{*} \vert x \vert^{\beta} } \exp \left(-i \frac{ xl }{n^{1/ \beta }} \right) \ dx$$
and
$${}I_{3}(n,l)\,=\, \frac{1}{2 \pi } \int_{n^{\gamma} < \vert x \vert <r n^{1/ \beta }} \phi^{n} \left(\! \frac{x}{ n^{1/ \beta }} \!\right) \exp \left(-i \frac{ xl }{n^{1/ \beta }} \right) \ dx.$$
Therefore,
$$\left\vert n^{1/ \beta} P \{ S_{n}=l \} - \frac{1}{2 \pi} \int_{- \infty}^{\infty} e^{-c_{*} \vert x \vert^{\beta} } \exp \left(-i \frac{ xl }{n^{1/ \beta }} \right) \ dx \right\vert \leq \vert J(n,l) \vert + \sum_{k=1}^{3} \vert I_{k}(n,l) \vert.$$

The proof of Proposition 4 will be complete if we show that each term on the right-hand side of the above inequality is bounded by a constant (independent of l) multiple of n^{−δ}.

If n is large enough, then the bound |J(n,l)| ≤ n^{1/β}ρ^{n}, which has already been shown above, yields
$$\vert J(n,l) \vert \leq n^{- \delta}.$$
With the help of
$$\begin{array}{@{}rcl@{}} \vert a^{n}-b^{n} \vert &=& \vert a-b \vert \left\vert \sum_{j=0}^{n-1} a^{n-1-j}b^{j}\right\vert\\ &\leq & n \vert a-b \vert \quad (a,b \in [ -1,1]) \end{array}$$
(22)
and |ϕ(θ)|≤1 for θ∈[−π,π], (19) implies that for |x| < rn^{1/β},
$$\begin{array}{@{}rcl@{}} &&\left\vert \phi^{n} \left(\frac{x}{ n^{1/ \beta }} \right) - e^{-c_{*} \vert x \vert^{\beta} } \right\vert \leq n \left\vert \phi \left(\frac{x}{ n^{1/ \beta }} \right) - e^{-c_{*} \vert x \vert^{\beta} /n} \right\vert\\ &&\leq n \left\vert \phi \left(\frac{x}{ n^{1/ \beta }} \right) - \left(1-c_{*} \frac{\vert x \vert^{\beta}}{ n} \right) \right\vert\\ &&\hspace{1cm} + n \left\vert \left(1-c_{*} \frac{\vert x \vert^{\beta}}{ n} \right) - e^{-c_{*} \vert x \vert^{\beta} /n} \right\vert\\ &&\leq C_{*} \vert x \vert^{\beta + \varepsilon}n^{- \varepsilon / \beta} + \frac{c_{*}^{2}}{2} \vert x \vert^{2 \beta} n^{-1}. \end{array}$$
Thus
$$\begin{array}{@{}rcl@{}} \vert I_{1}(n,l) \vert &\leq & \frac{1}{2 \pi} \int_{\vert x \vert \leq n^{\gamma} }\left\vert \phi^{n} \left(\frac{x}{ n^{1/ \beta }} \right) - e^{-c_{*} \vert x \vert^{\beta} } \right\vert \ dx\\ &\leq & \frac{1}{ \pi}\left(\frac{C_{*}}{ \beta + \varepsilon +1} + \frac{c_{*}^{2}}{2(2 \beta +1)}\right) n^{- \delta}. \end{array}$$
It is easy to verify that for |x| < rn^{1/β},
$$\left\vert \phi^{n} \left(\frac{x}{n^{1/ \beta}} \right) \right\vert \leq \left(1 -\frac{c_{*}}{2} \frac{\vert x \vert^{\beta }}{n} \right)^{n} \leq e^{-c_{*} \vert x \vert^{\beta} /2}$$
by (20), and we obtain that
$$\begin{array}{@{}rcl@{}} \vert I_{3}(n,l) \vert &\leq & \frac{1}{ 2 \pi} \int_{n^{\gamma} < \vert x \vert < r n^{1/ \beta }} \left\vert \phi^{n} \left(\frac{x}{n^{1/ \beta}} \right) \right\vert \ dx\\ &\leq & \frac{1}{ 2 \pi} \int_{n^{\gamma} < \vert x \vert} e^{-c_{*} \vert x \vert^{\beta} /2} \ dx. \end{array}$$
(23)
Moreover, if n is large enough, then
$$e^{-c_{*} \vert x \vert^{\beta }/2} \leq \frac{2^{s}}{ c_{*}^{s}} \vert x \vert^{-s \beta} \quad (\vert x \vert > n^{\gamma }),$$
where s=(1/β)(1+1/(2γ)). By replacing the integrand on the right-hand side of the last inequality of (23) with the right-hand side of the above inequality, we obtain
$$\vert I_{3}(n,l) \vert \leq \frac{2^{s+1} \gamma}{ \pi c_{*}^{s}}n^{- 1/2 } \leq \frac{2^{s+1} \gamma}{ \pi c_{*}^{s}} n^{- \delta}.$$
(24)
The same argument as for (24) gives
$$\vert I_{2}(n,l) \vert \leq \frac{1}{ 2 \pi} \int_{n^{\gamma} \leq \vert x \vert} e^{-c_{*} \vert x \vert^{\beta}} \ dx \leq \frac{2^{s+1} \gamma}{ \pi c_{*}^{s}}n^{- \delta}.$$
Let
$$I_{0}(n,l: \beta,c_{*})= \frac{1}{2 \pi} \int_{- \infty }^{+ \infty } e^{-c_{*} \vert x \vert^{\beta} } \exp \left(-i \frac{ xl }{n^{1/ \beta }} \right) \ dx$$
appearing in Proposition 4.

Remark 5.

When a one-dimensional random walk is strongly aperiodic (A1) with E[X_1]=0 and E[|X_1|^{2+ε}]<∞ for some ε∈(0,1), it can be verified that
$$\phi (\theta)= 1 - \frac{E[{X_{1}^{2}}]}{2} \vert \theta \vert^{2} + O\left(\vert \theta \vert^{2+ \varepsilon }\right).$$

In this case, $$I_{0}(n,l:2,E\left [{X_{1}^{2}}\right ]/2)$$ can be computed and Proposition 4 gives the following.

(Local Central Limit Theorem) There exist $$\tilde {C}_{1}>0$$ and $$\tilde {N}_{1} \in {\mathbb {N}}$$ such that for $$n \geq \tilde {N}_{1}$$,
$$\sup_{l \in {\mathbb{Z}}} \left\vert n^{1/2} P \{ S_{n}=l \} - \frac{1}{ \sqrt{2 E[{X_{1}^{2}}] \pi}} \exp \left(- \frac{l^{2}}{2 E[{X_{1}^{2}}]n} \right) \right\vert$$
$$\leq \tilde{C}_{1} n^{- \delta }, \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad$$
(25)

where δ= min{ε/4,1/2}. (See the Remark after Proposition 7.9 in [18].)
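The local central limit theorem (25) can be checked numerically for a concrete strongly aperiodic walk. The sketch below is our illustration (not part of the paper's experiments): it takes X_1 uniform on {−1,0,1}, so E[X_1]=0 and E[X_1^2]=2/3, computes the exact law of S_n by repeated convolution, and evaluates the supremum on the left-hand side of (25).

```python
import numpy as np

# X_1 uniform on {-1, 0, 1}: strongly aperiodic, E[X_1] = 0, E[X_1^2] = 2/3
step = np.full(3, 1.0 / 3.0)

def exact_pmf(n):
    """Exact law of S_n on {-n, ..., n} via repeated convolution."""
    pmf = np.array([1.0])
    for _ in range(n):
        pmf = np.convolve(pmf, step)
    return pmf

n, sigma2 = 100, 2.0 / 3.0
pmf = exact_pmf(n)                      # index k corresponds to l = k - n
l = np.arange(-n, n + 1)
gauss = np.exp(-l**2 / (2.0 * sigma2 * n)) / np.sqrt(2.0 * sigma2 * np.pi)
sup_err = np.max(np.abs(np.sqrt(n) * pmf - gauss))
print(sup_err)  # small, consistent with the O(n^{-delta}) bound in (25)
```

The supremum is taken over the support of S_n; outside the support both terms in (25) are negligible.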

It is easy to see
$$I_{0}(n,l: 1,c_{*})= \frac{1}{ \pi} \frac{c_{*}}{ c_{*}^{2}+ (l/n)^{2}} \quad (n \in {\mathbb{N}}, l \in {\mathbb{Z}}, c_{*} >0)$$
and we have the following corollary of Proposition 4.

Corollary 4.

If a one-dimensional random walk satisfies (A1) and (A3) with β=1, then there exist C_2>0 and $$N_{2} \in {\mathbb {N}}$$ such that for n ≥ N_2,
$$\sup_{l \in {\mathbb{Z}}} \left\vert n P \{ S_{n}=l \} - \frac{1}{ \pi} \frac{c_{*}}{ c_{*}^{2}+ (l/n)^{2}} \right\vert \leq C_{2} n^{- \delta },$$
where δ= min{ε/2,1/2}.

We perform the change of variables t = c_* x^{β}, so that
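The closed form for I_0(n,l:1,c_*) used in Corollary 4 is the Fourier transform of the two-sided exponential e^{−c_*|x|}, and it can be sanity-checked by direct quadrature. The parameter values below (c_*=1, n=10, l=3) are arbitrary illustrative choices of ours.

```python
import numpy as np

def I0(n, l, beta, c):
    """I_0(n, l : beta, c) by direct quadrature; the integrand is even in x,
    so the complex exponential reduces to a cosine on [0, infinity)."""
    x = np.linspace(0.0, 80.0, 800001)   # e^{-c*80} is negligible for c = 1
    f = np.exp(-c * x**beta) * np.cos(x * l / n ** (1.0 / beta))
    dx = x[1] - x[0]
    return (dx * (f.sum() - 0.5 * (f[0] + f[-1]))) / np.pi  # trapezoid rule

n, l, c = 10, 3, 1.0                      # arbitrary illustrative values
closed_form = (1.0 / np.pi) * c / (c**2 + (l / n) ** 2)
numeric = I0(n, l, 1.0, c)
print(abs(numeric - closed_form))         # agrees to quadrature accuracy
```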
$$I_{0}(n,0: \beta,c_{*}) = \frac{1}{\pi} \int_{0}^{+ \infty } e^{- c_{*} x^{\beta }} \ dx = \frac{1}{ \beta c_{*}^{1 / \beta} \pi} \Gamma \left(\frac{1}{ \beta} \right).$$

With the help of the above calculation, Proposition 4 gives the following corollary.

Corollary 5.

If a one-dimensional random walk satisfies (A1) and (A3), then there exist C_3>0 and $$N_{3} \in {\mathbb {N}}$$ such that for n ≥ N_3,
$$\left\vert n^{1 / \beta} P \{ S_{n}=0 \} - \frac{1}{ \beta c_{*}^{1 / \beta} \pi} \Gamma \left(\frac{1}{ \beta} \right) \right\vert \leq C_{3} n^{- \delta },$$
where δ= min{ε/2β,1/2}.
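The Γ-function identity behind Corollary 5 follows from the substitution t = c_* x^β and is easy to confirm numerically; the values β=1.5 and c_*=0.7 below are illustrative choices of ours.

```python
import math
import numpy as np

beta, c = 1.5, 0.7                        # illustrative parameters, beta in (0, 2]

# left-hand side: trapezoid quadrature of int_0^infty e^{-c x^beta} dx
x = np.linspace(0.0, 40.0, 400001)        # tail beyond 40 is negligible here
f = np.exp(-c * x**beta)
dx = x[1] - x[0]
lhs = dx * (f.sum() - 0.5 * (f[0] + f[-1]))

# right-hand side: Gamma(1/beta) / (beta * c^{1/beta})
rhs = math.gamma(1.0 / beta) / (beta * c ** (1.0 / beta))
print(lhs, rhs)  # the two values agree
```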

Proposition 5.

If a one-dimensional random walk satisfies (A2), then for $$l \in {\mathbb {Z}}$$ and $$n \in \{ 0 \} \cup {\mathbb {N}}$$,
$$\begin{array}{@{}rcl@{}} &&P \left\{ S_{n} \in [l]_{N} \right\}\\ &&= \frac{1}{N}+ \frac{2}{N} \sum_{1 \leq j \leq(N-1)/2 } \phi^{n}\left(\frac{2j \pi}{N} \right) \cos \left(\frac{2j \pi}{N} l \right) + J_{N}(n,l),\\ \end{array}$$
(26)
where
$$\begin{array}{@{}rcl@{}} J_{N}(n,l)= \left\{ \begin{array}{ll} (1/N) \phi^{n}(\pi)\cos(\pi l) & \quad (\text{if}~~~ N~~~ \text{is even}) \\ 0 & \quad (\text{if} ~~~N~~~ \text{is odd}).\\ \end{array} \right. \end{array}$$

Proof.

By the definition of ϕ(θ),
$$\begin{array}{@{}rcl@{}} \phi^{n}(\theta) = \sum_{k \in \mathbb{Z}}e^{i\theta k}P\left\{S_{n} = k\right\}. \end{array}$$
Thus
$$\begin{array}{@{}rcl@{}} &&\phi^{n}\left(\frac{2j\pi}{N} \right) = \sum_{k \in \mathbb{Z}}e^{2ij\pi k/N}P\left\{S_{n} = k\right\}\\ &&= \sum_{\tilde{l} = 0}^{N-1}\sum_{m \in \mathbb{Z}}e^{2ij\pi (\tilde{l}+mN)/N}P\left\{S_{n} = \tilde{l}+mN\right\}\\ &&= \sum_{\tilde{l} = 0}^{N-1}e^{2ij\pi \tilde{l}/N}P\left\{S_{n} \in [\tilde{l}]_{N}\right\}. \end{array}$$
Then,
$$\begin{array}{@{}rcl@{}} &&{}\sum_{j=0}^{N-1}e^{-2ij\pi l/N}\phi^{n}\left(\!\frac{2j\pi}{N}\!\right) = \sum_{\tilde{l} = 0}^{N-1}\sum_{j=0}^{N-1}e^{2ij\pi (\tilde{l}-l)/N}P\left\{S_{n} \in [\tilde{l}]_{N}\!\right\}\\ &&{}= NP\left\{S_{n} \in [l]_{N} \right\} \end{array}$$
since
$$\begin{array}{@{}rcl@{}} \sum_{j=0}^{N-1} e^{2ij\pi (\tilde{l}-l)/N} = \left\{ \begin{array}{ll} N &\quad \tilde{l} = l\\ 0 &\quad \tilde{l} \neq l \end{array} \right.. \end{array}$$
Therefore,
$$\begin{array}{@{}rcl@{}} P\left\{S_{n} \in [l]_{N}\right\} &=& \frac{1}{N}\sum_{j = 0}^{N-1}\phi^{n}\left(\frac{2j\pi}{N}\right)e^{-2j\pi il/N}\\ &=& \frac{1}{N}\sum_{j = 0}^{N-1}\phi^{n}\left(\frac{2j\pi}{N}\right)\cos \left(\frac{2j\pi l}{N}\right). \end{array}$$
We note that $$\phi ^{n}(\theta) \in \mathbb {R}$$ and
$$\begin{array}{@{}rcl@{}} \frac{1}{N}\sum_{j = 0}^{N-1}\phi^{n}\left(\frac{2j\pi}{N}\right)\cos \left(\frac{2j\pi l}{N}\right) \in \mathbb{R} \end{array}$$
by (A2). So we have
$$\begin{array}{@{}rcl@{}} &&\phi^{n}\left(\frac{2m\pi}{N}\right)\cos \left(\frac{2m\pi l}{N}\right)\\ &&= \phi^{n}\left(\frac{2(N-m)\pi}{N}\right)\cos \left(\frac{2(N-m)\pi l}{N}\right). \end{array}$$
(27)
Let N be an even number. Then, by (27),
$$\begin{array}{@{}rcl@{}} &&P\left\{S_{n} \in [l]_{N}\right\}\\ &&= \frac{1}{N}\phi^{n}\left(0\right)\cos \left(0\right)\\ &&\quad + \frac{2}{N}\sum_{1 \leq j \leq (N-1)/2}\phi^{n}\left(\frac{2j\pi}{N}\right)\cos \left(\frac{2j\pi l}{N}\right)\\ &&\quad+ \frac{1}{N}\phi^{n}\left(\pi\right)\cos \left(\pi l\right)\\ &&= \frac{1}{N} + \frac{2}{N}\sum_{1 \leq j \leq (N-1)/2}\phi^{n}\left(\frac{2j\pi}{N}\right)\cos \left(\frac{2j\pi l}{N}\right)\\ &&\quad+ \frac{1}{N}\phi^{n}\left(\pi\right)\cos \left(\pi l\right). \end{array}$$

Therefore, we have (26) for every even number N. The proof of (26) for odd N is similar and is omitted.
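Formula (26) can be verified directly for a small concrete walk. The sketch below is our illustration: it takes X_1 uniform on {−1,0,1}, whose characteristic function is ϕ(θ)=(1+2 cos θ)/3, uses an odd cycle size N=7 (so that J_N(n,l)=0), and compares the right-hand side of (26) with the exact law of S_n modulo N obtained by circular convolution.

```python
import numpy as np

N, n = 7, 10                              # odd N, so J_N(n, l) = 0 in (26)

def phi(theta):
    """Characteristic function of X_1 uniform on {-1, 0, 1}."""
    return (1.0 + 2.0 * np.cos(theta)) / 3.0

# exact P{S_n in [l]_N}: evolve the law of S_n mod N by circular convolution
pmf = np.zeros(N)
pmf[0] = 1.0
for _ in range(n):
    pmf = (np.roll(pmf, 1) + pmf + np.roll(pmf, -1)) / 3.0

# right-hand side of (26)
j = np.arange(1, (N - 1) // 2 + 1)
rhs = np.array([1.0 / N
                + (2.0 / N) * np.sum(phi(2 * j * np.pi / N) ** n
                                     * np.cos(2 * j * np.pi * l / N))
                for l in range(N)])
max_err = np.max(np.abs(pmf - rhs))
print(max_err)  # zero up to floating-point rounding
```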

Proof of Theorem 1

In this section we prove Theorem 1. To do so, we introduce the following proposition.

Proposition 6.

Assume (A1), (A2) and (A3).

If β∈(0,1), then there exists a constant c_8>0 such that
$$\sum_{i=0}^{N-1} p_{i}^{(N)} \leq c_{8}.$$
(28)
If β=1, then there exists a constant c_9>0 such that
$$\sum_{i=0}^{N-1} p_{i}^{(N)} \leq \frac{1}{c_{*} \pi } \log N + c_{9}.$$
(29)
If β∈(1,2], then there exists a constant c_10>0 such that
$$\sum_{i=0}^{N-1} p_{i}^{(N)} \leq c_{10} N^{(\beta -1) / \beta}.$$
(30)

Proof.

There exist C_* and r∈(0,π) such that for |θ|<r,
$$\vert \phi (\theta) -\left(1- c_{*} \vert \theta \vert^{\beta }\right) \vert \leq C_{*} \vert \theta \vert^{\beta + \varepsilon}$$
(31)
by (A3). We can choose r_* ∈(0,r] small enough so that
$$\begin{array}{@{}rcl@{}} C_{*} r_{*}^{\varepsilon} \leq \frac{1}{2} c_{*} \ \text{and } \ c_{*} r_{*}^{\beta} \leq \frac{1}{3}. \end{array}$$
(32)
Then for |θ|<r_*,
$$\frac{1}{2} c_{*} \vert \theta \vert^{\beta } \leq \vert 1 -\phi (\theta) \vert$$
(33)
and
$$\vert 1 -\phi (\theta) \vert \leq \frac{3}{2}c_{*} \vert \theta \vert^{\beta } \leq \frac{1}{2}.$$
(34)
There exists a ρ_* ∈[0,1), depending on r_*, such that
$$\max_{r_{*} \leq \vert \theta \vert \leq \pi } \vert \phi (\theta) \vert \leq \rho_{*}$$
(35)

by the same reasoning as for (21). (Here we used condition (A1).)

Using Proposition 5 and (35), we obtain that for i∈{1,2,…,N−1},
$$\begin{array}{@{}rcl@{}} p_{i}^{(N)} &=& \max_{\vert l \vert \leq i }P\left\{S_{i} \in [l]_{N} \right\}\\ &\leq & \frac{1}{N} + \sum_{1 \leq j \leq (N-1)/2} \frac{2}{N} \left\vert\phi\left(\frac{2j\pi}{N}\right)\right\vert^{i} + \vert J_{N}(i,0)\vert\\ &\leq & \frac{1}{N} + \sum_{1 \leq j < (r_{*}/(2 \pi))N} \frac{2}{N} \left\vert\phi\left(\frac{2j\pi}{N}\right)\right\vert^{i} + \rho_{*}^{i}. \end{array}$$
Therefore
$$\sum_{i=0}^{N-1} p_{i}^{(N)} \leq 1+ \Phi_{N} + \frac{1}{ 1- \rho_{*}},$$
(36)
where
$$\Phi_{N}= \sum_{1 \leq j < (r_{*}/(2 \pi))N } \frac{2}{N} \frac{1-\left\vert\phi\left(\frac{2j\pi}{N}\right)\right\vert^{N}}{1- \left\vert \phi\left(\frac{2j\pi}{N}\right) \right\vert }.$$
Because of (A2), ϕ(θ) is real-valued. Then (33), (34) and (A1) imply that
$$\frac{1}{2} < \phi (\theta) = \vert \phi (\theta) \vert <1 \quad (\theta \in (-r_{*},0) \cup (0, r_{*}))$$
(37)
and
$$\Phi_{N} \leq \sum_{ 1 \leq j < (r_{*}/(2 \pi))N} \frac{2}{N} \frac{1 }{1- \phi\left(\frac{2j\pi}{N}\right) }.$$
(38)
We estimate Φ_N in the case β∈(0,1]. We decompose the right-hand side of (38) to obtain
$$\sum_{1 \leq j < (r_{*}/(2 \pi))N} \frac{2}{N} \frac{1 }{1- \phi\left(\frac{2j\pi}{N}\right)} = \tilde{\Phi}_{N}+ E_{N},$$
(39)
where
$$\tilde{\Phi}_{N}= \frac{2^{1- \beta }}{ \pi^{\beta }c_{*}} N^{\beta -1} \sum_{ 1 \leq j < (r_{*}/(2 \pi))N} j^{- \beta}, \quad \quad \quad \quad \quad \quad \quad$$
$$E_{N}= \sum_{1 \leq j < (r_{*}/(2 \pi))N} \frac{2}{N} \left(\frac{1 }{1-\phi\left(\frac{2j\pi}{N}\right) } - \frac{1 }{c_{*} \left(\frac{2j\pi}{N}\right)^{\beta} } \right).$$
To estimate E_N, we use (31) and (33), which imply that for $$j \in [1, (r_{*}/(2 \pi))N) \cap \mathbb {Z}$$,
$$\begin{array}{@{}rcl@{}} &&\frac{2}{N} \left\vert \frac{1 }{1-\phi\left(\frac{2j\pi}{N}\right) }- \frac{1 }{c_{*}\left(\frac{2j\pi}{N}\right)^{\beta} } \right\vert\\ &&= \frac{2}{N}\frac{\left\vert 1-\phi\left(\frac{2j\pi}{N}\right) - c_{*} \left(\frac{2j\pi}{N}\right)^{\beta} \right\vert }{\left\vert 1-\phi\left(\frac{2j\pi}{N}\right) \right\vert \cdot\left\vert c_{*} \left(\frac{2j\pi}{N}\right)^{\beta} \right\vert} \leq c_{11} N^{\beta - \varepsilon -1} j^{\varepsilon - \beta}, \end{array}$$
where $$c_{11}= 2^{2+ \varepsilon - \beta } \pi ^{\varepsilon - \beta } C_{*}/c_{*}^{2}.$$ Since 1+ε−β>0,
$$\sum_{1 \leq j < (r_{*}/(2 \pi))N} j^{\varepsilon - \beta } \leq {\int_{0}^{N}} x^{\varepsilon - \beta} \ dx = \frac{N^{1+ \varepsilon - \beta }}{ 1+ \varepsilon - \beta}.$$
Thus
$$\vert E_{N} \vert \leq c_{11}/(1+ \varepsilon - \beta).$$
(40)
It is easy to see that
$$\begin{array}{@{}rcl@{}} \tilde{\Phi}_{N} &\leq & \frac{2^{1- \beta }}{ \pi^{\beta }c_{*}} N^{\beta -1}\left(1+ {\int_{1}^{N}} x^{- \beta} \ dx \right)\\ &\leq & \left\{ \begin{array}{ll} \displaystyle{\frac{2^{1- \beta }}{\pi^{\beta }c_{*}(1- \beta)} }& (\beta \in (0,1)) \\ \displaystyle{\frac{1}{ \pi c_{*}} \log N +\frac{1}{ \pi c_{*}}} & (\beta =1). \end{array} \right. \end{array}$$
(41)

Putting the pieces ((36), (38)–(41)) together, we obtain (28) and (29).

In the case β∈(1,2], we use (37) to obtain
$$\Phi_{N} \leq \Phi_{N}^{(1)} +\Phi_{N}^{(2)},$$
(42)
where N(β) = min{N^{(β−1)/β}, (r_*/(2π))N} and
$$\Phi_{N}^{(1)}=\sum_{1 \leq j < N(\beta)} \frac{2}{N} \frac{\left\vert 1- \phi\left(\frac{2j\pi}{N}\right)^{N} \right\vert} { \left\vert 1 - \phi\left(\frac{2j\pi}{N}\right) \right\vert}, \quad \quad \quad$$
$$\Phi_{N}^{(2)}= \sum_{N(\beta) \leq j < (r_{*}/(2 \pi))N } \frac{2}{N} \frac{1} { \left\vert 1 - \phi\left(\frac{2j\pi}{N}\right) \right\vert}.$$
Applying (22) with n=N, a=1 and $$b= \phi \left (\frac {2j\pi }{N}\right)$$, we obtain
$$\Phi_{N}^{(1)} \leq 2N(\beta) \leq 2 N^{(\beta -1)/ \beta}.$$
(43)
Since β−1>0, (33) gives
$$\begin{array}{@{}rcl@{}} \Phi_{N}^{(2)} &\leq & \frac{2^{2- \beta }}{c_{*} \pi^{\beta }} N^{\beta -1} \left(\sum_{N(\beta) \leq j < (r_{*}/(2 \pi))N} j^{- \beta }\right)\\ &\leq & \frac{2^{2- \beta }}{c_{*} \pi^{\beta }}N^{\beta -1} \left(N^{- \beta +1} + \int_{N^{(\beta -1)/ \beta }}^{+ \infty} x^{- \beta} \ dx \right)\\ &\leq & \frac{2^{2- \beta }}{c_{*} \pi^{\beta }} \left(1+\frac{1}{ \beta -1} \right) N^{(\beta -1)/ \beta}. \end{array}$$
(44)

Putting the pieces ((36), (42)–(44)) together, we obtain (30).
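The growth rates in Proposition 6 can be observed numerically. For X_1 uniform on {−1,0,1} we have β=2 with c_*=1/3 (see Remark 5), so (30) predicts Σ_{i=0}^{N−1} p_i^{(N)} = O(N^{1/2}). The sketch below is our illustration; it computes p_i^{(N)} exactly by evolving the law of S_i modulo N.

```python
import numpy as np

def p_sum(N):
    """Exact sum_{i=0}^{N-1} p_i^{(N)}, where p_i^{(N)} = max_{|l| <= i}
    P{S_i in [l]_N}, for X_1 uniform on {-1, 0, 1} (beta = 2)."""
    pmf = np.zeros(N)
    pmf[0] = 1.0                           # law of S_0 mod N
    total = 0.0
    for i in range(N):
        if i > 0:                          # one more step of the walk, mod N
            pmf = (np.roll(pmf, 1) + pmf + np.roll(pmf, -1)) / 3.0
        residues = np.unique(np.arange(-i, i + 1) % N)
        total += pmf[residues].max()
    return total

s64, s256 = p_sum(64), p_sum(256)
print(s64, s256, s256 / s64)  # ratio close to (256/64)^{1/2} = 2, matching (30)
```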

It remains to show the last inequality in (2). To achieve this, we will use Proposition 3 and Corollary 4.

There exist C_2>0 and $$N_{2} \in \mathbb {N}$$ such that for i ≥ N_2 and $$l \in \mathbb {Z}$$,
$$P \{S_{i} =l \} \geq \frac{1}{ \pi }\frac{c_{*}}{c_{*}^{2}+(l/i)^{2}} \frac{1}{i} - C_{2} i^{-1- \delta}$$
by Corollary 4. Let
$$\begin{array}{@{}rcl@{}} c_{12}:= \frac{1}{ \pi} \frac{c_{*}}{c_{*}^{2}+1} \log N_{2} +C_{2} \sum_{i=N_{2}}^{\infty} i^{-1- \delta}. \end{array}$$
We can choose $$N_{*} \in \mathbb {N}$$ large enough so that
$$\begin{array}{@{}rcl@{}} \frac{1}{2} \frac{1}{ \pi} \frac{c_{*}}{c_{*}^{2}+1} \log N_{*} \geq c_{12}. \end{array}$$
Then for N ≥ N_*+1,
$$\begin{array}{@{}rcl@{}} \sum_{i=0}^{N-1} q_{i}^{(N)} &\geq & \sum_{i=N_{2}}^{N-1}\min_{\vert l \vert \leq i} P \{ S_{i} =l \}\\ &\geq & \frac{1}{ \pi }\frac{c_{*}}{c_{*}^{2}+1} \sum_{i=N_{2}}^{N-1}\frac{1}{i} - C_{2} \sum_{i=N_{2}}^{\infty} i^{-1- \delta }\\ &\geq & \frac{1}{ \pi }\frac{c_{*}}{c_{*}^{2}+1} \log N -c_{12}\\ &\geq & \frac{1}{2} \frac{1}{ \pi }\frac{c_{*}}{c_{*}^{2}+1} \log N. \end{array}$$
(45)
It follows from Proposition 3 and (45) that for $$N \in [N_{*}+1, + \infty) \cap \mathbb {N}$$ and $$y_{1},y_{2}, \ldots, y_{N} \in {\mathbb {Z}}$$ with |y_n − y_{n+1}| ≤ 1 (n=1,2,…,N−1),
$$\mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N} \left\{ \mathcal{R}_{n}^{(N)} = (y_{n} \mod N)\right \} \right) \leq \frac{\frac{4 \pi (c_{*}^{2}+1)}{c_{*}}}{ \log N}.$$

It is clear that $$\mathbb {P}_{\mathcal {R}}^{(N)} \left (\bigcup _{n=1}^{N} \left \{ \mathcal {R}_{n}^{(N)} = (y_{n} \mod N)\right \} \right)$$ is bounded by 1. Put $$c_{3}= \max \left \{ 4 \pi (c_{*}^{2}+1)/c_{*}, \log N_{*} \right \}$$. Then the last inequality in (2) holds.

The proof of Theorem 1 is complete.

Conclusion and future works

We formalized the Hunter vs. Rabbit game using the random walk framework, generalized the probability distribution of the rabbit's strategy using four assumptions, and obtained a general lower-bound formula for the probability that the rabbit is caught. Let P{X_1=k}=O(k^{−β−1}). If β∈(0,1), the lower bound of the probability that the hunter catches the rabbit is a constant c_1>0. If β=1, the lower bound of the probability that the rabbit is caught is $$\frac {1}{\frac {1}{c_{*}\pi }\log N + c_{2}}$$, where c_2 and c_* are constants determined by the given strategy. If β∈(1,2], the lower bound of the probability that the rabbit is caught is $$\frac {c_{4}}{N^{(\beta -1)/\beta }}$$, where c_4>0 is a constant determined by the given strategy.

We showed experimental results for three examples of rabbit strategies. They confirm our bound formulas and the asymptotic behavior of those bounds:
$$\begin{array}{@{}rcl@{}} {\lim}_{N \rightarrow \infty}\left(\frac{1}{ c_{*} \pi} \log N \right) \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = 0\right \} \right) = 1. \end{array}$$

In this paper, we considered the lower bound of the probability that the rabbit is caught, in order to bound the worst-case expected time until the rabbit is caught. Our motivation is to find the best strategy for the rabbit, and our results help in finding it. On the other hand, what is the best strategy for the hunter, and what is the worst? Future work includes showing that the best strategy of the hunter is Y_{j+1}=Y_{j}+1, and that the worst strategy of the hunter is $$Y_{j} = \mathcal {H}_{0}^{(N)}$$ for every j.

A Proof of Proposition 1

The first inequality in (6) comes from (3) in Theorem 1. To prove the last inequality in (6), we use Corollaries 2 and 5 instead of Proposition 3 and Corollary 4. The same argument as that showing the last inequality in (3) gives the last inequality in (6). □

B Proof of Proposition 2

We consider the case when X_1 takes the three values −1,0,1 with equal probability. In this case, X_1 satisfies (A1), (A2) and
$$\phi (\theta)= 1 -\frac{1}{3} \vert \theta \vert^{2}+ O\left(\vert \theta \vert^{4}\right).$$
We can show that there exist $$\tilde {C}_{1}>0$$ and $$\tilde {N}_{1} \in {\mathbb {N}}$$ such that for $$i \geq \tilde {N}_{1}$$ and $$l \in \mathbb {Z}$$,
$$P \{ S_{i} = l \} \leq \frac{\sqrt{3} }{2 \sqrt{\pi }} \frac{1}{ i^{1/2}} \exp \left(- \frac{3l^{2}}{4i} \right) + \tilde{C}_{1} i^{-1}$$
(46)
by (25). Since P{|X_1|≤1}=1, we obtain that for $$N \in \mathbb {N} \setminus \{ 1 \}$$,
$$1+ \sum_{i=1}^{N-1} P \{ S_{i} \in [i]_{N} \} \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad$$
$$= 1+ \sum_{i=1}^{N-1} P \{ S_{i}=i \} + \sum_{N/2 \leq i \leq N-1} P \{ S_{i}= i-N \}$$
and
$$\sum_{i=1}^{N-1} P \{ S_{i}=i \} = \sum_{i=1}^{N-1} \left(\frac{1}{3} \right)^{i} \leq \frac{1}{2}.$$
With the help of e^{−x} ≤ 1/x (x>0), (46) implies that for $$N \geq 2 \tilde {N}_{1}$$,
$$\begin{array}{@{}rcl@{}} &&{}\sum_{N/2 \leq k \leq N-1} P \{ S_{k}= k-N \}\\ &&{}\leq \sum_{N/2 \leq k \leq N-1} \left\{\frac{\sqrt{3}}{ 2 \sqrt{\pi}} \frac{1}{k^{1/2}}\exp \left(- \frac{3(k-N)^{2}}{4k} \right) + \tilde{C}_{1} k^{-1} \right\}\\ &&\leq \sqrt{\frac{3}{2 \pi}} \frac{1}{ N^{1/2}} \sum_{1 \leq k \leq N/2}\exp \left(- \frac{3k^{2}}{ 4N} \right) + \tilde{C}_{1}\sum_{1 \leq k \leq N/2} \frac{2}{N}\\ &&{}\leq \sqrt{\frac{3}{2 \pi}} \frac{1}{ N^{1/2}}\left(\sum_{1 \leq k \leq N^{1/2}} 1 + \sum_{N^{1/2} < k} \frac{4N}{3k^{2}} \right) + 2 \tilde{C}_{1}\\ &&{}\leq \sqrt{\frac{3}{2 \pi}} + \frac{2 \sqrt{2}}{ \sqrt{3 \pi}} N^{1/2} \left(\frac{1}{N} + \int_{N^{1/2}}^{+ \infty} \frac{1}{x^{2}} \ dx \right)+ 2 \tilde{C}_{1}\\ &&\leq c_{13}, \end{array}$$
where $$c_{13}= \sqrt {3/ (2 \pi)}+ 4 \sqrt {2}/ \sqrt {3 \pi }+2 \tilde {C}_{1}.$$ Thus for $$N \in \mathbb {N} \setminus \{ 1 \}$$,
$$1+ \sum_{i=1}^{N-1} P \{ S_{i} \in [i]_{N} \} \leq \max \{ 2\tilde{N}_{1}, (3/2)+c_{13} \}.$$

Combining the above inequality with Corollary 3, we have (7). □
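The uniform bound on 1+Σ_{i=1}^{N−1} P{S_i ∈ [i]_N} established above can also be checked exactly; the sketch below (our illustration) evolves the law of S_i modulo N by circular convolution and evaluates the sum for two cycle sizes.

```python
import numpy as np

def diag_sum(N):
    """Exact value of 1 + sum_{i=1}^{N-1} P{S_i in [i]_N} for X_1 uniform
    on {-1, 0, 1}, tracking the law of S_i mod N by circular convolution."""
    pmf = np.zeros(N)
    pmf[0] = 1.0
    total = 1.0
    for i in range(1, N):
        pmf = (np.roll(pmf, 1) + pmf + np.roll(pmf, -1)) / 3.0
        total += pmf[i % N]
    return total

s1, s2 = diag_sum(64), diag_sum(256)
print(s1, s2)  # stays bounded (around 2) as N grows, in line with (7)
```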

(B) To obtain (5), we use the formula
$$\int_{0}^{+ \infty} \frac{\sin bx }{ x^{\alpha }} \ dx = \frac{\pi b^{\alpha -1}}{ 2 \Gamma (\alpha) \sin (\alpha \pi /2)}$$
(47)
for α∈(0,2) and b>0. By the definition of X_1,
$$1 - \phi (\theta) = \frac{1}{a} \sum_{k=1}^{\infty} (1- \cos \vert \theta \vert k) \frac{1}{ k^{\beta +1} }.$$
A simple calculation shows that the absolute value of the difference between the right-hand side of the above and
$$\frac{1}{a} \int_{0}^{+ \infty} \frac{1- \cos \vert \theta \vert x}{ x^{\beta +1}} \ dx$$
is bounded by a constant multiple of |θ|^{β+(2−β)/2}. It remains to show that
$$\frac{1}{a} \int_{0}^{+ \infty} \frac{1- \cos \vert \theta \vert x}{ x^{\beta +1}} \ dx = \frac{\pi }{ 2a} \frac{\vert \theta \vert^{\beta }}{ \Gamma (\beta +1) \sin (\beta \pi /2)}.$$
(48)

We perform integration by parts on the left-hand side of (48) and use (47). Then we have (48) and (5).

C Proof of (8)

Let ε>0 be fixed. By Corollary 4, there exist C_2>0 and $$N_{2} \in \mathbb {N}$$ such that for i ≥ N_2,
$$P \{ S_{i} =0 \} \geq \frac{1}{ c_{*} \pi} \frac{1}{i} - C_{2}i^{-1- \delta }.$$
(49)
Inequality (49) implies that for N ≥ (4/ε)(N_2+1),
$$\begin{array}{@{}rcl@{}} &&1+ \sum_{1 \leq i \leq (\epsilon /4)N} P \{ S_{i} \in [0]_{N} \} \geq \sum_{N_{2} \leq i \leq (\epsilon /4)N} P \{ S_{i} =0 \}\\ &&\geq \sum_{N_{2} \leq i \leq (\epsilon /4)N} \left(\frac{1}{ c_{*} \pi} \frac{1}{i} - C_{2}i^{-1- \delta} \right)\\ &&\geq \frac{1}{ c_{*} \pi} \int_{N_{2}}^{(\epsilon /4) N} \frac{1}{x} \ dx - C_{2} \left(\frac{1}{N_{2}^{1+ \delta}} + \int_{N_{2}}^{+ \infty} x^{-1- \delta} \ dx \right)\\ &&= \frac{1}{ c_{*} \pi} \log N + \frac{1}{ c_{*} \pi} \log \epsilon -c_{14}, \end{array}$$
(50)

where $$c_{14}= (1/ (c_{*} \pi)) \log 4 + (1/ (c_{*} \pi)) \log N_{2} + C_{2} \left \{ 1/ N_{2}^{1+ \delta }+ 1/ (\delta N_{2}^{\delta }) \right \}.$$

We can choose $$N_{4} \in \mathbb {N}$$ which satisfies
$$\min \left\{\frac{1}{2}, \frac{\epsilon }{ 8} \right\} \frac{1}{ c_{*} \pi} \log N_{4} \geq \left\vert - \frac{1}{ c_{*} \pi} \log \epsilon + c_{14} \right\vert$$
(51)
and
$$\frac{\epsilon }{4} \frac{1}{ c_{*} \pi} \log N_{4} \geq c_{2},$$
(52)

where c_2 is the same constant as in (2).

Combining Remark 5 with (50) and using the left-hand side of (2), we obtain that for N≥ max{N 4,(4/ε)(N 2+1)},
$$\frac{1}{ \frac{1}{ c_{*} \pi} \log N +c_{2}} \leq \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N} \left\{ \mathcal{R}_{n}^{(N)} =0 \right\} \right) \quad \quad$$
$$\quad \quad \quad \quad \quad \quad \quad \leq \frac{1+ (\epsilon /4) } {\frac{1}{ c_{*} \pi} \log N + \frac{1}{ c_{*} \pi} \log \epsilon - c_{14}}.$$
Hence for N ≥ max{N_4, (4/ε)(N_2+1)},
$$\left\vert \left(\frac{1}{c_{*} \pi} \log N \right) \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N} \left\{ \mathcal{R}_{n}^{(N)} =0 \right\} \right) -1 \right\vert$$
$$\leq E_{N}^{(1)} + E_{N}^{(2)}, \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad$$
where
$$E_{N}^{(1)}= \left\vert \frac{\frac{1}{ c_{*} \pi} \log N}{ \frac{1}{ c_{*} \pi} \log N +c_{2} } -1 \right\vert \quad \quad \quad \quad \quad$$
and
$$E_{N}^{(2)}= \left\vert \frac{(1+ (\epsilon /4)) \frac{1}{ c_{*} \pi} \log N} { \frac{1}{ c_{*} \pi} \log N + \frac{1}{ c_{*} \pi} \log \epsilon - c_{14}} -1 \right\vert.$$
The proof is complete if we show that for N ≥ max{N_4, (4/ε)(N_2+1)},
$$E_{N}^{(1)} + E_{N}^{(2)} \leq \epsilon.$$
(53)
Using (52), we get
$$E_{N}^{(1)} \leq \frac{c_{2}}{ \frac{1}{ c_{*} \pi} \log N } \leq \frac{\epsilon }{4}$$
for N ≥ max{N_4, (4/ε)(N_2+1)}. We can show that
$$\begin{array}{@{}rcl@{}} E_{N}^{(2)} &\leq & \frac{(\epsilon /4) \frac{1}{c_{*} \pi} \log N + \left\vert - \frac{1}{c_{*} \pi} \log \epsilon + c_{14} \right\vert }{ \frac{1}{c_{*} \pi} \log N - \left\vert - \frac{1}{c_{*} \pi} \log \epsilon + c_{14} \right\vert }\\ &\leq & \frac{\epsilon }{2} + \frac{\left\vert - \frac{1}{c_{*} \pi} \log \epsilon + c_{14} \right\vert }{ (1/2) \frac{1}{c_{*} \pi} \log N} \leq \frac{3 \epsilon }{4} \end{array}$$

for N ≥ max{N_4, (4/ε)(N_2+1)} by (51). The above two inequalities yield (53). □

D Proof of (9)

We show the lower bound of Example 1. In this case, a=1, β=1, $$c_{*} = \frac {\pi }{2a}$$ and $$\varepsilon = \frac {1}{2}$$. We have |E_N| ≤ 2c_11 by (40).

We note
$$\begin{array}{@{}rcl@{}} c_{11} = \frac{2^{2+\varepsilon -\beta}\pi^{\varepsilon - \beta}C_{*}}{c_{*}^{2}} = 2^{7/2}\pi^{-5/2}C_{*}. \end{array}$$
We can choose C_* = 1.225 by (31). So we have
$$\begin{array}{@{}rcl@{}} \vert E_{N} \vert \leq c_{11} / (1+\varepsilon - \beta) \fallingdotseq 1.58452. \end{array}$$
We have
$$\begin{array}{@{}rcl@{}} \tilde{\Phi}_{N} \leq \frac{2}{\pi^{2}}\log N + \frac{2}{\pi^{2}} \end{array}$$
by (41). So we can show that
$$\begin{array}{@{}rcl@{}} &&\sum_{i=0}^{N-1}p_{i}^{(N)} \leq 1 + \tilde{\Phi}_{N} + \vert E_{N} \vert + \frac{1}{1-\rho_{*}}\\ &&\leq 1 + \frac{2}{\pi^{2}}\log N + \frac{2}{\pi^{2}} + 1.58452 + \frac{1}{1-\rho_{*}} \end{array}$$
by (36), (38) and (39). So we have
$$\begin{array}{@{}rcl@{}} \frac{1}{\sum_{i=0}^{N-1}p_{i}^{(N)}} \geq \frac{1}{1 + \frac{2}{\pi^{2}}\log N + \frac{2}{\pi^{2}} + 1.58452 + \frac{1}{1-\rho_{*}}} \end{array}$$
by Proposition 3. It is easy to check that $$r_{*} \fallingdotseq 0.212207$$ (by (32)) and $$\max _{r_{*}\le |\theta |\le \pi }|\phi (\theta)| \le 0.785802$$, so we set ρ_* = 0.785802. Then,
$$\begin{array}{@{}rcl@{}} \frac{1}{\sum_{i=0}^{N-1}p_{i}^{(N)}} \geq \frac{1}{\frac{2}{\pi^{2}}\log N + 7.45574}. \end{array}$$

So we have (9). □

References

1. Adler, M., Räcke, H., Sivadasan, N., Sohler, C., Vöcking, B.: Randomized pursuit-evasion in graphs. Combinatorics, Probability and Computing 12, 225–244 (2003).
2. Aleliunas, R., Karp, R.M., Lipton, R.J., Lovász, L., Rackoff, C.: Random walks, universal traversal sequences, and the complexity of maze problems. In: Proceedings of the 20th IEEE Symposium on Foundations of Computer Science (FOCS), pp. 218–223 (1979).
3. Alpern, S.: The search game with mobile hider on the circle. In: Roxin, E.O., Liu, P.-T., Sternberg, R.L. (eds.) Differential Games and Control Theory, pp. 181–200. Marcel Dekker, New York (1974).
4. Babichenko, Y., Peres, Y., Peretz, R., Sousi, P., Winkler, P.: Hunter, Cauchy rabbit, and optimal Kakeya sets. Preprint, arXiv:1207.6389v1 (2012).
5. Chatzigiannakis, I., Nikoletseas, S., Spirakis, P.: An efficient communication strategy for ad-hoc mobile networks. In: Proceedings of the 20th ACM Symposium on Principles of Distributed Computing (PODC), pp. 320–322 (2001).
6. Chatzigiannakis, I., Nikoletseas, S., Paspallis, N., Spirakis, P., Zaroliagis, C.: An experimental study of basic communication protocols in ad-hoc mobile networks. Lecture Notes in Computer Science 2141, pp. 159–171 (2001).
7. Efrat, A., Guibas, L.J., Har-Peled, S., Lin, D.C., Mitchell, J.S.B., Murali, T.M.: Sweeping simple polygons with a chain of guards (2000).
8. Gal, S.: Search games with mobile and immobile hider. SIAM Journal on Control and Optimization 17(1), 99–122 (1979).
9. Guibas, L.J., Latombe, J.-C., LaValle, S.M., Lin, D., Motwani, R.: A visibility-based pursuit-evasion problem. International Journal of Computational Geometry and Applications 9(4), 471–493 (1999).
10. Isaacs, R.: Differential Games: A Mathematical Theory with Applications to Warfare and Pursuit, Control and Optimization. John Wiley & Sons, New York (1965).
11. Kirousis, L.M., Papadimitriou, C.H.: Searching and pebbling. Theoretical Computer Science 47, 205–218 (1986).
12. LaPaugh, A.S.: Recontamination does not help to search a graph. Journal of the ACM 40(2), 224–245 (1993).
13. Lawler, G.F.: Intersections of Random Walks. Birkhäuser, Boston (1991).
14. Megiddo, N., Hakimi, S.L., Garey, M.R., Johnson, D.S., Papadimitriou, C.H.: The complexity of searching a graph. Journal of the ACM 35(1), 18–44 (1988).
15. Park, S.-M., Lee, J.-H., Chwa, K.-Y.: Visibility-based pursuit-evasion in a polygonal region by a searcher. Lecture Notes in Computer Science 2076, pp. 456–468 (2001).
16. Parsons, T.D.: Pursuit-evasion in a graph. In: Alavi, Y., Lick, D. (eds.) Theory and Applications of Graphs, Lecture Notes in Mathematics 642, pp. 426–441 (1976).
17. Parsons, T.D.: The search number of a connected graph. In: Proceedings of the 9th Southeastern Conference on Combinatorics, Graph Theory and Computing, Utilitas Mathematica, Winnipeg, pp. 549–554 (1978).
18. Spitzer, F.: Principles of Random Walk, 2nd ed. Springer-Verlag, New York (1976).
19. Suzuki, I., Yamashita, M.: Searching for a mobile intruder in a polygonal region. SIAM Journal on Computing 21(5), 863–888 (1992).
20. Zelikin, M.I.: A certain differential game with incomplete information. Doklady Akademii Nauk SSSR 202, 998–1000 (1972).

© Ikeda et al.; licensee Springer. 2015

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Authors and Affiliations

• Yuki Ikeda (email author), Graduate School of Mathematics, Kyushu University, Fukuoka, Japan
• Yasunari Fukai, Faculty of Mathematics, Kyushu University, Fukuoka, Japan
• Yoshihiro Mizoguchi, Institute of Mathematics for Industry, Kyushu University, Fukuoka, Japan
