1 Introduction

In this paper, we show that there exist continuous Markov martingales which have the same one-dimensional marginals as Brownian motion, yet are not Brownian motion.

Theorem 1.1

There is a 1-dimensional Markovian martingale \(X\) with continuous paths and Brownian one-dimensional marginals, which is not strongly Markovian.

In particular, \(X\) is a 1-dimensional martingale with Brownian one-dimensional marginals which is different from Brownian motion, that is, a fake Brownian motion in the sense of Oleszkiewicz [23].

1.1 Context on fake Brownian motion

As mentioned in the abstract, it is an intriguing question to what extent a Brownian motion is already determined by its one-dimensional marginal distributions. Clearly, one has to impose some conditions on a stochastic process \(X = (X_{t})_{t \ge 0}\) with \(\mathrm {law}(X_{t}) = \mathcal {N}(0,t)\) to guarantee that \(X\) equals (in law) a standard Brownian motion \(B = (B_{t})_{t \ge 0}\). Most prominently, a meaningful requirement is that \(X\) be a martingale. This is motivated by applications in finance. If \(X\) models the (discounted) price process of a stock, the martingale property of \(X\) is of course related to the absence of arbitrage. On the other hand, by Breeden and Litzenberger [6], the knowledge of the prices of all plain vanilla options is tantamount to the knowledge of all one-dimensional marginal distributions. One might ask: do the one-dimensional marginal laws of a martingale \(X\) already determine the law of the process \(X\)? The answer is a resounding no. One has to impose much more structure on \(X\) in order to achieve a positive answer to the above question. At this stage, it is natural to consider the easiest conceivable case of given marginals of an ℝ-valued martingale \(X\), namely \(\mathrm {law}(X_{t}) = \mathcal {N}(0,t)\), and to ask whether this implies that the process \(X\) is a standard Brownian motion (in law).

In the past 15 years, there have been a number of examples of such “fakes”. Madan and Yor [22] provide three different constructions of Markov martingales matching given one-dimensional marginals. The first of these is based on the Azéma and Yor [3] solution to the Skorokhod embedding problem and yields a strongly Markovian process with positive jumps and negative drift, while the second and third approaches recover Brownian motion in the setting of Brownian one-dimensional marginals. Hamza and Klebaner [10] construct a discontinuous Markov process with Brownian one-dimensional marginals by constructing the corresponding semigroup. Hobson [11] provides a fake Brownian motion which is a Markovian pure jump martingale based on a solution to a martingale transport problem constructed in Hobson and Klimmek [12]. Continuous, non-Markovian fake Brownian motions were constructed by Albin [1] using products of Bessel processes, by Oleszkiewicz [23], Baker et al. [4] and Hobson [14] using a clever mixing of different stochastic processes. Jourdain and Zhou [16] provide a class of continuous fake Brownian motions based on regime-switching stochastic local volatility models.

In view of these works, the driving question of this article is: which requirements, in addition to the martingale property, ensure that a process \(X\) with \(\mathrm {law}(X_{t}) = \mathcal {N}(0,t)\) is indeed a Brownian motion? As mentioned in the abstract, the ultimate theorem on this question is due to Lowther: a continuous, strong Markov martingale with Brownian one-dimensional marginals is a Brownian motion ([17, 20]; see also Theorem 2.2 below).

1.2 Structure of the article

We present two different constructions establishing Theorem 1.1. The first, given in Sect. 2, is relatively short and builds on the result reported in Theorem 2.2 which was developed by one of the authors in a series of papers; see Lowther [17–21].

The second approach is given from Sect. 3 onwards. While the first argument appears significantly more elegant, we believe that the second approach is also of interest. In contrast to the first approach, it provides a constructive proof of our main result, and its basic idea is quickly explained (see Sect. 3). While it takes some work to provide the details of the approach (in the remaining part of the article), it only builds on standard arguments of stochastic analysis. In addition, this construction seems to be more flexible when considering more general cases of diffusion processes and endeavouring to find similar fake constructions; see for example Remark 6.8.

2 Faking Brownian motion based on the theory of mimicking martingales

For technical reasons, we prove a variant of Theorem 1.1.

Proposition 2.1

There exists a 1-dimensional Markovian martingale \(X = (X_{t})_{t \ge 0}\) with continuous paths and \(\mathrm {law}(X_{t}) = \mathcal {N}(0,t + 1)\) for all \(t\), which is not strongly Markovian.

Rather trivially, Proposition 2.1 implies Theorem 1.1. It suffices to let a standard Brownian motion \((B_{t})_{0 \le t \le 1}\) run from time \(t = 0\) to \(t = 1\), and then concatenate it with the fake Brownian motion provided by the proposition.

The basis for our first proof of Theorem 1.1 / Proposition 2.1 is the following result which is a consequence of Lowther [17, Theorem 1.3] and Lowther [20, Lemma 1.4]. We note that [17, Theorem 1.3] is a relatively delicate result whose proof spans several articles (see Lowther [17–21] as mentioned above).

Theorem 2.2

Lowther

Let \((\mu _{t})_{t\geq 0}\) be a family of probability measures on ℝ, increasing in the convex order, and assume that \(t\mapsto \mu _{t}\) is weakly continuous and that each \(\mu _{t}\) has convex support. Then there exists a unique continuous strong Markov martingale \(X\) such that \(\mathrm {law}(X_{t})= \mu _{t}\), \(t\geq 0\).

Proof of Proposition 2.1

To construct a fake Brownian motion started at time 0 with a standard normal distribution, choose a symmetric Borel set \(A\) in ℝ such that \(A\) as well as its complement \(B := \mathbb{R}\setminus A\) are dense in ℝ and have positive Lebesgue measure. Let \(p\) be the probability that a standard normal random variable lies in \(A\), and \(q = 1-p\) the probability that it lies in \(B\).

Let \(U\) be distributed as a standard normal random variable conditioned to lie in \(A\), and \(V\) as a standard normal conditioned to lie in \(B\). By symmetry of these distributions, the laws of \(t^{\frac{1}{2}} U\) and \(t^{\frac{1}{2}} V\) are increasing in \(t\) with respect to the convex order. Indeed, we may write the laws of \(U\) and \(V\) as integrals over laws of the form \(\frac{1}{2} (\delta _{x} + \delta _{-x})\). Multiplying \(x\) with the increasing function \(t \mapsto t^{\frac{1}{2}}\), these measures are clearly increasing in the convex order. By integrating over these measures, we get the same assertion for the laws of \(t^{\frac{1}{2}} U\) and \(t^{\frac{1}{2}} V\).
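
This convex-order claim is easy to sanity-check numerically: for a centred symmetric random variable \(U\), the call prices \(\mathbb{E}[(t^{\frac{1}{2}}U - k)^{+}]\), \(k \ge 0\), must be nondecreasing in \(t\). A minimal Monte Carlo sketch (the symmetric set \(A\) below is a hypothetical choice made for convenience; the density of \(A\) required in the proof plays no role for this particular check):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symmetric Borel set A = {x : |x| mod 1 < 1/2}; symmetric,
# though not dense -- density is irrelevant for the convex-order check.
z = rng.standard_normal(1_000_000)
u = z[np.abs(z) % 1.0 < 0.5]          # standard normal conditioned on A

# E[(sqrt(t) U - k)^+] should be nondecreasing in t for every strike k >= 0.
for k in [0.0, 0.5, 1.0]:
    calls = [np.mean(np.maximum(np.sqrt(t) * u - k, 0.0)) for t in (0.5, 1.0, 2.0, 4.0)]
    assert all(a <= b for a, b in zip(calls, calls[1:])), (k, calls)
    print(k, np.round(calls, 4))
```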

Now we are in a position to apply Theorem 2.2: there exist continuous strong Markov martingales \(Y\) and \(Z\) such that \(Y_{t}\) is distributed as \(t^{\frac{1}{2}} U\) and \(Z_{t}\) as \(t^{\frac{1}{2}} V\), for all \(t \ge 0\).

Finally, let \(w\) be a Bernoulli random variable equal to 1 with probability \(p\) and independent of \(Y\) and \(Z\). Set \(X_{t} = Y_{t}\) if \(w=1\), and \(X_{t} = Z_{t}\) if \(w=0\).

This process \(X\) is Markov. Indeed, at every time \(t > 0\), the random variable \(X_{t}\) lies either in \(t^{\frac{1}{2}} A\) or in its complement. Depending on this information, the process thus follows the regime of \(Y\) or of \(Z\). In particular, this also shows that the process \(X\) is certainly not a Brownian motion. It is also clear that the one-dimensional marginal distributions of the process \(X\) are the corresponding Gaussian laws of a Brownian motion. This concludes the construction. □

3 Bare hands approach to faking Brownian motion

We now pass to a second proof of Proposition 2.1 which does not rely on Theorem 2.2 and which allows a more intuitive interpretation. We sketch here the intuitive idea and pass to a more formal treatment in Sect. 4. Fix a 1-dimensional Brownian motion \(B = (B_{t})_{t \ge 0}\). In this section, we assume that the starting value \(B_{0}\) is normally distributed with mean 0 and variance 1, in symbols \(\mathcal {N}(0,1)\), so that \(B_{t}\) is then \(\mathcal {N}(0,1+t)\)-distributed.

Let \(C\) be a Borel subset of the interval \({[0,1]}\) and write \(G=\mathbb{R}\setminus C\) for notational simplicity. The accumulated amount of time the original Brownian motion \((B_{s})_{0 \leq s \leq t}\) spends inside \(G\) is given by

$$ A_{t} := \int _{0}^{t} \mathbb{1}_{G}(B_{s}) \, ds, \qquad t \ge 0, $$

which defines an increasing, Lipschitz-1-continuous process; see e.g. Rogers and Williams [24, Chap. III.21]. Its right-continuous inverse

$$ \tau _{t} := \inf \{ s > 0 \colon A_{s} > t\}, \qquad t \ge 0, $$

defines a time-change of the Brownian motion \(B\). The resulting process \((B_{\tau _{t}})_{t \ge 0}\) is a strongly Markovian martingale taking values in \(G\) almost everywhere. Consider the case where \(G\) is a union of finitely many intervals. Intuitively speaking, as long as \(B_{\tau _{t}}\) takes values in the interior of one of the above intervals, this process behaves like a Brownian motion. When it hits the boundary of the interval, it is either reflected back into its interior or it jumps to the corresponding boundary of the neighbouring interval.
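
For illustration, the time change is straightforward to approximate on a grid: compute \(A\) as a Riemann sum and \(\tau\) as its right-continuous inverse. A minimal simulation sketch (the set \(C\) below is a hypothetical single interval, and we start at \(B_{0} = 0\) for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1e-4, 200_000
B = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))))

# Hypothetical choice: C = (0.3, 0.7), so G is its complement.
in_G = (B <= 0.3) | (B >= 0.7)

# A_t = int_0^t 1_G(B_s) ds as a Riemann sum, and its right-continuous
# inverse tau_t = inf{s : A_s > t}.
A = np.concatenate(([0.0], np.cumsum(in_G[:-1] * dt)))
t_grid = np.arange(0.0, A[-1], dt)
tau_idx = np.searchsorted(A, t_grid, side="right")  # first index with A_s > t
Y = B[np.minimum(tau_idx, n)]                       # the time-changed path

# Y spends (essentially) no time inside C:
print("fraction of time in C:", np.mean((Y > 0.3) & (Y < 0.7)))
```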

To imitate Brownian one-dimensional marginals not only on \(G\), but also on its complement \(C\), we write \(U\) for a random variable uniformly distributed on \([0,1]\) and independent of \(B\), and \(p(t,\,\cdot\,)\) for the density of \(B_{t}\), i.e., of \(\mathcal {N}(0,1+t)\). With the aid of \(U\), we introduce a stopping time \(T\) whose value is determined at the start by

$$ T := \textstyle\begin{cases} 0 & \quad \text{if $B_{0} \in G$}, \\ \inf \{ t > 0 \colon U \geq \frac{p(t,B_{0})}{p(0,B_{0})} \} &\quad \text{if $B_{0} \in C$},\end{cases} $$

which allows us in the following to differentiate at each time \(t \ge 0\) between two different species of particles as will be explained in a moment. Speaking formally, the process \(X\) of interest can be defined by virtue of \(T\) and is given by

$$ X_{t} := \textstyle\begin{cases} B_{\tau _{t - T}} &\quad \text{for $t \geq T$}, \\ B_{0} &\quad \text{else.} \end{cases} $$

We consider \(X\) with respect to its natural (made right-continuous and completed) filtration \((\mathcal {F}_{t})_{t \ge 0}\). The next result gives our second proof of Theorem 1.1.

Proposition 3.1

The process \(X\) has the same one-dimensional marginals as the Brownian motion \(B\). It is a Markov martingale with càdlàg paths, and if \(C\) is closed with empty interior, \(X\) is continuous.

The formal proof is given in Sect. 6 below.

We note that \(T > 0\) almost surely on \(\{ B_{0} \in C\}\), and that the process \(X\) is constant on the interval \([0,T]\). Hence \(X\) is certainly not a Brownian motion as long as \(C\) has positive Lebesgue measure. For example, if \(C\) is a fat Cantor set (or \(\varepsilon \)-Cantor set, see e.g. Aliprantis and Burkinshaw [2, Chap. 3, before Lemma 15.8]), then it has positive measure but empty interior, so that \(X\) is a continuous Markov fake Brownian motion.
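
For concreteness, a fat Cantor set is easy to produce; a minimal sketch (removing stage-\(n\) middle gaps of length \(4^{-(n+1)}\) is one standard choice, and the removed open intervals play the role of the dense union of intervals in the second construction below):

```python
# A minimal sketch of a fat (epsilon-) Cantor set: at stage n, remove an open
# middle interval of length 4**-(n+1) from each of the 2**n remaining closed
# intervals.  The total removed mass is sum_n 2**n * 4**-(n+1) = 1/2, so the
# limit set C is closed with empty interior and Lebesgue measure 1/2, while
# the removed open intervals form a dense subset of (0,1).
intervals = [(0.0, 1.0)]
for n in range(12):
    gap = 4.0 ** -(n + 1)
    intervals = [piece
                 for a, b in intervals
                 for piece in ((a, (a + b - gap) / 2), ((a + b + gap) / 2, b))]

measure = sum(b - a for a, b in intervals)
print(len(intervals), measure)   # 4096 remaining intervals, measure ~ 0.5001
```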

We know since Bachelier’s thesis (compare the twin paper Beiglböck et al. [5] which also contains much of the motivation for the present construction, in particular [5, Sect. 3]) that at time \(t \ge 0\), the net inflow of \(B\) into the interval \([a,\infty )\) at the left endpoint \(a\) equals \(-\frac{1}{2}\frac{\partial}{\partial x}p(t,a)\), i.e.,

$$ \frac{\partial}{\partial t} \mathbb{P} [B_{t} \ge a] = -\frac{1}{2}\,\frac{\partial}{\partial x} p(t,a), $$
(3.1)

as follows immediately from the heat equation by integrating with respect to the space variable. If an interval \([a,b]\) is contained in \((0,\infty )\), we therefore find a positive net inflow at the left boundary \(a\) and a negative inflow, i.e., a net outflow, at the right boundary \(b\).
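
Identity (3.1) can also be verified symbolically for the density \(p(t,x)\) of \(\mathcal{N}(0,1+t)\) used here; a small sketch using sympy:

```python
import sympy as sp

t, x, a = sp.symbols('t x a', positive=True)
p = sp.exp(-x**2 / (2 * (1 + t))) / sp.sqrt(2 * sp.pi * (1 + t))  # density of N(0, 1+t)

# p solves the heat equation dp/dt = (1/2) d^2 p / dx^2 ...
assert sp.simplify(sp.diff(p, t) - sp.diff(p, x, 2) / 2) == 0

# ... and integrating in x over [a, oo) gives the flux identity (3.1):
lhs = sp.diff(sp.integrate(p, (x, a, sp.oo)), t)
rhs = -sp.diff(p, x).subs(x, a) / 2
assert sp.simplify(lhs - rhs) == 0
print("flux identity (3.1) verified symbolically")
```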

In order to construct a fake Brownian motion \(X = (X_{t})_{t \ge 0}\), let us focus our attention on an interval \([a,b] \subseteq (0,\infty )\) for the moment. The idea of the present construction is that the process \(X\) behaves like a Brownian motion as long as \(X_{t}\) takes its values in the interior of \([a,b]\). When \(X\) hits the boundaries, we have to make sure that the amounts of net respective inflows and outflows for the interval \([a,b]\) agree with the values for the original Brownian motion as given by (3.1). If we can do so, the evolution of the one-dimensional marginal densities of the Brownian motion \(B\) and the fake Brownian motion \(X\) will coincide on \([a,b]\). We note, moreover, that the process \(X\) may jump into or out of the interval \([a,b]\) at its boundary points, or it may move in or out in a continuous way.

We shall do our construction in two steps. First we consider finitely many disjoint intervals \([a_{1}^{N},b_{1}^{N}], \dots ,[a_{N}^{N},b_{N}^{N}]\) in \((0,1)\) and carry out the above program on each interval, arriving at a process \(X^{N}\) with the proper one-dimensional marginals on these intervals. This process will jump between the boundary points of neighbouring intervals. We also have to make sure that on the complement of these intervals, the law of \(X^{N}_{t}\) coincides with that of \(B_{t}\), i.e., has density \(p(t, \, \cdot \,)\).

In a second step, we let \(N\) go to infinity and pass to an infinite collection of disjoint intervals \([a_{n},b_{n}]\), contained in \((0,1)\), whose union is dense in \((0,1)\), but has Lebesgue measure strictly less than one. We thus pass to a limit \(X\) of the above processes \(X^{N}\) which will have continuous trajectories as well as the proper one-dimensional marginals. While the processes \(X^{N}\) will be strongly Markovian martingales, the limiting process \(X\) will still be a Markovian martingale, but fail to have the strong Markov property.

4 Construction of the processes \(X^{N}\)

We now pass to a more formal treatment of the intuitive idea sketched above. Let \([a_{n}^{N},b_{n}^{N} ]\), \(n=1, \dots ,N\), be disjoint intervals in \((0,1)\), ordered from left to right. We also write \((a_{0}^{N}, b_{0}^{N}] = (-\infty ,0]\) and \([a_{N+1}^{N},b_{N+1}^{N}) = [1,\infty )\). Let

$$ G^{N} := \bigcup _{n = 0}^{N+1} [a_{n}^{N},b_{n}^{N}], \qquad C^{N} := \bigcup _{n = 0}^{N} (b_{n}^{N},a_{n+1}^{N}), $$

so that \(C^{N}\) is precisely the complement of \(G^{N}\).

As above, we consider the increasing, Lipschitz-1-continuous process

$$ A^{N}_{t} := \int _{0}^{t} \mathbb{1}_{G^{N}}(B_{s}) \, ds, \qquad t \ge 0, $$

and its right-continuous inverse

$$ \tau ^{N}_{t} := \inf \{ s > 0 \colon A^{N}_{s} > t\}, \qquad t \ge 0, $$

which defines a time-change of the Brownian motion \(B\). Furthermore, to imitate Brownian one-dimensional marginals not only on \(G^{N}\), but also on its complement \(C^{N}\), we write \(U\) for a random variable uniformly distributed on \([0,1]\) and independent of \(B\). In complete analogy to the previous section, we introduce

$$ T^{N} := \textstyle\begin{cases} 0 &\quad \text{if $B_{0} \in G^{N}$}, \\ \inf \{ t > 0 \colon U \geq \frac{p(t,B_{0})}{p(0,B_{0})} \} & \quad \text{if $B_{0} \in C^{N}$}, \end{cases} $$
(4.1)
$$ X^{N}_{t} := \textstyle\begin{cases} B_{\tau ^{N}_{t - T^{N}}} &\quad \text{for $t \geq T^{N}$}, \\ B_{0} &\quad \text{else.} \end{cases} $$

We consider \(X^{N}\) with respect to its natural (made right-continuous and completed) filtration \({\mathbb{F}}^{N} =(\mathcal {F}_{t}^{N})_{t \ge 0}\).

Proposition 4.1

The process \(X^{N}\) has the same one-dimensional marginals as the Brownian motion \(B\). It is a strong Markov martingale with càdlàg paths, but fails to be continuous.

The formal proof is given in Sect. 6 below. Here we only sketch the main ideas. The verification of the strong Markovianity of \(X^{N}\) is rather straightforward. The crucial issue pertains to the one-dimensional marginals of \(X^{N}\).

To verify that the one-dimensional marginals of \(X^{N}\) are indeed Brownian, we distinguish the behaviour of the process \(X^{N}\) according to whether it takes values in \(G^{N}\) or in \(C^{N}\). At times \(t \geq T^{N}\), we call the particles busy, as opposed to the lazy particles, which remain at their initial position in \(C^{N}\) until time \(T^{N}\) and to which we now turn our attention.

The idea – as indicated by the word “lazy” – is that these particles do not move for some time. Eventually, namely at time \(T^{N}\), they change their behaviour from the “lazy” state to follow the behaviour of the busy particles and instantaneously leave \(C^{N}\). For \(x \in C^{N}\), the density function \(p(t,x)\) of \(B_{t}\) is decreasing in time, and its infinitesimal change is given by

$$ \frac{\partial}{\partial t} p(t,x) = \bigg( \frac{x^{2}}{2(t+1)^{2}} - \frac{1}{2(t+1)}\bigg) p(t,x) < 0. $$
(4.2)

In order to match the evolution of the one-dimensional marginals of the fake Brownian motion \(X^{N}\) with that of the original Brownian motion \(B\) on the set \(C^{N}\), we note that by (4.2), the fraction \(\frac{p(t,x)}{p(0,x)}\) is strictly decreasing in time for any fixed \(x \in C^{N}\). Thus

$$ \mathbb{P} [ t < T^{N} | B_{0} = x ] = \mathbb{P}\bigg[ U < \frac{p(t,x)}{p(0,x)} \bigg] = \frac{p(t,x)}{p(0,x)}. $$
(4.3)

As the trajectories of the busy particles (that is, \(t \geq T^{N}\)) take values exclusively in \(G^{N}\), it is apparent from (4.3) that the correct proportion of (lazy) particles stays at its starting position to match the Brownian one-dimensional marginals on \(C^{N}\).
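
Both (4.2) and the sampling mechanism behind (4.3) are easy to make concrete. A small sketch, first verifying (4.2) symbolically and then sampling \(T^{N}\) by inverting the strictly decreasing ratio \(t \mapsto p(t,x)/p(0,x)\) by bisection (the point \(x = 0.5\) and the use of sympy/scipy are illustrative assumptions):

```python
import math
import sympy as sp
from scipy.optimize import brentq

# Symbolic check of (4.2) for the density p(t,x) of N(0, t+1).
t, x = sp.symbols('t x', positive=True)
p_sym = sp.exp(-x**2 / (2 * (t + 1))) / sp.sqrt(2 * sp.pi * (t + 1))
claimed = (x**2 / (2 * (t + 1)**2) - 1 / (2 * (t + 1))) * p_sym
assert sp.simplify(sp.diff(p_sym, t) - claimed) == 0

# Sampling T^N according to (4.3): for x in C^N the ratio p(t,x)/p(0,x)
# decreases strictly from 1 to 0, so for 0 < u < 1 the crossing time exists.
def p(tt, xx):
    return math.exp(-xx**2 / (2 * (1 + tt))) / math.sqrt(2 * math.pi * (1 + tt))

def sample_T(xx, u):
    return brentq(lambda tt: p(tt, xx) / p(0.0, xx) - u, 0.0, 1e9)

print(sample_T(0.5, 0.5))  # P[T^N > t | B_0 = x] = p(t,x)/p(0,x) per (4.3)
```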

To further develop our intuition, consider the Brownian motion \(B\) conditionally on the starting value \(B_{0} = x \in \, (b_{n-1}^{N},a_{n}^{N}) \subseteq C^{N}\) for \(1 \leq n \leq N + 1\). In other words, \([a_{n-1}^{N},b_{n-1}^{N}]\) and \([a_{n}^{N},b_{n}^{N}]\) are the neighbouring intervals which lie to the left and right of the starting point \(x\). As \(x \in C^{N}\), the Brownian motion \(B\) with starting value \(x\) does not spend any time in \(G^{N}\) before it hits one of the boundary points \(\{b_{n-1}^{N},a_{n}^{N}\}\), and from this hitting time onwards, it will almost surely spend its time in \(G^{N}\). As a consequence, the time-changed càdlàg process \((B_{\tau ^{N}_{t}})_{t \geq 0}\) does not start at \(x\), but rather at one of the boundary points \(\{b_{n-1}^{N},a_{n}^{N}\}\). It chooses between these two possibilities with probabilities

$$\begin{aligned} \mathbb{P}[B_{\tau _{0}^{N}} &= a_{n}^{N} | B_{0} = x] = \frac{x - b_{n-1}^{N}}{a_{n}^{N} - b_{n-1}^{N}}, \\ \mathbb{P}[B_{\tau _{0}^{N}} &= b_{n-1}^{N} | B_{0} = x] = \frac{a_{n}^{N}-x}{a_{n}^{N} - b_{n-1}^{N}}. \end{aligned}$$
(4.4)

This causes the process \(X^{N}\) to jump at time \(T^{N}\) from \(x\) to the boundary \(\{b_{n-1}^{N},a_{n}^{N}\}\) with the correct probabilities turning \((X^{N}_{{(T^{N} - t)^{+}}})_{t \ge 0}\) into a martingale. After this jump, the particle enters into the “busy” mode and exhibits the same behaviour as the Markov martingale \((B_{\tau ^{N}_{t}})_{t \ge 0}\) taking values in \(G^{N}\).
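
The hitting probabilities (4.4) are the classical gambler's ruin probabilities and can be checked with a crude Euler scheme; a minimal Monte Carlo sketch with hypothetical boundary points and starting value:

```python
import numpy as np

rng = np.random.default_rng(2)

def hits_right_first(x, lo, hi, dt=1e-4):
    """Euler walk: does a Brownian path started at x reach hi before lo?"""
    while lo < x < hi:
        x += np.sqrt(dt) * rng.standard_normal()
    return x >= hi

lo, hi, x = 0.2, 0.8, 0.65          # hypothetical b_{n-1}, a_n and start x
est = np.mean([hits_right_first(x, lo, hi) for _ in range(2000)])
print(est, (x - lo) / (hi - lo))    # Monte Carlo vs. the value 0.75 from (4.4)
```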

The analysis of the marginal flow on \(G^{N}\) is more delicate. We use some formal arguments in the remainder of this section to reveal the intuition behind our construction. A rigorous treatment is postponed to Sect. 6. A “busy” particle \(X^{N}_{t}\) takes values in one of the intervals \([a^{N}_{n},b_{n}^{N}]\), \(n = 0, \dots , N+1\), which make up \(G^{N}\). In this case, \(X^{N}\) behaves like a Brownian motion as long as it moves in the interior of one of these intervals, up to hitting the boundary. As mentioned, at this stage, the particle can be either reflected back into the interval or jump into the neighbouring interval. As is well known, the intensity rate of these jumps is proportional to the local time spent by the particle \(X_{t}^{N}(\omega )\) at the respective boundary points, and inversely proportional to the distance from the neighbouring interval, i.e., to the size of this jump. Anticipating that \(X^{N}\) has at time \(t\) the correct one-dimensional marginal distribution, this observation leads to

$$\begin{aligned} \frac{1}{dt}\mathbb{P}\big[ X^{N}_{t + dt} = a_{n}^{N}, X^{N}_{t} \in [a_{n-1}^{N}, b_{n - 1}^{N}] \big] = \frac{p(t,b_{n-1}^{N})}{2(a_{n}^{N} - b_{n - 1}^{N})}, \end{aligned}$$
(4.5)
$$\begin{aligned} \frac{1}{dt}\mathbb{P}\big[ X^{N}_{t + dt} = b_{n-1}^{N}, X^{N}_{t} \in [a_{n}^{N},b_{n}^{N}] \big] = \frac{p(t,a_{n}^{N})}{2(a_{n}^{N} - b_{n - 1}^{N})}, \end{aligned}$$
(4.6)

where (4.5) describes an inflow at \(a_{n}^{N}\) from the neighbouring interval \([a_{n-1}^{N},b_{n-1}^{N}]\), whereas (4.6) describes an outflow from \([a_{n}^{N},b_{n}^{N}]\) to \([a_{n-1}^{N},b_{n-1}^{N}]\).

Simultaneously, the boundary points in \(G^{N}\) experience an additional mass inflow caused by “lazy” particles switching to the “busy” regime at time \(T^{N}\). Thanks to (4.4), we can explicitly derive the inflow at \(a_{n}^{N}\) caused by particles changing their behaviour from the “lazy” to the “busy” mode. The heat equation and an integration by parts yield

$$\begin{aligned} &\frac{1}{dt}\mathbb{P} [ X_{t+dt}^{N} = a_{n}^{N}, X_{t}^{N} \in (b_{n-1}^{N},a_{n}^{N}) ] \\ &= -\int _{b_{n-1}^{N}}^{a_{n}^{N}} \frac{x - b_{n-1}^{N}}{a_{n}^{N} - b_{n-1}^{N}} \frac{\partial}{\partial t} p(t,x) \, dx \\ &= -\frac{1}{2} \int _{b_{n-1}^{N}}^{a_{n}^{N}} \frac{x - b_{n-1}^{N}}{a_{n}^{N} - b_{n-1}^{N}} \frac{\partial ^{2}}{\partial x^{2}} p(t,x) \, dx \\ &= \frac{1}{2} \bigg( -\frac{\partial}{\partial x} p(t,a_{n}^{N}) + \frac{p(t,a_{n}^{N}) - p(t,b_{n-1}^{N})}{a_{n}^{N} - b_{n-1}^{N}} \bigg). \end{aligned}$$
(4.7)

Now we arrive at the crucial point of the construction: adding the effect of (4.7) to the inflows and outflows of the “busy” particles calculated in (4.5) and (4.6), we arrive precisely at the net inflow of the original Brownian motion \(B\) at the boundary point \(a_{n}^{N}\); see (3.1). Of course, a similar argument applies to the right boundary point \(b_{n}^{N}\) as well as to all the other boundary points. In conclusion, the one-dimensional marginals of \(X^{N}\) equal the one-dimensional marginals of \(B\) on \(C^{N}\) as well as on \(G^{N}\), i.e., on all of ℝ. This finishes the intuitive sketch of the ideas underlying the proof of Proposition 4.1, indicating that we have successfully constructed a fake Brownian motion with the properties detailed in Proposition 4.1. In Sect. 6, we translate this intuition into rigorous mathematics.
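
The cancellation just described is elementary algebra; abbreviating \(\ell = a_{n}^{N} - b_{n-1}^{N}\), \(p_{a} = p(t,a_{n}^{N})\), \(p_{b} = p(t,b_{n-1}^{N})\) and \(dp_{a} = \frac{\partial}{\partial x} p(t,a_{n}^{N})\), it can be checked mechanically:

```python
import sympy as sp

p_a, p_b, dp_a, ell = sp.symbols('p_a p_b dp_a ell')

jump_in = p_b / (2 * ell)                  # inflow at a_n from the left, (4.5)
jump_out = p_a / (2 * ell)                 # outflow at a_n to the left, (4.6)
lazy_in = (-dp_a + (p_a - p_b) / ell) / 2  # lazy-particle inflow, (4.7)

# The net inflow at a_n equals the Brownian flux -dp_a/2 from (3.1).
assert sp.simplify(jump_in - jump_out + lazy_in - (-dp_a / 2)) == 0
print("flux balance at a_n verified")
```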

5 Construction of the process \(X\)

The above sequence \((X^{N})_{N \in \mathbb{N}}\) of processes allows us to pass to a limiting continuous process \(X\) which will be our desired fake Brownian motion.

As already mentioned in the introduction, we choose an infinite collection of closed disjoint intervals \([a_{n},b_{n}]\), \(n \in \mathbb{N}\), in \([0,1]\) whose union is dense in \([0,1]\), but has Lebesgue measure strictly less than one. Taking the first \(N\) intervals, ordering them from left to right and adding the intervals \((a_{0}^{N}, b_{0}^{N}]=\, (-\infty ,0]\) and \([a_{N+1}^{N},b_{N+1}^{N}) = [1,\infty )\), we are in the situation of Sect. 4 to obtain a process \(X^{N}\). We denote by \(G = \bigcup _{N=1}^{\infty }G^{N}\) the union of all these intervals, and the complement of \(G\) by \(C = \bigcap _{N = 1}^{\infty }C^{N}\). Recall the definitions of \(A^{N}\) and its inverse \(\tau ^{N}\) as

$$ A^{N}_{t} := \int _{0}^{t} \mathbb{1}_{G^{N}}(B_{s}) \, ds, \qquad \tau ^{N}_{t} := \inf \{ s > 0 \colon A^{N}_{s} > t\}. $$
(5.1)

Clearly, the trajectories \((A^{N}_{t}(\omega ))_{t \ge 0}\) of the process \(A^{N}\) are increasing, Lipschitz-1-continuous and increase almost surely pointwise to the process \(A\) given by

$$ A_{t} := \int _{0}^{t} \mathbb{1}_{G}(B_{s}) \, ds, \qquad \tau _{t} := \inf \{ s > 0 \colon A_{s} > t\}. $$

As \(G\) is a union of intervals which is dense in ℝ, the original Brownian motion \(B\) spends almost surely a positive amount of time in \(G\) during any time window of the form \([t_{1},t_{2}]\), \(0 \leq t_{1} < t_{2} < \infty \). For this reason, \(A\) is almost surely strictly increasing, and \((\tau _{t})_{t \ge 0}\) has almost surely continuous paths. The trajectories \((\tau ^{N}_{t}(\omega ))_{t \ge 0}\) of the process \(\tau ^{N}\) decrease almost surely to the continuous trajectories \(\left (\tau _{t}(\omega )\right )_{t \ge 0}\), uniformly on compact subsets of \([0,\infty )\).

The stopping times \((T^{N})_{N \in \mathbb{N}}\) previously defined in (4.1) converge pointwise to \(T\) which is given by

$$ T := \textstyle\begin{cases} 0 &\quad \text{if $B_{0} \in G$,} \\ \inf \{ t > 0 \colon U \geq \frac{p(t,B_{0})}{p(0,B_{0})} \} &\quad \text{if $B_{0} \in C$.} \end{cases} $$
(5.2)

In conclusion, the limiting process \(X\) given by

$$ X_{t} := B_{(\tau _{t} - T)^{+}} = \textstyle\begin{cases} B_{\tau _{t - T}} &\quad \text{for $t \geq T$,} \\ B_{0} &\quad \text{else,}\end{cases} $$
(5.3)

has almost surely continuous trajectories and is almost surely the limit of the sequence \((X^{N})_{N \in \mathbb{N}}\). We shall verify that it is the desired fake Brownian motion described by Theorem 1.1.

The proof of Proposition 3.1 is given in Sect. 6 below. From an intuitive point of view, Proposition 3.1 is a rather obvious consequence of Proposition 4.1. The novel aspect is the failure of the strong Markov property. This failure follows from a general theorem of Lowther [17, Theorem 1.3]: An ℝ-valued, continuous and strongly Markovian martingale is uniquely determined by its one-dimensional marginals. In particular, if these marginals are those of a Brownian motion \(B\), there is no other process \(X\) with the mentioned properties. In other words, there is no fake Brownian motion which is a continuous, strong Markov martingale.

Corollary 5.1

The fake Brownian motion \(X\) fails to have the strong Markov property.

It is instructive to directly visualise the failure of the strong Markov property of the process \(X\) constructed above, without recourse to Lowther’s theorem. We do so by applying the technique of coupling as in Hobson [13].

Let \(X\) be the process given by (5.3) and construct an independent copy \(\tilde{X}\) of \(X\) by repeating the same construction with an independent Brownian motion \(\tilde{B}\) and an independent uniform random variable \(\tilde{U}\). Let \(\tilde{T}\) denote the corresponding copy of \(T\), see (5.2), which indicates the time when \(\tilde{X}\) changes from the “lazy” to the “busy” mode. Both processes are defined on the same probability space, whose elements we write as \((\omega , \tilde{\omega})\), which we equip with the filtration \(({\mathcal{F}_{t}})_{t \ge 0}\) generated by the two processes. Define the stopping time \({\rho = \rho}(\omega , \tilde{\omega})\) as the first moment when the trajectories \(X(\omega )\) and \(\tilde{X}(\tilde{\omega})\) meet, i.e.,

$$ {\rho}(\omega ,\tilde{\omega}) := \inf \{ t > 0 \colon X_{t}( \omega ) = \tilde{X}_{t}(\tilde{\omega}) \}. $$

On the event \(\{T(\omega ) \leq {\rho}(\omega ,\tilde{\omega})\}\), we know that \(X\) is in the “busy” regime from time \({\rho}\) onwards. That means that \((X_{{\rho} + t})_{t \ge 0}\) behaves like the strong Markov process \((B_{\tau _{t}})_{t \ge 0}\) when the underlying Brownian motion has the adequate starting distribution \(B_{0} \sim X_{{\rho}}\). Hence we have, almost surely for any \(t \in [0,\infty )\) on \(\{T(\omega ) \leq {\rho}(\omega ,\tilde{\omega})\}\), that

$$ \mathbb{E}[ \mathbb{1}_{G}( X_{\rho + t}) \,|\, \mathcal {F}_{\rho} ] = \mathbb{E}[ \mathbb{1}_{G}( B_{\tau _{t}}) \,|\, B_{0} = X_{\rho} ]. $$
(5.4)

The sets \(\mathcal {A}, \mathcal {B} \in \mathcal{F}_{\rho}\) where the particle \(X_{\rho}(\omega )\) (resp. \(\tilde{X}_{\rho}( \tilde{\omega})\)) at time \({\rho}\) is in the “busy” mode while \({\tilde{X}}_{\rho}(\tilde{\omega})\) (resp. \(X_{\rho}( \omega )\)) is in the “lazy” mode, in symbols

$$\begin{aligned} \mathcal {A} &:= \{(\omega ,\tilde{\omega}) \colon T(\omega ) \leq { \rho}(\omega ,\tilde{\omega}) < \tilde{T}(\tilde{\omega}) \}, \\ \mathcal {B}&:= \{ (\omega ,\tilde{\omega}) \colon \tilde{T}( \tilde{\omega}) \leq {\rho}(\omega ,\tilde{\omega}) < T( \omega ) \}, \end{aligned}$$

have positive probability and \(\mathbb{P}[ \mathcal {A}] = \mathbb{P}[ \mathcal {B}]\). Define the conditional probabilities \(\mathbb{P}_{\mathcal {A}}[\,\cdot \, ] = \mathbb{P} [\,\cdot \, \cap \mathcal {A}] / \mathbb{P} [\mathcal {A}]\) and analogously \(\mathbb{P}_{\mathcal {B}}[\,\cdot \, ]\); by symmetry, we then have

$$ \mu (dx,dt) := \mathbb{P}_{\mathcal {A}} [ X_{\rho} \in dx, { \rho} \in dt ] = \mathbb{P}_{\mathcal {B}} [ {\tilde{X}_{ \rho}} \in dx, {\rho} \in dt ]. $$
(5.5)

The conditional distribution of \(X_{{\rho} + t}\) given \(\mathcal{F}_{\rho}\) depends not only on the present position \({X}_{\rho}(\omega ) = {\tilde{X}}_{\rho}(\tilde{\omega}) \), but also on whether \({X}_{\rho}(\omega ) \) is in the “busy” or the “lazy” mode. Indeed, fix \(t > 0\) and consider the probability of the event \(\{ X_{{\rho} + t} \in G \}\) conditionally on \(\mathcal {A}\), which is, by (5.4) and as \(\tau ^{N}_{t} \searrow \tau _{t}\) almost surely, given by

$$\begin{aligned} \mathbb{P}_{\mathcal {A}} [ X_{\rho + t} \in G ] &= \mathbb{E}_{\mathbb{P}_{\mathcal {A}}} \big[ \mathbb{P} [ B_{\tau _{t}} \in G \,|\, \mathcal {F}_{\rho} ] \big] = \mathbb{E}_{\mathbb{P}_{\mathcal {A}}} \big[ \mathbb{E} [ \mathbb{1}_{G}(B_{\tau _{t}}) \,|\, B_{0} = X_{\rho} ] \big] \\ &\geq \mathbb{E}_{\mathbb{P}_{\mathcal {A}}} \big[ \mathbb{E} [ \mathbb{1}_{\mathrm{int}(G)}(B_{\tau _{t}}) \,|\, B_{0} = X_{\rho} ] \big] \\ &= \lim _{N \to \infty} \mathbb{E}_{\mathbb{P}_{\mathcal {A}}} \big[ \mathbb{P} [ B_{\tau ^{N}_{t}} \in \mathrm{int}(G) \,|\, B_{0} = X_{\rho} ] \big] = 1, \end{aligned}$$

where we write \(\text{int}(G)\) for the interior of \(G\). We deduce that

$$ \mathbb{P}_{\mathcal {A}} [ X_{\rho + t} \in C \,|\, X_{\rho}, \rho ] = 0 \qquad \text{and thus} \qquad \mathbb{1}_{\mathcal {A}}\, \mathbb{P}[ X_{\rho + t} \in C \,|\, \mathcal {F}_{\rho} ] = 0, $$
(5.6)

whereas

$$ \mathbb{P}_{\mathcal {B}} [ X_{{\rho} + t} \in C | X_{\rho}, {\rho} ]>0 $$
(5.7)

so that \(\mathbb{1}_{\mathcal {B}}\, \mathbb{P}[ X_{{\rho} + t} \in C \,|\, \mathcal {F}_{\rho} ]\) does not vanish. From (5.5)–(5.7), we conclude that in order to determine the conditional probability of \(\{X_{{\rho} + t} \in C\}\) given \(\mathcal {F}_{\rho}\), we require the information from the past of the process \(X\) prior to the stopping time \({\rho}\). In conclusion, the process \(X\) fails to have the strong Markov property.

Intuitively speaking, this failure stems from the fact that the “busy” particles travel through the “lazy territory” \(C\) in a continuous way. In contrast, the processes \(X^{N}\) considered in the previous section jump over the territory \(C^{N}\) which makes it impossible to “catch” them by a stopping time \({\rho}\) while they are traveling through \(C^{N}\).

6 Proofs

Lemma 6.1

Using the notation of Sect. 4, let \(B\) be a Brownian motion started at some \(B_{0} = x \in G^{N}\). Then the time-changed Brownian motion \((B_{\tau ^{N}_{t}})_{t \ge 0}\) is a Feller process. Its Feller generator \(\mathcal {G}\) is given by

$$ \mathcal {G}f(x) = \frac{1}{2} \frac{\partial ^{2}}{\partial x^{2}} f(x), $$
(6.1)

and its domain by

$$\begin{aligned} \bigg\{ f \in C^{2}(G^{N}) \colon & \partial _{-} f(b_{n-1}^{N}) = \frac{f(a_{n}^{N}) - f(b_{n-1}^{N})}{a_{n}^{N} - b_{n-1}^{N}} = \partial _{+} f(a_{n}^{N}) \\ & \textit{for every }n = 1,\ldots ,N+1\bigg\} . \end{aligned}$$
(6.2)

Proof

To see that \((B_{\tau _{t}^{N}})_{t \ge 0}\) defines a Feller process, we recall that it inherits the strong Markov property from \(B\). Define its resolvent

$$ R_{\lambda }f(x) := \int _{0}^{\infty }e^{-\lambda t} \mathbb{E} [f(B_{ \tau ^{N}_{t}}) | B_{0} = x ] \,dt. $$

By the Hille–Yosida theorem, it suffices to show that \((R_{\lambda})_{\lambda > 0}\) is a strongly continuous contraction resolvent on \(C_{0}(G^{N})\) (the set of continuous functions on \(G^{N}\) vanishing at \(\pm \infty \)). In view of the definition of the process \((A_{t}^{N})_{t \ge 0}\), conditionally on the starting point \(B_{0} = x \in G^{N}\), we have as a consequence of Blumenthal’s 0–1 law that almost surely, \(A_{t}^{N} > A_{0}^{N}\) for all \(t > 0\). Therefore, as \(t \mapsto A_{t}^{N}\) is increasing and continuous (in particular at 0), we find

$$ \lim _{t \searrow 0}\tau ^{N}_{t} = \lim _{t \searrow 0} A^{N}_{t} = 0. $$

Thus for any \(x \in G^{N}\),

$$ \lim _{t \searrow 0}\mathbb{E} [f(B_{\tau ^{N}_{t}}) | B_{0} = x ] = f(x) \qquad \text{and} \qquad \lim _{\lambda \to \infty} \lambda R_{ \lambda }f(x) = f(x). $$

Due to Rogers and Williams [24, Lemma III.6.7 and its proof], it remains to show that \(R_{\lambda}\) maps \(C_{0}(G^{N})\) into itself. Let \(x,y \in [a_{n}^{N},b_{n}^{N}] \subseteq G^{N}\); then we obtain as in [24, Equation III.(22.2)]

$$ \left|R_{\lambda }f(x) - R_{\lambda }f(y)\right| \leq \frac{2\lVert f\rVert}{\lambda} \mathbb{E} [1 - e^{-\lambda \tilde{H}_{y}} | B_{0} = x ], $$
(6.3)

where \(\tilde{H}_{y} := \inf \{ t > 0 \colon B_{\tau _{t}^{N}} = y\}\) is the first hitting time of \(y\) and \(\lVert \, \cdot \, \rVert \) denotes the supremum norm on \(C_{0}(G^{N})\). Since \(\tilde{H}_{y}\) is dominated by \(H_{y} := \inf \{t > 0 \colon B_{t} = y\}\) (this uses that \(\tau ^{N}\) is the inverse of \(A^{N}\) which is Lipschitz-1-continuous), we have that for \(\epsilon > 0\),

$$ \lim _{y \to x}\mathbb{P} [ \tilde{H}_{y} \geq \epsilon | B_{0} = x ] = 0, $$

whence the right-hand side in (6.3) vanishes. We have thus shown that the time-changed process \((B_{\tau _{t}^{N}})_{ t \ge 0}\) is a Feller process.

Next, we compute its generator. Let \(x \in \, (a_{n}^{N},b_{n}^{N})\) and recall that the expected amount of time the Brownian motion \(B\) started at \(B_{0} = x\) spends inside the interval \([x-\epsilon , x + \epsilon ] \subseteq [a_{n}^{N},b_{n}^{N}]\) before exiting \([x-\epsilon , x + \epsilon ]\) is given by

$$ \mathbb{E} [ H_{x - \epsilon} \wedge H_{x + \epsilon} | B_{0} = x ] = \epsilon ^{2}. $$

Due to Dynkin’s formula, we can compute the generator via

$$\begin{aligned} \mathcal {G}f(x) &= \lim _{\epsilon \searrow 0} \frac{\frac{f(x + \epsilon ) + f(x - \epsilon )}{2} - f(x)}{\mathbb{E} [H_{x -\epsilon} \wedge H_{x + \epsilon} | B_{0} = x ]} \\ &= \lim _{\epsilon \searrow 0} \frac{f(x + \epsilon ) - f(x) + f(x - \epsilon ) - f(x)}{2\epsilon ^{2}} \\ &= \lim _{\epsilon \searrow 0} \frac{f'(x + \epsilon ) - f'(x - \epsilon )}{4\epsilon} = \frac{f''(x)}{2}; \end{aligned}$$

note that \(\mathcal {G}f(x)\) can only exist if \(f\) is twice continuously differentiable at \(x\).

Next let \(x = a_{n}^{N}\) and \(\ell = a_{n}^{N} - b_{n-1}^{N}\) for some \(n = 1,\ldots , N+1\). The expected amount of time a Brownian motion \(B\) started at \(B_{0} = 0\) spends inside \([0,\epsilon ]\) before \(H_{\epsilon}\) is \(\epsilon ^{2}\). Therefore we can compute the expected amount of time the Brownian motion \(B\) started at \(B_{0} = x\) spends inside \([x,x+\epsilon ] \subseteq [a_{n}^{N},b_{n}^{N}]\) before hitting either of \(\{b_{n-1}^{N},x + \epsilon \}\) for the first time via

$$\begin{aligned} \epsilon ^{2} &= \mathbb{E} \bigg[ \int _{0}^{H_{x+\epsilon}} \mathbb{1}_{[x,x+\epsilon ]}(B_{s}) \, ds \, \bigg| \, B_{0} = x \bigg] \\ &= \mathbb{E} \bigg[ \int _{0}^{H_{x+\epsilon} \wedge H_{b_{n-1}^{N}}} \mathbb{1}_{[x,x+\epsilon ]}(B_{s}) \, ds \, \bigg| \, B_{0} = x \bigg] + \mathbb{P} [ H_{b_{n-1}^{N}} < H_{x+\epsilon} \, | \, B_{0} = x ] \, \epsilon ^{2} \\ &= \mathbb{E} [ \tilde{H}_{b_{n-1}^{N}} \wedge \tilde{H}_{x + \epsilon} ] + \frac{\epsilon ^{3}}{\ell + \epsilon}. \end{aligned}$$

Again, Dynkin’s formula allows us to compute the generator at \(x\) as

$$\begin{aligned} \mathcal {G}f(x) &= \lim _{\epsilon \searrow 0} \frac{\mathbb{E} [ f(B_{\tau ^{N}_{\tilde{H}_{b_{n - 1}^{N}} \wedge \tilde{H}_{x + \epsilon}}}) | B_{0} = x ] - f(x)}{\mathbb{E} [ \tilde{H}_{b_{n - 1}^{N}} \wedge \tilde{H}_{x + \epsilon} | {B_{0} = x} ]} \\ &= \lim _{\epsilon \searrow 0} \frac{ \frac{\ell}{\ell + \epsilon} \left ( f(x + \epsilon ) - f(x) \right ) +\frac{\epsilon}{\ell + \epsilon} (f(b_{n-1}^{N}) - f(x))}{ \epsilon ^{2} ( 1 - \frac{\epsilon}{{ \ell} + \epsilon})}. \end{aligned}$$

This limit can only exist if \(f'(x)\), \(f''(x)\) exist (where \(f'(x)\) and \(f''(x)\) denote here the adequate one-sided derivatives) and \(\ell f'(x) = f(x) - f(b_{n-1}^{N})\). In this case, we have by de l’Hôpital’s rule that

$$\begin{aligned} \mathcal {G}f(x) &= \lim _{\epsilon \searrow 0} \frac{1}{2\epsilon} \bigg( \frac{\ell}{\ell +\epsilon} f'(x+\epsilon ) + \frac{f(b_{n-1}^{N}) - f(x)}{\ell + \epsilon} \\ &\hphantom{= \lim _{\epsilon \searrow 0} \frac{1}{2\epsilon} \bigg(} - \frac{\epsilon \big(f(b_{n-1}^{N}) - f(x)\big) + \ell \big(f(x+\epsilon ) - f(x)\big)}{(\ell +\epsilon )^{2}} \bigg) \\ &= \frac{1}{2} \bigg( f''(x) - \frac{f'(x)}{\ell} - \frac{2}{\ell ^{2}}\big(f(b_{n-1}^{N}) - f(x)\big) - \frac{f'(x)}{\ell} \bigg) \\ &= \frac{f''(x)}{2}. \end{aligned}$$

We conclude by remarking that, by analogous reasoning, for \(x = b_{n}^{N}\) with \(n = 0,\ldots ,N\), the function \(f\) belongs to the domain of \(\mathcal {G}\) only if it is twice (one-sided) differentiable at \(b_{n}^{N}\) and

$$\begin{aligned} \mathcal {G}f(b_{n}^{N}) = \frac{f''(b_{n}^{N})}{2},\qquad f'(b_{n}^{N}) = \frac{f(a_{n+1}^{N}) - f(b_{n}^{N})}{a_{n+1}^{N} - b_{n}^{N}}, \end{aligned}$$

where \(f'(b_{n}^{N})\) and \(f''(b_{n}^{N})\) are again the adequate one-sided derivatives. □

6.1 Marginals

This section is concerned with the verification that \(X^{N}\) has the correct Brownian one-dimensional marginals. On a formal level, one may argue as follows. We have shown in Lemma 6.1 that the busy particles \((B_{\tau ^{N}_{t}})_{t \ge 0}\) behave like a Feller process with generator \(\mathcal {G}\); see (6.1) and (6.2). As we know the exact form of the generator, it is possible to derive the Kolmogorov forward equation describing the time evolution of the density \(u\) of the process \((B_{\tau ^{N}_{t}})_{t \ge 0}\), that is,

$$\begin{aligned} \frac{\partial}{\partial t} u(t,x) &= \frac{1}{2}\frac{\partial ^{2}}{\partial x^{2}} u(t,x), \qquad (t,x) \in (0, \infty ) \times (G^{N} \setminus \partial G^{N}), \\ \frac{\partial}{\partial x} u(t,a_{n}^{N}) &= \frac{u(t,a_{n}^{N}) - u(t,b_{n-1}^{N})}{a_{n}^{N} - b_{n-1}^{N}} = \frac{\partial}{\partial x} u(t,b_{n-1}^{N}), \qquad t \in (0, \infty ), \end{aligned}$$

for \(n = 1,\ldots ,N+1\). Thus, relying on standard properties of the corresponding heat equations, one can show that the inflow of particles from \(C^{N}\) caused by their change of mode from “lazy” to “busy” yields at the boundary \(\partial G^{N}\) exactly the correct compensation, whence the density \(v\) of \(X^{N}\) satisfies the heat equation

$$ \frac{\partial}{\partial t} v(t,x) = \frac{1}{2} {\frac{\partial ^{2}}{\partial x^{2}}}v(t,x), \qquad (t,x) \in (0, \infty ) \times \mathbb{R}, $$

with initial condition \(v(0,x) = p(0,x)\).

Putting regularity questions aside, the approach sketched above seems rather clear-cut. Nevertheless, to avoid subtle arguments justifying the formal reasoning, we “go back to the roots” and use a discretisation argument instead. Approximating \(B\) by a rescaled random walk \(B^{m}\) allows us to establish the form of the marginals of \(X^{N}\) without having to worry about regularity of the involved densities.

To this end, consider a random walk on ℤ with i.i.d. increments \((\zeta _{k})_{k \in \mathbb{N}}\), where

$$ \mathbb{P} [\zeta _{k} = \pm 1 ] = {\frac{1}{4}} \quad \text{and} \quad \mathbb{P} [ \zeta _{k} = 0 ] = \frac{1}{2},\qquad k \in \mathbb{N}, $$

so that for any \(\ell \in \mathbb{N}\) and \(j \in \mathbb{Z}\) with \(-\ell \leq j \leq \ell \),

$$ \tilde{p}_{\ell}(j) := \mathbb{P}\bigg[ \sum _{k = 1}^{\ell }\zeta _{k} = j \bigg] = 2^{-2\ell} \binom{2\ell}{\ell +j}, $$
(6.4)

which satisfies the discrete heat equation

$$ \tilde{p}_{\ell +1}(j) - \tilde{p}_{\ell }(j) = \frac{1}{4} \tilde{p}_{ \ell }(j-1) + \frac{1}{4} \tilde{p}_{\ell }(j+1) - \frac{1}{2} \tilde{p}_{\ell }(j). $$
(6.5)

From Donsker’s theorem, we know that the rescaled random walk

$$ B_{t}^{m} := \sqrt{\frac{2}{m}} \sum _{k = 1}^{\lfloor m(1 + t) \rfloor} \zeta _{k}, \qquad t\in [0,\infty ), $$
(6.6)

converges in law to the original Brownian motion \(B\) as random variables on the Skorokhod space \(\mathcal {D}([0,\infty ))\). Recall that \(B_{0}\) is normally distributed with mean 0 and variance 1, for which reason the sum in (6.6) runs from 1 to \({\lfloor m(1 + t) \rfloor}\) rather than \({\lfloor mt \rfloor}\). Thus we observe for the one-dimensional marginal distributions that \(B_{t}^{m}\) converges in law to \(B_{t}\), and we write \(p^{m}(t,x) := \mathbb{P} [ B_{t}^{m} = x ]\) for the discrete mass evolution of \(B^{m}\).

Let \(U\) be a uniformly distributed random variable independent of \((\zeta _{k})_{k \in \mathbb{N}}\), and define in analogy to \(T^{N}\) from (4.1) the time \(T^{N,m}\) which indicates when a particle \(X^{N,m}\) changes its behaviour from “lazy” to “busy”, i.e.,

$$ T^{N,m} := \textstyle\begin{cases} 0 &\quad \text{if $B_{0}^{m} \in G^{N}$,} \\ \inf \{ t > 0 \colon U \geq \frac{p^{m}(t, B_{0}^{m})}{p^{m}(0,B_{0}^{m})} \} & \quad \text{if $B_{0}^{m} \in C^{N}$.} \end{cases} $$

The time-change \(\tau ^{N,m}\) is given by

$$ \tau ^{N,m}_{t} := \inf \{ s > 0 \colon A^{N,m}_{s} > t\}, \qquad A^{N,m}_{t} := \int _{0}^{t} \mathbb{1}_{G^{N}}(B^{m}_{s}) \, ds. $$

We define the time-changed rescaled random walk \(X^{N,m}\) by

$$ X_{t}^{N,m} := \textstyle\begin{cases} B_{0}^{m} &\quad \text{for $t < T^{N,m}$,} \\ B_{\tau ^{N,m}_{t - T^{N,m}}}^{m} &\quad \text{for $t \geq T^{N,m}$.} \end{cases} $$

It is evident that this defines a strong Markov process taking values in \(\sqrt{\frac{2}{m}} \mathbb{Z}\). The aim of the remainder of this subsection is to first establish that \(X^{N,m}\) preserves the one-dimensional marginals of \(B^{m}\); see Theorem 6.3. Second, we give a proof for the convergence in law of \((X^{N,m})_{m \in \mathbb{N}}\) to \(X^{N}\) as random variables on the Skorokhod space \(\mathcal {D}([0,\infty ))\); see Theorem 6.4.
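
Before turning to the formal arguments, we note that the whole discrete construction fits into a few lines of simulation code. The following minimal sketch (a single hypothetical excised gap \(C^{N,m} = \{2,3\}\) and modest lattice parameters, all chosen for illustration) builds \(\tilde{X}^{N,m}\) path by path and compares its empirical one-dimensional marginal with \(\tilde{p}_{m+\ell}\), anticipating Theorem 6.3:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(3)
m = 32                        # lattice scale: real positions are j * sqrt(2/m)
steps = 2 * m                 # observe the chain up to time t = 2
gap = np.array([2, 3])        # hypothetical C^{N,m}: the points 0.5, 0.75 of C^N

def p_tilde(l, j):            # (6.4)
    return comb(2 * l, l + j) / 4**l if abs(j) <= l else 0.0

def sample_path():
    zeta = rng.choice([-1, 0, 0, 1], size=m + 20 * steps)  # lazy steps, cf. (6.6)
    walk = np.cumsum(zeta)[m - 1:]      # lattice positions from time 0 onwards
    j0 = walk[0]                        # lattice value of B_0^m
    T = 0
    if j0 in gap:                       # lazy particle: freeze until T^{N,m}
        u = rng.random()
        T = 1
        while T <= steps and p_tilde(m + T, j0) > u * p_tilde(m, j0):
            T += 1                      # first l with p^m-ratio <= u (capped)
    busy = walk[~np.isin(walk, gap)]    # time-changed walk: C-sojourns excised
    assert len(busy) > steps
    X = np.full(steps + 1, j0)
    if T <= steps:
        X[T:] = busy[: steps + 1 - T]
    return X

paths = np.array([sample_path() for _ in range(20_000)])
err = max(abs(np.mean(paths[:, steps] == j) - p_tilde(m + steps, j))
          for j in range(-15, 16))
print("max marginal error at t = 2:", err)   # small: Monte Carlo noise only
```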

6.1.1 The one-dimensional marginals of \(X^{N,m}\)

An advantage of the discrete level is that evolving the one-dimensional marginal distributions in time is achieved iteratively. Rescaling \(B^{m}\) and \(X^{N,m}\) by a factor of \(\sqrt{\frac{m}{2}}\), we obtain random walks

$$ \tilde{B}^{m}_{\ell }:= \sqrt{\frac{m}{2}} B_{\frac{\ell}{m}}^{m} = \sum _{k = 1}^{m+\ell} \zeta _{k},\quad \tilde{X}^{N,m}_{\ell }:= \sqrt{\frac{m}{2}} X^{N,m}_{\frac{\ell}{m}}, \qquad \ell \in \mathbb{N}, $$
(6.7)

on ℤ. Then \(X^{N,m}\) and \(B^{m}\) have the same one-dimensional marginals if and only if \(\tilde{X}^{N,m}\) and \(\tilde{B}^{m}\) do. For the analysis of the evolution of the one-dimensional marginals of \(\tilde{X}^{N,m}\), we introduce the sets

$$\begin{aligned} \begin{aligned} G^{N,m} &:= \bigg\{ j \in \mathbb{Z}\colon j \sqrt{\frac{2}{m}} \in G^{N} \bigg\} ,\qquad C^{N,m} := \mathbb{Z}\setminus G^{N,m}, \\ \partial G^{N,m} &:= \{ j \in G^{N,m} \colon \exists k \in C^{N,m} \text{ such that } \left|j-k\right| = 1 \}, \end{aligned} \end{aligned}$$
(6.8)

on which the transition probabilities of \(\tilde{X}^{N,m}\) exhibit different behaviours. In the interior of \(G^{N,m}\), by which we mean \(G^{N,m} \setminus \partial G^{N,m}\), the transition probabilities behave like those of \(\tilde{B}^{m}\) and satisfy the discrete heat equation (6.5). From this observation, it is easy to conclude that

$$ \mathrm{law}(\tilde{X}^{N,m}_{\ell}) = \mathrm{law}(\tilde{B}^{m}_{\ell}) \quad \implies \quad \mathbb{P}[ \tilde{X}^{N,m}_{\ell +1} = i ] = \mathbb{P}[ \tilde{B}^{m}_{\ell +1} = i ] \qquad \text{for all } i \in G^{N,m} \setminus \partial G^{N,m}. $$
(6.9)

The next lemma ensures that the discrete version of inequality (4.2) holds, which we use to prove that the process \(\tilde{X}^{N,m}\) jumps with the correct rate from \(C^{N}\) to \(\partial G^{N,m}\).

Lemma 6.2

Let \(\ell ,j \in \mathbb{N}\), \(\ell \geq 2 j^{2}\). Then

$$ \tilde{p}_{\ell}(j) = \mathbb{P}\bigg[\sum _{k = 1}^{\ell} \zeta _{k} = j \bigg] > \mathbb{P}\bigg[ \sum _{k = 1}^{\ell +1} \zeta _{k} = j \bigg] = \tilde{p}_{\ell +1}(j). $$
(6.10)

Proof

The probabilities in (6.10) have a closed form given by (6.4). We compute the ratio of the left-hand side to the right-hand side of (6.10) as

$$\begin{aligned} \frac{\mathbb{P} [\sum _{k = 1}^{\ell} \zeta _{k} = j ]}{ \mathbb{P} [ \sum _{k = 1}^{\ell +1} \zeta _{k} = j ]} &= \frac{2^{-2\ell}}{2^{-2(\ell +1)}} \frac{\binom{2\ell}{\ell +j}}{\binom{2(\ell +1)}{\ell +1+j}} = { 2 \frac{(\ell +1+j)(\ell +1-j)}{(\ell +1)(2 \ell + 1)} } \\ &= 1 + {\frac{ \ell + 1 - 2 j^{2}}{(\ell +1)(2\ell +1)}} > 1, \end{aligned}$$

where the last inequality holds by the assumption \(\ell \geq 2j^{2}\); this proves (6.10). □
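
The ratio identity used in this proof can also be cross-checked numerically over a range of parameters; a short sketch:

```python
from math import comb

# Check the ratio identity from the proof of Lemma 6.2 for all 0 <= j <= l < 60.
for l in range(1, 60):
    for j in range(l + 1):
        lhs = (comb(2 * l, l + j) / 4**l) / (comb(2 * l + 2, l + 1 + j) / 4**(l + 1))
        rhs = 1 + (l + 1 - 2 * j * j) / ((l + 1) * (2 * l + 1))
        assert abs(lhs - rhs) < 1e-12, (l, j)
        if l >= 2 * j * j:
            assert lhs > 1      # (6.10): strict decrease under l >= 2 j^2
print("Lemma 6.2 ratio identity checked for l < 60")
```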

Theorem 6.3

The processes \(X^{N,m}\) and \(B^{m}\) have the same one-dimensional marginal distributions.

Proof

The assertion is equivalent to showing that the one-dimensional marginal distributions of the processes defined in (6.7) taking values in ℤ coincide. We show the statement by induction; the base case is immediate since \(\tilde{X}^{N,m}_{0} = \tilde{B}^{m}_{0}\) by construction, so assume that \(\mathrm {law}(\tilde{B}_{\ell}^{m}) = \mathrm {law}(\tilde{X}_{\ell}^{N,m})\) for a fixed \(\ell \in \mathbb{N}\).

We treat separately the three cases corresponding to the three sets defined in (6.8).

1) The case \(i \in G^{N,m} \setminus \partial G^{N,m}\) was already discussed in (6.9); there we obtain that \(\mathbb{P}[\tilde{X}^{N,m}_{\ell +1} = i] = \mathbb{P}[ \tilde{B}^{m}_{\ell +1} = i] =\tilde{p}_{m+\ell +1}(i)\).

2) Let \(i \in C^{N,m}\) and denote by \(j_{1},j_{2} \in \partial G^{N,m}\) the boundary points with \(j_{1} < i < j_{2}\) and \((j_{1},j_{2}) \, \cap \mathbb{Z}=: C_{i}^{N,m} \subseteq C^{N,m}\). We abbreviate \(x = i \sqrt{\frac{2}{m}}\) and compute the net mass change at \(i\) from time \(\ell \) to time \(\ell +1\) of \(\tilde{X}^{N,m}\). Since \(m + \ell \geq 2 i^{2}\), Lemma 6.2 ensures that \(t \mapsto p^{m}(t,x)\) is strictly decreasing. Thus we find

$$\begin{aligned} &\mathbb{P} [ \tilde{X}_{\ell}^{N,m} = i ] - \mathbb{P} [ \tilde{X}_{ \ell +1}^{N,m} = i ] \\ &= \mathbb{P}\big[ X_{\frac{\ell}{m}}^{N,m} = x\big] - \mathbb{P}\big[ X_{ \frac{\ell +1}{m}}^{N,m} = x \big] \\ &= \mathbb{P}\bigg[ B^{m}_{0} = x, \frac{\ell}{m} < T^{N,m} \leq \frac{\ell +1}{m} \bigg] \\ &= \mathbb{P}\bigg[B_{0}^{m} = x, p^{m}\bigg(\frac{\ell +1}{m},x\bigg) \leq Up^{m} (0, x )< p^{m}\bigg(\frac{\ell}{m}, x \bigg) \bigg] \\ &= \mathbb{P} [ \tilde{B}^{m}_{\ell }= i ] - \mathbb{P} [ \tilde{B}^{m}_{ \ell +1} = i ] \\ &= \tilde{p}_{m+\ell}(i) - \tilde{p}_{m+\ell +1}(i)> 0, \end{aligned}$$

and in particular \(\mathbb{P} [ \tilde{X}_{\ell +1}^{N,m} = i ] = \tilde{p}_{m+\ell +1}(i)\). Using the optional stopping theorem for the martingale \((\tilde{B}^{m}_{k})_{k \geq -m}\) allows us to compute

$$\begin{aligned} &\mathbb{P}\bigg[ \tilde{X}^{N,m}_{\ell +1} = j_{1} \bigg| \tilde{X}^{N,m}_{ \ell }= i, T^{N,m} = \frac{\ell +1}{m}\bigg] \\ &= \mathbb{P} [ \inf \{k \in \mathbb{N}\colon \tilde{B}^{m}_{k - m} = j_{1} - i \} < \inf \{k \in \mathbb{N}\colon \tilde{B}_{k-m}^{m} = j_{2} - i \} ] \\ &= \frac{j_{2} - i}{j_{2} - j_{1}}. \end{aligned}$$

Thus we obtain by the discrete version (6.5) of the heat equation and the inductive assumption that

$$\begin{aligned} & \mathbb{P} [ \tilde{X}^{N,m}_{\ell +1} = j_{1}, \tilde{X}_{\ell}^{N,m} \in C^{N,m}_{i} ] \\ &= \sum _{k = j_{1}+1}^{j_{2} - 1} \frac{j_{2}-k}{j_{2} - j_{1}} ( \mathbb{P} [ \tilde{X}_{\ell}^{N,m} = k ] - \mathbb{P} [ \tilde{X}_{\ell +1}^{N,m} = k ] ) \\ &= \sum _{k = j_{1}+1}^{j_{2} - 1} \frac{j_{2}-k}{j_{2} - j_{1}} \bigg( \frac{1}{2} \tilde{p}_{m+\ell}(k) - \frac{1}{4} \tilde{p}_{m+ \ell}(k-1) - \frac{1}{4} \tilde{p}_{m+\ell}(k+1) \bigg) \\ &= \frac{1}{4}\bigg( \tilde{p}_{m+\ell}(j_{1} + 1) - \tilde{p}_{m+\ell}(j_{1}) + \frac{\tilde{p}_{m+\ell}(j_{1}) - \tilde{p}_{m+\ell}(j_{2})}{j_{2} - j_{1}} \bigg). \end{aligned}$$
(6.11)

Analogously, we get

$$\begin{aligned} &\mathbb{P} [ \tilde{X}_{\ell +1}^{N,m} = j_{2}, \tilde{X}_{\ell}^{N,m} \in C_{i}^{N,m} ] \\ &= \frac{1}{4} \bigg( \tilde{p}_{m+\ell}(j_{2} - 1) - \tilde{p}_{m+\ell}(j_{2}) - \frac{\tilde{p}_{m+\ell}(j_{1}) - \tilde{p}_{m+\ell}(j_{2})}{j_{2} - j_{1}} \bigg). \end{aligned}$$
(6.12)

3) Finally, let \(i \in \partial G^{N,m}\) and denote by \(j_{1}, j_{2} \in G^{N,m}\) the neighbours in \(G^{N,m}\) of \(i\) with \(j_{1} < i < j_{2}\) so that \((j_{1},i) \cap \mathbb{Z}\, \subseteq C^{N,m}\) and \((i,j_{2}) \, \cap \mathbb{Z}\subseteq C^{N,m}\). The optional stopping theorem yields

$$\begin{aligned} \mathbb{P} [ \tilde{X}_{\ell +1}^{N,m} = j_{1} | \tilde{X}_{\ell}^{N,m} = i ] &= \frac{1}{4(i - j_{1})} = \mathbb{P} [ \tilde{X}_{\ell +1}^{N,m} = i | \tilde{X}_{\ell}^{N,m} = j_{1} ], \end{aligned}$$
(6.13)
$$\begin{aligned} \mathbb{P} [ \tilde{X}_{\ell +1}^{N,m} = j_{2} | \tilde{X}_{\ell}^{N,m} = i ] &= \frac{1}{4(j_{2} - i)} = \mathbb{P} [ \tilde{X}_{\ell +1}^{N,m} = i | \tilde{X}_{\ell}^{N,m} = j_{2} ]. \end{aligned}$$
(6.14)

By (6.11)–(6.14) and the inductive assumption, we find that the inflows and outflows cancel out precisely, so that \(\mathbb{P} [ \tilde{X}^{N,m}_{\ell +1} = i ] = \tilde{p}_{m+\ell +1}(i)\), which concludes the inductive step. □

6.1.2 Convergence of \(X^{N,m}\) to \(X^{N}\)

In the last subsection, we have shown that \(X^{N,m}\) has the correct one-dimensional marginals. It remains to prove the appropriate convergence of \(X^{N,m}\) to \(X^{N}\), which is the purpose of this subsection.

Theorem 6.4

As random variables on \(\mathcal {D}([0,\infty ))\), \((X^{N,m})_{m \in \mathbb{N}}\) converges weakly to \(X^{N}\).

Before proving this theorem, we prepare some ingredients used in its proof.

Proposition 6.5

As random variables on \(\mathcal {D}([0,\infty )) \times [0,\infty )\), \((B^{m},T^{N,m})_{m \in \mathbb{N}}\) converges weakly to \((B,T^{N})\).

Proof

Choose a 1-bounded metric \(d\) compatible with the topology on \(\mathcal {D}([0,\infty ))\) with the property that

$$ d\bigg(f,\Big(x + f\big({(t-s)^{+}}\big)\Big)_{t \geq 0}\bigg) \leq s + \left|x\right| ,\qquad (x,s) \in \mathbb{R}\times [0,\infty ). $$

For \(\mu ,\nu \in \mathcal {P}(\mathcal {D}([0,\infty )))\) and \(\tilde{\mu},\tilde{\nu}\in \mathcal {P}(\mathcal {D}([0,\infty )) \times [0, \infty ))\), we denote their 1-Wasserstein distance by

$$\begin{aligned} \begin{aligned} \mathcal {W}(\mu ,\nu ) :=& \inf \{ \mathbb{E}\left [ d(X,Y) \right ] \colon X \sim \mu , Y \sim \nu \}, \\ \tilde{\mathcal {W}}(\tilde{\mu},\tilde{\nu}) :=& \inf \{ \mathbb{E} \left [ d(X,Y) + \left|S-T\right| \right ] \colon (X,S) \sim \tilde{\mu}, (Y,T) \sim \tilde{\nu}\}. \end{aligned} \end{aligned}$$

We write \(\mu ^{N,m}\), \(\mu ^{N}\), \(\mu ^{m}\), \(\mu \) for the laws of \((B^{m},T^{N,m})\), \((B,T^{N})\), \((\sqrt{\frac{2}{m}}\sum _{k = 1}^{\lfloor mt \rfloor}\zeta _{k} )_{t \ge 0}\) and a standard Brownian motion starting at 0, respectively; the first two are measures on \(\mathcal {D}([0,\infty )) \times [0,\infty )\), the last two on \(\mathcal {D}([0,\infty ))\).

For fixed \(h > 0\), consider a spatial partition \((P_{j})_{j \in \mathbb{N}}\) of ℝ into (pairwise disjoint) intervals of maximal length \(h\) with \(I_{1} \cup I_{2} = \mathbb{N}\) and

$$ \bigcup _{j \in I_{1}} P_{j} = G^{N}, \qquad \bigcup _{j \in I_{2}} P_{j} = C^{N}. $$

Recall from (6.6) that \(B^{m}\) converges in law to \(B\). Thus the Portmanteau theorem yields

$$ \sum _{x \in P_{j}} p^{m}(t,x) = \mathbb{P} [ B_{t}^{m} \in P_{j} ] \longrightarrow \mathbb{P} [ B_{t} \in P_{j} ] \qquad \text{as $m \to \infty $}. $$

Additionally, we know by Lemma 6.2 that for any \(x\in C^{N} \subseteq {[0,1]}\), the probabilities \(p^{m}(t,x)\) are strictly decreasing in \(t \in [0,\infty )\). Therefore, if \(j \in I_{2}\) and \(k \in \mathbb{N}_{0} := \mathbb{N}\cup \{0\}\), then as \(m \to \infty \),

$$\begin{aligned} &\mathbb{P} [ B_{0}^{m} \in P_{j}, kh < T^{N,m} \leq (k+1)h ] \\ &= \mathbb{P} \bigg[ B_{0}^{m} \in P_{j}, \frac{p^{m}((k+1)h,B_{0}^{m})}{p^{m}(0,B_{0}^{m})} \leq U < \frac{p^{m}(kh,B_{0}^{m})}{p^{m}(0,B_{0}^{m})}\bigg] \\ & = \sum _{x \in P_{j}} p^{m}(kh,x) - p^{m}((k+1)h, x) \\ &\longrightarrow \mathbb{P} [ B_{kh} \in P_{j} ] - \mathbb{P} [ B_{(k+1)h} \in P_{j} ] \\ &= \mathbb{P} [ B_{0}\in P_{j}, kh < T^{N} \leq (k+1)h ]. \end{aligned}$$
(6.15)

We introduce the real-valued sequences

$$\begin{aligned} a_{j}^{m} &:= \mathbb{P} [ B_{0}^{m} \in P_{j} ] \vee \mathbb{P} [ B_{0} \in P_{j} ], \\ b_{j}^{m} &:= \mathbb{P} [ B_{0}^{m} \in P_{j} ] \wedge \mathbb{P} [ B_{0} \in P_{j} ], \\ a^{m}_{j,k} &:= \mathbb{P} [ B_{0}^{m} \in P_{j}, kh \leq T^{N,m} < (k+1)h ] \vee \mathbb{P} [ B_{0} \in P_{j}, kh \leq T^{N} < (k+1)h ], \\ b^{m}_{j,k} &:= \mathbb{P} [ B_{0}^{m} \in P_{j}, kh \leq T^{N,m} < (k+1)h ] \wedge \mathbb{P} [ B_{0} \in P_{j}, kh \leq T^{N} < (k+1)h ] \end{aligned}$$

for \((j,k) \in \mathbb{N}\times \mathbb{N}_{0}\). We claim that

$$ \lim _{m \to \infty}\sum _{j \in I_{1}} (a^{m}_{j} - b^{m}_{j} )= 0, \qquad \lim _{m \to \infty} \sum _{j \in I_{2}} \sum _{k \in \mathbb{N}_{0}} (a^{m}_{j,k} - b^{m}_{j,k}) = 0. $$
(6.16)

To see this, consider the probability measures \(\nu ^{m}\) and \(\nu \) on \(\mathbb{N}\times \mathbb{N}_{0}\) given by

$$\begin{aligned} \nu ^{m}(\{(j,k)\}) &:= \mathbb{P} [ B_{0}^{m} \in P_{j}, kh \le T^{N,m} < (k + 1)h ], \\ \nu (\{(j,k)\}) &:= \mathbb{P} [ B_{0} \in P_{j}, kh \le T^{N} < (k + 1)h ] \end{aligned}$$

for \((j,k) \in \mathbb{N}\times \mathbb{N}_{0}\). By (6.15), \((\mu ^{m})_{m \in \mathbb{N}}\) converges weakly to \(\mu \). Since \(\mathbb{N}\times \mathbb{N}_{0}\) is a discrete Polish space, this means that the weights corresponding to \(\mu ^{m}\) and \(\mu \) must converge in \(L^{1}(\mathbb{N}_{0} \times \mathbb{N}_{0})\) as \(m\) tends to \(\infty \). We deduce (6.16) by observing that

$$\begin{aligned} a^{m}_{j,k} - b^{m}_{j,k} &= |\nu ^{m}(\{(j,k)\}) - \nu (\{(j,k)\})|, \\ a^{m}_{j} - b^{m}_{j} &\le \sum _{k \in \mathbb{N}_{0}} |\nu ^{m}(\{(j,k)\}) - \nu (\{(j,k)\})|. \end{aligned}$$

Conditionally on \(B_{0}^{m} = x\), we have \(\mathrm {law}((B_{t}^{m})_{t \geq 0}) = \mathrm {law}((x + \sqrt{\frac{2}{m}}\sum _{k = 1}^{\lfloor mt \rfloor} \zeta _{k})_{t \geq 0})\). We estimate the distance for particles starting in \(P_{j}\), where either \(j \in I_{1}\) or \(j \in I_{2}\), separately via

$$ \tilde{\mathcal {W}} ( \mu ^{N,m}, \mu ^{N} ) \leq \mathcal {W}(\mu ^{m}, \mu ) + 2h + \sum _{j \in I_{1}} (a^{m}_{j} - b^{m}_{j}) + \sum _{j \in I_{2}} \sum _{k \in \mathbb{N}_{0}}( a^{m}_{j,k} - b^{m}_{j,k}). $$
(6.17)

For \(j \in I_{1}\), we obtain, due to the spatial partition, an error of at most \(h + \mathcal {W}(\mu ^{m},\mu )\) plus (as the metric \(d\) is bounded by 1) the excess mass \(\sum _{j \in I_{1}} (a^{m}_{j} - b^{m}_{j})\). For \(j \in I_{2}\), we have to take into account an additional error (of at most \(h\)) due to the second components of \((B^{m},T^{N,m})\) and \((B,T^{N})\). From (6.16) and (6.17), one can easily deduce that

$$ \limsup _{m \to \infty} \tilde{\mathcal {W}} ( \mu ^{N,m}, {\mu ^{N}} ) \leq 2h, $$

which yields the desired convergence as \(h > 0\) was arbitrary. □

Lemma 6.6

Let \(f \in C([0,\infty ))\) be a path satisfying (6.23) and (6.24). Let \((f^{n})_{n \in \mathbb{N}}\) be a sequence converging to \(f\) in \(\mathcal {D}([0,\infty ))\) with \(\lim _{t \to \infty}\tau _{t}^{N}(f^{n}) = \infty \), where the time change \(\tau ^{N}\) is given by (5.1). Then the time-changed paths satisfy

$$ \Big(f^{n}\big(\tau ^{N}_{t}(f^{n})\big)\Big)_{t \ge 0} \longrightarrow \Big(f\big(\tau ^{N}_{t}(f)\big)\Big)_{t \ge 0} \qquad \textit{in }\mathcal {D}\big([0,\infty )\big). $$

Proof

Since \(f\) is continuous, the convergence of \((f^{n})_{n \in \mathbb{N}}\) to \(f\) in the Skorokhod space \(\mathcal {D}([0,\infty ))\) is by Jacod and Shiryaev [15, Proposition VI.1.17] equivalent to locally uniform convergence. Moreover, the time changes \(t \mapsto \tau ^{N}_{t}(f)\) and \(t \mapsto \tau ^{N}_{t}(f^{n})\) are strictly increasing, right-continuous functions such that

$$ \Delta \tau _{t}^{N}(f) := \tau _{t}^{N}(f) - \tau _{t-}^{N}(f) > 0 \qquad \text{implies that} \qquad A^{N}_{\tau _{t-}^{N}(f)}(f) = A^{N}_{ \tau _{t}^{N}(f)}(f). $$

As \(f\) satisfies (6.24) and \((f^{n})_{n \in \mathbb{N}}\) converges locally uniformly to \(f\), we have for \(\lambda \)-almost every \(s \in [0,\infty )\) that

$$ \lim _{n \to \infty} \mathbb{1}_{G^{N}}\big(f^{n}(s)\big) = \mathbb{1}_{G^{N} \setminus \partial G^{N}}\big(f(s)\big) = \mathbb{1}_{G^{N}}\big(f(s)\big). $$

In particular, the convergence of \((A^{N}_{s}(f^{n}))_{n \in \mathbb{N}}\) to \(A^{N}_{s}(f)\) holds pointwise, and for any \(t \in [0,\infty )\),

$$ \limsup _{n \to \infty} \tau _{t}^{N}(f^{n}) \leq \tau _{t}^{N}(f). $$
(6.18)

First we show that

$$ \Delta \tau _{t}^{N}(f) = 0 \quad \implies \quad \lim _{n\to \infty} \tau _{t}^{N}(f^{n}) = \tau _{t}^{N}(f). $$
(6.19)

If \(\Delta \tau _{t}^{N}(f) = 0\), we have for any sequence \(t_{k} \nearrow t\) that \(\lim _{k \to \infty} \tau _{t_{k}}^{N}(f) = \tau _{t}^{N}(f)\). The map \(t \mapsto \tau ^{N}_{t}(f)\) is strictly increasing; therefore there are \(s_{k} \nearrow \tau _{t}^{N}(f)\) with \(\tau _{t_{k}}^{N}(f) < s_{k} < \tau _{t}^{N}(f)\) and \(t_{k} < A_{s_{k}}^{N}(f) < t_{k+1}\). We find for any \(k \in \mathbb{N}\) that

$$ \limsup _{n \to \infty} \tau _{t}^{N}(f^{n}) \geq \liminf _{n \to \infty} \tau _{t_{k+1}}^{N}(f^{n}) \geq s_{k}. $$
(6.20)

Combining (6.18) with (6.20) yields (6.19). In addition, we get

$$\begin{aligned} \limsup _{n \to \infty} \Delta \tau _{t}^{N}(f^{n}) &\leq \limsup _{n \to \infty} \tau _{t}^{N}(f^{n}) - \tau _{t_{k+1}}^{N}(f^{n}) \\ &= \tau _{t}^{N}(f) - \liminf _{n \to \infty} \tau _{t_{k+1}}^{N}(f^{n}) \leq \tau _{t}^{N}(f) - s_{k}, \end{aligned}$$

which yields that

$$ \Delta \tau _{t}^{N}(f) = 0 \implies \lim _{n \to \infty} \Delta \tau _{t}^{N}(f^{n}) = \Delta \tau _{t}^{N}(f). $$

Now as \(t \mapsto \tau _{t}^{N}(f)\) is increasing, it has at most countably many jumps. Therefore \((\tau _{t}^{N}(f^{n}))_{n \in \mathbb{N}}\) converges to \(\tau _{t}^{N}(f)\) for all \(t\) in a dense subset of \([0,\infty )\). By [15, Lemma VI.2.25], it remains to verify that for any \(t \geq 0\), there is a sequence \((t_{n})_{n \in \mathbb{N}}\) with

(i) \(t_{n} \to t\), and \(t_{n} \leq t\) if \(t\) is a continuity point of \(\tau ^{N}(f)\);

(ii) \(\Delta \tau _{t_{n}}^{N}(f^{n}) \to \Delta \tau _{t}^{N}(f)\).

So assume that \(\Delta \tau _{t}^{N}(f) > 0\). Let \(s_{0} := \tau _{t-}^{N}(f)\), \(s_{1} := \tau _{t}^{N}(f)\) and \(\bar{s} := \frac{s_{0} + s_{1}}{2}\). Fix \(\epsilon > 0\). As \(f\) satisfies (6.23), there are \(s_{0}^{\epsilon}\), \(s_{1}^{\epsilon}\) with \(f(s_{0}^{\epsilon}), f(s_{1}^{\epsilon}) \in G^{N} \setminus \partial G^{N}\) and

$$ s_{0} - \epsilon \leq s_{0}^{\epsilon }< s_{0} < s_{1} < s_{1}^{ \epsilon }\leq s_{1} + \epsilon . $$

Since \(f^{n} \to f\) locally uniformly, \(f\) is continuous and \(C^{N}\) is a union of open intervals, there are \(s_{0}^{n}, s_{1}^{n}\) with \(f^{n}(s) \in C^{N}\) for all \(s \in [s_{0}^{n},s_{1}^{n}]\) and \(s_{i}^{n} \to s_{i}\) for \(i \in \{0,1\}\). Write \(\bar{s}^{n} := \frac{s_{0}^{n} + s_{1}^{n}}{2}\) and note that \(s_{1}^{n} - s_{0}^{n} \leq \Delta \tau _{A^{N}_{\bar{s}^{n}}(f^{n})}^{N}(f^{n})\), whence

$$ \Delta \tau _{t}^{N}(f) = \lim _{n \to \infty} (s_{1}^{n} - s_{0}^{n}) \leq \liminf _{n \to \infty} \Delta \tau _{A^{N}_{\bar{s}^{n}}(f^{n})}^{N}(f^{n}). $$
(6.21)

For \(n\) sufficiently large, we have \(f^{n}(s_{0}^{\epsilon}), f^{n}(s_{1}^{\epsilon}) \in G^{N} \setminus \partial G^{N}\). Since \(f^{n}\) is right-continuous, it then spends a positive amount of time in \(G^{N}\) during \([s_{0}^{\epsilon},s_{0}^{n})\) and during \((s_{1}^{n},s_{1}^{\epsilon }+ \epsilon ]\). Therefore, for \(n\) sufficiently large, we have

$$ \Delta \tau _{A^{N}_{\bar{s}^{n}}(f^{n})}^{N}(f^{n}) \leq s_{1}^{\epsilon }- s_{0}^{ \epsilon }+ \epsilon \leq \Delta \tau _{t}^{N}(f) + 3\epsilon . $$
(6.22)

Since \(\epsilon > 0\) was arbitrary, (6.21) and (6.22) show that \((\Delta \tau ^{N}_{A^{N}_{\bar{s}^{n}}(f^{n})}(f^{n}))_{n \in \mathbb{N}}\) converges to \(\Delta \tau ^{N}_{t}(f)\).

Since \((A^{N}(f^{n}))_{n \in \mathbb{N}}\) is a sequence of 1-Lipschitz functions that converges pointwise to \(A^{N}(f)\), this convergence is even locally uniform. Therefore we obtain that \(A^{N}_{\bar{s}^{n}}(f^{n}) \to A^{N}_{\bar{s}}(f) = A^{N}_{\tau _{t}^{N}(f)}(f) = t\). We conclude that setting \(t_{n} := A^{N}_{\bar{s}^{n}}(f^{n})\) provides the desired sequence \((t_{n})_{n \in \mathbb{N}}\) satisfying (i) and (ii). We have thus shown that \(\tau ^{N}(f^{n}) \to \tau ^{N}(f)\) in \(\mathcal {D}([0,\infty ))\).

By [15, Theorem VI.1.14], there are continuous bijections \((\alpha _{n})_{n \in \mathbb{N}}\) on \([0,\infty )\) with

(i) \(\sup _{s \ge 0} |\alpha _{n}(s) - s| \to 0\);

(ii) \(\sup _{0 \leq s \leq T} |\tau _{\alpha _{n}(s)}^{N}(f^{n}) - \tau _{s}^{N}(f)| \to 0\) for all \(T>0\).

Denote by \(w_{f}^{T}\) the modulus of continuity of \(f\) restricted to \([0,T]\). We have

$$\begin{aligned} \sup _{0 \leq s \leq T} &\bigl| f^{n}\big(\tau ^{N}_{\alpha _{n}(s)}(f^{n}) \big) - f\big(\tau ^{N}_{s}(f)\big)\bigr| \\ &\leq \sup _{0 \leq s \leq \tau ^{N}_{\alpha _{n}(T)}(f^{n})} | f^{n}(s) - f(s) | + \sup _{0 \leq s \leq T} \bigl| f\big(\tau _{\alpha _{n}(s)}^{N}(f^{n}) \big) - f\big(\tau ^{N}_{s}(f)\big) \bigr| \\ &\leq \sup _{0 \leq s \leq \tau ^{N}_{\alpha _{n}(T)}(f^{n})} | f^{n}(s) - f(s) | + w_{f}^{T} \Big(\sup _{0 \leq s \leq T} | \tau _{\alpha _{n}(s)}^{N}(f^{n}) - \tau ^{N}_{s}(f) |\Big), \end{aligned}$$

which tends to 0 as \(n \to \infty \) for every \(T > 0\). Applying once again [15, Theorem VI.1.14] establishes the assertion. □

Lemma 6.7

The original Brownian motion \(B\) puts full mass on paths with the following two properties:

$$ B_{t}(\omega ) \in \partial G^{N} \quad \implies \quad \forall \epsilon > 0\, \exists s \in [t-\epsilon ,t+\epsilon ] \textit{ with } B_{s}( \omega ) \in G^{N} \setminus \partial G^{N}, $$
(6.23)
$$ \lambda \big(\{t \in [0,\infty ) \colon B_{t}(\omega ) \in \partial G^{N} \}\big) = 0. $$
(6.24)

Proof

Property (6.24) follows from the occupation time formula: writing \(L^{x}\) for the local time of \(B\) at level \(x\), we have for every \(T > 0\) that \(\lambda (\{t \in [0,T] \colon B_{t} \in \partial G^{N}\}) = \int _{\partial G^{N}} L_{T}^{x}\, dx = 0\), since the boundary \(\partial G^{N}\) of \(G^{N}\) consists only of finitely many points and is thus a Lebesgue-null set. By the same finiteness, to see (6.23), it suffices to show for fixed \(\epsilon > 0\) that

$$ \mathbb{P}\big[ \big\{ \forall t \in Z \, \exists s \in \big((t- \epsilon )^{+},t+\epsilon \big) \text{ with } B_{s} > 0 \big\} \big] = 1, $$
(6.25)

where \(Z := \{ t \in \mathbb{R}_{+} \colon B_{t} = 0 \}\) is the set of zeroes of the Brownian motion \(B\). It is well known that \(Z\) is almost surely a perfect set and that almost surely every local maximum of \(B\) is strict. Consequently, almost surely no \(t \in Z\) is a local maximum of \(B\): otherwise it would be a strict local maximum, i.e., \(B_{s} < B_{t} = 0\) for all \(s \neq t\) in a neighbourhood of \(t\), contradicting the fact that the perfect set \(Z\) accumulates at \(t\). Hence in every neighbourhood of every \(t \in Z\), the path attains positive values, which is precisely (6.25). □

Proof of Theorem 6.4

By Proposition 6.5, we have the convergence in law of the sequence \((B^{m},T^{N,m})_{m \in \mathbb{N}}\) to \((B,T^{N})\). By the Skorokhod representation theorem, we may assume without loss of generality that this convergence holds almost surely. Due to Lemma 6.7, we can apply Lemma 6.6 and find that almost surely,

$$ \big(B_{0}^{m}, (B_{\tau ^{N,m}_{t}}^{m})_{t \ge 0},T^{N,m} \big) \longrightarrow \big(B_{0}, (B_{\tau ^{N}_{t}})_{t \ge 0}, T^{N} \big) \qquad \text{as $m \to \infty $} $$

in \(\mathbb{R}\times \mathcal {D}([0,\infty )) \times [0,\infty )\). One readily verifies that \((X^{N,m})_{m \in \mathbb{N}}\) converges almost surely to \(X^{N}\) in \(\mathcal {D}([0,\infty ))\) on the set \(\{T^{N} > 0\} \cup \{ B_{0} \in G^{N} \setminus \partial G^{N} \}\). As that set has full probability, we obtain the assertion. □

6.2 Proofs of Propositions 3.1 and 4.1

Proof of Proposition 4.1

The strong Markov property of \(X^{N}\) with respect to the right-continuous version \({\mathbb{F}}^{N}\) of its natural filtration is readily derived from Lemma 6.1. Lemma 6.7 tells us that \(\tau _{t}^{N} < \infty \) almost surely for all \(t \in [0,\infty )\); thus \(X^{N}\) is well defined. The trajectories \((\tau _{t}^{N}(\omega ))_{t \ge 0}\) are increasing and càdlàg, whence \((X_{t}^{N}(\omega ))_{t \ge 0}\) is càdlàg as well.

It remains to convince ourselves that \(X^{N}\) is a martingale with the correct Brownian one-dimensional marginals. By Theorem 6.3, we have for any \(t \in [0,\infty )\) that \(\mathrm {law}(X^{N,m}_{t}) = \mathrm {law}(B^{m}_{t})\). Theorem 6.4 provides weak convergence of \((X^{N,m})_{m \in \mathbb{N}}\) to \(X^{N}\). Hence the laws of \((X^{N,m}_{t})_{m \in \mathbb{N}}\) converge weakly to the law of \(X^{N}_{t}\), and we conclude that \(\mathrm {law}(X^{N}_{t}) = \mathrm {law}(B_{t})\). The following computation shows that the second moments of the increments \(B^{m}_{t} - B^{m}_{0}\) converge to \(t\); indeed, for \(m \in \mathbb{N}\),

$$\begin{aligned} \mathbb{E} [ ({B^{m}_{t} - B^{m}_{0}})^{2} ] &= \sum _{(i_{1},i_{2}) \in \{1,\ldots , \lfloor m t \rfloor \}^{2}} \mathbb{E} [\zeta _{i_{1}}\zeta _{i_{2}} ] \\ &= \frac{1}{m} \#\big\{ (i_{1},i_{2}) \in \{1,\ldots ,\lfloor m t \rfloor \}^{2} \colon i_{1} = i_{2} \big\} = \frac{\lfloor m t \rfloor}{m}. \end{aligned}$$
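Here the off-diagonal terms vanish because the steps \(\zeta _{j}\) are independent and centred, while each of the \(\lfloor mt \rfloor \) diagonal terms contributes \(\mathbb{E}[\zeta _{j}^{2}] = 1/m\); in particular, \(\mathbb{E}[(B^{m}_{t} - B^{m}_{0})^{2}] = \lfloor m t \rfloor /m \to t\) as \(m \to \infty \).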

Thus the second moments of \(X^{N,m}_{t}\) converge to the second moment of \(X^{N}_{t}\). By Villani [25, Theorem 6.9], the joint laws of \((X_{t}^{N,m}, X_{s}^{N,m})_{m \in \mathbb{N}}\), which are martingale couplings when \(s \leq t\), converge for all pairs \((s,t) \in [0,\infty )^{2}\) in the 2-Wasserstein distance to the law of \((X_{t}^{N}, X_{s}^{N})\). Thus the law of \((X_{t}^{N}, X_{s}^{N})\) is again a martingale coupling: the defining property \(\mathbb{E}[ g(X^{N,m}_{s})(X^{N,m}_{t} - X^{N,m}_{s})] = 0\) for bounded continuous \(g\) passes to the limit, as the integrand grows at most linearly. By the Markov property of \(X^{N}\), we find that \(X^{N}\) is a martingale, as almost surely,

$$ \mathbb{E} [ X_{t}^{N} | \mathcal {F}_{s}^{N} ] = \mathbb{E} [ X_{t}^{N} | X_{s}^{N} ] = X_{s}^{N}. $$

 □

Proof of Proposition 3.1

Recall the definitions of the stopping time \(T\) and the process \(X\) from before Proposition 3.1. The trajectories \((A_{t}^{N}(\omega ))_{t \ge 0}\) increase pointwise to the almost surely strictly increasing process \((A_{t}(\omega ))_{t \ge 0}\). Thus the time changes \((\tau _{t}^{N}(\omega ))_{t \ge 0}\) converge almost surely pointwise to \((\tau _{t}(\omega ))_{t \ge 0}\). Hence we get almost surely

$$ (B_{\tau ^{N}_{t}})_{t \ge 0} \longrightarrow (B_{\tau _{t}})_{t \ge 0} \qquad \text{in }\mathcal {D}\big([0,\infty )\big). $$
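Indeed, the functionals \(A^{N}\) are \(1\)-Lipschitz (as noted in the proof of Lemma 6.6), so the pointwise limit \(A\) is continuous; being almost surely strictly increasing, \(A\) has a continuous inverse \(\tau \). Increasing functions converging pointwise to a continuous limit converge locally uniformly, and composing with the continuous path \(B\) therefore gives locally uniform convergence of \((B_{\tau ^{N}_{t}})_{t \ge 0}\) to \((B_{\tau _{t}})_{t \ge 0}\), which implies convergence in \(\mathcal {D}([0,\infty ))\).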

From here, it is evident that \((X^{N})_{N \in \mathbb{N}}\) has \(X\) as its weak limit. By Proposition 4.1, we get that \(X\) has Brownian one-dimensional marginals. At each time \(t \in [0,\infty )\), the contribution of the “busy” particles to the mass on \(C\) vanishes, i.e., \(\mathbb{P}\left [ X_{t} \in C, T \leq t \right ] = 0\). Therefore we can treat the transition kernel of \(X\) on \(\{X_{t} \in C\}\) and on \(\{X_{t} \in G\}\) separately. For \(x \in G\), the transition kernel coincides with that of the Feller process \((B_{\tau _{t}})_{t \ge 0}\) (conditionally on \(B_{0} = x\)). For \(x \in C\), the transition kernel is a mixture of a Dirac mass at \(x\) and an appropriate convolution in time of the kernel of \((B_{\tau _{t}})_{t \ge 0}\) (conditionally on \(B_{0} = x\)). Altogether, it is possible to write down explicitly the Markov kernel corresponding to \(X\); thus \(X\) is a Markov process.
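Schematically, and only as an illustration (the notation \(q\) and \(\kappa _{x}\) is introduced here and not used elsewhere), for a particle which is lazy at \(x \in C\) at time 0, this description reads

$$ \mathbb{P}[X_{t} \in \,\cdot \,\,|\, X_{0} = x] = \kappa _{x}\big((t,\infty )\big)\, \delta _{x} + \int _{0}^{t} q_{t-s}(x,\,\cdot \,)\, \kappa _{x}(ds), $$

where \(q_{t}(x,\,\cdot \,)\) denotes the transition kernel of \((B_{\tau _{t}})_{t \ge 0}\) conditionally on \(B_{0} = x\), and \(\kappa _{x}\) the law of \(T\) conditionally on \(X_{0} = x\).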

The martingale property follows by the same argument as in Proposition 4.1. □

Remark 6.8

We have formulated the “faking” procedure for the case of a standard Brownian motion. After all, this is the most regular and canonical situation one can imagine. But our construction applies to general martingales of the form

$$ dX_{t} = \sigma (t,X_{t}) \, dB_{t} $$

for arbitrary (sufficiently regular) \(\sigma (\, \cdot \,, \,\cdot \,)\). We only demonstrate this for the example of an exponential Brownian motion, i.e., for \(\sigma (t,x) = x\), so that

$$ X_{t} = \exp \bigg(B_{t} -\frac{t}{2}\bigg),\qquad t \ge 0, $$

where \(B\) is a standard Brownian motion starting at 0. To produce a “fake” exponential Brownian motion, find \(0 < t_{1} < t_{2} < \infty \) and \(0< a< b<\infty \) such that for each \(t \in [t_{1},t_{2}]\), the function \(x \mapsto x^{2}p(t,x)\) is concave on \([a,b]\), so that for each \(x \in [a,b]\), the map \(t \mapsto p(t,x)\) is decreasing on \([t_{1},t_{2}]\) in view of the Kolmogorov forward equation

$$ \frac{\partial}{\partial t} p(t,x) = \frac{1}{2}\frac{\partial ^{2}}{\partial x^{2}} \big(x^{2}p(t,x)\big). $$
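For concreteness (the explicit formula is not needed for the construction), the density in question is the lognormal density of \(X_{t} = \exp (B_{t} - t/2)\),

$$ p(t,x) = \frac{1}{x\sqrt{2\pi t}} \exp \bigg( -\frac{(\log x + t/2)^{2}}{2t} \bigg), \qquad x > 0, $$

from which the existence of suitable \(t_{1} < t_{2}\) and \(a < b\) can be verified by direct computation.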

Now play on the interval \((a,b)\) the same game as we played for the Brownian motion \(B\) on the interval \((0,1)\): choose a sequence of disjoint intervals \([a_{n},b_{n}]\), \(n \in \mathbb{N}\), whose union is dense in \([a,b]\) but of strictly smaller Lebesgue measure than \(b-a\), etc. The earlier construction carries over verbatim. In view of the negativity of \(\frac{\partial}{\partial t} p(t,x)\) on this window, we again have a positive rate of transition from “lazy” to “busy” particles, so that nothing needs to be changed.
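The same mechanism applies for a general (sufficiently regular) diffusion coefficient \(\sigma \): the role of the above equation is then played by the Kolmogorov forward equation

$$ \frac{\partial}{\partial t} p(t,x) = \frac{1}{2}\frac{\partial ^{2}}{\partial x^{2}} \big(\sigma ^{2}(t,x)\, p(t,x)\big), $$

so that it suffices to find a space-time window on which \(x \mapsto \sigma ^{2}(t,x)p(t,x)\) is concave; there \(t \mapsto p(t,x)\) is again decreasing, and the construction carries over as before.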