Faking Brownian motion with continuous Markov martingales

Hamza-Klebaner posed the problem of constructing martingales with Brownian marginals that differ from Brownian motion, so-called fake Brownian motions. Besides its theoretical appeal, the problem represents the quintessential version of the ubiquitous fitting problem in mathematical finance, where the task is to construct martingales that satisfy marginal constraints imposed by market data. Non-continuous solutions to this challenge were given by Madan-Yor, Hamza-Klebaner, Hobson, and Fan-Hamza-Klebaner, whereas continuous (but non-Markovian) fake Brownian motions were constructed by Oleszkiewicz, Albin, Baker-Donati-Yor, Hobson, and Jourdain-Zhou. In contrast, it is known from Gy\"ongy, Dupire, and ultimately Lowther that Brownian motion is the unique continuous strong Markov martingale with Brownian marginals. We took this as a challenge to construct examples of a ``very fake'' Brownian motion, that is, continuous Markov martingales with Brownian marginals that miss out only on the strong Markov property.


Overview
In this paper, we show that there exist continuous Markov martingales which have the same marginals as Brownian motion but are different from Brownian motion:

Theorem 1.1. There is a 1-dimensional Markovian martingale X with continuous paths and Brownian marginals which is not strongly Markovian.
The main part of the paper is devoted to a construction of such a fake Brownian motion X that relies entirely on familiar techniques of stochastic analysis. While the basic idea is quickly explained (see Section 2), it will then require some work to fill in the details of this construction.
Alternatively, we provide a second construction in the final Section 6 which is less explicit, but is handled quickly and easily by a theorem established in [14].

Bare hands approach to faking Brownian motion
Fix a 1-dimensional Brownian motion B = (B_t)_{0≤t<∞}. In this section we shall assume that the starting value B_0 is normally distributed with mean 0 and variance 1, in symbols N(0, 1), so that B_t is N(0, 1 + t)-distributed. We denote by p_t(·) its density function.
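For concreteness, the density of the N(0, 1 + t)-distributed marginal B_t reads

```latex
p_t(x) \;=\; \frac{1}{\sqrt{2\pi(1+t)}}\,
             \exp\!\Big(-\frac{x^2}{2(1+t)}\Big),
\qquad x \in \mathbb{R},\ t \ge 0 .
```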
Let C be a Borel subset of the interval [−1, 1] and set G = R \ C. The accumulated amount of time the original Brownian motion (B_s)_{0≤s≤t} spends inside G is given by A_t := ∫_0^t 1_G(B_s) ds, which defines an increasing, Lipschitz-1-continuous process, cf. [17, Chapter III.21]. Its right-continuous inverse τ_t := inf{s > 0 : A_s > t} defines a time-change of the Brownian motion B. The resulting process (B_{τ_t})_{0≤t<∞} is a strongly Markovian martingale taking values in G almost everywhere. Consider the case where G is a union of finitely many intervals. Intuitively speaking, as long as B_{τ_t} takes values in the interior of one of these intervals, the process behaves like a Brownian motion. When it hits the boundary of the interval, it is either reflected back into its interior or it jumps to the corresponding boundary of the neighbouring interval.
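As a numerical illustration of this time change (a sketch of ours, not part of the formal construction: the grid size, the Euler discretization, and the choice C = [−1, 1] are arbitrary), one can discretize a Brownian path, discard the time it spends in C, and reindex the remaining samples:

```python
import numpy as np

# Sketch: time-changing a discretized Brownian path so that it only
# spends time in G = R \ C, with C = [-1, 1] chosen for illustration.
rng = np.random.default_rng(0)

dt = 1e-4
n = 100_000                              # total time horizon n * dt = 10
b = np.empty(n)
b[0] = rng.standard_normal()             # B_0 ~ N(0, 1)
b[1:] = b[0] + np.cumsum(rng.standard_normal(n - 1) * np.sqrt(dt))

in_g = np.abs(b) > 1.0                   # indicator of G = {|x| > 1}
# A_t is the cumulative time spent in G; on this grid the inverse time
# change amounts to keeping exactly the samples taken while in G.
time_changed = b[in_g]

assert time_changed.size > 0             # the path does visit G
assert np.all(np.abs(time_changed) > 1.0)   # values stay in G
```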
To imitate Brownian marginals not only on G, but also on its complement C, we write U for a uniformly distributed random variable on [0, 1] independent of B. With the aid of U we introduce a stopping time T, namely T := inf{t > 0 : p_t(B_0)/p_0(B_0) ≤ U} on {B_0 ∈ C} and T := 0 on {B_0 ∈ G}, which will allow us in the following to differentiate at each time 0 ≤ t < ∞ between two different species of particles, as will be explained in a moment. Speaking formally, the process of interest X is defined by virtue of T as X_t := B_0 1_{{t<T}} + B_{τ_{t−T}} 1_{{t≥T}}. We consider X with respect to its natural (right-continuous, saturated) filtration (F_t)_{0≤t<∞}.
Proposition 2.1. The process X has the same one-dimensional marginals as the Brownian motion B. It is a Markov martingale with càdlàg paths and, if C is closed with empty interior, is continuous.
The formal proof will be given in Section 5 below. We note that T > 0 almost surely on {B_0 ∈ C}, and that the process X is constant on the interval [0, T]. Hence, X is certainly not a Brownian motion as long as C has positive Lebesgue measure. As an example, if C is a fat Cantor set (see e.g. [2, page 140]) then it has positive measure yet empty interior, so that X is a continuous Markov fake Brownian motion.
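For instance, in the Smith-Volterra-Cantor construction of a fat Cantor set one removes, at stage n, the 2^{n−1} middle open intervals of length 4^{−n} from [0, 1]; the removed lengths sum to 1/2, so the remaining closed, nowhere dense set keeps measure 1/2. A quick numerical check of this bookkeeping (an illustration of ours, not part of the construction above):

```python
# Total length removed in the Smith-Volterra-Cantor ("fat Cantor")
# construction: at stage n, 2**(n-1) intervals of length 4**(-n).
removed = sum(2 ** (n - 1) * 4.0 ** (-n) for n in range(1, 60))
remaining = 1.0 - removed

# The geometric series sums to 1/2, so a set of measure 1/2 survives,
# even though it contains no interval.
assert abs(remaining - 0.5) < 1e-12
```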
We know since Bachelier's thesis (compare the twin paper [4], which also contains much of the motivation for the present construction, in particular Section 3) that, at time t ≥ 0, the net inflow of B into the interval [a, ∞[ at the left end point a equals −p′(t, a)/2, i.e., ∂_t ∫_a^∞ p(t, x) dx = −½ ∂_x p(t, a), as follows immediately from the heat equation by integrating with respect to the space variable.
If an interval [a, b] is contained in ]0, ∞[ we therefore find a positive net inflow at the left boundary a and a negative inflow, i.e. a net outflow, at the right boundary b.
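Spelled out, the flux identity follows in one line from the heat equation ∂_t p = ½ ∂_xx p, using that ∂_x p(t, x) → 0 as x → ∞:

```latex
\partial_t \int_a^{\infty} p(t,x)\,dx
  \;=\; \int_a^{\infty} \tfrac12\,\partial_{xx} p(t,x)\,dx
  \;=\; \Big[\tfrac12\,\partial_x p(t,x)\Big]_{x=a}^{x=\infty}
  \;=\; -\tfrac12\,\partial_x p(t,a).
```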
In order to construct a fake Brownian motion X = (X_t)_{0≤t<∞}, let us focus our attention on an interval [a, b] ⊆ ]0, ∞[ for the moment. The idea of the present construction is that the process X behaves like a Brownian motion as long as X_t takes its values in the interior of [a, b]. When X hits the boundaries, we have to make sure that the amount of net in- resp. out-flow into the interval [a, b] agrees with the values for the original Brownian motion as given by (5). If we can do so, the evolution of the marginal densities of the Brownian motion B and the fake Brownian motion X will coincide on [a, b]. We still note that the process X may jump into, resp. out of, the interval [a, b] at its boundary points, or it may move in or out in a continuous way.
We shall do our construction in two steps. First we consider finitely many disjoint intervals [a^N_1, b^N_1], …, [a^N_N, b^N_N] ⊆ ]0, 1[ and achieve on each interval the validity of the above program to arrive at a process X^N with the proper marginals on these intervals. This process will jump between the boundary points of neighbouring intervals. We shall also have to make sure that on the remaining complement of these intervals the marginals of X^N_t also coincide with the marginals p_t of B_t.
In a second step we let N go to infinity and pass to an infinite collection of disjoint intervals [a_n, b_n], contained in ]0, 1[, whose union is dense in ]0, 1[ but has Lebesgue measure strictly less than one. We thus will pass to a limit X of the above processes X^N which will have continuous trajectories as well as the proper marginals. While the processes X^N will be strongly Markovian martingales, the limiting process X will still be a Markovian martingale, but fail to have the strong Markov property.

Construction of the Processes
The accumulated amount of time the original Brownian motion (B_s)_{0≤s≤t} spends inside G^N is given by A^N_t := ∫_0^t 1_{G^N}(B_s) ds, (6) which defines an increasing, Lipschitz-1-continuous process, cf. [17, Chapter III.21]. Its right-continuous inverse τ^N_t := inf{s > 0 : A^N_s > t} (7) defines a time-change of the Brownian motion B. The resulting process (B_{τ^N_t})_{0≤t<∞} is a strongly Markovian martingale taking values in G^N. Intuitively speaking, as long as B_{τ^N_t} takes values in the interior of one of the above intervals, this process behaves like a Brownian motion. When it hits the boundary of the interval, it is either reflected back into its interior or it jumps to the corresponding boundary of the neighbouring interval.
To imitate Brownian marginals not only on G^N, but also on its complement C^N, we write U for a uniformly distributed random variable on [0, 1] independent of B. With the aid of U we introduce a stopping time T^N, namely T^N := inf{t > 0 : p(B_0, t)/p(B_0, 0) ≤ U} on {B_0 ∈ C^N} and T^N := 0 on {B_0 ∈ G^N}, (8) which will allow us in the following to differentiate at each time 0 ≤ t < ∞ between two different species of particles, as will be explained in a moment. Speaking formally, the process of interest X^N is defined by virtue of T^N as X^N_t := B_0 1_{{t<T^N}} + B_{τ^N_{t−T^N}} 1_{{t≥T^N}}. We consider X^N with respect to its natural (right-continuous, saturated) filtration (F_t)_{0≤t<∞}.
Proposition 3.1. The process X^N has the same one-dimensional marginals as the Brownian motion B. It is a strong Markov martingale with càdlàg paths, but fails to be continuous.
The formal proof will be given in Section 5 below. Here we only sketch the main ideas. The verification of the strong Markovianity of X^N is rather straightforward. The crucial issue pertains to the marginals of X^N.
To verify that the marginals of X^N are indeed Brownian, we distinguish the behaviour of the process X^N according to whether it takes values in G^N or in C^N. We call particles with t ≥ T^N busy particles, as opposed to the lazy particles, which remain at their initial position in C^N until time T^N and to which we now turn our attention.
The idea is - as indicated by the word "lazy" - that these particles do not move for some time. Eventually, namely at time T^N, they will change their behaviour from the "lazy" state to follow the behaviour of the busy particles. For x ∈ C^N the density function p(x, t) of the Brownian motion (B_t)_{t≥0} is decreasing in time, and its infinitesimal change is given by the formula (10). In order to match the evolution of the marginals of the fake Brownian motion X^N with that of the original Brownian motion B on the set C^N, we note that by (10) the fraction p(x, t)/p(x, 0) is strictly decreasing in time for any fixed x ∈ C^N. Thus, P(T^N > t | B_0 = x) = p(x, t)/p(x, 0). (11) As the trajectories of the busy particles (that is, t ≥ T^N) take values exclusively in G^N, it is apparent from (11) that the correct amount of (lazy) particles stay at their starting position to match the Brownian marginals on C^N. To further develop our intuition, consider the Brownian motion B conditionally on the starting value B_0 = x ∈ ]b^N_{n−1}, a^N_n[ ⊆ C^N, where [a^N_{n−1}, b^N_{n−1}] and [a^N_n, b^N_n] are the neighbouring intervals which lie to the left, resp. to the right of the starting point x. As x ∈ C^N, the Brownian motion B with starting value x does not spend any time in G^N before it hits one of the boundary points {b^N_{n−1}, a^N_n}, and from this hitting time onwards, it will almost surely spend its time in G^N. As a consequence, the time-changed càdlàg process (B_{τ^N_t})_{t≥0} does not start at x but rather at one of the boundary points {b^N_{n−1}, a^N_n}. It takes its choice of these two possibilities with probabilities (a^N_n − x)/(a^N_n − b^N_{n−1}) and (x − b^N_{n−1})/(a^N_n − b^N_{n−1}), respectively. (12) This causes the process X^N to jump at time T^N from x to the boundary {b^N_{n−1}, a^N_n} with the correct probabilities, turning (X^N_{T^N∧t})_{0≤t<∞} into a martingale. After this jump the particle enters the "busy" mode and exhibits the same behaviour as the Markov martingale (B_{τ^N_t})_{t≥0}. The analysis of the marginal flow on G^N is more delicate. We shall use some formal arguments in the remainder of this section to reveal the intuition behind our construction. A rigorous treatment is postponed to
Section 5. A "busy" particle behaves like a Brownian motion as long as it moves in the interior of one of the intervals, up to hitting the boundary. As mentioned, at this stage the particle can either be reflected back into the interval, or jump into the neighbouring interval. As is well known, the intensity rate of these jumps is proportional to the local time spent by the particle X^N_t(ω) at the respective boundary points, and inversely proportional to the distance from the neighbouring interval, i.e. to the size of this jump. Anticipating that X^N has at time t the correct marginal distribution, this observation leads to the flow identities (13) and (14), where (13) describes an inflow at the left boundary point. Simultaneously the boundary points in G^N experience an additional mass inflow caused by "lazy" particles switching to the "busy" regime at time T^N. Thanks to (12) we can explicitly derive the inflow at a^N_{n+1} caused by particles changing their behaviour from the "lazy" to the "busy" mode; the heat equation and an integration by parts yield (15). Now we arrive at the crucial point of the construction: adding the effect of (15) to the in- and out-flows of the "busy" particles calculated in (13) and (14), we arrive precisely at the net inflow of the original Brownian motion B at the boundary point a^N_n. Of course, a similar argument applies to the right boundary point b^N_n as well as to all the other boundary points. In conclusion, the marginals of X^N_t equal the marginals of B_t on C^N as well as on G^N, i.e. on all of R. This finishes the intuitive sketch of the ideas underlying the proof of Proposition 3.1, indicating that we have successfully constructed a fake Brownian motion with the properties detailed in Proposition 3.1. In Section 5 we shall translate this intuition into rigorous mathematics.
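The boundary law forced by the martingale property is the classical gambler's-ruin distribution: a particle released at x in ]b, a[ reaches a before b with probability (x − b)/(a − b). A Monte Carlo sanity check on an integer grid (the grid points, seed, and trial count below are arbitrary choices of ours):

```python
import random

# Sketch: a simple random walk started at 3 inside ]0, 10[ should hit 10
# before 0 with probability (3 - 0) / (10 - 0) = 0.3, by the
# martingale (gambler's-ruin) argument.
random.seed(1)
b_pt, a_pt, start = 0, 10, 3
trials = 20_000
hits_a = 0
for _ in range(trials):
    pos = start
    while b_pt < pos < a_pt:
        pos += random.choice((-1, 1))    # fair +-1 step
    hits_a += pos == a_pt
est = hits_a / trials

assert abs(est - (start - b_pt) / (a_pt - b_pt)) < 0.02
```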

Construction of the Process X
The above sequence (X^N)_{N=1}^∞ of processes will allow us to pass to a limiting continuous process X which will be our desired fake Brownian motion.
As already mentioned in the introduction, we choose an infinite collection of closed disjoint intervals ([a_n, b_n])_{1≤n<∞} in [0, 1], whose union is dense in [0, 1] but has Lebesgue measure strictly less than one. Taking the first N intervals, ordering them from left to right, and adding the unbounded intervals ]−∞, 0] and [1, ∞[, we are in the situation of the previous section and obtain a process X^N. We denote by G = ⋃_{N=1}^∞ G^N the union of all these intervals, and the complement of G by C = ⋂_{N=1}^∞ C^N. Recall the definitions (6) and (7) of A^N and τ^N. Clearly the trajectories (A^N_t(ω))_{0≤t<∞} of the process A^N are increasing, Lipschitz-1-continuous, and increase almost surely pointwise to the process A given by A_t := ∫_0^t 1_G(B_s) ds. As G is a union of intervals which is dense in R, the original Brownian motion B spends almost surely a positive amount of time during any time window of the form [t_1, t_2], 0 ≤ t_1 < t_2 < ∞, in G. For this reason A is almost surely strictly increasing, and (τ_t)_{0≤t<∞} has almost surely continuous paths. The trajectories (τ^N_t(ω))_{0≤t<∞} of the process τ^N decrease almost surely to the continuous trajectories (τ_t(ω))_{0≤t<∞}, uniformly on compact subsets of [0, ∞[.
The stopping times (T^N)_{N∈N} previously defined in (8) converge pointwise to T, which is given by T := inf{t > 0 : p(B_0, t)/p(B_0, 0) ≤ U} on {B_0 ∈ C} and T := 0 on {B_0 ∈ G}. (18) In conclusion, the limiting process X, given by X_t := B_0 1_{{t<T}} + B_{τ_{t−T}} 1_{{t≥T}}, (19) has almost surely continuous trajectories and is almost surely the limit of the sequence (X^N)_{N∈N}.
We shall verify that it is the desired fake Brownian motion described by Theorem 1.1. The proof of Theorem 1.1 is given in Section 4 below. From an intuitive point of view Theorem 1.1 is a rather obvious consequence of Proposition 3.1. The novel aspect is the failure of the strong Markov property. This failure follows from a general theorem of Lowther [14, Theorem 1.3]: an R-valued, continuous, and strongly Markovian martingale is uniquely determined by its one-dimensional marginals. In particular, if these marginals are those of a Brownian motion B, there is no other process X with the mentioned properties. In other words, there is no fake Brownian motion which is a continuous, strong Markov martingale. It is instructive to directly visualize the failure of the strong Markov property of the above constructed process X, without recourse to Lowther's theorem. We do so by applying the technique of coupling as in [10].
Let X̄ be an independent copy of the process X given by (19), and let T̄ be the corresponding independent copy of T, see (18), which indicates the time when X̄ changes from the "lazy" to the "busy" mode. Both are defined on the same probability space, which we equip with the filtration (F_t)_{0≤t<∞} generated by these two processes, and whose elements we write as (ω, ω̄). Define the stopping time τ = τ(ω, ω̄) as the first moment when the trajectories X(ω) and X̄(ω̄) meet, that is, τ := inf{t > 0 : X_t(ω) = X̄_t(ω̄)}. (20) On the event {T(ω) ≤ τ(ω, ω̄)} we know that X is in the "busy" regime from time τ onwards. That means that (X_{τ+t})_{0≤t<∞} behaves like the strong Markov process (B_{τ_t})_{0≤t<∞} when the underlying Brownian motion has the correct starting distribution B_0 ∼ X_τ; hence we have the identity (21) almost surely. The sets A, B ∈ F_τ where the particle X_τ(ω) (resp. X̄_τ(ω̄)) at time τ is in the "busy" mode while X̄_τ(ω̄) (resp. X_τ(ω)) is in the "lazy" mode, in symbols A := {T ≤ τ < T̄} and B := {T̄ ≤ τ < T}, have positive probability and P(A) = P(B). Define the conditional probabilities P_A := P(· ∩ A)/P(A) and P_B := P(· ∩ B)/P(B); due to symmetry, the roles of X and X̄ under P_A and P_B are exchanged. The conditional probability of X_{τ+t} with respect to F_τ does not only depend on the present position X_τ(ω) = X̄_τ(ω̄) but also on the information whether X_τ(ω) is in the "busy" or the "lazy" mode. Indeed, fix t > 0 and consider the probability of the event {X_{τ+t} ∈ G} conditionally on A, which is, by (21) and as τ^N_t ց τ_t almost surely, given by (24), where we write int(G) for the interior of G. We deduce (25), whereas (26) shows that 1_B P(X_{τ+t} ∈ C | F_τ) does not vanish. From (24), (25), and (26), we conclude that in order to determine the conditional probability of {X_{τ+t} ∈ C} given F_τ we require information from the past of the process X prior to the stopping time τ. In conclusion, the process X fails to have the strong Markov property. Intuitively speaking, this failure stems from the fact that the "busy" particles travel through the "lazy territory" C in a continuous way. In contrast, the processes X^N considered in
the previous section jump over the territory C^N, which makes it impossible to "catch" them with a stopping time τ while they are travelling through C^N.

Proofs
Lemma 5.1. Using the notation of Section 3, let B be a Brownian motion started at some point of G^N. Then (B_{τ^N_t})_{0≤t<∞} is a Feller process. Its Feller generator G is given by (27) and its domain by (28).

Proof. To see that (B_{τ^N_t})_{0≤t<∞} defines a Feller process, we recall that it inherits the strong Markov property from B. Define its resolvent R_λ f(x) := E[∫_0^∞ e^{−λt} f(B_{τ^N_t}) dt | B_0 = x]. By the Hille-Yosida theorem, it suffices to show that (R_λ)_{0<λ<∞} is a strongly continuous contraction resolvent on C_0(G^N) (the set of continuous functions on G^N vanishing at ±∞). Recall the definition of the process (A^N_t)_{0≤t<∞}, see (6). Conditionally on the starting point B_0 = x ∈ G^N, we have as a consequence of Blumenthal's 0-1 law that almost surely A^N_t > A^N_0 for all t > 0. Therefore, as t ↦ A^N_t is increasing and continuous (particularly at 0), we find that τ^N_t ↓ 0 and hence B_{τ^N_t} → B_0 almost surely as t ↓ 0. Due to [17, Lemma 6.7 and its proof], it remains to show (29), where H̃_y := inf{t > 0 : B_{τ^N_t} = y} is the first hitting time of y and ‖·‖ denotes the supremum norm on C_0(G^N). Since H̃_y is dominated by H_y := inf{t > 0 : B_t = y}, we have for ε > 0 that lim_{y→x} P(H̃_y ≥ ε | B_0 = x) = 0, whence the right-hand side in (29) vanishes. We have shown that the time-changed process (B_{τ^N_t})_{0≤t<∞} is a Feller process. Next, we compute its generator. Let x ∈ ]a^N_n, b^N_n[ and recall that the expected time the Brownian motion B started at x spends inside [x − ε, x + ε] before leaving this interval equals ε². Due to Dynkin's formula we can compute the generator via Gf(x) = lim_{ε↓0} ε^{−2} (½ f(x − ε) + ½ f(x + ε) − f(x)) = ½ f″(x), and note that Gf(x) can only exist if f is two times continuously differentiable at x.
The expected time a Brownian motion B started at B_0 = 0 spends inside [0, ε] before H_ε := inf{t > 0 : B_t = ε} is ε². Therefore, we can compute the expected time the Brownian motion B started at a boundary point x of G^N spends inside G^N before the time-changed process hits {x + ε} for the first time. Again, Dynkin's formula allows us to compute the generator at x.
This limit can only exist if f′(x) and f″(x) exist (where f′(x) and f″(x) here denote the adequate one-sided derivatives) and l f′(x) = f(x) − f(b^N_{n−1}). In this case, the limit can be evaluated by l'Hôpital's rule. We conclude by remarking that, by analogous reasoning, for b^N_n and n = 0, …, N the function f is in the domain of G if and only if it is two times (one-sided) differentiable at b^N_n, where the derivatives are again the adequate one-sided ones.

Marginals
This section is concerned with the verification that X^N has the correct Brownian 1-dimensional marginals. On a formal level one may argue as follows: We have shown in Lemma 5.1 that the busy particles (B_{τ^N_t})_{0≤t<∞} behave like a Feller process with generator G, cf. (27) and (28). As we know the exact form of the generator, it is possible to derive the Kolmogorov forward equation describing the time evolution of the density u of the process (B_{τ^N_t})_{0≤t<∞}, that is, ∂_t u = ½ ∂_xx u on ]a^N_n, b^N_n[ for n = 1, …, N. Thus, relying on knowledge of the corresponding heat equations one can show that the inflow of particles from C^N caused by changing their modes from "lazy" to "busy" yields at the boundary ∂G^N the correct compensation, whence the density v of X^N satisfies the heat equation ∂_t v = ½ ∂_xx v. Leaving regularity questions aside, the above sketched approach seems rather clear-cut. Nevertheless, to avoid subtle arguments justifying the formal reasoning we "go back to the roots" and use a discretization argument instead. Approximating B by a scaled random walk B^m allows us to establish the form of the marginals of X^N without having to worry about regularity of the involved densities.
To this end, consider a random walk on Z with i.i.d. increments (ζ_k)_{k∈N}, where P(ζ_k = 1) = P(ζ_k = −1) = ½, so that for any j ∈ Z with −l ≤ j ≤ l ∈ N we have p_l(j) := P(Σ_{k=1}^l ζ_k = j), (31) which satisfies the discrete heat equation p_{l+1}(j) = ½ p_l(j − 1) + ½ p_l(j + 1). (32) From Donsker's theorem we know that the scaled random walk B^m_t := m^{−1/2} Σ_{k=1}^{⌊m(1+t)⌋} ζ_k (33) converges in law to the original Brownian motion B, as random variables on the Skorokhod space D([0, ∞[). Recall that B_0 is normally distributed with mean 0 and variance 1, for which reason the sum in (33) runs from 1 to ⌊m(1 + t)⌋ rather than ⌊mt⌋. Consequently we observe for the 1-dimensional marginal distributions that B^m_t converges in law to B_t, and write p^m(t, x) := P(B^m_t = x) for the discrete mass evolution of B^m. Let U be a uniformly distributed random variable independent of (ζ_k)_{k∈N}, and define in analogy to T^N, cf. (8), the time T^{N,m} which indicates when a particle X^{N,m} changes behaviour from "lazy" to "busy". The time-change τ^{N,m} is given by the discrete analogue of (7), and we define the time-changed scaled random walk X^{N,m}_t := B^m_0 1_{{t<T^{N,m}}} + B^m_{τ^{N,m}_{t−T^{N,m}}} 1_{{t≥T^{N,m}}}. It is evident that this defines a strong Markov process taking values in m^{−1/2} Z. The aim of the remainder of this subsection is to first establish that X^{N,m} preserves the marginals of B^m, see Theorem 5.2. Second, we give a proof of the convergence of (X^{N,m})_{m∈N} to X^N as random variables on the Skorokhod space D([0, ∞[) in Theorem 5.4.
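The closed form behind (31) and the recursion (32) are the standard simple-random-walk facts; they can be checked mechanically (a small verification script of ours, not part of the proof):

```python
from math import comb

def p(l, j):
    # P(sum of l fair +-1 steps equals j): binomial closed form,
    # nonzero only when l + j is even and |j| <= l.
    if (l + j) % 2 or abs(j) > l:
        return 0.0
    return comb(l, (l + j) // 2) / 2 ** l

# Discrete heat equation (32): p_{l+1}(j) = (p_l(j-1) + p_l(j+1)) / 2.
for l in range(1, 20):
    for j in range(-l - 1, l + 2):
        assert abs(p(l + 1, j) - 0.5 * (p(l, j - 1) + p(l, j + 1))) < 1e-12
```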

The marginals of X N,m
An advantage of the discrete level is that evolving the marginal distribution in time is achieved iteratively. By rescaling B^m and X^{N,m} by a factor of √m, we obtain random walks B̄^m and X̄^{N,m} on Z. Then X^{N,m} and B^m have the same 1-dimensional marginals if and only if X̄^{N,m} and B̄^m have this property as well. For the analysis of the evolution of the marginals of X̄^{N,m}, we introduce in (38) the sets on which the transition probabilities of X̄^{N,m} exhibit different behaviour. In the interior of G^{N,m}, that means here G^{N,m} \ ∂G^{N,m}, the transition probabilities behave like those of B̄^m and satisfy the discrete heat equation (32). From this observation it is easy to conclude the inductive step (39) for the marginals of X̄^{N,m}.

Theorem 5.2. The processes X^{N,m} and B^m have the same 1-dimensional marginal distributions.
Proof. The assertion is equivalent to showing that the 1-dimensional marginal distributions of the processes X̄^{N,m} and B̄^m defined in (37), taking values in Z, coincide. We show the statement by induction, so assume that B̄^m_l ∼ X̄^{N,m}_l for a fixed l ∈ N. We treat the three cases corresponding to the three sets defined in (38) separately:

1) The case i ∈ G^{N,m} \ ∂G^{N,m} was already discussed in (39); we have P(X̄^{N,m}_{l+1} = i) = p_{m+l+1}(i).

2) Let i ∈ C^{N,m} and denote by j_1, j_2 ∈ ∂G^{N,m} the boundary points with j_1 < i < j_2 and ]j_1, j_2[ ∩ Z =: C^{N,m}_i ⊆ C^{N,m}. We abbreviate x = i m^{−1/2} and compute the net mass change at i from time l to time l + 1 of X̄^{N,m}. Applying Lemma 5.3 ensures that t ↦ p^m(t, x) is strictly decreasing, as m + l ≥ 2i². Thus we find that the correct fraction of lazy particles remains at i, and in particular P(X̄^{N,m}_{l+1} = i) = p_{m+l+1}(i). Using the optional stopping theorem for the martingale (B̄^m_k)_{k=−m}^∞ allows us to compute the mass flowing from C^{N,m}_i to the boundary points. Thus, we obtain the required identity by the discrete version of the heat equation (32) and the inductive assumption; analogously, we get the corresponding identity for j_2.

3) Finally, let i ∈ ∂G^{N,m} and denote by j_1, j_2 ∈ G^{N,m} the neighbours of i in G^{N,m} with j_1 < i < j_2; that means ]j_1, i[ ∩ Z ⊆ C^{N,m} and ]i, j_2[ ∩ Z ⊆ C^{N,m}. The optional stopping theorem yields the in- and outflows at i. By (40), (41), (42), (43), and the inductive assumption, we find that the inflows and outflows precisely cancel out, so that P(X̄^{N,m}_{l+1} = i) = p_{m+l+1}(i), which concludes the inductive step.
The next lemma ensures that the discrete version of inequality (10) holds, which we use to prove that the process X̄^{N,m} jumps with the correct rate from C^{N,m} to ∂G^{N,m}.

Lemma 5.3. Let l, j ∈ N with l ≥ 2j². Then p_{l+2}(j) ≤ p_l(j). (44)

Proof. The probabilities in (44) have a closed form given by (31). We compute the ratio of the left-hand side to the right-hand side of (44): p_{l+2}(j)/p_l(j) = (l + 1)(l + 2)/((l + 2)² − j²) ≤ 1, where the last inequality holds by assumption, and conclude (44).
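Numerically, the monotonicity asserted by Lemma 5.3 (in the reconstructed reading p_{l+2}(j) ≤ p_l(j) for l ≥ 2j²) is easy to confirm for small parameters; the ranges below are arbitrary:

```python
from math import comb

def p(l, j):
    # Closed form for the simple random walk, as in (31).
    if (l + j) % 2 or abs(j) > l:
        return 0.0
    return comb(l, (l + j) // 2) / 2 ** l

for j in range(0, 8):
    start = max(2 * j * j, j)
    start += (start + j) % 2          # match the parity of j
    for l in range(start, start + 40, 2):
        # Lemma 5.3: the mass at j decays in (even) time steps.
        assert p(l + 2, j) <= p(l, j) + 1e-15
```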

Convergence of X N,m to X N
In the last subsection we have shown that X^{N,m} has the correct marginals. It remains to prove the appropriate convergence of X^{N,m} to X^N, which is the purpose of this subsection.
Before proving this theorem we prepare useful ingredients which we use in its proof.

Proof. To make the proof rigorous, we choose a 1-bounded metric d compatible with the topology on D([0, ∞[). We write μ^{N,m}, μ^N, μ^m, and μ for the laws of (B^m, T^{N,m}), (B, T^N), the scaled random walk (m^{−1/2} Σ_{k=1}^{⌊mt⌋} ζ_k)_{0≤t<∞}, and a standard Brownian motion starting at 0, respectively, on D([0, ∞[). For fixed h > 0 consider a spatial partition (P_j)_{j∈N} of R into intervals of maximal length h with I_1 ∪ I_2 = N. Recall that the marginal distributions of B^m and B are converging; thus, Portmanteau's theorem applies. Additionally, we know by Lemma 5.3 the corresponding monotonicity of the discrete marginals. We introduce real-valued sequences controlling the error terms.

Proof. Property (58) is evident from the occupation formula and the Brownian local time. The boundary ∂G^N of G^N consists only of finitely many points. Thus, to see (57), it suffices to show (59) for fixed ε > 0. Consider the countable, increasing family of stopping times indexed by k ∈ N. We have that B_t(ω) = 0 if and only if there is a corresponding k. Due to the strong Markov property and Blumenthal's 0-1 law we obtain (61) for any k ∈ N. By (61) and (60) we get (59), which concludes the proof.
Proof of Theorem 5.4. By Proposition 5.5 we have convergence in law of (B^m, T^{N,m})_{m∈N} to (B, T^N). By the Skorokhod representation theorem we may assume w.l.o.g. that this convergence holds almost surely. Due to Lemma 5.7, we can apply Lemma 5.6 and find that almost surely (X^{N,m})_{m∈N} has X^N as its limit in D([0, ∞[). Moreover, {T^N > 0} ∪ {B_0 ∈ G^N \ ∂G^N} has full probability, whence the assertion follows.

Proof of Theorem 1.1 and Proposition 3.1
Proof of Proposition 3.1. The strong Markov property w.r.t. the right-continuous version of its natural filtration F^N is readily derived from Lemma 5.1. Lemma 5.7 tells us that almost surely the time change satisfies τ^N_t < ∞ for all t ∈ [0, ∞[; thus, X^N is well-defined. The trajectories (τ^N_t(ω))_{0≤t<∞} are increasing and càdlàg, whence (X^N_t(ω))_{0≤t<∞} is also càdlàg. It remains to convince ourselves that X^N is a martingale with the correct Brownian marginals. By Theorem 5.2 we have X^{N,m}_t ∼ B^m_t for any t ∈ [0, ∞[. Theorem 5.4 provides weak convergence of (X^{N,m})_{m∈N} to X^N. The laws of (X^{N,m}_t)_{m∈N} converge to the law of X^N_t, hence X^N_t ∼ B_t. The following computation establishes that the second moments of X^{N,m}_t converge to 1 + t: E[(X^{N,m}_t)²] = E[(B^m_t)²] = ⌊m(1 + t)⌋/m, m ∈ N. Therefore, the second moments of (X^{N,m}_t)_{m∈N} have the second moment of X^N_t as their limit. By [18, Theorem 6.9] the joint distributions of (X^{N,m}_t, X^{N,m}_s)_{m∈N}, which are martingale couplings when s ≤ t, converge for all pairs (s, t) ∈ [0, ∞[² in 2-Wasserstein distance to the law of (X^N_t, X^N_s). Thus, the law of (X^N_t, X^N_s) constitutes a martingale coupling. By the Markov property of X^N, we find that X^N is a martingale, as almost surely E[X^N_t | F^N_s] = X^N_s for s ≤ t, where (F^N_t)_{0≤t<∞} denotes the right-continuous version of the natural filtration of X^N.
Proof of Theorem 1.1. The trajectories (A^N_t(ω))_{0≤t<∞} increase pointwise to the almost surely strictly increasing process (A_t(ω))_{0≤t<∞}. Consequently, the time changes (τ^N_t(ω))_{0≤t<∞} converge almost surely pointwise to (τ_t(ω))_{0≤t<∞}. Hence, almost surely, (B_{τ^N_t})_{0≤t<∞} → (B_{τ_t})_{0≤t<∞} in D([0, ∞[). (63) From here, it is evident that (X^N)_{N∈N} has X as its weak limit. By Proposition 3.1, we get that X has Brownian marginals. At each time t ∈ [0, ∞[ the contribution of the "busy" particles to the mass on C vanishes, i.e. P(X_t ∈ C, T ≤ t) = 0. Therefore we can consider the transition kernel of X for X_t ∈ C and X_t ∈ G separately. For x ∈ G the transition kernel coincides with the one of the Feller process (B_{τ_t})_{0≤t<∞} (conditionally on B_0 = x). For x ∈ C the transition kernel is given by a mixture of a Dirac mass at x and an appropriate convolution in time of the kernel of (B_{τ_t})_{0≤t<∞} (conditionally on B_0 = x). Altogether, it is possible to explicitly write down the Markov kernel corresponding to X; thus, X is a Markov process. The martingale property follows by the same argument as in Proposition 3.1.
Remark 5.8. We have formulated the "faking" procedure for the case of a standard Brownian motion. After all, this is the most regular and canonical situation one can imagine. But our construction applies to general martingales of the form dY_t = σ(Y_t, t) dB_t for arbitrary (sufficiently regular) σ(·, ·). We only demonstrate this with the example of an exponential Brownian motion, i.e., for σ(x, t) = x, so that dY_t = Y_t dB_t.

Corollary 4.1. The fake Brownian motion X fails to have the strong Markov property.

Lemma 5.7.
which vanishes uniformly for n → ∞. Applying once again [12, Chapter VI, Theorem 1.14] establishes the assertion. The original Brownian motion B puts full mass on paths having the property that