On the establishment of a mutant

How long does it take for an initially advantageous mutant to establish itself in a resident population, and what does the population composition look like then? We approach these questions in the framework of the so-called Bare Bones evolution model (Klebaner et al. in J Biol Dyn 5(2):147–162, 2011. https://doi.org/10.1080/17513758.2010.506041), which provides a simplified approach to the adaptive population dynamics of binary splitting cells. As the mutant population grows, cell division becomes less probable, and may in fact become less likely than that of the residents. Our analysis rests on the assumption that the process starts from a resident population with size proportional to a large carrying capacity $K$. More precisely, we assume carrying capacities $a_1K$ and $a_2K$ for the resident and the mutant populations, respectively, and study the dynamics as $K\rightarrow\infty$. We find conditions for the mutant to be successful in establishing itself alongside the resident.
The time this takes turns out to be proportional to $\log K$. We introduce the time of establishment through the asymptotic behaviour of the stochastic nonlinear dynamics describing the evolution, and show that it is indeed $\frac{1}{\log\rho}\log K$, where $\rho$ is twice the probability of successful division of the mutant at its appearance. Looking at the composition of the population at times $\frac{1}{\log\rho}\log K + n$, $n \in \mathbb{Z}_+$, we find that the densities (i.e. sizes relative to carrying capacities) of both populations closely follow the corresponding two-dimensional nonlinear deterministic dynamics started at a random point. We characterise this random initial condition in terms of the scaling limit of the corresponding dynamics and the limit of the properly scaled initial binary splitting process of the mutant.
The deterministic approximation with random initial condition is in fact valid asymptotically at all times $\frac{1}{\log\rho}\log K + n$ with $n\in \mathbb{Z}$.


Introduction
There has been much work in stochastic adaptive dynamics and evolutionary branching; see Dieckmann and Law (1996), Metz et al. (1996), Champagnat et al. (2002), Champagnat and Méléard (2011) and Sagitov et al. (2013), to mention just a few. Here we confine ourselves to a simple mathematical model for evolution, in which an established resident population is invaded by a mutant. From that moment on, the two populations compete for resources. At the moment of invasion the resident (wild-type) population is assumed to have a size near its carrying capacity $a_1K$. Here $K$ should be thought of as large, and $a_1 > 0$ is fixed. The size of the mutant population, which starts from a single individual, is initially negligible compared to $K$. It has a reproductive advantage over the resident, but as its progeny grows this advantage diminishes.
We want to answer the question of how long it takes for a mutant to become established, i.e. to grow to a size comparable to that of the host population, and what the population composition is then. Even the simplified model of two competing populations that we consider requires new mathematical techniques and leads to insightful results. We show that the deterministic approximation with a random initial condition is valid for times $[\log_\rho K] + n$ with any fixed $n \in \mathbb{Z}$ and large $K$. However, unlike in the classical results on deterministic approximation (Kurtz 1970; Barbour 1980), some stochasticity remains and enters as a random initial condition.

The Bare Bones evolutionary model
This simple but basic model of species reproducing in interaction with their environment was introduced in Klebaner et al. (2011). It builds upon asexual binary splitting and evolves in discrete time: each individual either gets two children in the next generation or none. Interaction with the environment and with population size is allowed (in contrast to classical stochastic approaches), but in a drastically condensed form. Following the idea of Malthus, populations reach sizes proportional to available resources, and we assume that the habitat is characterised by a carrying capacity $K > 0$, thought of as large. Given the population size, individuals reproduce independently. Initially only the resident (wild-type) population is present and, at population size $z$, the individual probability of successful splitting is taken to be $a_1K/(a_1K + z)$. Here $a_1$ is a constant that determines the population size at its macroscopic (quasi-)equilibrium: when $z = a_1K$, the probability of splitting is $1/2$. On average, a population of this size thus produces one child per individual. As a result, the population size fluctuates around this (quasi-)steady state for what is presumably a very long time, cf. Jagers and Klebaner (2011).
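For illustration, here is a minimal simulation sketch of the single-type model; the parameter values and function names are our own. Started at $[a_1K]$, the population hovers near the quasi-equilibrium $a_1K$, with fluctuations on the scale $\sqrt{K}$.

```python
import random

def bb_step(z, a1, K, rng):
    """One generation: each of z cells splits in two with probability a1*K/(a1*K + z)."""
    p = a1 * K / (a1 * K + z)
    return 2 * sum(rng.random() < p for _ in range(z))

rng = random.Random(2024)
a1, K = 1.0, 10_000
z = int(a1 * K)                 # start at the quasi-equilibrium a1*K
traj = [z]
for _ in range(200):
    z = bb_step(z, a1, K, rng)
    traj.append(z)

# The trajectory stays within a few hundred cells of a1*K = 10000:
# at equilibrium p = 1/2, so each generation is critical on average,
# and the map's derivative 1/2 pulls deviations back toward a1*K.
print(min(traj), max(traj))
```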
In that stage, the population experiences its first mutation, giving rise to a new population. The new, mutant population starts from a single individual, its ancestor. The basis of adaptive dynamics can then be said to be furnished by the branching mechanism: the new population either dies out, or else embarks on exponential growth, in which case either the old resident dies out or the two populations coexist for a time span that turns out to be exponential in the carrying capacity.
Mathematically, this dynamics can be described as follows. The branching process starts from a pair of positive integers $Z(0) = (Z_1(0), Z_2(0))$, the first component denoting the size of the resident population and the second that of the mutant population at time 0, when the mutation appears. We assume that the established original population is at equilibrium at the moment of invasion $n = 0$, so that $Z_1(0) = [a_1K]$ and $Z_2(0) = 1$. Each population develops by binary splitting with probabilities dependent on the numbers of cells, with transitions from generation $n$ to $n + 1$ described by the recursion

$$Z_i(n+1) = \sum_{k=1}^{Z_i(n)} \xi_i(n,k), \qquad i = 1, 2. \qquad (1)$$

The random variables $\xi_i(n, k) \in \{0, 2\}$ are independent, given the preceding, and only depend upon the last generation $Z(n)$, with probabilities

$$\mathbb P\big(\xi_1(n,k) = 2 \mid Z(n) = z\big) = \frac{a_1K}{a_1K + z_1 + \gamma z_2}, \qquad \mathbb P\big(\xi_2(n,k) = 2 \mid Z(n) = z\big) = \frac{a_2K}{a_2K + \gamma z_1 + z_2}, \qquad (2)$$

where $a_2 > 0$ is the parameter controlling the mutant equilibrium population size, and $\gamma$ is the interaction coefficient, assumed to satisfy $0 < \gamma < 1$. The biological meaning of $\gamma$ is that cells of one type encroach less upon the reproduction of the other cell type than do cells of the same type. That $\gamma$ is the same in both probabilities means that the influence is symmetric between the cell types.
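The two-type recursion can be sketched directly; the splitting probabilities below follow the form described in the text (each population's own size enters with weight 1, the other's with weight $\gamma$), while the parameter values are our own illustrative choice. By classical branching-process theory, the approximating supercritical Galton-Watson process of the mutant survives with probability $2 - 2/\rho$, where $\rho$ is its initial offspring mean (here $\rho = 1.5$, so about $2/3$), and this is roughly the frequency with which the simulated mutant reaches numbers proportional to $K$.

```python
import random

def step(z1, z2, a1, a2, g, K, rng):
    """One generation of the two-type binary-splitting recursion."""
    p1 = a1 * K / (a1 * K + z1 + g * z2)
    p2 = a2 * K / (a2 * K + g * z1 + z2)
    z1 = 2 * sum(rng.random() < p1 for _ in range(z1))
    z2 = 2 * sum(rng.random() < p2 for _ in range(z2))
    return z1, z2

rng = random.Random(5)
a1, a2, g, K = 1.0, 1.5, 0.5, 500   # a1 > g*a2 and a2 > g*a1
runs, established = 100, 0
for _ in range(runs):
    z1, z2 = int(a1 * K), 1          # resident at equilibrium, one mutant
    for _ in range(30):
        z1, z2 = step(z1, z2, a1, a2, g, K, rng)
    established += z2 > 0.1 * a2 * K  # mutant numbers proportional to K
frac = established / runs
```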
In the absence of mutants, the established population thus has critical reproduction, whereas the mutant population starts supercritically, provided $a_2 > \gamma a_1$, as is assumed throughout the paper; see (C) below.

Stochastic nonlinear dynamics for the evolution of the density
Important insights into the behaviour of populations with state-dependent reproduction and large carrying capacity are provided by their density process (Klebaner 1984, 1993). It allows a representation of the process as a stochastic nonlinear dynamical system, which can be separated into a deterministic part and a random perturbation. This is useful not only for the mathematical analysis but also for the biological interpretation.
The density process is the population sizes relative to $K$,

$$X(n) = Z(n)/K.$$

Note that the splitting probabilities (and hence the offspring distributions) in (2) are in fact functions of the density; denoting the density state by $x = (x_1, x_2)$, we see that

$$p_1(x) = \frac{a_1}{a_1 + x_1 + \gamma x_2}, \qquad p_2(x) = \frac{a_2}{a_2 + \gamma x_1 + x_2}.$$

Accordingly, the offspring mean $m(x) = (m_1(x), m_2(x))$ at $x$ is also a function of the density,

$$m_i(x) = 2p_i(x), \qquad i = 1, 2.$$

The underlying deterministic dynamics

$$x(n+1) = f\big(x(n)\big) \qquad (3)$$

is determined by the function

$$f(x) = \big(x_1 m_1(x),\ x_2 m_2(x)\big). \qquad (4)$$

This can be easily seen from (1) by writing the density process as

$$X_i(n+1) = \frac1K\sum_{k=1}^{Z_i(n)}\xi_i(n,k) = X_i(n)\,m_i\big(X(n)\big) + \frac1K\sum_{k=1}^{Z_i(n)}\Big(\xi_i(n,k) - m_i\big(X(n)\big)\Big), \qquad (5)$$

that is,

$$X(n+1) = f\big(X(n)\big) + \frac{1}{\sqrt K}\,\varepsilon(n+1), \qquad \varepsilon_i(n+1) := \frac{1}{\sqrt K}\sum_{k=1}^{Z_i(n)}\Big(\xi_i(n,k) - m_i\big(X(n)\big)\Big). \qquad (6)$$

The first term on the r.h.s. of (5) gives the deterministic dynamics (3), and the second term acts as the random perturbation. Given $X(n) = x$, the random variables $\varepsilon_i(n+1)$ have zero mean and variance $4x_i p_i(x)\big(1 - p_i(x)\big)$, where $p_i(x)$ are the splitting probabilities. Therefore the random noise term in (6) is of order $1/\sqrt K$, and the density process can indeed be viewed as generated by a nonlinear dynamical system perturbed by a small random disturbance.
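To make the drift-plus-noise decomposition concrete, the following sketch (parameter values ours) estimates by Monte Carlo the one-step conditional mean and variance of the density process and compares them with $f(x)$ and with the variance formula $4x_ip_i(x)(1-p_i(x))/K$ stated above.

```python
import random

a1, a2, g = 1.0, 1.5, 0.5

def p1(x1, x2): return a1 / (a1 + x1 + g * x2)
def p2(x1, x2): return a2 / (a2 + g * x1 + x2)

def f(x1, x2):
    """Deterministic map: density times offspring mean m_i = 2*p_i."""
    return 2 * x1 * p1(x1, x2), 2 * x2 * p2(x1, x2)

def density_step(x1, x2, K, rng):
    """One stochastic step: binomial splitting at sizes K*x_i, rescaled by K."""
    q1, q2 = p1(x1, x2), p2(x1, x2)
    return (2 * sum(rng.random() < q1 for _ in range(int(K * x1))) / K,
            2 * sum(rng.random() < q2 for _ in range(int(K * x2))) / K)

rng = random.Random(3)
x, K, reps = (0.8, 0.4), 2000, 400
samples = [density_step(x[0], x[1], K, rng) for _ in range(reps)]
mean1 = sum(s[0] for s in samples) / reps
var1 = sum((s[0] - mean1) ** 2 for s in samples) / reps
# At this state p1 = 1/2, so f_1(x) = 0.8 and K * Var = 4 * 0.8 * 0.25 = 0.8.
```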
Note that, in view of the above discussion, the trajectory of the deterministic system (3) depends on $K$ through the initial condition $x^K(0) = \big(\frac{[a_1K]}{K}, \frac1K\big)$. Similarly, the process generated by the stochastic dynamics (6) depends on $K$ through $X^K(0) = \big(\frac{[a_1K]}{K}, \frac1K\big)$ and the noise term. Whenever appropriate, we will leave this dependence implicit, omitting it from the notation.

Deterministic dynamics
If we neglect the small random noise in (6), we obtain the deterministic dynamics (3). Fixed points (solutions to $f(x) = x$) play an important role in the behaviour of such systems: trajectories are repelled from unstable fixed points and attracted to stable ones. Our system, generated by the function $f(\cdot)$ in (4), has four fixed points,

$$x^{ex} = (0, 0), \qquad x^{re} = (a_1, 0), \qquad x^{mu} = (0, a_2), \qquad x^{co} = \Big(\frac{a_1 - \gamma a_2}{1 - \gamma^2},\ \frac{a_2 - \gamma a_1}{1 - \gamma^2}\Big). \qquad (7)$$

Since we are concerned with both populations, the relevant case is when both coordinates of $x^{co}$ are nonnegative. This is true if the following coexistence condition holds:

$$a_1 > \gamma a_2 \quad\text{and}\quad a_2 > \gamma a_1. \qquad (C)$$

It is easy to see by examining the Jacobian matrix $\nabla f(x)$, see (22) below, that the point $x^{co}$ is stable and $x^{ex}$ unstable. The points $x^{re}$ and $x^{mu}$ are saddle points, that is, stable in one direction and unstable in another. In our theory the point $x^{re} = (a_1, 0)$ plays a special role due to the proximity of the initial condition $\big(\frac{[a_1K]}{K}, \frac1K\big)$. In the absence of a mutant, $a_1$ is the stable equilibrium for the resident population, and 0 is unstable for the mutant population.
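The classification of the four fixed points can be checked numerically. The sketch below (illustrative parameter values satisfying (C)) computes the Jacobian by central finite differences and the eigenvalue moduli of a $2\times2$ matrix in closed form.

```python
import math

a1, a2, g = 1.0, 1.5, 0.5        # illustrative values satisfying (C)

def f(x):
    x1, x2 = x
    return (2 * x1 * a1 / (a1 + x1 + g * x2),
            2 * x2 * a2 / (a2 + g * x1 + x2))

def jac(x, h=1e-6):
    """Jacobian of f at x by central finite differences."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        fp, fm = f(xp), f(xm)
        for r in range(2):
            J[r][i] = (fp[r] - fm[r]) / (2 * h)
    return J

def eig_moduli(J):
    """Eigenvalue moduli of a 2x2 matrix via the characteristic polynomial."""
    (a, b), (c, d) = J
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:
        s = math.sqrt(disc)
        return sorted([abs((tr - s) / 2), abs((tr + s) / 2)])
    return [math.sqrt(det)] * 2   # complex pair: |lambda| = sqrt(det)

x_ex, x_re, x_mu = (0.0, 0.0), (a1, 0.0), (0.0, a2)
x_co = ((a1 - g * a2) / (1 - g * g), (a2 - g * a1) / (1 - g * g))
# x_ex: both moduli exceed 1 (unstable); x_co: both below 1 (stable);
# x_re and x_mu: one modulus on each side of 1 (saddle points).
```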

The large capacity limit of the stochastic dynamics
A rigorous treatment of neglecting small noise is given by the classical results in the perturbation theory of dynamical systems, see e.g. Kurtz (1970), Barbour (1980), Freidlin and Wentzell (1984) and Kifer (1988). They assert that as the noise converges to zero, that is, when $K \to \infty$, the trajectory $X^K(n)$ of the stochastic system (6) converges on any bounded time interval to that of the deterministic dynamics (3), started from the initial condition $x(0) = \lim_{K\to\infty} X^K(0)$. Namely, for an arbitrary but fixed integer $N$,

$$\max_{0\le n\le N}\big\|X^K(n) - x(n)\big\| \xrightarrow[K\to\infty]{\mathbb P} 0. \qquad (8)$$

In our setup, the initial condition turns out to be the fixed point $x^{re}$,

$$x(0) = \lim_{K\to\infty}\Big(\frac{[a_1K]}{K},\ \frac1K\Big) = (a_1, 0) = x^{re}.$$

Therefore the corresponding limit trajectory is constant, $x(n) = x^{re}$ for all $n = 1, 2, \dots$ Consequently, the limit (8) fails to provide any information on the transition to the new coexistence equilibrium. We shall see that if such a transition occurs, it becomes visible much later, at a time increasing with $K$; in fact, of order $\log K$.
Recently, limit theorems capable of capturing this transition were obtained in Barbour et al. (2015, 2016), Chigansky et al. (2018) and Baker et al. (2018). They involve a time shift which grows logarithmically in $K$. In Barbour et al. (2015) this shift is random and the process $X^K(n)$ is approximated by the trajectory of the deterministic system (3) with a random shift. We have learnt from a referee that a precursor to the random shift theory of Barbour et al. (2015), in the context of epidemic models, can be found in Metz (1978), where precise conjectures were stated and later proved in an unpublished manuscript for the simple SIR epidemic model (Altmann 1993; Mollison 1995).
In Barbour et al. (2016), Chigansky et al. (2018) and Baker et al. (2018), the shift is deterministic, and X K (n) converges to a trajectory of (3), started from a random initial condition. While the two approaches, the random shift and the random initial condition, are related, they are not equivalent. The main building block in the random initial condition theory is a certain scaling limit of the deterministic flow, which does not appear in the random shift theory. Existence of this limit was so far established only in the one dimensional case.
This work is the first such result in two dimensions. Having established it, we can complement the "random shift" picture of Barbour et al. (2015) with that of a "random initial condition" for the Bare Bones model. Recently, heuristics for similar random initial conditions for selective sweeps in large populations in one dimension were given in Martin and Lambert (2015). Other stochastic approaches involving a carrying capacity can be found in Lambert (2005, 2006).

Main results
In what follows we consider the stochastic process $X^K(n)$ generated by (6) or, equivalently, by (1). As mentioned in the Introduction, the resident population initially has critical reproduction and is at equilibrium when a single mutant appears, so that $X^K(0) = \big(\frac{[a_1K]}{K}, \frac1K\big) \to (a_1, 0)$. Even though the probability of a mutant being present at any time $n$ is positive, $\mathbb P(X_2^K(n) > 0) > 0$, we do not say that it has established itself until its numbers are proportional to its carrying capacity, in other words proportional to $K$. For example, as we have seen above, $X^K(n) \to x^{re} = (a_1, 0)$ for any fixed $n$ as $K \to \infty$. This conveys that the mutant is not established by any fixed time $n$. We show, however, that it may establish itself at a time which grows logarithmically with $K$; see (9) and (10).

The logarithmic order of the time of the mutant's establishment can be roughly explained as follows. As the process starts near $x^{re} = (a_1, 0)$, the state-dependent splitting probabilities can be approximated, at least initially, by their values at $x^{re}$, giving division probabilities $1/2$ and $a_2/(a_2 + \gamma a_1)$ for the resident and the mutant populations respectively. Note that, due to the coexistence condition (C), the mutant process is supercritical with mean

$$\rho = \frac{2a_2}{a_2 + \gamma a_1} > 1.$$

Hence it grows at the rate $\rho^n$, and it takes time of order

$$\log_\rho K = \frac{\log K}{\log\rho}$$

for it to grow to a size proportional to $K$, as $K \to \infty$. In fact, this heuristic is correct and is made precise in the following result, which implies both (9) and (10). We denote the fractional part of $x \in \mathbb R_+$ by $\{x\}$.
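A quick arithmetic sketch of this heuristic, with parameter values of our own choosing:

```python
import math

a1, a2, g = 1.0, 1.5, 0.5        # coexistence condition (C) holds
p_mut = a2 / (a2 + g * a1)       # mutant division probability at x_re
rho = 2 * p_mut                  # offspring mean of the approximating branching
K = 10**6
n1 = math.log(K) / math.log(rho) # solves rho**n = K, i.e. n = log_rho(K)
# For these values rho = 1.5 and n1 is about 34 generations.
```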
Theorem 1 There exist a non-degenerate scalar random variable $W \ge 0$ and a function $H(x)$, whose entries are positive on the open half-plane $\mathbb R \times \mathbb R_+$, such that

$$X^K\big([\log_\rho K]\big) - H\Big(\big(0,\ W\rho^{-\{\log_\rho K\}}\big)\Big) \xrightarrow[K\to\infty]{\mathbb P} 0. \qquad (11)$$

In particular, along the subsequence of exact powers $K_j = \rho^j$, $j \in \mathbb N$,

$$X^{K_j}(j) \xrightarrow[j\to\infty]{d} H\big((0, W)\big).$$

Let us now give details about the random variable $W$ and the function $H(\cdot)$ appearing in this theorem. The approximate mutant process, mentioned in the heuristic explanation above, has the same splitting probability as the mutant component of $Z(n)$ at $x^{re}$. More precisely, it is a supercritical Galton-Watson binary splitting process, started with a single ancestor, $Y(0) = Z_2(0) = 1$, and for $n \ge 1$ defined iteratively by

$$Y(n) = \sum_{j=1}^{Y(n-1)} \zeta(n, j), \qquad (12)$$

where the offspring numbers $\zeta(n, j) \in \{0, 2\}$ are i.i.d. random variables with the constant splitting probability

$$\mathbb P\big(\zeta(n,j) = 2\big) = \frac{a_2}{a_2 + \gamma a_1} = \frac\rho2.$$

Then $Y(n)/\rho^n$ is a non-negative martingale. As such, it converges almost surely to a limit $W$, which is the random variable appearing in (11).
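The martingale $Y(n)/\rho^n$ and its limit $W$ can be simulated directly. A minimal sketch, with our own parameter values: the empirical mean of $Y(n)/\rho^n$ stays near 1 (the martingale property), and the empirical frequency of $\{W = 0\}$ is near the extinction probability $2/\rho - 1$ (here $1/3$).

```python
import random

def gw_scaled(p, n, rng):
    """Binary-splitting Galton-Watson process, n generations, scaled by rho^n."""
    rho = 2 * p
    y = 1
    for _ in range(n):
        if y == 0:
            break
        y = 2 * sum(rng.random() < p for _ in range(y))
    return y / rho ** n

rng = random.Random(11)
p = 0.75        # a2/(a2 + g*a1) for a1 = 1, a2 = 1.5, g = 0.5, so rho = 1.5
samples = [gw_scaled(p, 15, rng) for _ in range(2000)]
extinct = sum(s == 0 for s in samples) / len(samples)
mean_w = sum(samples) / len(samples)
# E[Y(n)/rho^n] = 1 for every n (martingale); P(W = 0) = 2/rho - 1 = 1/3 here.
```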
The function $H(\cdot)$ in Theorem 1 is the limit of the $n$-fold iterated map $f^n(\cdot)$ along the unstable manifold of the dynamics in (3).
Theorem 2 Under the basic assumptions stated, the limit

$$H(x) := \lim_{n\to\infty} f^{\,n}\big(x^{re} + x/\rho^{\,n}\big), \qquad x \in \mathbb R \times \mathbb R_+, \qquad (13)$$

exists, and the convergence is uniform on compacts.
Remark 1 (1) It is easy to see that $H(\cdot)$ solves the Abel functional equation $H(x) = f\big(H(x/\rho)\big)$, subject to $H(0) = x^{re}$. While much is known about such equations in one dimension, in higher dimensions the theory is more involved.
(2) Numerical calculations indicate that $H(x)$ is constant with respect to the resident population component $x_1$, see Fig. 1. This is consistent with the criticality of that population at the density $a_1$: the global stability of the monomorphic dynamics makes such perturbations shrink to 0 when $f$ is iterated.

The next result describes the density process after the establishment of the mutant, at times $[b\log K] + n$, $n = 1, 2, \dots$, with $b = 1/\log\rho$; it shows that the population composition is governed by the deterministic nonlinear dynamics $f^n$ started at a random point, as illustrated in Fig. 2. Furthermore, it holds equally when $n$ is a negative integer. Denote the random vector appearing in Theorem 1 by

$$\chi_K := H\Big(\big(0,\ W\rho^{-\{\log_\rho K\}}\big)\Big).$$

Corollary 1 For any $n \in \mathbb Z$,

$$X^K\big([b\log K] + n\big) - f^{\,n}\big(\chi_K\big) \xrightarrow[K\to\infty]{\mathbb P} 0.$$

The next corollary to Theorem 1 answers the question: what is the probability of successful establishment of the mutant? Since the argument is short, we present it here. It is known that $\mathbb P(W = 0)$ is exactly the extinction probability of the Galton-Watson process $Y(n)$; it is easily calculated to be $2/\rho - 1$. On the event $\{W = 0\}$ we have $H\big((0, W)\big) = H\big((0, 0)\big) = x^{re}$, while on the complementary event $W > 0$ and $H_2\big((0, w)\big) > 0$ for $w > 0$. We thus have the following corollary of (11): the probability that the mutant establishes itself converges, as $K \to \infty$, to $\mathbb P(W > 0) = 2 - 2/\rho$.
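The limit $H$ can be explored numerically by iterating $f$ from $x^{re} + x/\rho^n$ for a large $n$. The sketch below (illustrative parameters $a_1 = 1$, $a_2 = 1.5$, $\gamma = 0.5$, so $\rho = 1.5$) checks the functional equation $H(x) = f(H(x/\rho))$ of Remark 1, the boundary value $H(0) = x^{re}$, and the numerical insensitivity to $x_1$ noted above.

```python
a1, a2, g = 1.0, 1.5, 0.5            # illustrative values satisfying (C)
rho = 2 * a2 / (a2 + g * a1)         # = 1.5

def f(x):
    x1, x2 = x
    return (2 * x1 * a1 / (a1 + x1 + g * x2),
            2 * x2 * a2 / (a2 + g * x1 + x2))

def H(x, n=40):
    """Approximation of H(x) = lim_n f^n(x_re + x / rho^n)."""
    y = (a1 + x[0] / rho ** n, x[1] / rho ** n)
    for _ in range(n):
        y = f(y)
    return y

h = H((0.0, 1.0))
h_abel = f(H((0.0, 1.0 / rho)))      # functional equation: should match h
h_shift = H((5.0, 1.0))              # nearly independent of the x_1 component
```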

A preview
The proof is inspired by the observation that supercritical populations, which start from a small number of individuals and develop in a habitat with a large but bounded capacity, initially grow as a supercritical Galton-Watson branching process and then closely follow a deterministic curve determined by the underlying nonlinear dynamics. This heuristic dates back at least to the 1950s, e.g. Kendall (1956) and Whittle (1955), and the already mentioned Metz (1978). A rigorous proof for epidemics is given in Mollison (1995); in a wider context, rigorous implementations are relatively recent, see Barbour et al. (2015, 2016) and Chigansky et al. (2018).
Let us briefly sketch the ideas. The main ingredient is the Galton-Watson branching process $Y(n)$, whose components mimic the behaviour of those of $Z(n)$ at the moment of the mutant's appearance, that is, around the equilibrium point $x^{re}$. Thus its first, resident component $Y_1(n)$ is critical and starts at $[a_1K]$, while its second, mutant component $Y_2(n)$ is supercritical with offspring mean $\rho$ and starts from a single individual. The two processes $Z(n)$ and $Y(n)$ are constructed on the same probability space and are coupled in such a way that they remain close at least until time $n_c = [\log_\rho K^c]$ with some fixed constant $c \in (1/2, 1)$.
Following the above heuristics, the density $X(n) = Z(n)/K =: \bar Z(n)$ is approximated by gluing the linearised stochastic process to the deterministic nonlinear dynamics,

$$\widetilde Z(n) := f^{\,n-n_c}\big(\bar Y(n_c)\big), \qquad n \ge n_c,$$

where $\bar Y(n) := Y(n)/K$ is the density of the Galton-Watson branching. The assertion (11) of Theorem 1 follows if we show that

1. the process $\widetilde Z(n)$ does indeed approximate the target density $\bar Z(n)$ at time $n = [\log_\rho K] = \big[\tfrac{1}{\log\rho}\log K\big] =: n_1$, that is,

$$\bar Z(n_1) - \widetilde Z(n_1) \xrightarrow[K\to\infty]{\mathbb P} 0; \qquad (14)$$

2. the approximation $\widetilde Z(n)$ behaves asymptotically as claimed in Theorem 1, that is,

$$\widetilde Z(n_1) - H\Big(\big(0,\ W\rho^{-\{\log_\rho K\}}\big)\Big) \xrightarrow[K\to\infty]{\mathbb P} 0. \qquad (15)$$

The main technical difficulty in proving (14) is to control the difference $\bar Z(n) - \widetilde Z(n)$ on the time interval $[0, n_1]$, which itself grows with $K$. It turns out that the usual technique, based on a straightforward linearisation of the dynamics, does not provide sharp enough bounds in this case. Instead we construct a suitable coupling in Sect. 3.3, which involves several additional auxiliary Galton-Watson processes.
The key to the limiting expression in (15) is the representation

$$\widetilde Z(n_1) = f^{\,n_1-n_c}\Big(x^{re} + \rho^{-(n_1-n_c)}\,v_K\Big), \qquad v_K := \rho^{\,n_1-n_c}\big(\bar Y(n_c) - x^{re}\big). \qquad (16)$$

It shows that the convergence in Theorem 1 follows once we prove the limit of Theorem 2 and check that

$$v_K - \big(0,\ W\rho^{-\{\log_\rho K\}}\big) \xrightarrow[K\to\infty]{\mathbb P} 0. \qquad (17)$$

The random variable $W$ is the martingale limit of the supercritical branching $Y_2(n)$, cf. (12). The most challenging element of the proof of this part is the convergence (13), see Sect. 3.2 below. Previously it had been proved in Chigansky et al. (2018) in dimension one, and the analysis in higher dimensions, in our case two, requires a completely different approach. Once the convergence (11) is proved, the assertion of Corollary 1 follows by continuity of $f(\cdot)$.
The limit in equation (10) can be proved in a similar way, starting from the analogue of the representation (16) and using (17). We omit the proof of this part, which closely follows that of Theorem 1 with obvious adjustments.

The limit H(x)
In this section we construct the limit (13) by means of a convergent telescoping series.

An auxiliary recursion in dimension one
Let us start with an auxiliary one-dimensional quadratic recursion

$$x_{m,n} = \rho\, x_{m-1,n}\big(1 + C x_{m-1,n}\big), \qquad m = 1, \dots, n, \qquad (18)$$

subject to the initial condition $x_{0,n} = x/\rho^{\,n}$ with $x > 0$, where $C \ge 0$ and $\rho > 1$ are constant coefficients. In what follows we will need the following estimate on its solution.
Lemma 1 There exists a finite function $\psi: \mathbb R_+ \to \mathbb R_+$ such that

$$x_{m,n} \le \rho^{\,m-n}\,\psi(x), \qquad 0 \le m \le n. \qquad (19)$$

Proof Let us first note that no generality is lost if $C = 1$ is assumed. Indeed, for $C > 0$ the substitution $u_{m,n} = Cx_{m,n}$ turns (18) into the same recursion with $C = 1$, while for $C = 0$ the solution is simply $x_{m,n} = \rho^{\,m-n}x$. Moreover, writing $p(y) := \rho y(1 + y)$, so that $x_{m,n} = p^{\,m}(x/\rho^{\,n})$, the inequality $p(y) \ge \rho y$ gives $x_{m,n} \le \rho^{\,m-n}x_{n,n}$, and it therefore suffices to show that

$$\sup_n x_{n,n} \le \psi(x). \qquad (20)$$

To this end, consider the Schröder functional equation

$$\phi\big(f(x)\big) = s\,\phi(x), \qquad x \in \mathbb R_+, \qquad (21)$$

where $s := 1/\rho \in (0, 1)$ and $f(x) = \sqrt{\tfrac14 + sx} - \tfrac12$ is the inverse of the parabola $p(\cdot)$ on $\mathbb R_+$. The function $f(x)$ satisfies the following conditions:

$f$ is continuous and strictly increasing on $\mathbb R_+$, with $f(0) = 0$, $0 < f(x) < x$ for $x > 0$, and $f(x)/x \to s$ as $x \downarrow 0$.
Under these conditions, Seneta (1969) shows that the limit

$$\phi(x) := \lim_{n\to\infty} s^{-n} f^{\,n}(x)$$

exists, solves (21), and is continuous and strictly increasing with $\phi(0) = 0$ and $\phi(x)/x \to 1$ as $x \downarrow 0$. Changing the variable in (21) to $y = f(x)$, we get $\phi(y) = s\phi(p(y))$, $y \in \mathbb R_+$, and, inverting, we obtain the conjugacy

$$p^{\,n}(y) = \phi^{-1}\big(s^{-n}\phi(y)\big), \qquad n \in \mathbb N.$$

Hence, since $f(y) \le sy$ implies $\phi(y) \le y$,

$$x_{n,n} = p^{\,n}\big(x/\rho^{\,n}\big) = \phi^{-1}\big(s^{-n}\phi(x s^{\,n})\big) \le \phi^{-1}(x) =: \psi(x).$$

In particular, (20), and therefore also (19), hold.
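The behaviour of the recursion (18) is easy to probe numerically. A minimal sketch, with coefficients of our own choosing: the diagonal value $x_{n,n}$ increases with $n$ and settles at a finite limit, as Lemma 1 predicts, and multiplying the recursion through by $C$ reduces it to the case $C = 1$.

```python
def diag(x, rho, C, n):
    """x_{n,n}: n iterations of the quadratic recursion from x_{0,n} = x / rho**n."""
    y = x / rho ** n
    for _ in range(n):
        y = rho * y * (1 + C * y)
    return y

rho, C, x = 1.5, 2.0, 1.0
d = [diag(x, rho, C, n) for n in (20, 40, 59, 60)]
# The diagonal x_{n,n} is increasing in n and Cauchy, hence convergent;
# C * diag(x, rho, C, n) coincides with diag(C * x, rho, 1, n).
```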

Properties of f(·)
Let us summarize some relevant properties of the function $f(\cdot)$, which governs the deterministic dynamics in (3). Since $f_1(x) - x_1$ and $f_2(x) - x_2$ change signs across the lines $x_1 + \gamma x_2 = a_1$ and $\gamma x_1 + x_2 = a_2$ respectively, as shown in the phase portrait (Fig. 3), the coexistence equilibrium $x^{co}$ is globally stable.

Fig. 3 The phase portrait of the deterministic system (3); the trajectory from a small vicinity of the resident equilibrium $x^{re}$ to the coexistence equilibrium $x^{co}$ is depicted in red (colour figure online)

The local behaviour around the unstable fixed point $x^{re} = (a_1, 0)$ is determined by the Jacobian of $f(\cdot)$ at $x^{re}$,

$$A := \nabla f(x^{re}) = \begin{pmatrix} \tfrac12 & -\tfrac\gamma2 \\ 0 & \rho \end{pmatrix}. \qquad (22)$$

To study perturbations around $x^{re}$ it will be convenient to consider the translation

$$g(x) := f\big(x^{re} + x\big) - x^{re}, \qquad (23)$$

with $g(0) = 0$ and Jacobian $\nabla g(0) = A$. In particular, existence of the limit (13) follows from that of $\lim_{n\to\infty} g^n(x/\rho^n)$. Note that for $x_1 \in [x_1^{co}, \infty)$ and $x_2 \in [0, x_2^{co}]$, the formulas (4) for the entries of the function $f(\cdot)$ and the configuration of the fixed points (7) imply that a suitable subset $E$ of such states is invariant, namely $f(E) \subseteq E$; then by (23) the corresponding translated subset is invariant under $g(\cdot)$.

In what follows, $\|\cdot\|$ stands for the $\infty$-norm for vectors and the corresponding operator norm for matrices. In particular, the matrix $A$ defined in (22) satisfies $\|A\| = \rho > 1$. The linear subspace $E_0 = \{x \in \mathbb R^2 : x_2 = 0\}$ is invariant under $A$, and

$$\sup_{x \in E_0\setminus\{0\}} \frac{\|Ax\|}{\|x\|} = \frac12. \qquad (24)$$

Below $C$, $C_1$, etc. stand for constants which depend only on $a_1$, $a_2$ and $\gamma$ and whose values may change from line to line. Expanding the coordinates of $g(\cdot)$ around zero shows that $g(x)$ has the form

$$g(x) = Ax + B(x)x, \qquad (25)$$

where the matrix $B(x)$ satisfies the bound

$$\|B(x)\| \le C\|x\| \qquad (26)$$

with a constant $C$. A similar calculation also shows that for $x, y \in E$,

$$g(x) - g(y) = \big(A + F(x, y)\big)(x - y), \qquad (27)$$

where the matrix $F(x, y)$ satisfies

$$\|F(x, y)\| \le C\big(\|x\| + \|y\|\big). \qquad (28)$$

These formulas and the bound from Lemma 1 give the following growth estimate.

Lemma 2
For any $x \in \mathbb R \times \mathbb R_+$ and all $n$ large enough,

$$\big\|g^m\big(x/\rho^{\,n}\big)\big\| \le \rho^{\,m-n}\,\psi\big(\|x\|\big), \qquad 0 \le m \le n, \qquad (29)$$

with a finite function $\psi(r)$, $r \ge 0$.
Proof For any $x \in \mathbb R \times \mathbb R_+$ and all $n$ large enough, $x/\rho^n \in E$ and, since $E$ is invariant, $g^m(x/\rho^n) \in E$ for all $m$. Hence by (25) and (26), the sequence $x_{m,n} := \|g^m(x/\rho^n)\|$ satisfies

$$x_{m,n} \le \rho\, x_{m-1,n}\big(1 + C x_{m-1,n}\big).$$

By induction $x_{m,n} \le \bar x_{m,n}$, where $\bar x_{m,n}$ solves (18) subject to $\bar x_{0,n} = \|x\|/\rho^n$, and the claim follows from Lemma 1.

Proof of Theorem 2
We will argue that the increments of $g^n(x/\rho^n)$ are absolutely summable, uniformly over compacts in $\mathbb R \times \mathbb R_+$. Let $n$ be large enough so that $x/\rho^n \in E$ and therefore, by invariance, $g^m(x/\rho^n) \in E$ for all $m \ge 1$. Consider the array

$$\Delta_{m,n} := g^m\big(x/\rho^{\,n+1}\big) - g^{m-1}\big(x/\rho^{\,n}\big), \qquad 1 \le m \le n + 1.$$

For $m = 1$, due to (25),

$$\Delta_{1,n} = \rho^{-n}\big(u + v_n\big), \qquad u := \big(\rho^{-1}A - I\big)x \in E_0,$$

where, in view of (26), $v_n$ is a sequence of vectors whose norms satisfy $\|v_n\| \le C\|x\|^2\rho^{-n}$. Both $u$ and $v_n$ depend continuously on $x$, which is omitted from the notation. For $m \ge 1$, (27) implies

$$\Delta_{m+1,n} = \big(A + F_{m,n}\big)\,\Delta_{m,n},$$

and, letting $F_{m,n} := F\big(g^m(x/\rho^{n+1}),\, g^{m-1}(x/\rho^n)\big)$, we get

$$g^{n+1}\big(x/\rho^{\,n+1}\big) - g^{n}\big(x/\rho^{\,n}\big) = \Delta_{n+1,n} = \prod_{m=1}^{n}\big(A + F_{m,n}\big)\,\rho^{-n}(u + v_n). \qquad (30)$$

Since $\|A\| = \rho$, by virtue of (29) and (28) we have $\|F_{m,n}\| \le C_1\rho^{\,m-n}$ and hence

$$\Big\|\prod_{m=k+2}^{n}\big(A + F_{m,n}\big)\Big\| \le C_2\,\rho^{\,n-k-1}, \qquad 0 \le k \le n - 1, \qquad (31)$$

so that, in particular, $\big\|\prod_{m=1}^{n}(A + F_{m,n})\,\rho^{-n}v_n\big\| \le C_3\rho^{-n}$. To handle the contribution of $u$, expand the product:

$$\prod_{m=1}^{n}\big(A + F_{m,n}\big) = A^n + \sum_{k=0}^{n-1}\ \prod_{m=k+2}^{n}\big(A + F_{m,n}\big)\,F_{k+1,n}\,A^{k}.$$

Since $u \in E_0$ and $E_0$ is invariant under $A$, we have $\|A^k u\| \le (1/2)^k\|u\|$ due to (24). Therefore, for all $k = 0, \dots, n-1$,

$$\Big\|\prod_{m=k+2}^{n}\big(A + F_{m,n}\big)\,F_{k+1,n}\,A^{k}u\Big\| \le C_2\rho^{\,n-k-1}\cdot C_1\rho^{\,k-n}\cdot 2^{-k}\|u\| \le C_4\,2^{-k}\|u\|,$$

where all $C_j$'s depend continuously on $x$. Consequently,

$$\Big\|\prod_{m=1}^{n}\big(A + F_{m,n}\big)\,\rho^{-n}u\Big\| \le C_6\,\rho^{-n}.$$

Plugging this and (31) into (30) yields

$$\big\|g^{n+1}\big(x/\rho^{\,n+1}\big) - g^{n}\big(x/\rho^{\,n}\big)\big\| \le C_7\,\rho^{-n},$$

where $C_7$ depends continuously on $x$. This implies that $g^n(x/\rho^n)$ converges as $n \to \infty$, uniformly on compacts. Existence of the limit $H(x)$ in (13) now follows from (23), and Theorem 2 is proved.

The main approximation
In this section we construct the random variable $W$ and prove the convergence (11). To this end, let $U(n, j)$ and $V(n, j)$ be i.i.d. random variables, uniformly distributed on the unit interval $[0, 1]$, and define $\xi_1(n, j)$ and $\xi_2(n, j)$ in (1) as

$$\xi_1(n, j) = 2\cdot\mathbf 1\big\{U(n, j) \le p_1\big(X(n)\big)\big\}, \qquad \xi_2(n, j) = 2\cdot\mathbf 1\big\{V(n, j) \le p_2\big(X(n)\big)\big\}. \qquad (32)$$

Define the Galton-Watson branching process $Y(n)$ with components

$$Y_1(n) = \sum_{j=1}^{Y_1(n-1)} 2\cdot\mathbf 1\big\{U(n, j) \le \tfrac12\big\}, \qquad Y_2(n) = \sum_{j=1}^{Y_2(n-1)} 2\cdot\mathbf 1\big\{V(n, j) \le \tfrac\rho2\big\},$$

with $Y_1(0) = [a_1K]$ and $Y_2(0) = 1$, and the corresponding density process $\bar Y(n) = Y(n)/K$. Note that $Y_2(n)$ coincides in distribution with the process defined in (12) and

$$Y_2(n)/\rho^{\,n} \xrightarrow[n\to\infty]{a.s.} W. \qquad (33)$$

Finally, for a fixed constant $c \in (\tfrac12, 1]$ define

$$n_c = n_c(K) := \big[\log_\rho K^{\,c}\big] = \big[c\log_\rho K\big].$$

In particular, $n_1(K) = [b\log K] = [\log_\rho K]$ with $b = 1/\log\rho$, cf. Theorem 1.
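The thresholding construction can be written out directly. A minimal sketch, with our own variable names and an illustrative density state: the thresholded uniforms reproduce the splitting probabilities, and a shared uniform makes the offspring of the mutant line of $Z$ and of the coupled Galton-Watson copy disagree only when the uniform lands between the two thresholds.

```python
import random

rng = random.Random(42)
a1, a2, g = 1.0, 1.5, 0.5
x1, x2 = 1.0, 0.2                  # a density state some way from x_re
p2x = a2 / (a2 + g * x1 + x2)      # state-dependent mutant splitting probability
p2c = a2 / (a2 + g * a1)           # frozen probability driving Y_2, equal to rho/2

N = 100_000
split = agree = 0
for _ in range(N):
    v = rng.random()               # one shared uniform V(n, j)
    xi = 2 * (v <= p2x)            # mutant offspring in Z
    zeta = 2 * (v <= p2c)          # offspring in the coupled Galton-Watson copy
    split += xi == 2
    agree += xi == zeta
frac_xi = split / N
freq_match = agree / N
# Disagreement happens only when v falls between the thresholds,
# an event of probability |p2x - p2c|.
```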

Proof of (15)
Since $Y_1(n)$ is a critical branching process with offspring variance $4\cdot\tfrac12\cdot\tfrac12 = 1$,

$$\mathbb E\big(Y_1(n) - Y_1(0)\big)^2 = n\,[a_1K], \qquad (34)$$

and hence, for any $c \in (\tfrac12, 1)$,

$$\rho^{-n_c}\big(Y_1(n_c) - Y_1(0)\big) \xrightarrow[K\to\infty]{\mathbb P} 0. \qquad (35)$$

This along with (33) implies (17), and in view of the representation (16), the limit in (15) follows by the continuous mapping theorem and the uniform convergence in (13).

Proof of (14)
Since

$$\bar Z(n_1) - \widetilde Z(n_1) = \Big(\bar Z(n_1) - f^{\,n_1-n_c}\big(\bar Z(n_c)\big)\Big) + \Big(f^{\,n_1-n_c}\big(\bar Z(n_c)\big) - f^{\,n_1-n_c}\big(\bar Y(n_c)\big)\Big),$$

where $\widetilde Z(n_1) = f^{\,n_1-n_c}\big(\bar Y(n_c)\big)$ is the gluing approximation, it suffices to prove that

$$\bar Z(n_1) - f^{\,n_1-n_c}\big(\bar Z(n_c)\big) \xrightarrow[K\to\infty]{\mathbb P} 0 \qquad (36)$$

and

$$f^{\,n_1-n_c}\big(\bar Z(n_c)\big) - f^{\,n_1-n_c}\big(\bar Y(n_c)\big) \xrightarrow[K\to\infty]{\mathbb P} 0. \qquad (37)$$

Let us first prove (36). Recall that the density process $X(n) = \bar Z(n)$ solves (6), and hence the difference $\delta(n) := \bar Z(n) - f^{\,n-n_c}\big(\bar Z(n_c)\big)$ satisfies

$$\delta(n) = f\big(\bar Z(n-1)\big) - f\Big(f^{\,n-1-n_c}\big(\bar Z(n_c)\big)\Big) + \frac{1}{\sqrt K}\,\varepsilon(n),$$

subject to $\delta(n_c) = 0$. A direct calculation shows that the Jacobian of $f(\cdot)$ is bounded,

$$\sup_{x\in\mathbb R^2_+}\|\nabla f(x)\| \le \bar\rho := 2.$$

Hence $f(\cdot)$ is $\bar\rho$-Lipschitz on $\mathbb R^2_+$ with respect to the $\infty$-norm and

$$\|\delta(n)\| \le \bar\rho\,\|\delta(n-1)\| + \frac{1}{\sqrt K}\,\|\varepsilon(n)\|.$$

Let $\beta := \log_\rho\bar\rho > 1$; then

$$\mathbb E\,\|\delta(n_1)\| \le \frac{C}{\sqrt K}\,\bar\rho^{\,n_1-n_c} \le C_1 K^{\,\beta(1-c) - 1/2} \xrightarrow[K\to\infty]{} 0,$$

where the convergence holds if $c$ is chosen close enough to 1, namely so that $\beta(1-c) < 1/2$. This proves (36).
To check (37), write both terms in the form $f^{\,n_1-n_c}\big(x^{re} + \rho^{-(n_1-n_c)}\cdot\big)$, cf. (16). Since, by (33) and (35), the sequence $\rho^{-n_c}\big(Y(n_c) - Kx^{re}\big)$ converges to $(0, W)$ in probability and, by Theorem 2, the sequence $f^n(x^{re} + x/\rho^n)$ converges uniformly on compacts to $H(x)$, it suffices to show that

$$\rho^{-n_c}\big(Z_j(n_c) - Y_j(n_c)\big) \xrightarrow[K\to\infty]{\mathbb P} 0, \qquad j = 1, 2, \qquad (38)$$

where $c \in (\tfrac12, 1)$ has already been fixed in the previous calculations. We prove (38) for $j = 2$, omitting the similar proof for the case $j = 1$.