1 Introduction

1.1 Brief Description of the Model and Framework

This paper studies fluctuations of a corner growth model that can be viewed as a discrete analogue of the Hammersley process [29] or an independent analogue of the longest common subsequence (LCS) problem, introduced in [10].

The model under consideration was introduced in [46]. It is a directed corner growth model on the positive quadrant \({\mathbb {Z}}_+^2\). Each site \(\mathbf{v}\) of \({\mathbb {Z}}^2_+\) is assigned a random weight \(\omega _\mathbf{v}\). The collection \(\{\omega _\mathbf{v}\}_{\mathbf{v} \in {\mathbb {Z}}_+^2}\) is the random environment and it is i.i.d. under the environment measure \({\mathbb {P}}\), with Bernoulli marginals

$$\begin{aligned} {\mathbb {P}}\{ \omega _\mathbf{v} = 1\} = p, \quad {\mathbb {P}}\{ \omega _\mathbf{v}= 0\} = 1 -p. \end{aligned}$$

Throughout the article we exclude the values \(p = 0\) and \(p = 1\). One way to view the environment is to treat site \(\mathbf{v}\) as present when \(\omega _\mathbf{v} = 1\) and as deleted when \(\omega _\mathbf{v} = 0\). With this interpretation, the longest strictly increasing Bernoulli path up to \((m,n)\) is a sequence of present sites

$$\begin{aligned} \pi ^{\max }_{m,n} = \{ \mathbf{v}_1 = (i_1, j_1), \mathbf{v}_2 = (i_2, j_2), \ldots , \mathbf{v}_M =(i_M, j_M)\} \end{aligned}$$

so that \(0< i_1< i_2< \cdots < i_M \le m\) and \( 0< j_1< j_2< \cdots < j_M \le n\), and so that if \(\{\mathbf{w}_1, \mathbf{w}_2, \ldots , \mathbf{w}_K\}\) is a different strictly increasing sequence of present sites, then it must be the case that \(K \le M\). The cardinality of \( \pi ^{\max }_{m,n} \) is a random variable, denoted by \(L_{m,n}\); it is the maximum number of Bernoulli points that one can collect on a strictly increasing path up to the point \((m,n)\).

In this article we cast the random variable \(L_{m,n}\) as a last passage time in the framework of [23]. With the previous description, an admissible step of a potential optimal path up to \((m,n)\) can take one of \(O(mn)\) values: any site is accessible as long as it has strictly larger coordinates than the previous site. However, any integer vector with positive coordinates can be written as a linear combination of \(e_1, e_2\) and \(e_1+e_2\) steps. Our set of admissible steps is then restricted to \({\mathcal {R}} = \{ e_1, e_2, e_1+e_2\}\) and an admissible path from (0, 0) to \((m,n)\) is an ordered sequence of sites

$$\begin{aligned} \pi _{0, (m,n)} = \{0 =\mathbf{v}_0, \mathbf{v}_1, \mathbf{v}_2, \ldots , \mathbf{v}_M = (m,n)\}, \end{aligned}$$

so that \(\mathbf{v}_{k+1} - \mathbf{v}_k \in {\mathcal {R}}\). The collection of all these paths is denoted by \(\Pi _{0, (m,n)}\). In order to obtain the same variable \(L_{m,n}\) over this set of paths as the one from only strictly increasing steps, we need to specify the measurable potential function \(V(\omega , z):{\mathbb {R}}^{{\mathbb {Z}}^2_+}\times {\mathcal {R}}\rightarrow {\mathbb {R}}\) defined by

$$\begin{aligned} V(\omega , z) = \omega _{e_1 + e_2}{\mathbb {1}}\{ z = e_1+e_2\}. \end{aligned}$$
(1.1)

This way, the path \(\pi \) will collect the Bernoulli weight at site \(\mathbf{v}\) if and only if there exists a k such that \(\mathbf{v}_{k+1} = \mathbf{v}\) and \(\mathbf{v}_{k} = \mathbf{v} - e_1 - e_2\). No gain can be made through a horizontal or vertical step. Using this potential function V we define the last passage time as

$$\begin{aligned} G^V_{0,(m,n)} = \max _{\pi \in \Pi _{0,(m,n)}} \bigg \{ \sum _{ v_i \in \pi } V(T_{\mathbf{v}_i}\omega , \mathbf{v}_{i+1}-\mathbf{v}_i) \bigg \}. \end{aligned}$$
(1.2)

Above we used \(T_{\mathbf{v}_i}\) to denote the environment shift by \(\mathbf{v}_i\) in \({\mathbb {Z}}^2_+\). One can now see that \(G^V_{0,(m,n)} = L_{m,n}\).
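Since the maximum in (1.2) is over finitely many admissible paths, \(G^V_{0,(m,n)}\) can be computed by dynamic programming; the recursion used is the one stated formally in (3.2) below. The following is a minimal Python sketch (the function name is a hypothetical choice of ours, not code from any reference):

```python
import numpy as np

def discrete_hammersley_lpp(omega):
    """G[i, j] = G^V_{0,(i,j)} with steps {e1, e2, e1+e2}; only the
    diagonal step collects the Bernoulli weight, as in (1.1)-(1.2)."""
    m, n = omega.shape
    G = np.zeros((m, n), dtype=int)  # G vanishes along both axes
    for i in range(1, m):
        for j in range(1, n):
            G[i, j] = max(G[i - 1, j],                    # e1 step: no weight
                          G[i, j - 1],                    # e2 step: no weight
                          G[i - 1, j - 1] + omega[i, j])  # e1+e2 step: collects omega
    return G

rng = np.random.default_rng(0)
p, m = 0.5, 200
omega = (rng.random((m + 1, m + 1)) < p).astype(int)  # i.i.d. Ber(p) environment
print(discrete_hammersley_lpp(omega)[m, m] / m)       # near g_pp(1,1) = (2*sqrt(p)-2p)/(1-p)
```

The output is also \(L_{m,m}\), the maximal number of Bernoulli points collected on a strictly increasing path.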

The law of large numbers for \(G^V_{0,(m,n)}\) was first obtained in [46]. To be precise, what was shown is the following: There exists an explicit function \(g^{(p)}_{pp}(s,t)\) that only depends on the environment parameter p so that for any \((s,t) \in {\mathbb {R}}^2_+\), the law of large numbers is given by

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{ G^V_{0,(\left\lfloor {ns}\right\rfloor ,\left\lfloor {nt}\right\rfloor )}}{n} = g^{(p)}_{pp}(s,t) ={\left\{ \begin{array}{ll} s, &{}\quad t\ge \frac{s}{p},\\ \frac{1}{1-p}\big (2\sqrt{pst}-p(t+s)\big ), &{}\quad ps\le t<\frac{s}{p},\\ t, &{}\quad t\le ps. \end{array}\right. } \end{aligned}$$
(1.3)

The function \(g^{(p)}_{pp}(s,t)\) is the point-to-point shape function. This is a concave, symmetric, 1-homogeneous differentiable function which is continuous up to the boundaries of \({\mathbb {R}}^2_+\). It was the first completely explicit shape function for which strict concavity is not valid. In fact, the formula indicates two flat edges, for \(t > s/p\) or \(t < ps\). The result was proven by first obtaining invariant distributions for an embedded totally asymmetric particle system.

It is precisely this methodology that invites the characterization ‘discrete Hammersley process’, as the particle system can be viewed as a discretized version of the Aldous–Diaconis process [1], which finds the law of large numbers limit for the number of Poisson(1) points that can be collected by a strictly increasing path in \({\mathbb {R}}^2_+\).

The original problem is known as Ulam’s problem in the literature, and it concerns the limiting law of large numbers for the length of the longest increasing subsequence (LIS) of a random permutation of the first n numbers, denoted by \(I_n\). Already in [18] it was shown that \(I_n \ge \sqrt{n}\), and an elementary proof via a pigeonhole argument can be found in [29]. This gave the correct scaling, and it was proven in [33, 50] that the limiting constant of \( n^{-1/2}I_n\) is 2. The combinatorial arguments of these papers were later replaced by softer probabilistic arguments in [1, 27, 45], where the full law of large numbers was obtained for a sequence of increasing Poisson points.

The argument used in [46] to obtain the formula in directions of the flat edge can also be used in an identical way to obtain the law of large numbers in the same directions for the much more correlated longest common subsequence (LCS) model [10]. Comparisons between the discrete Hammersley and the LCS are tantalizing. The Bernoulli environment \(\eta = \{\eta _{i,j}\}\) for the LCS model is uniquely determined by two infinite random strings \(\mathbf{x} = (x_1, x_2, \ldots )\) and \(\mathbf{y} = (y_1, y_2, \ldots )\) where each digit is uniformly chosen from a k-ary alphabet (i.e. \(x_i, y_j \in \{1, 2, \ldots , k \}\)). Then the environment is \(\eta _{i,j} = \mathbb {1}\{ x_i = y_j\}\), and it takes the value 1 with probability \(p = 1/k\). The random variable \({\mathcal {L}}_{n,n}^{(k)}\) represents the longest increasing sequence of Bernoulli points in this environment, which corresponds to the longest common subsequence between the first n characters of the two words. The limit \(c_k = \lim _{n \rightarrow \infty } n^{-1} {\mathcal {L}}_{n,n}^{(k)} \) is known in the literature as the Chvátal–Sankoff constant, and it was already observed in [46] that \(g^{(1/k)}_{pp}(1,1)\) of the discrete Hammersley lies between the known computational upper and lower bounds for \(c_k\).
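To make the comparison concrete, here is a small hedged simulation sketch (hypothetical code, in the spirit of the sketch above): it builds the correlated LCS environment \(\eta _{i,j} = \mathbb {1}\{x_i = y_j\}\) and an i.i.d. Ber(1/k) environment, and runs the same last passage recursion on both; the first output approximates \(c_k\), the second \(g^{(1/k)}_{pp}(1,1)\).

```python
import numpy as np

def lpp(w):
    # last passage recursion of (1.2): only diagonal steps collect weight
    G = np.zeros(w.shape, dtype=int)
    for i in range(1, w.shape[0]):
        for j in range(1, w.shape[1]):
            G[i, j] = max(G[i - 1, j], G[i, j - 1], G[i - 1, j - 1] + w[i, j])
    return G[-1, -1]

rng = np.random.default_rng(1)
k, n = 4, 500
x = rng.integers(1, k + 1, size=n + 1)  # k-ary random strings; index 0 is unused
y = rng.integers(1, k + 1, size=n + 1)
eta = (x[:, None] == y[None, :]).astype(int)              # correlated LCS environment
omega = (rng.random((n + 1, n + 1)) < 1 / k).astype(int)  # i.i.d. Ber(1/k)

print(lpp(eta) / n)    # approximates the Chvatal-Sankoff constant c_k
print(lpp(omega) / n)  # approximates g_pp^{(1/k)}(1,1)
```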

A formal connection between the discrete Hammersley, LCS and Hammersley models arises in the small p (large alphabet size k) limit. Sankoff and Mainville conjectured in [44] that

$$\begin{aligned} \lim _{k\rightarrow \infty }\frac{c_k}{\sqrt{k}}=2. \end{aligned}$$

For the discrete Hammersley model this is an immediate computation from (1.3) with \(p= 1/k\) when we replace \(c_k\) by \(g^{(1/k)}_{pp}(1,1)\). For the LCS, this was proven in [31]. The value 2 is the limiting law of large numbers constant for the longest increasing sequence of Poisson points in \({\mathbb {R}}^2_+\).
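Indeed, substituting \(s = t = 1\) and \(p = 1/k\) into the middle branch of (1.3) gives

$$\begin{aligned} \sqrt{k}\, g^{(1/k)}_{pp}(1,1) = \frac{\sqrt{k}}{1-1/k}\Big ( \frac{2}{\sqrt{k}} - \frac{2}{k}\Big ) = \frac{2-2/\sqrt{k}}{1-1/k} \longrightarrow 2, \quad \text {as } k \rightarrow \infty . \end{aligned}$$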

1.2 Solvable Models of Lattice Last Passage Percolation and KPZ Exponents

Identifying the explicit shape function is the first step in computing fluctuations and scaling limits for last passage time quantities. When precise calculations can be performed and explicit scaling laws can be computed, the model is classified as an explicitly solvable model of last passage percolation. There are only a handful of these models, and each one requires individual treatment.

In [3] it is proven that the fluctuations around the mean of the longest increasing subsequence (LIS) of n numbers are of order \(n^{1/6}\) and the scaling limit is a Tracy–Widom distribution, using a determinantal approach. The fluctuation exponent \(1/3\) is often used to associate a model to the Kardar–Parisi–Zhang (KPZ) class (see [11] for a review), and determinantal/combinatorial approaches were developed for a variety of solvable growth models in order to compute, among other things, explicit weak limits and formulas for Laplace transforms of last passage times and polymer partition functions. Lattice examples include the corner growth model with i.i.d. geometric weights (admissible steps \(e_1, e_2\)) [30], the log-gamma polymer [8, 12], introduced in [48], the Brownian polymer [36, 49], the strict-weak lattice polymer [13, 37], and the random walk in a beta-distributed random potential, whose zero-temperature limit is the Bernoulli–Exponential first passage percolation [5]. Particularly for percolation in a Bernoulli environment see [26], where Tracy–Widom distributions were obtained for a class of models that also includes the homogeneous model of [47]. The result of [30] was also used to derive explicit formulas for the discrete Hammersley [39] with no boundaries via a particle system coupling, using a mathematical physics approach.

It is expected that, under some minimal moment conditions, the fluctuation exponent \(1/3\) for the last passage time or the polymer partition function is environment-independent. A general theory that is a step towards universality can be found at the law of large numbers level [23, 41,42,43], where a series of variational formulas for the limiting free energy density of polymer models and shape functions for last passage percolation were proven. A variational formula for the time constant in first passage percolation was proven in [32]. For two-dimensional last passage models with \(e_1, e_2\) admissible steps the analysis and results can be sharpened; early universal results on the shape near the edge were obtained in [7, 35]. A general approach and a range of results, including solutions to the variational formulas and existence of directional geodesics using invariant boundary models, were developed via the use of cocycles in [24, 25]. Similar techniques are utilized in the present article, since we prove the existence of an invariant boundary model for the discrete Hammersley.

A more probabilistic approach to estimate the order of the variance (but not the explicit scaling limit) was developed in [9, 28], where, by adding Poisson distributed ‘sinks’ and ‘sources’ on the axes, invariant versions of the model could be created. For the discrete Hammersley, an invariant model with sinks and sources has been described in [6] and was used to re-derive the law of large numbers for \(G^V_{0,(m,n)}\). In the present article we show another way to use boundaries on the axes and create invariant boundary models. Our approach is similar to those in [4, 48, 49], where a Burke-type property is first proven for the model with boundary and then exploited to obtain the order of fluctuations.

1.3 The Flat Edge in Lattice Percolation Models

The discrete Hammersley is a model for which the shape function \(g^{(p)}_{pp}(s,t)\) exhibits two flat edges, for any value of p. A flat edge in percolation is not uncommon. A flat edge for the contact process was observed in [15, 16]. A simple explicitly solvable first passage (oriented) bond percolation model introduced in [47] allows for an exact derivation of the limiting shape function, and it also exhibits a flat edge. In this model the random weight is collected only via a horizontal step, while vertical steps have a deterministic cost. For i.i.d. oriented bond percolation, where each lattice edge admits a random Bernoulli weight, a flat edge result for the shape was proved in [14] when the probability of success p is larger than some critical value and percolation occurs. This was later extended in [34], where further properties were derived. In [2] differentiability of the shape at the onset of the flat edge was proven.

These properties for oriented bond percolation can be transported to oriented site percolation and further extended to corner growth models when the environment distribution has a percolating maximum. For a general treatment to this effect, for non-exactly solvable models, see Section 3.2 in [25]. For directed percolation in a correlated environment, a shape result with flat edges can be found in [17].

Local laws of large numbers for the passage time near the flat edge of the discrete Hammersley model can be found in [21]. This work was later extended in [22], where limiting Tracy–Widom laws were obtained in special cases, using also the edge results of [7]. These ‘edge results’ are for the last passage time in directions that are below the critical line \((n, n/p)\) and into the concave region of \(g^{(p)}_{pp}\) by a mesoscopic term of \(n^a\), \(0< a < 1\). When \(a > 1/2\) the order of the fluctuations is between \(O(n^{1/3})\) and O(1). In the present article we further prove that in directions above the critical line (in the flat edge of \(g^{(p)}_{pp}\)) the variance of the passage time is bounded above by a constant that tends to 0 (see Sect. 7).

1.4 Structure of the Paper

The paper is organised as follows: In Sect. 2 we state our main results after describing the boundary model. In Sect. 3 we prove Burke’s property for the invariant boundary model and compute the solution to the variational formula that gives the law of large numbers for the shape function of the model without boundaries. The main theorem of this paper is the order of the variance of the model with boundaries in characteristic directions. The upper bound for the order can be found in Sect. 4. The lower bound is proven in Sect. 5. For the order of the variance in off-characteristic directions see Sect. 6 and for the results for the model with no boundaries, including the order of the variance in directions in the flat edge see Sect. 7. Finally, in Sect. 8 we prove the path fluctuations in the characteristic direction, again in the model with boundaries.

1.5 Common Notation

Throughout the paper, \({\mathbb {N}}\) denotes the natural numbers, and \({\mathbb {Z}}_+\) the non-negative integers. When we write an inequality between two vectors \(\mathbf{v} = (k, \ell ) \le \mathbf{w} = (m,n)\) we mean \(k \le m\) and \(\ell \le n\). Boldface letters, e.g. \(\mathbf{v}\), denote two-dimensional vectors that can substitute for index pairs, e.g. \((m,n)\), when the notation becomes heavy.

We reserve the symbol G for last passage times. We omit from the notation the superscript V that was used to denote the dependence on the potential function in (1.2), since in the sequel we fix V as in (1.1), unless otherwise mentioned. So far we have only introduced passage times from \(0=(0,0)\); when the starting point is 0, we simplify the notation slightly by omitting it from the subscript, and denote \(G_{0, (m,n)}\) simply by \(G_{m,n}\). We will need passage times from an initial point \((k, \ell )\) to \((m,n)\), for arbitrary \((k, \ell ) \le (m,n)\). In that case passage times are denoted by \(G_{(k, \ell ), (m,n)}\).

Throughout the presentation we make use of several versions of passage times, all denoted by G. We have compiled a table of these symbols in the Appendix, and we also alert the reader to each version or define it in the main text.

The symbol \(\pi \) is reserved for a generic admissible path.

So far, the symbol \(g^{(p)}_{pp}\) was used for the point-to-point shape function, when the environment parameter is p. From this point onwards, the superscript (p) will be omitted as the intended bulk environment parameter is always p.

2 The Model and Its Invariant Version

2.1 The Invariant Boundary Model

The boundary model has altered distributions of weights on the two axes. The new environment there will depend on a parameter \(u \in (0,1)\) that will be under our control. Each u defines different boundary distributions. At the origin we set \(\omega _0 = 0\). For weights on the horizontal axis, for any \(k \in {\mathbb {N}}\) we set \( \omega _{ke_1} \sim \text {Bernoulli}( u ), \) with independent marginals

$$\begin{aligned} {\mathbb {P}}\{ \omega _{ke_1} = 1\} = u= 1 - {\mathbb {P}}\{ \omega _{ke_1} = 0\}. \end{aligned}$$
(2.1)

On the vertical axis, for any \(k \in {\mathbb {N}}\), we set \(\omega _{ke_2} \sim \text {Bernoulli}\Big ( \frac{p(1-u)}{u + p(1-u)} \Big )\) with independent marginals

$$\begin{aligned} {\mathbb {P}}\{ \omega _{ke_2} = 1 \} = \frac{p(1-u)}{u + p(1-u)} = 1 - {\mathbb {P}}\{ \omega _{ke_2} = 0\}. \end{aligned}$$
(2.2)

The environment in the bulk \(\{ \omega _\mathbf{v}\}_{\mathbf{v} \in {\mathbb {N}}^2}\) remains unchanged with i.i.d. Ber(p) marginal distributions. Denote this environment by \(\omega ^{(u)}\) to emphasise the different distributions on the axes that depend on u.

In summary, the coordinates of \(\omega ^{(u)}\) are independent under a background environment measure \({\mathbb {P}}\), with marginals

$$\begin{aligned} \omega ^{(u)}_{i,j} \sim {\left\{ \begin{array}{ll} \text {Ber}(p), &{}\quad \text { if }\, (i,j) \in {\mathbb {N}}^2,\\ \text {Ber}(u), &{}\quad \text { if }\, i \in {\mathbb {N}}, j =0,\\ \text {Ber}\Big (\frac{p(1-u)}{u + p(1-u)} \Big ), &{}\quad \text { if }\, i =0, j \in {\mathbb {N}}, \\ 0, &{}\quad \text { if }\, i =0, j =0. \end{array}\right. } \end{aligned}$$
(2.3)

In this environment we slightly alter the way a path can collect weight on the boundaries. Consider any path \(\pi \) from 0. If the path moves horizontally before entering the bulk, then it collects the Bernoulli(u) weights until it takes the first vertical step, and after that, it collects weight according to the potential function (1.1). If \(\pi \) moves vertically from 0 then it also collects the Bernoulli weights on the vertical axis, and after it enters the bulk, it collects according to (1.1).

Fix a parameter \(u \in (0, 1)\). Denote the last passage time from 0 to \(\mathbf{v}\) in environment \(\omega ^{(u)}\) by \(G^{(u)}_{0, \mathbf v}\). The variational equality, using the above description, is

$$\begin{aligned} G^{(u)}_{0, \mathbf v} =&\max _{1\le k \le \mathbf{v}\cdot e_1}\max _{z \in \{e_2, e_1+e_2\}} \left\{ \sum _{i=1}^k \omega ^{(u)}_{i e_1} + V(T_{ke_1}\omega ^{(u)}, z) + G_{ke_1+z, \mathbf v} \right\} \nonumber \\&\bigvee \max _{1\le k \le \mathbf{v}\cdot e_2} \max _{z \in \{e_1, e_1+e_2\}} \left\{ \sum _{j=1}^k \omega ^{(u)}_{je_2} + V(T_{ke_2}\omega ^{(u)}, z) +G_{z+ke_2, \mathbf v} \right\} . \end{aligned}$$
(2.4)

If the starting point is (0, 0) and no confusion arises, we simply denote \(G_{(0,0),(m,n)}\) or \(G^{(u)}_{(0,0),(m,n)}\) by \(G_{m,n}\) or \(G^{(u)}_{m,n}\). Our first two statements give the explicit formula for the shape function.
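For intuition about Theorem 2.1 below, here is a hedged simulation sketch of the boundary model (hypothetical Python, not part of the paper): along the axes the passage time is a cumulative sum of the boundary weights in (2.3), while in the bulk one applies the recursion proved later in Lemma 3.1.

```python
import numpy as np

def boundary_lpp(m, n, p, u, rng):
    """Sample G^{(u)}_{i,j}, 0 <= i <= m, 0 <= j <= n, per (2.3) and (3.2)."""
    q = p * (1 - u) / (u + p * (1 - u))        # vertical boundary parameter
    G = np.zeros((m + 1, n + 1), dtype=int)
    G[1:, 0] = np.cumsum(rng.random(m) < u)    # Ber(u) weights on the x-axis
    G[0, 1:] = np.cumsum(rng.random(n) < q)    # Ber(q) weights on the y-axis
    omega = rng.random((m + 1, n + 1)) < p     # Ber(p) bulk weights
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            G[i, j] = max(G[i - 1, j], G[i, j - 1], G[i - 1, j - 1] + omega[i, j])
    return G

rng = np.random.default_rng(2)
G = boundary_lpp(300, 300, p=0.5, u=0.6, rng=rng)
print(G[300, 300] / 300)  # close to u + p(1-u)/(u+p(1-u)) = 0.85, by (2.5)
```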

Theorem 2.1

(Law of large numbers for \(G^{(u)}_{\left\lfloor {Ns}\right\rfloor , \left\lfloor {Nt}\right\rfloor }\)) For fixed parameter \(0 < u \le 1\) and \((s,t) \in {\mathbb {R}}^2_+\) we have

$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{G^{(u)}_{\left\lfloor {Ns}\right\rfloor , \left\lfloor {Nt}\right\rfloor }}{N} = su + t\, \frac{p(1-u)}{ u + p(1-u) } , \quad {\mathbb {P}}-a.s. \end{aligned}$$
(2.5)

The result of the next theorem has been proven in [6, 46] using two different techniques. In [46] the last passage time process is embedded in a totally asymmetric exclusion-type process, for which invariant distributions for the inter-particle distances were inferred. Using those and a hydrodynamic limit, the Legendre transform of the level curve of the shape function is explicitly computed and then inverted in order to obtain the shape function. A more geometric approach was used in [6], where the limit was obtained by finding the invariant model using sources and sinks of particles on the boundaries and creating level surfaces for the last passage time; in fact there is a correspondence between these level surfaces and the positions of particles in the particle system. In the present proof we only use algebraic properties arising from Burke’s property.

Theorem 2.2

Fix p in (0, 1) and \((s,t) \in {\mathbb {R}}^2_+\). Then we have the explicit law of large numbers limit

$$\begin{aligned} \lim _{N \rightarrow \infty } \frac{G_{\left\lfloor {Ns}\right\rfloor , \left\lfloor {Nt}\right\rfloor }}{N}&= \inf _{0< u \le 1}\left\{ s {\mathbb {E}}(\omega ^{(u)}_{1,0}) + t {\mathbb {E}}(\omega ^{(u)}_{0,1})\right\} \nonumber \\&={\left\{ \begin{array}{ll} s, &{}\quad t\ge \frac{s}{p},\\ \frac{1}{1-p}\big (2\sqrt{pst}-p(t+s)\big ),&{}\quad ps\le t<\frac{s}{p},\\ t, &{}\quad t\le ps. \end{array}\right. } \end{aligned}$$
(2.6)

The main theorems of this article establish, with probabilistic techniques, the order of the variance of \(G^{(u)}\) along deterministic directions. For a given boundary parameter u, there exists a unique direction \((m_u, n_u)\) along which the last passage time at point \(N(m_u, n_u)\) has variance of order \(N^{2/3}\) for large N. That is what we call the characteristic direction. The form of the characteristic direction will become apparent from the variance formula in Eq. (4.5); it is precisely the direction for which the higher order variance terms cancel out. As it turns out, the characteristic direction is

$$\begin{aligned} (m_{u}(N), n_{u}(N))=\left( N, \Big \lfloor \frac{N}{p}\big (p + (1-p)u\big )^2\Big \rfloor \right) . \end{aligned}$$
(2.7)

Throughout the paper we will often compare last passage times over two different boundaries that have different characteristic directions. For this reason we explicitly denote the parameter in the subscript.

Note that as \(N \rightarrow \infty \), the scaled direction converges to the macroscopic characteristic direction

$$\begin{aligned} N^{-1}(m_{u}(N), n_{u}(N)) \rightarrow \left( 1, \frac{\big (p+(1-p)u\big )^2}{p}\right) , \end{aligned}$$
(2.8)

which gives that for large enough N the endpoint \((m_{u}(N), n_{u}(N))\) is always between the two critical lines \(y = \frac{x}{p}\) and \(y=px\) that separate the flat edges from the strictly concave part of \(g_{pp}\). This defines the macroscopic set of characteristic directions

$$\begin{aligned} {\mathfrak {J}}_p = \left\{ \left( 1, \frac{\big (p+(1-p)u\big )^2}{p}\right) : u \in (0,1) \right\} . \end{aligned}$$

Note that for any \((s,t) \in {\mathbb {R}}^2_+\) for which \((1,ts^{-1} ) \in {\mathfrak {J}}_p\), the shape function \(g_{pp}\) has strictly positive curvature at \((s,t)\).

Theorem 2.3

Fix a parameter \(u \in (0,1)\) and let \((m_u(N), n_u(N))\) be as in (2.7). Then there exist constants \(C_1\) and \(C_2\) depending on p and u so that

$$\begin{aligned} C_1 N^{2/3} \le {{\,\mathrm{Var}\,}}\left( G^{(u)}_{m_u(N), n_u(N)}\right) \le C_2N^{2/3}. \end{aligned}$$
(2.9)

In the off-characteristic direction, the process \(G^{(u)}_{m(N), n(N)}\) satisfies a central limit theorem, and therefore the variance is of order N. This is due to the boundary effect: we show that maximal paths spend a macroscopic number of steps along a boundary and enter the bulk at a point which creates a characteristic rectangle with the projected exit point.

Theorem 2.4

Fix a \(c\in {\mathbb {R}}\). Fix a parameter \(u \in (0,1)\) and let \((m_u, n_u)\) be the characteristic direction corresponding to u, as in (2.7). Then for \(\alpha \in (2/3, 1]\),

$$\begin{aligned}&\lim _{N\rightarrow \infty }\frac{G^{(u)}_{m_u(N), n_u(N)+\left\lfloor {c N^\alpha }\right\rfloor }-{\mathbb {E}}\left[ G^{(u)}_{m_u(N), n_u(N)+\left\lfloor {c N^\alpha }\right\rfloor }\right] }{N^{\alpha /2}}\\&\quad {\mathop {\longrightarrow }\limits ^{{\mathcal {D}}}} Z \sim \sqrt{|c| u(1-u)} {\mathcal {N}}\left( 0, \mathbb {1}\{c<0\}+\frac{p}{(u+p(1-u))^2} \mathbb {1}\{c>0\}\right) . \end{aligned}$$

Remark 2.5

The set \(\mathfrak {J}_p\) contains only the directions (1, t) for which \(p< t < 1/p\). Any other direction with \(t < p\) or \(t > p^{-1}\) (these also correspond to the flat edge of the non-boundary model) is, for an arbitrary \(u \in (0,1)\), necessarily an off-characteristic direction, and along those directions the last passage time satisfies a central limit theorem. \(\square \)

We also have partial results for the model without boundaries. Recall the definition of the function \(g_{pp}(x,y)\) in (1.3), where we have dropped the superscript. The approach does not allow direct access to the variance of the non-boundary model, but we have

Theorem 2.6

Fix \(x,y\in (0,\infty )\) so that \(p< y/x < p^{-1}\). Then, there exist finite constants \(N_0\) and \(C=C(x, y, p)\), such that, for \(b\ge C\), \(N\ge N_0\) and any \(0<\alpha <1\),

$$\begin{aligned} {\mathbb {P}}\left\{ |G_{(1,1),(\lfloor Nx\rfloor ,\lfloor Ny\rfloor )}-Ng_{pp}(x,y)|\ge bN^{1/3}\right\} \le Cb^{-3\alpha /2}. \end{aligned}$$
(2.10)

In particular, for \(N > N_0\), and \(1\le r<3\alpha /2\) we get the moment bound

$$\begin{aligned} {\mathbb {E}}\bigg [\bigg |\frac{G_{(1,1),(\lfloor Nx\rfloor ,\lfloor Ny\rfloor )}-Ng_{pp}(x,y)}{N^{1/3}}\bigg |^r\bigg ]\le C(x,y,p,r)<\infty . \end{aligned}$$
(2.11)

The bounds in the previous theorem hold in directions where the shape function is strictly concave. In directions in the flat edge we have

Theorem 2.7

Fix \(x,y\in (0,\infty )\) so that \(p > y/x\) or \( y/x > p^{-1}\). Then, there exist finite constants \(c = c(x,y,p)\) and \(C=C(x, y, p)\), such that

$$\begin{aligned} {{\,\mathrm{Var}\,}}(G_{(1,1),(\lfloor Nx\rfloor ,\lfloor Ny\rfloor )}) \le C N^2 e^{-cN}\rightarrow 0 \quad (N \rightarrow \infty ). \end{aligned}$$
(2.12)

For finer asymptotics on the variance and also weak limits, particularly close to the critical lines \(y = px\) and \(y = p^{-1}x\) we direct the reader to [21, 22].

We have already alluded to the maximal paths. Maximal paths are admissible paths that attain the last passage time. In the literature they are also referred to as random geodesics. For this particular model the maximal path is not unique, because of the discrete nature of the environment distribution, so we need to enforce an a priori condition that makes our choice unique when we refer to it. Unless otherwise specified, the maximal path we select is the right-most one (it is also the down-most maximal path).

Definition 2.8

An admissible maximal path from 0 to \((m,n)\)

$$\begin{aligned} {\hat{\pi }}_{0, (m,n)} = \{ (0,0) = {\hat{\pi }}_0, {\hat{\pi }}_1, \ldots , {\hat{\pi }}_K = (m,n) \} \end{aligned}$$

is the right-most (or down-most) maximal path if and only if it is maximal and, whenever \( {\hat{\pi }} _i = (v_i,w_i) \in {\hat{\pi }}_{0, (m,n)}\), the sites \(( k,\ell )\) with \(v_i< k< m\) and \(0 \le \ell < w_i\) cannot belong to any maximal path from 0 to \((m,n)\).

In words, no site underneath the right-most maximal path can belong to a different maximal path. An algorithm to construct the right-most path iteratively is given in (5.1).
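For illustration only, here is a hypothetical sketch (not the algorithm of (5.1)) of one natural forward construction: precompute the point-to-target passage times and, at each site, take a step that preserves optimality, preferring \(e_1\) over \(e_1+e_2\) over \(e_2\), which heuristically keeps the path as low as possible.

```python
import numpy as np

def down_most_path(omega):
    """Trace a maximal path from (0,0) to (m,n), preferring e1, then the
    diagonal, then e2, whenever the step preserves optimality."""
    m, n = omega.shape[0] - 1, omega.shape[1] - 1
    H = np.zeros((m + 2, n + 2), dtype=int)  # H[i,j] = passage time from (i,j) to (m,n)
    for i in range(m, -1, -1):
        for j in range(n, -1, -1):
            diag = omega[i + 1, j + 1] + H[i + 1, j + 1] if (i < m and j < n) else 0
            H[i, j] = max(H[i + 1, j], H[i, j + 1], diag)
    path, (i, j) = [(0, 0)], (0, 0)
    while (i, j) != (m, n):
        if i < m and H[i + 1, j] == H[i, j]:                                  # e1 step
            i += 1
        elif i < m and j < n and omega[i + 1, j + 1] + H[i + 1, j + 1] == H[i, j]:
            i, j = i + 1, j + 1                                               # diagonal
        else:
            j += 1                                                            # e2 step
        path.append((i, j))
    return path
```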

For this right-most path \({\hat{\pi }}\) we define \(\xi ^{(u)}\) to be its exit point from the axes in the environment \(\omega ^{(u)}\). We indicate with \(\xi ^{(u)}_{e_1}\) the exit point from the x-axis and with \(\xi ^{(u)}_{e_2}\) the exit point from the y-axis. If \(\xi ^{(u)}_{e_1}>0\) then the maximal path \({\hat{\pi }}\) chooses to go through the x-axis and \(\xi ^{(u)}_{e_2}=0\), and vice versa. If \(\xi ^{(u)}_{e_1}=\xi ^{(u)}_{e_2}=0\), the maximal path enters the bulk directly with a diagonal step. When we do not need to distinguish from which axis we exit, we just denote the generic exit point by \(\xi ^{(u)}\).

The exit point \(\xi ^{(u)}_{e_1}\) represents the exit of the maximal path from level 0. To study the fluctuations of this path around its enforced direction, define

$$\begin{aligned} v_0(j)=\min \{i\in \{0,\dots ,m\}:\exists k\text { such that } {\hat{\pi }}_k=(i,j)\}, \end{aligned}$$
(2.13)

and

$$\begin{aligned} v_1(j)=\max \{i\in \{0,\dots ,m\}:\exists k\text { such that } {\hat{\pi }}_k=(i,j)\}. \end{aligned}$$
(2.14)

These represent, respectively, the entry and exit points of the path \({\hat{\pi }}\) at a fixed horizontal level j. Since our paths can take diagonal steps, it may be that \(v_0(j) = v_1(j)\) for some j.

Now we can state the theorem which shows that \(N^{2/3}\) is the correct order of magnitude of the path fluctuations. We show that the path stays in an \(\ell ^1\) ball of radius \(CN^{2/3}\) with high probability and, simultaneously, avoids balls of radius \(\delta N^{2/3}\) with high probability for \(\delta \) small enough.

Theorem 2.9

Consider the last passage time in environment \(\omega ^{(u)}\) and let \({\hat{\pi }}_{0,(m_u(N),n_u(N))}\) be the right-most maximal path from the origin up to \((m_u(N),n_u(N))\), as in (2.7). Fix a \(0\le \tau <1\). Then there exist constants \(C_1,C_2<\infty \) such that for \(N\ge 1\) and \(b\ge C_1\),

$$\begin{aligned} {\mathbb {P}}\{v_0(\lfloor \tau n_u(N)\rfloor )<\tau m_u(N)-bN^{2/3}\text { or }v_1(\lfloor \tau n_u(N)\rfloor )>\tau m_u(N)+bN^{2/3}\}\le C_2b^{-3}.\nonumber \\ \end{aligned}$$
(2.15)

The same bound holds for vertical displacements.

Moreover, for a fixed \(\tau \in (0,1)\) and given \(\varepsilon >0\), there exists \(\delta >0\) such that

$$\begin{aligned} \lim _{N\rightarrow \infty }{\mathbb {P}}\big \{\exists k \text { such that } |{\hat{\pi }}_k-(\tau m_u(N),\tau n_u(N))|\le \delta N^{2/3}\big \}\le \varepsilon . \end{aligned}$$
(2.16)

3 Burke’s Property and Law of Large Numbers

In this section we prove the invariance property—traditionally called Burke’s property—that is satisfied by the environment variables in the model with boundary. We then use it to obtain the law of large numbers for the boundary model. After that we obtain the law of large numbers for the model without boundaries.

To simplify the notation in what follows, set \(w = (i,j) \in {\mathbb {Z}}_+^2\) and define the last passage time gradients by

$$\begin{aligned} I^{(u)}_{i+1,j} = G^{(u)}_{0, (i+1,j)} - G^{(u)}_{0, (i,j)}, \quad \text {and} \quad J^{(u)}_{i,j+1} = G^{(u)}_{0, (i,j+1)} - G^{(u)}_{0, (i,j)}. \end{aligned}$$
(3.1)

When there is no confusion we will drop the superscript (u) from the above. When \(j = 0\) we have that \(\{I^{(u)}_{i,0}\}_{i \in {\mathbb {N}}}\) is a collection of i.i.d. Bernoulli(u) random variables, since \(I^{(u)}_{i,0} = \omega _{(i,0)}\). Similarly, for \(i = 0\), \(\{J^{(u)}_{0,j}\}_{j \in {\mathbb {N}}}\) is a collection of i.i.d. \(\text {Bernoulli}\Big (\frac{p(1-u)}{u + p(1-u)} \Big )\) random variables.

The gradients and the passage time satisfy recursive equations. This is the content of the next lemma.

Lemma 3.1

Let \(u \in (0,1)\) and \((i, j) \in {\mathbb {N}}^2\). Then the last passage time can be recursively computed as

$$\begin{aligned} G^{(u)}_{0, (i, j)}&= \max \big \{ G^{(u)}_{0, (i, j-1)}, \,\,G^{(u)}_{0, (i-1, j)}, \,\,G^{(u)}_{0, (i-1, j-1)} + \omega _{i,j} \big \} \end{aligned}$$
(3.2)

Furthermore, the last passage time gradients satisfy the recursive equations

$$\begin{aligned} \begin{aligned} I^{(u)}_{i,j}&=\max \big \{\omega _{i,j}, \, J^{(u)}_{i-1,j}, \, I^{(u)}_{i,j-1}\big \} - J^{(u)}_{i-1,j} \\ J^{(u)}_{i,j}&= \max \big \{\omega _{i,j}, \, J^{(u)}_{i-1,j}, \, I^{(u)}_{i,j-1}\big \} - I^{(u)}_{i,j-1}. \end{aligned} \end{aligned}$$
(3.3)

Proof

Equation (3.2) is immediate from the description of the dynamics in the boundary model and the fact that (i, j) is in the bulk. We only prove the recursive equation (3.3) for J; the other one is proved similarly and left to the reader. Compute

$$\begin{aligned} J^{(u)}_{i,j}&=G^{(u)}_{0,(i,j)}-G^{(u)}_{0, (i,j-1)}\\&=\max \big \{ G^{(u)}_{0, (i, j-1)}, \,\,G^{(u)}_{0, (i-1, j)}, \,\,G^{(u)}_{0, (i-1, j-1)} + \omega _{i,j} \big \} - G^{(u)}_{0,(i,j-1)} \quad \text { by }(3.2),\\&=\max \big \{0, G^{(u)}_{0, (i-1,j)}-G^{(u)}_{0, (i,j-1)}, G^{(u)}_{0,(i-1,j-1)}-G^{(u)}_{0, (i,j-1)}+\omega _{i,j}\big \}\\&=\max \big \{0, G^{(u)}_{i-1,j} -G^{(u)}_{i-1,j-1}+G^{(u)}_{i-1,j-1} -G^{(u)}_{i,j-1}, G^{(u)}_{i-1,j-1}-G^{(u)}_{i,j-1}+\omega _{i,j}\big \}\\&=\max \big \{0, J^{(u)}_{i-1,j} - I^{(u)}_{i,j-1}, - I^{(u)}_{i,j-1} +\omega _{i,j}\big \}\\&= \max \big \{\omega _{i,j}, \, J^{(u)}_{i-1,j}, \, I^{(u)}_{i,j-1}\big \} - I^{(u)}_{i,j-1}. \end{aligned}$$

\(\square \)

The recursive equations are sufficient to prove a partial independence property.

Lemma 3.2

Assume that \((\omega _{i,j}, I^{(u)}_{i, j-1}, J^{(u)}_{i-1,j})\) are mutually independent with marginal distributions given by

$$\begin{aligned} \omega _{i,j}\sim \text {Ber}(p), \quad I^{(u)}_{i, j-1} \sim \text {Ber}(u), \quad J^{(u)}_{i-1,j} \sim \text {Ber}\Big (\frac{p(1-u)}{u + p(1-u)}\Big ). \end{aligned}$$
(3.4)

Then \(I^{(u)}_{i,j}\) and \(J^{(u)}_{i,j}\), computed using the recursive equations (3.3), are independent with marginals Ber(u) and Ber\((\frac{p(1-u)}{u + p(1-u)})\), respectively.

Proof

The marginal distributions are immediate from the definitions. To see the independence we must show that

$$\begin{aligned} {\mathbb {E}}\big (h\big (I^{(u)}_{i,j}\big )k\big (J^{(u)}_{i,j}\big )\big ) = {\mathbb {E}}\big (h\big (I^{(u)}_{i,j-1}\big )k\big (J^{(u)}_{i-1,j}\big )\big ), \end{aligned}$$

for any bounded continuous functions h, k, and use the independence of \(I^{(u)}_{i,j-1}\) and \(J^{(u)}_{i-1,j}\) by the assumption. Use Eqs. (3.3) to write

$$\begin{aligned}&{\mathbb {E}}\big (h\big (I^{(u)}_{i,j}\big )k\big (J^{(u)}_{i,j}\big )\big ) \\&\quad = {\mathbb {E}}\big (h\big ( \omega _{i,j}\vee J^{(u)}_{i-1,j} \vee I^{(u)}_{i,j-1} - J^{(u)}_{i-1,j}\big ) k \big (\omega _{i,j}\vee J^{(u)}_{i-1,j} \vee I^{(u)}_{i,j-1} - I^{(u)}_{i,j-1}\big )\big ). \end{aligned}$$

Then use the fact that \(\omega _{i,j}, J^{(u)}_{i-1,j}, I^{(u)}_{i,j-1}\) are independent with known distributions to compute the expected value and obtain the result. \(\square \)
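A quick numerical sanity check of Lemma 3.2 (hypothetical code, consistent with the sketches above) pushes independent triples with marginals (3.4) through the recursions (3.3); since the outputs are \(\{0,1\}\)-valued, vanishing covariance is equivalent to independence.

```python
import numpy as np

rng = np.random.default_rng(3)
p, u, N = 0.5, 0.6, 10**6
q = p * (1 - u) / (u + p * (1 - u))
w = (rng.random(N) < p).astype(int)   # omega_{i,j} ~ Ber(p)
I0 = (rng.random(N) < u).astype(int)  # I_{i,j-1}   ~ Ber(u)
J0 = (rng.random(N) < q).astype(int)  # J_{i-1,j}   ~ Ber(q)

M = np.maximum(w, np.maximum(I0, J0))
I1, J1 = M - J0, M - I0               # the recursions (3.3)

print(I1.mean(), u)                   # Ber(u) marginal preserved
print(J1.mean(), q)                   # Ber(q) marginal preserved
print(np.cov(I1, J1)[0, 1])           # approximately 0, i.e. independence
```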

A down-right path \(\psi \) on the lattice \({\mathbb {Z}}^2_+\) is an ordered sequence of sites \(\{ v_i \}_{i \in {\mathbb {Z}}}\) that satisfy

$$\begin{aligned} v_i - v_{i-1} \in \{ e_1, - e_2 \}. \end{aligned}$$
(3.5)

For a given down-right path \(\psi \), define \(\psi _i = v_i - v_{i-1}\) to be the i-th edge of the path and set

$$\begin{aligned} L_{\psi _i} = {\left\{ \begin{array}{ll} I^{(u)}_{v_i}, &{}\quad \text {if }\, \psi _i = e_1\\ J^{(u)}_{v_{i-1}}, &{}\quad \text {if }\, \psi _i = -e_2. \end{array}\right. } \end{aligned}$$
(3.6)

The first observation is that the random variables in the collection \(\{ L_{\psi _i}\}_{i \in {\mathbb {Z}}}\) satisfy the following:

Lemma 3.3

Fix a down-right path \(\psi \). Then the random variables \( \{L_{\psi _i}\}_{i \in {\mathbb {Z}}}\) are mutually independent, with marginals

$$\begin{aligned} L_{\psi _i} \sim {\left\{ \begin{array}{ll} \text {Ber}(u), &{} \text {if } \psi _i = e_1\\ \text {Ber}\Big ( \frac{p(1-u)}{ u + p(1-u)} \Big ), &{} \text {if } \psi _i = -e_2. \end{array}\right. } \end{aligned}$$

Proof

The proof goes by an inductive “corner-flipping” argument: the base case is the path that follows the axes, where the result follows immediately from the definitions of the boundaries. Then we flip the corner at zero, i.e. we consider the down-right path

$$\begin{aligned} \psi ^{(1)} = \{ \ldots , (0,2), (0,1), (1,1), (1,0), (2,0), \ldots \}. \end{aligned}$$

Equivalently, we now consider the collection \(\big \{ \{ J^{(u)}_{0,j} \}_{j \ge 2},\, I^{(u)}_{1,1}, J^{(u)}_{1,1}, \{ I^{(u)}_{i, 0} \}_{i \ge 2} \big \}\). The only place where the independence or the distributions may have been violated is at \(I^{(u)}_{1,1}\), \(J^{(u)}_{1,1}\). Lemma 3.2 shows this does not happen. As a consequence, variables on the new path satisfy the assumption of Lemma 3.2. We can now repeatedly use Lemma 3.2 by flipping down-right west-south corners into north-east corners. This way, starting from the axes, we can obtain any down-right path while the distributional properties are maintained. The details are left to the reader. \(\square \)

For any triplet \((\omega _{i,j}, I^{(u)}_{i,j-1}, J^{(u)}_{i-1,j})\) with \(i \ge 1, j\ge 1\), we define the event

$$\begin{aligned} {\mathcal {B}}_{i,j} = \big \{ \big (\omega _{i,j}, I^{(u)}_{i,j-1}, J^{(u)}_{i-1,j}\big ) \in \big \{ (1,0,0), (0,1,0), ( 0,0,1), (1,0,1), (1,1,0) \big \} \big \}. \end{aligned}$$
(3.7)

Using the gradients (3.3), the environment \(\{\omega _{i,j}\}_{(i,j) \in {\mathbb {N}}^2}\) and the events \({\mathcal {B}}_{i,j}\) we also define new random variables \(\alpha _{i,j}\) on \({\mathbb {Z}}_+^2\)

$$\begin{aligned} \alpha _{i-1,j-1}= {\mathbb {1}}\big \{ I^{(u)}_{i,j-1}=J^{(u)}_{i-1,j} =1\big \} + \beta _{i-1,j-1} {\mathbb {1}}\{ {\mathcal {B}}_{i,j}\} \qquad \text {for }(i,j)\in {\mathbb {N}}^2. \end{aligned}$$
(3.8)

Here \(\beta _{i-1,j-1}\) is a \(\text {Ber}(p)\) random variable, independent of everything else. Note that \(\alpha _{i-1,j-1}\) is automatically 0 when \(\omega _{i,j} = I^{(u)}_{i,j-1}=J^{(u)}_{i-1,j} = 0\); one can check, with the help of Lemma 3.2, that \(\alpha _{i-1,j-1} {\mathop {=}\limits ^{D}} \omega _{i,j}\).

Remark 3.4

The following lemma gives the distribution of the triple \((I^{(u)}_{i,j}, J^{(u)}_{i,j}, \alpha _{i-1,j-1})\), also known as Burke’s property. The connection with Burke’s Theorem comes from the M/M/1 queues-in-tandem interpretation of last passage times in an exponential environment. All the details about the connection can be found in [4] (see particularly the proof of their Lemma 4.2). Since then, lattice exactly solvable models of LPP have been shown to have invariant versions in which the boundaries follow special independent distributions and satisfy appropriate increment equations. Traditionally, whenever such a property can be obtained, it is called Burke’s property, and we do so here as well.

Lemma 3.5 below is a generalization of Lemma 3.2. In order to prove it (following similar steps as before), we will need to know the joint distribution of \(( I^{(u)}_{i,j}, J^{(u)}_{i,j})\) which is now given by Lemma 3.2.

Lemma 3.5

(Burke’s property) Let \((\omega _{i,j}, I^{(u)}_{i,j-1}, J^{(u)}_{i-1,j})\) be mutually independent Bernoulli random variables with distributions

$$\begin{aligned} \omega _{i,j} \sim \text {Ber}(p), \quad I^{(u)}_{i,j-1} \sim \text {Ber}(u), \quad J^{(u)}_{i-1,j} \sim \text {Ber}\left( \frac{p(1-u)}{u + p(1-u)}\right) . \end{aligned}$$

Then the random variables \((\alpha _{i-1,j-1}, I^{(u)}_{i,j}, J^{(u)}_{i,j})\) are mutually independent with marginal distributions

$$\begin{aligned} \alpha _{i-1,j-1} \sim \text {Ber}(p), \quad I^{(u)}_{i,j} \sim \text {Ber}(u), \quad J^{(u)}_{i,j} \sim \text {Ber}\left( \frac{p(1-u)}{u + p(1-u)}\right) . \end{aligned}$$

Proof

Let g, h, k be bounded continuous functions. To simplify the notation slightly, set \(\ell = \ell (u) = \frac{p(1-u)}{u + p(1-u)}\). In the computation below we use Eqs. (3.3) without special mention.

$$\begin{aligned}&{\mathbb {E}} \Big (g(\alpha _{i-1, j-1})h\big (I^{(u)}_{i, j}\big )k\big (J^{(u)}_{i,j}\big )\Big ) \\&\quad = g(1){\mathbb {E}}\Big (h\big (I^{(u)}_{i, j}\big )k\big (J^{(u)}_{i,j}\big ){\mathbb {1}}\Big \{\ I^{(u)}_{i,j-1} = J^{(u)}_{i-1,j} =1 \Big \}\Big ) \\&\qquad +g(0) {\mathbb {E}}\Big (h\big (I^{(u)}_{i, j}\big )k\big (J^{(u)}_{i,j}\big ){\mathbb {1}}\big \{\omega _{i,j} = I^{(u)}_{i,j-1} = J^{(u)}_{i-1,j} = 0\big \} \Big )\\&\qquad + {\mathbb {E}}\Big (g(\beta _{i,j})h\big (I^{(u)}_{i, j}\big )k\big (J^{(u)}_{i,j}\big ) {\mathbb {1}}\{ {\mathcal {B}}_{i,j}\}\Big )\\&\quad = g(1) h(0) k(0) u\ell +g(0) h(0)k(0)(1-p)(1-u)(1-\ell )\\&\qquad +{\mathbb {E}}(g(\beta _{i,j})){\mathbb {E}}\Big (h\big (I^{(u)}_{i, j}\big )k\big (J^{(u)}_{i,j}\big ) {\mathbb {1}}\{ {\mathcal {B}}_{i,j}\}\Big )\\&\quad = h(0)k(0)(1-u)(1-\ell ) ( pg(1) + (1-p) g(0))\\&\qquad + {\mathbb {E}}(g(\beta _{i,j})) \sum _{x \in {\mathcal {B}}_{i,j}}{\mathbb {E}} \Big (h\big (I^{(u)}_{i,j}\big )k\big (J^{(u)}_{i,j}\big ){\mathbb {1}}\{ x \in {\mathcal {B}}_{i,j}\}\Big )\\&\quad = h(0)k(0)(1-u)(1-\ell ) ( pg(1) + (1-p) g(0))\\&\qquad + {\mathbb {E}}(g(\beta _{i,j})) \\&\qquad \times \Big ( h(1)k(1) p (1-u)(1-\ell ) + h(0)k(1)[(1-p)(1-u)\ell + p(1-u)\ell )] \\&\qquad +h(1)k(0)[(1-p)u(1-\ell ) + pu(1-\ell )] \Big )\\&\quad = h(0)k(0)(1-u)(1-\ell ) (pg(1) + (1-p) g(0))\\&\qquad + {\mathbb {E}}(g(\beta _{i,j})) \Big ( h(1)k(1) u\ell + h(0)k(1)(1-u)\ell +h(1)k(0)u(1-\ell )\Big )\\&\quad =(pg(1) + (1-p) g(0)) {\mathbb {E}}\big ( h\big (I^{(u)}_{i,j}\big ) \big ){\mathbb {E}}\big (k\big (J^{(u)}_{i,j}\big )\big )\\&\quad ={\mathbb {E}}\big ( g\big (\alpha _{i-1,j-1}\big )\big ) {\mathbb {E}}\big ( h\big (I^{(u)}_{i,j}\big ) \big ){\mathbb {E}}\big (k\big (J^{(u)}_{i,j}\big )\big ). \end{aligned}$$

The equality before last follows from Lemma 3.2. \(\square \)

The last necessary preliminary step is a corollary of Lemma 3.5 which generalises Lemma 3.3 by incorporating the random variables \(\{ \alpha _{i-1, j-1}\}_{i,j \ge 1}\). To this effect, for any down-right path \(\psi \) satisfying (3.5), define the interior sites \({\mathcal {I}} _{\psi }\) of \(\psi \) to be

$$\begin{aligned} {\mathcal {I}} _{\psi } = \{ w \in {\mathbb {Z}}^2_+: \exists \, v_i \in \psi \text { s.t. } \,w < v_i \text { coordinate-wise} \}. \end{aligned}$$
(3.9)

Then

Corollary 3.6

Fix a down-right path \(\psi \) and recall definitions (3.6), (3.9). The random variables

$$\begin{aligned} \{ \{\alpha _w\}_{w \in {\mathcal {I}} _{\psi }}, \{L_{\psi _i}\}_{i \in {\mathbb {Z}}} \} \end{aligned}$$

are mutually independent, with marginals

$$\begin{aligned} \alpha _{w} \sim \text {Ber(p)}, \quad L_{\psi _i} \sim {\left\{ \begin{array}{ll} \text {Ber}(u), &{} \text {if } \psi _i = e_1\\ \text {Ber}\Big ( \frac{p(1-u)}{ u + p(1-u) } \Big ), &{} \text {if } \psi _i = -e_2. \end{array}\right. } \end{aligned}$$

The proof is similar to that of Lemma 3.3 and we omit it.

3.1 Law of Large Numbers for the Boundary Model

Proof of Theorem 2.1

From Eqs. (3.1) we can write

$$\begin{aligned} G^{(u)}_{\left\lfloor {Ns}\right\rfloor , \left\lfloor {Nt}\right\rfloor } = \sum _{j=1}^{\left\lfloor {Nt}\right\rfloor }J^{(u)}_{0, j} + \sum _{i=1}^{\left\lfloor {Ns}\right\rfloor }I^{(u)}_{i, \left\lfloor {Nt}\right\rfloor } \end{aligned}$$

since the I, J variables are increments of the passage time. By the definition of the boundary model, the \(J^{(u)}_{0,j}\) are i.i.d. Ber\(\big (p(1-u)/(u+p(1-u))\big )\). Scaled by N, the first sum converges to \(t {\mathbb {E}}(J_{0,1})\) by the law of large numbers.

By Corollary 3.6, the variables \(\{I^{(u)}_{i, \left\lfloor {Nt}\right\rfloor }\}_{1 \le i \le \left\lfloor {Ns}\right\rfloor }\) are i.i.d. Ber(u), since they belong on the down-right path that follows the vertical axis from \(\infty \) down to \((0, \left\lfloor {Nt}\right\rfloor )\) and then moves horizontally. We cannot immediately appeal to the law of large numbers, as the whole sequence changes with N, so we first appeal to the Borel–Cantelli lemma via a large deviation estimate. Fix an \(\varepsilon >0\). Then

$$\begin{aligned} {\mathbb {P}}\left\{ N^{-1} \sum _{i=1}^{\left\lfloor {Ns}\right\rfloor }I^{(u)}_{i, \left\lfloor {Nt}\right\rfloor } \notin (su-\varepsilon , su+ \varepsilon ) \right\}&={\mathbb {P}}\left\{ N^{-1} \sum _{i=1}^{\left\lfloor {Ns}\right\rfloor }I^{(u)}_{i, 0} \notin ( su-\varepsilon , su+ \varepsilon ) \right\} \\&\le e^{-c(u, s ,\varepsilon )N}, \end{aligned}$$

for some appropriate positive constant \(c(u, s, \varepsilon )\). By the Borel–Cantelli lemma we have that for each \(\varepsilon >0\) there exists a random \(N_{\varepsilon }\) so that for all \(N > N_\varepsilon \)

$$\begin{aligned} su - \varepsilon < N^{-1}\sum _{i=1}^{\left\lfloor {Ns}\right\rfloor }I^{(u)}_{i, \left\lfloor {Nt}\right\rfloor } \le su + \varepsilon . \end{aligned}$$

Then we have

$$\begin{aligned} su + t\, \frac{p(1-u)}{ u + p(1-u) } - \varepsilon \le \varliminf _{N \rightarrow \infty } \frac{G^{(u)}_{\left\lfloor {Ns}\right\rfloor , \left\lfloor {Nt}\right\rfloor }}{N} \le \varlimsup _{N \rightarrow \infty } \frac{G^{(u)}_{\left\lfloor {Ns}\right\rfloor , \left\lfloor {Nt}\right\rfloor }}{N} \le su + t\, \frac{p(1-u)}{ u + p(1-u) } + \varepsilon . \end{aligned}$$

Let \(\varepsilon \) tend to 0 to finish the proof. \(\square \)

Remark 3.7

(The cases \(u = 0\) and \(u=1\)) Consider first the case \(u = 0\). This makes the horizontal boundary identically 0 and the vertical boundary identically 1. The optimal path for \(G_{\left\lfloor {Ns}\right\rfloor , \left\lfloor {Nt}\right\rfloor }\) would never move on the horizontal axis; it either enters the bulk or moves on the vertical axis, at which point it may only collect weight bounded above by \(\left\lfloor { Nt}\right\rfloor \). This is precisely the weight on the vertical axis, so the maximal path just follows it. In this case we have deterministic LPP. Similarly, LPP is deterministic when \(u = 1\).

3.2 Law of Large Numbers for the i.i.d. Model

Proof of Theorem 2.2

Let \(g_{pp}(s,t) = \lim _{N \rightarrow \infty } N^{-1} G_{\left\lfloor {Ns}\right\rfloor , \left\lfloor {Nt}\right\rfloor }\) and denote by \(g_{pp}^{(u)}(s,t) = \lim _{N \rightarrow \infty } N^{-1} G^{(u)}_{\left\lfloor {Ns}\right\rfloor , \left\lfloor {Nt}\right\rfloor }\). The shape function \(g_{pp}(s,t)\) can be proven a priori to be 1-homogeneous and concave. This is a direct consequence of the superadditivity of passage times and the ergodicity and stationarity of the environment. We omit the details, but the interested reader can follow the steps of the proof in [35].

The starting point is Eq. (2.4). Scaling that equation by N gives us the macroscopic variational formulation

$$\begin{aligned}&g_{pp}^{(u)}(1,1) \nonumber \\&\quad =\sup _{0\le z\le 1}\big \{g_{pp}^{(u)}(z,0)+g_{pp}(1-z,1)\big \}\bigvee \sup _{0\le z\le 1}\big \{g_{pp}^{(u)}(0, z)+g_{pp}(1,1-z)\big \} \nonumber \\&\quad = \sup _{0\le z\le 1}\big \{z {\mathbb {E}}(I^{(u)})+g_{pp}(1-z,1)\big \}\bigvee \sup _{0\le z\le 1}\big \{ z {\mathbb {E}}(J^{(u)})+g_{pp}(1,1-z)\big \}. \end{aligned}$$
(3.10)

We postpone the proof of (3.10) until the end. Assume for now that (3.10) holds. Observe that since \(g_{pp}(s,t)\) is symmetric, \(g_{pp}(1-z,1) = g_{pp}(1,1-z)\), which we abbreviate as \(\psi (1-z)\). Therefore

$$\begin{aligned} g_{pp}^{(u)}(1,1)= \sup _{0\le z\le 1}\{z {\mathbb {E}}(I^{(u)})+\psi (1-z)\}\bigvee \sup _{0\le z\le 1}\{ z {\mathbb {E}}(J^{(u)})+\psi (1-z)\}. \end{aligned}$$
(3.11)

Moreover, if \(u \in [\frac{\sqrt{p}}{1+\sqrt{p}},1]\) then \({\mathbb {E}}(I^{(u)})\ge {\mathbb {E}}(J^{(u)})\). We restrict the parameter u to the subset \([\frac{\sqrt{p}}{1+\sqrt{p}},1]\) of its original range \((0,1]\). Then we can drop the second expression in braces from the right-hand side of (3.11), because at each z-value the first expression in braces dominates. This gives

$$\begin{aligned} u+\frac{p(1-u)}{ u + p(1-u) }=\sup _{0\le z\le 1}\{zu+\psi (1-z)\}. \end{aligned}$$
(3.12)

Set \(x = 1 -z\); x still ranges over [0, 1], and after a rearrangement of the terms we obtain

$$\begin{aligned} -\frac{p(1-u)}{ u + p(1-u) }=\inf _{0\le x\le 1}\{xu-\psi (x)\}. \end{aligned}$$
(3.13)

The expression on the right-hand side is the Legendre transform of \(\psi \), and we conclude that its concave dual is \(\psi ^{*}(u)= -\frac{p(1-u)}{ u + p(1-u) }\) for \(u \in [\frac{\sqrt{p}}{1+\sqrt{p}},1]\). Since \(\psi (x)\) is concave, the Legendre transform of \(\psi ^*\) gives back \(\psi \), i.e. \(\psi ^{**} = \psi \). Therefore,

$$\begin{aligned} g_{pp}(x, 1) = \psi (x)&=\psi ^{**}(x)=\inf _{\frac{\sqrt{p}}{1+\sqrt{p}}\le u\le 1}\{xu-\psi ^*(u)\}=\inf _{\frac{\sqrt{p}}{1+\sqrt{p}}\le u\le 1}\Big \{xu+\frac{p(1-u)}{ u + p(1-u) }\Big \}\nonumber \\&= \inf _{\frac{\sqrt{p}}{1+\sqrt{p}}\le u\le 1}\big \{x{\mathbb {E}}(I^{(u)})+{\mathbb {E}}(J^{(u)})\big \}, \quad \text { for all } x \in [0, 1]. \end{aligned}$$
(3.14)

Since \(g_{pp}(s,t) = t g_{pp}(st^{-1}, 1)\), the first equality in (2.6) is now verified. For the second equality, we solve the variational problem (3.14). The derivative of the expression in the braces has a critical point \(u^* \in [\frac{\sqrt{p}}{1+\sqrt{p}},1]\) only when \(p<x < 1\). In that case, the infimum is achieved at

$$\begin{aligned} u^* =\frac{1}{1-p}\Big (\sqrt{\frac{p}{x}}-p\Big ) \end{aligned}$$

and \(g_{pp}(x, 1) =1/(1-p)\big [2\sqrt{xp}-p(x+1)\big ]\). Otherwise, when \(x \le p\), the first derivative is negative for all \(u \in [\frac{\sqrt{p}}{1+\sqrt{p}},1]\) and the minimum occurs at \(u^* = 1\). This gives \(g_{pp}(x, 1) = x\). Again, extend to all \((s,t)\) via the relation \(g_{pp}(s,t) = t g_{pp}(st^{-1}, 1)\). This concludes the proof of the explicit shape under (3.10), which we now prove.
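Before turning to the proof of (3.10), we record the derivative computation behind the critical point above. Writing \(D(u) = u + p(1-u)\), so that \(D(u) + (1-u)(1-p) = 1\),

$$\begin{aligned} \frac{d}{du}\Big \{ xu + \frac{p(1-u)}{D(u)}\Big \} = x - p\, \frac{D(u) + (1-u)(1-p)}{D(u)^2} = x - \frac{p}{D(u)^2}, \end{aligned}$$

which vanishes exactly when \(D(u^*) = \sqrt{p/x}\), i.e. at the \(u^*\) displayed above.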

For a lower bound, fix any \(z \in [0, 1]\). Then

$$\begin{aligned} G^{(u)}_{N, N} \ge \sum _{i=1}^{\left\lfloor {Nz}\right\rfloor } I^{(u)}_{i, 0} + G_{(\left\lfloor {Nz}\right\rfloor , 1), (N, N)}. \end{aligned}$$

Divide through by N. The left-hand side converges a.s. to \(g^{(u)}_{pp}(1,1)\). The first term on the right converges a.s. to \(z {\mathbb {E}}(I^{(u)})\). The second term on the right converges in probability to the constant \(g_{pp}(1-z, 1)\). In particular, we can find a subsequence \(N_k\) along which the convergence is almost sure for the second term. Taking limits along this subsequence, we conclude

$$\begin{aligned} g_{pp}^{(u)}(1,1) \ge z {\mathbb {E}}(I^{(u)}) + g_{pp}(1-z, 1). \end{aligned}$$

Since z is arbitrary, we can take the supremum over z on both sides of the inequality above. The same argument works if we move along the vertical axis. Thus we obtain the lower bound for (3.10). For the upper bound, fix \(\varepsilon >0\) and let \(\{ 0 =q_0, \varepsilon =q_1, 2\varepsilon =q_2, \ldots , \left\lfloor {\varepsilon ^{-1}}\right\rfloor \varepsilon , 1=q_M \}\) be a partition of [0, 1]. We partition both axes. The maximal path that optimises \(G^{(u)}_{N,N}\) has to exit the boundary between \(\left\lfloor {Nk\varepsilon }\right\rfloor \) and \(\left\lfloor {N(k+1)\varepsilon }\right\rfloor \) for some k. Therefore, we may write

$$\begin{aligned} G^{(u)}_{N, N}&\le \max _{0 \le k \le \left\lfloor {\varepsilon ^{-1}}\right\rfloor }\left\{ \sum _{i=1}^{\left\lfloor {N(k+1)\varepsilon }\right\rfloor } I^{(u)}_{i, 0} + G_{(\left\lfloor {Nk\varepsilon }\right\rfloor , 1), (N, N)}\right\} \\&\quad \bigvee \max _{0 \le k \le \left\lfloor {\varepsilon ^{-1}}\right\rfloor }\left\{ \sum _{j=1}^{\left\lfloor {N(k+1)\varepsilon }\right\rfloor } J^{(u)}_{0, j} + G_{(1, \left\lfloor {Nk\varepsilon }\right\rfloor ), (N, N)}\right\} . \end{aligned}$$

Divide by N. The right-hand side converges in probability to the constant

$$\begin{aligned}&\max _{0 \le k \le \left\lfloor {\varepsilon ^{-1}}\right\rfloor }\{(k+1)\varepsilon u + g_{pp}(1-\varepsilon k, 1)\}\\&\qquad \bigvee \max _{0 \le k \le \left\lfloor {\varepsilon ^{-1}}\right\rfloor }\bigg \{(k+1)\varepsilon \frac{p(1-u)}{ u + p(1-u) }+ g_{pp}(1, 1-\varepsilon k) \bigg \}\\&\quad =\max _{q_k }\{q_k u + g_{pp}(1-q_k, 1)\} + \varepsilon u\\&\qquad \bigvee \max _{q_k}\bigg \{q_k \frac{p(1-u)}{ u + p(1-u) }+ g_{pp}(1, 1-q_k) \bigg \} +\varepsilon \frac{p(1-u)}{ u + p(1-u) }\\&\quad \le \sup _{0 \le z \le 1}\{z u + g_{pp}(1-z, 1)\} + \varepsilon u\\&\qquad \bigvee \max _{0\le z \le 1}\bigg \{z \frac{p(1-u)}{ u + p(1-u) } + g_{pp}(1, 1- z) \bigg \} +\varepsilon \frac{p(1-u)}{ u + p(1-u) }. \end{aligned}$$

The convergence becomes a.s. on a subsequence. The upper bound for (3.10) now follows by letting \(\varepsilon \rightarrow 0\) in the last inequality. \(\square \)

4 Upper Bound for the Variance in Characteristic Directions

In this section we prove the upper bound for the variance, and show that the order is bounded above by \(N^{2/3}\). In the process, we derive bounds for the exit points of the maximal paths from the boundaries; this is crucial since, as it turns out, in the characteristic directions the order of the variance and the order of the exit point are comparable.

We follow the approach of [4, 48] in order to find the order of the variance. All the difficulties and technicalities in our case arise from two facts: first, the random variables are discrete and small perturbations of the distribution do not alter the value of the random weight; second, we have three potential steps to contend with rather than the usual two.

4.1 The Variance Formula

Let \((m,n)\) be a generic lattice site. Eventually we will define how m, n grow to infinity using the parameter \(u \in (0,1)\). The cases \(u=0\) and \(u=1\) are omitted since, by Remark 3.7, they lead to a deterministic LPP.

Define the passage time increments (labelled by compass directions) by

$$\begin{aligned} {\mathcal {W}}=G_{0,n}^{(u)}-G_{0,0}^{(u)},\quad {\mathcal {N}}=G_{m,n}^{(u)}-G_{0,n}^{(u)},\quad {\mathcal {E}}=G_{m,n}^{(u)}-G_{m,0}^{(u)},\quad {\mathcal {S}}=G_{m,0}^{(u)}-G_{0,0}^{(u)}. \end{aligned}$$

From Corollary 3.6 we get that each of \({\mathcal {W}}, {\mathcal {N}}, {\mathcal {E}}\) and \({\mathcal {S}}\) is a sum of i.i.d. random variables and, most importantly, \({\mathcal {N}}\) is independent of \({\mathcal {E}}\) while \({\mathcal {W}}\) is independent of \({\mathcal {S}}\), by the definition of the boundary random variables. From the definitions it is immediate to show the cocycle property on the whole rectangle \([m]\times [n]\)

$$\begin{aligned} {\mathcal {W}}+ {\mathcal {N}}= G^{(u)}_{m,n} = {\mathcal {S}}+ {\mathcal {E}}. \end{aligned}$$
(4.1)

We can immediately attempt to evaluate the variance of \(G^{(u)}_{m,n}\) using these relations, by

$$\begin{aligned} {{\,\mathrm{Var}\,}}(G^{(u)}_{m,n})&={{\,\mathrm{Var}\,}}({\mathcal {W}}+{\mathcal {N}})\nonumber \\&={{\,\mathrm{Var}\,}}({\mathcal {W}})+{{\,\mathrm{Var}\,}}({\mathcal {N}})+2{{\,\mathrm{Cov}\,}}({\mathcal {S}}+{\mathcal {E}}-{\mathcal {N}},{\mathcal {N}})\nonumber \\&={{\,\mathrm{Var}\,}}({\mathcal {W}})-{{\,\mathrm{Var}\,}}({\mathcal {N}})+2{{\,\mathrm{Cov}\,}}({\mathcal {S}},{\mathcal {N}}), \end{aligned}$$
(4.2)

Equivalently, one may use \({\mathcal {E}}\) and \({\mathcal {S}}\) to obtain

$$\begin{aligned} {{\,\mathrm{Var}\,}}(G^{(u)}_{m,n}) ={{\,\mathrm{Var}\,}}({\mathcal {S}})-{{\,\mathrm{Var}\,}}({\mathcal {E}})+2{{\,\mathrm{Cov}\,}}({\mathcal {E}},{\mathcal {W}}). \end{aligned}$$
(4.3)

In the sequel, when several Bernoulli parameters need to be considered simultaneously, we will add a superscript (u) to the quantities \({\mathcal {N}}, {\mathcal {E}}, {\mathcal {W}}, {\mathcal {S}}\) to explicitly denote the dependence on the parameter u.

The covariance is not an object that can be computed directly, so the largest part of this subsection is dedicated to finding a different way to compute the covariance, one that also allows for estimates and connects fluctuations of the maximal path with fluctuations of the last passage time.

In the exponential exactly solvable model there is a clear expression for the covariance term. Unfortunately this does not happen here, so we must estimate the order of magnitude.

We first introduce, for convenience, a bit of notation that we will use in the sequel:

$$\begin{aligned} A_{{\mathcal {N}}^{(u)}} = \frac{ {{\,\mathrm{Cov}\,}}({\mathcal {S}},{\mathcal {N}})}{u(1-u)} \quad \text { and } \quad A_{{\mathcal {E}}^{(u)}} = - \frac{ {{\,\mathrm{Cov}\,}}({\mathcal {E}},{\mathcal {W}})}{u(1-u)}. \end{aligned}$$
(4.4)

With this notation, we have that Eqs. (4.2), (4.3) can be written as

$$\begin{aligned} \begin{aligned} {{\,\mathrm{Var}\,}}(G_{m,n}^{(u)})&= n\frac{pu(1-u)}{[u+p(1-u)]^2}-mu(1-u)+2u(1-u)A_{{\mathcal {N}}^{(u)}}\\&= mu(1-u)-n\frac{pu(1-u)}{[u+p(1-u)]^2}-2u(1-u) A_{{\mathcal {E}}^{(u)}}. \end{aligned} \end{aligned}$$
(4.5)
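In particular, the first two terms in (4.5) cancel precisely when

$$\begin{aligned} n\, \frac{p}{[u+p(1-u)]^2} = m, \qquad \text {i.e. } \quad n = \frac{m}{p}\big (p+(1-p)u\big )^2, \end{aligned}$$

since \(u + p(1-u) = p + (1-p)u\). With \(m = N\) this is exactly the characteristic direction (2.7), along which \({{\,\mathrm{Var}\,}}(G^{(u)}_{m,n}) = 2u(1-u)A_{{\mathcal {N}}^{(u)}}\).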

We now introduce the perturbed model. Fix any \(u \in (0,1)\). Pick an \(\varepsilon >0\) and define a new parameter \(u_\varepsilon \) so that \( u_\varepsilon = u+ \varepsilon < 1\). For any fixed realization of \(\omega ^{(u)} = \{ \omega ^{(u)}_{i,0}, \omega ^{(u)}_{0,j}, \omega ^{(u)}_{i,j} \}\) with marginal distributions (2.3) we use the parameter \(\varepsilon \) to modify the weights on the south boundary only.

To this effect introduce a sequence of independent Bernoulli random variables \({\mathscr {H}}^{(\varepsilon )}_i \sim \text {Ber}\big (\frac{\varepsilon }{1-u} \big )\), \(1 \le i \le m\) that are independent of the \(\omega ^{(u)}\). Denote their joint distribution by \(\mu _\varepsilon \). Then construct \(\omega ^{u_\varepsilon }\) in the following way:

$$\begin{aligned} \omega ^{u_\varepsilon }_{i,0} = {\mathscr {H}}^{(\varepsilon )}_i \vee \omega ^{(u)}_{i,0} = \omega ^{(u)}_{i,0} + {\mathscr {H}}^{(\varepsilon )}_i - {\mathscr {H}}^{(\varepsilon )}_{i} \omega ^{(u)}_{i,0}. \end{aligned}$$
(4.6)

Then \(\{ \omega ^{u_\varepsilon }_{i,0} \}_{1 \le i \le m}\) is a collection of independent Ber\((u_\varepsilon )\) random variables. It also follows that

$$\begin{aligned} \omega ^{u_\varepsilon }_{i,0} - \omega ^{(u)}_{i,0} \le {\mathscr {H}}^{(\varepsilon )}_i. \end{aligned}$$
(4.7)

Under this modified environment,

$$\begin{aligned} \omega ^{u_\varepsilon }_{i, 0} \sim \text {Ber}(u_\varepsilon ), \quad \omega ^{(u)}_{i,j} \sim \text {Ber}(p), \quad \omega ^{(u)}_{0,j}\sim \text {Ber}\Big ( \frac{p(1-u)}{ u + p(1-u) } \Big ), \end{aligned}$$
(4.8)

the passage time is denoted by \(G^{ {\mathcal {S}}^{u_\varepsilon }}\), and when we refer to quantities in this model we distinguish them with a superscript \(u_\varepsilon \). With these definitions we have \({\mathcal {S}}^{u_\varepsilon } \sim \text {Bin}(m, u_\varepsilon )\), with mass function denoted by \(f_{{\mathcal {S}}^{u_\varepsilon }}(k) = {\mathbb {P}}\{ {\mathcal {S}}^{u_\varepsilon } = k\}\), \(0 \le k \le m\).
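
To make the perturbation concrete, here is a minimal Monte Carlo sketch (Python; ours, not part of the paper, with hypothetical variable names) of the coupling (4.6). It checks the Ber\((u_\varepsilon )\) marginal and the pathwise domination (4.7):

```python
import numpy as np

# Sketch of the south-boundary perturbation (4.6): omega^{u_eps} = H v omega^{(u)}.
rng = np.random.default_rng(0)
u, eps, m, reps = 0.4, 0.05, 50, 20_000

omega_u = (rng.random((reps, m)) < u).astype(int)         # Ber(u) south weights
H = (rng.random((reps, m)) < eps / (1 - u)).astype(int)   # Ber(eps/(1-u)), independent
omega_ue = np.maximum(H, omega_u)                         # Eq. (4.6)

# Marginal: P(omega_ue = 1) = u + (1-u) * eps/(1-u) = u + eps.
print(omega_ue.mean(), u + eps)          # both approximately 0.45
assert np.all(omega_ue - omega_u <= H)   # domination (4.7), realization by realization
```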

Remark 4.1

A more intuitive way to define the new Bernoulli weights \(\omega ^{u_\varepsilon }\) is via the conditional distributions

$$\begin{aligned}&{\mathbb {P}}\{ \omega ^{u_\varepsilon }_{i,0} = 1 | \omega ^{(u)}_{i,0} = 1 \} = 1, \nonumber \\&{\mathbb {P}}\{ \omega ^{u_\varepsilon }_{i,0} = 1 | \omega ^{(u)}_{i,0} = 0 \} = \frac{\varepsilon }{1-u}, \nonumber \\&{\mathbb {P}}\{ \omega ^{u_\varepsilon }_{i,0} = 0 | \omega ^{(u)}_{i,0} = 0 \} = 1 -\frac{\varepsilon }{1-u}, \end{aligned}$$
(4.9)

i.e. we go through the values on the south boundary and, whenever the environment returned a 0, we change the value to a 1 with probability \(\frac{\varepsilon }{1-u}\). One can check that the weights in (4.6) satisfy (4.9).

Similarly, there will be instances in which we want to perturb only the weights on the vertical axis, again changing the parameter from u to \(u+\varepsilon \). In that case, we denote the modified environment on the west boundary by \({\mathcal {W}}^{u_\varepsilon }\); it is given by

$$\begin{aligned} \omega ^{(u)}_{i, 0} \sim \text {Ber}(u), \quad \omega ^{(u)}_{i,j} \sim \text {Ber}(p), \quad \omega ^{u_\varepsilon }_{0,j}\sim \text {Ber}\Big ( \frac{p(1-u-\varepsilon )}{ u+\varepsilon + p(1-u-\varepsilon ) } \Big ), \end{aligned}$$
(4.10)

Again, we use auxiliary i.i.d. Bernoulli variables \(\{ {\mathscr {V}}^{(\varepsilon )}_j \}_{1 \le j \le n}\) with

$$\begin{aligned} {\mathscr {V}}_j^{(\varepsilon )} \sim \text {Ber}\Big (1 - \varepsilon \frac{1 + u(1-p)}{(1-u)\big (u + p(1-u) + (1-p)\varepsilon \big )}\Big ), \end{aligned}$$

where we assume that \(\varepsilon \) is sufficiently small so that the distributions are well defined. Then, the perturbed weights on the vertical axis are defined by

$$\begin{aligned} \omega ^{u_\varepsilon }_{0,j} = \omega ^{(u)}_{0,j}\cdot {\mathscr {V}}_j^{(\varepsilon )}. \end{aligned}$$
(4.11)

Denote by \(\nu _\varepsilon \) the joint distribution of the \({\mathscr {V}}_j^{(\varepsilon )}\). The passage time in this environment is denoted by \(G^{{\mathcal {W}}^{u_\varepsilon }}_{m,n}\).

It will also be convenient to couple the environments with different parameters. In that case we use common realizations of i.i.d. Uniform[0, 1] random variables \(\eta = \{\eta _{i,j}\}_{(i,j) \in {\mathbb {Z}}^2_+}\). The Bernoulli environment in the bulk is then defined as

$$\begin{aligned} \omega _{i,j} = \mathbb {1}\{ \eta _{i,j} < p\} \end{aligned}$$

and the boundary values are defined similarly via their respective marginal parameters. We denote the joint distribution of the uniforms by \({\mathbb {P}}_\eta \).
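
The following small sketch (Python, ours; the names are hypothetical) illustrates this coupling: with common uniforms, raising u raises every south weight and lowers every west weight realization by realization, which is exactly the monotonicity exploited in Lemma 4.3 below.

```python
import numpy as np

rng = np.random.default_rng(1)
p, u1, u2, n = 0.3, 0.4, 0.6, 100_000     # u1 < u2

def west_param(u):
    # marginal of the west boundary weights, as in (4.8)
    return p * (1 - u) / (u + p * (1 - u))

eta = rng.random(n)                        # common Uniform[0,1] random variables
south_u1, south_u2 = eta < u1, eta < u2    # coupled Ber(u) south weights
west_u1, west_u2 = eta < west_param(u1), eta < west_param(u2)

assert np.all(south_u1 <= south_u2)        # south weights increase with u
assert np.all(west_u2 <= west_u1)          # west weights decrease with u
```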

Proposition 4.2

Fix a parameter \(u \in (0,1)\). In environment (4.8), define \({\mathcal {N}}^{u_\varepsilon } = G^{ {\mathcal {S}}^{u_\varepsilon }}_{m,n} - {\mathcal {W}}^{(u)}\). Then we have the limit

$$\begin{aligned} A_{{\mathcal {N}}^{(u)}} = \frac{{{\,\mathrm{Cov}\,}}({\mathcal {S}}^{(u)}, {\mathcal {N}}^{(u)})}{u(1-u)} = \displaystyle \lim _{\varepsilon \rightarrow 0}\frac{ {\mathbb {E}}_{{\mathbb {P}}\otimes \mu _\varepsilon }( {\mathcal {N}}^{u_\varepsilon } - {\mathcal {N}}^{(u)})}{\varepsilon }, \quad 0<u<1. \end{aligned}$$
(4.12)

Similarly, in environment (4.10), define \({\mathcal {E}}^{u_\varepsilon } = G^{ {\mathcal {W}}^{u_\varepsilon }}_{m,n} - {\mathcal {S}}^{(u)}\). Then we have the limit

$$\begin{aligned} A_{{\mathcal {E}}^{(u)}}= -\frac{{{\,\mathrm{Cov}\,}}({\mathcal {W}}^{(u)}, {\mathcal {E}}^{(u)}) }{u(1-u)}= \displaystyle \lim _{\varepsilon \rightarrow 0}\frac{ {\mathbb {E}}_{{\mathbb {P}}\otimes \nu _\varepsilon }( {\mathcal {E}}^{u_\varepsilon } - {\mathcal {E}}^{(u)})}{\varepsilon }, \quad 0<u<1. \end{aligned}$$
(4.13)

Proof of Proposition 4.2

The first equalities in the two displays are the definitions (4.4). We prove the second equality.

The conditional joint distribution of \((\omega ^{u_\varepsilon }_{i,0})_{1 \le i \le m}\) given the value of their sum \({\mathcal {S}}^{u_\varepsilon }\) is independent of the parameter \(\varepsilon \). This is because the sum of i.i.d. Bernoulli is a sufficient statistic for the parameter of the distribution. In particular this implies that \( E[ {\mathcal {N}}^{u_\varepsilon } | {\mathcal {S}}^{u_\varepsilon } = k]\)\(= E_{{\mathbb {P}}\otimes \mu _\varepsilon }[ {\mathcal {N}}^{(u)} | {\mathcal {S}}^{(u)} = k]\).

Then we can compute \({\mathbb {E}}_{{\mathbb {P}}\otimes \mu _\varepsilon }( {\mathcal {N}}^{u_\varepsilon } - {\mathcal {N}}^{(u)})\):

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {P}}\otimes \mu _\varepsilon }( {\mathcal {N}}^{u_\varepsilon } - {\mathcal {N}}^{(u)})&= \sum _{k=0}^m E[{\mathcal {N}}^{u_\varepsilon } | {\mathcal {S}}^{u_\varepsilon }=k] {\mathbb {P}}\otimes \mu _\varepsilon \{ {\mathcal {S}}^{u_\varepsilon } = k\} - {\mathbb {E}}_{{\mathbb {P}}}({\mathcal {N}}^{(u)}) \nonumber \\&=\sum _{k=0}^mE[{\mathcal {N}}^{(u)}| {\mathcal {S}}^{(u)}=k]{\mathbb {P}}\otimes \mu _\varepsilon \{ {\mathcal {S}}^{u_\varepsilon } = k\} - {\mathbb {E}}_{{\mathbb {P}}}({\mathcal {N}}^{(u)})\nonumber \\&= \sum _{k=0}^mE[{\mathcal {N}}^{(u)}| {\mathcal {S}}^{(u)}=k]\big ( {\mathbb {P}}\otimes \mu _\varepsilon \{ {\mathcal {S}}^{u_\varepsilon } = k\} - {\mathbb {P}}\{ {\mathcal {S}}^{(u)} = k\} \big ) \end{aligned}$$
(4.14)

To show that the limits in the statement are well defined, it suffices to compute

$$\begin{aligned}&\lim _{\varepsilon \rightarrow 0} \frac{ {\mathbb {P}}\otimes \mu _\varepsilon \{ {\mathcal {S}}^{u_\varepsilon } = k\} - {\mathbb {P}}\{ {\mathcal {S}}^{(u)} = k\} }{\varepsilon } \\&\quad = {m \atopwithdelims ()k} \lim _{\varepsilon \rightarrow 0}\frac{(u_\varepsilon )^k(1 - u_\varepsilon )^{m-k} - u^k(1 - u)^{m-k}}{\varepsilon }\\&\quad = {m \atopwithdelims ()k} \frac{d}{du} u^k(1 - u)^{m-k} = {m \atopwithdelims ()k} \frac{k - mu}{u(1-u)} u^k(1 - u)^{m-k}. \end{aligned}$$

Combine this with (4.14) to obtain

$$\begin{aligned} \displaystyle \lim _{\varepsilon \rightarrow 0}\frac{ {\mathbb {E}}_{{\mathbb {P}}\otimes \mu _\varepsilon }( {\mathcal {N}}^{u_\varepsilon } - {\mathcal {N}}^{(u)})}{\varepsilon }&= \frac{1}{u(1-u)} \sum _{k=0}^m E[{\mathcal {N}}^{(u)} | {\mathcal {S}}^{(u)}=k] k {\mathbb {P}}\{ {\mathcal {S}}^{(u)} =k\}\nonumber \\&\quad - \frac{mu}{u(1-u)} \sum _{k=0}^m E[{\mathcal {N}}^{(u)} | {\mathcal {S}}^{(u)}=k] {\mathbb {P}}\{ {\mathcal {S}}^{(u)}=k\} \nonumber \\&= \frac{1}{u(1-u)}\Big ({\mathbb {E}}({\mathcal {N}}^{(u)}{\mathcal {S}}^{(u)}) - {\mathbb {E}}({\mathcal {N}}^{(u)}){\mathbb {E}}({\mathcal {S}}^{(u)})\Big ) \nonumber \\&= \frac{1}{u(1-u)}{{\,\mathrm{Cov}\,}}({\mathcal {N}}^{(u)}, {\mathcal {S}}^{(u)}). \end{aligned}$$
(4.15)

Identical symmetric arguments prove the remaining part of the proposition. \(\square \)
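
The only calculus used in the proof is the differentiation of the binomial mass function. A throwaway numerical check (Python sketch of ours, not part of the paper) compares a central finite difference with the closed form \(\binom{m}{k}\frac{k-mu}{u(1-u)}u^k(1-u)^{m-k}\):

```python
from math import comb

def pmf(m, k, u):
    # binomial mass function P{ S = k } for S ~ Bin(m, u)
    return comb(m, k) * u**k * (1 - u)**(m - k)

m, u, h = 12, 0.35, 1e-5
for k in range(m + 1):
    central_diff = (pmf(m, k, u + h) - pmf(m, k, u - h)) / (2 * h)
    closed_form = pmf(m, k, u) * (k - m * u) / (u * (1 - u))
    assert abs(central_diff - closed_form) < 1e-5
```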

For the rest of this section, we prove estimates on \(A_{{\mathcal {N}}^{(u)}}\) by estimating the covariance in a different way.

Fix any boundary site \(w = (w_1, w_2) \in \{ (i,0), (0, j): 1 \le i \le m, 1\le j \le n\}\). The total weight in environment \(\omega \) collected on the boundaries by a path that exits from the axes at w is

$$\begin{aligned} {\mathscr {S}}_{w} = \sum _{i=1}^{w_1} \omega _{i,0} + \sum _{j=1}^{w_2} \omega _{0, j}, \end{aligned}$$
(4.16)

where the empty sum takes the value 0. Let \({\mathscr {S}}^{(u)}\) be the above sum in environment \(\omega ^{(u)}\) and let \({\mathscr {S}}^{u_\varepsilon }\) denote the same, but in environment (4.8).

Recall that \(\xi _{e_1}\) is the rightmost exit point of any potential maximal path from the horizontal boundary, since it is the exit point of the right-most maximal path. Similarly, if \(\xi _{e_2} > 0\) then it is the down-most possible exit point. When the dependence on the parameter u is important, we put superscripts (u) to denote that.
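
All objects of this subsection are straightforward to simulate. The sketch below (Python; ours, not part of the paper) computes the passage time of the boundary model by dynamic programming: the axes are initialised with the cumulative sums as in (4.16), and a bulk weight is collected only upon a diagonal step, as dictated by the potential (1.1).

```python
import numpy as np

def passage_time(south, west, bulk):
    """Last passage time G on the rectangle [0..m] x [0..n].

    south: weights at (i,0), i=1..m;  west: weights at (0,j), j=1..n;
    bulk:  weights at (i,j), i,j >= 1.
    """
    m, n = bulk.shape
    G = np.zeros((m + 1, n + 1), dtype=int)
    G[1:, 0] = np.cumsum(south)   # boundary weights collected along the axes
    G[0, 1:] = np.cumsum(west)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            G[i, j] = max(G[i - 1, j],                           # e_1 step
                          G[i, j - 1],                           # e_2 step
                          G[i - 1, j - 1] + bulk[i - 1, j - 1])  # e_1 + e_2 step
    return G

rng = np.random.default_rng(2)
p, u, m, n = 0.3, 0.5, 20, 20
south = (rng.random(m) < u).astype(int)
west = (rng.random(n) < p * (1 - u) / (u + p * (1 - u))).astype(int)
bulk = (rng.random((m, n)) < p).astype(int)
print(passage_time(south, west, bulk)[m, n])   # one sample of G^{(u)}_{m,n}
```

Estimating \({\mathbb {E}}(\xi _{e_1})\) or the covariances in (4.4) then amounts to bookkeeping the maximizing steps in this recursion.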

Lemma 4.3

Let \(0< u_1< u_2 < 1\) and let \(\xi ^{(u_i)}\) be the corresponding right-most (resp. down-most) exit points for the maximal paths in environments coupled by common uniforms \(\eta \). Then

$$\begin{aligned} \xi ^{(u_1)}_{e_1} \le \xi ^{(u_2)}_{e_1} \text { and }\, \xi ^{(u_1)}_{e_2} \ge \xi ^{(u_2)}_{e_2}. \end{aligned}$$

Proof

Assume that in environment \(\omega ^{(u_1)}\) the maximal path exits from the vertical axis. Then, since \(u_2 > u_1\) and the weights are coupled through common uniforms, we have \(\omega ^{(u_2)}_{0,j} \le \omega ^{(u_1)}_{0,j}\) realization by realization. Assume by way of contradiction that \( \xi ^{(u_1)}_{e_2} < \xi ^{(u_2)}_{e_2}. \) Then for appropriate \( z_1, z_2 \in \{0,1\}\), which correspond to the step that the maximal path uses to enter the bulk, we have

$$\begin{aligned} G_{(1, \xi ^{(u_1)}_{e_2}+z_1),(m,n)}&\ge G_{(1, \xi ^{(u_2)}_{e_2}+z_2),(m,n)} + {\mathscr {S}}^{(u_1)}_{\xi ^{(u_2)}_{e_2}}- {\mathscr {S}}^{(u_1)}_{\xi ^{(u_1)}_{e_2}}\\&\ge G_{(1, \xi ^{(u_2)}_{e_2}+z_2),(m,n)} + {\mathscr {S}}^{(u_2)}_{\xi ^{(u_2)}_{e_2}}- {\mathscr {S}}^{(u_2)}_{\xi ^{(u_1)}_{e_2}}, \end{aligned}$$

giving

$$\begin{aligned} G_{(1, \xi ^{(u_1)}_{e_2}+z_1),(m,n)} + {\mathscr {S}}^{(u_2)}_{\xi ^{(u_1)}_{e_2}} \ge G_{(1, \xi ^{(u_2)}_{e_2}+z_2),(m,n)} + {\mathscr {S}}^{(u_2)}_{\xi ^{(u_2)}_{e_2}} = G^{(u_2)}_{m,n}, \end{aligned}$$

which cannot be true because \(\xi ^{(u_2)}_{e_2}\) is the down-most exit point in \(\omega ^{(u_2)}\). The proof for a maximal path exiting the horizontal axis is similar. \(\square \)

Remark 4.4

The proof of Lemma 4.3 only depends on weight modifications of a single boundary axis. We did not use the fact that the modification also made the horizontal weights more favorable, just that it made the vertical ones less so.

Lemma 4.5

Let \(\xi \) be the exit point of the maximal path in environment \(\omega ^{(u)}\). Let \({\mathcal {N}}^{u_\varepsilon }\) denote the last passage increment in environment (4.8) of the north boundary and \({\mathscr {S}}_{\xi _{e_1}}^{u_\varepsilon }\) the weight collected on the horizontal axis in the same environment, but only up to the exit point of the maximal path in environment \(\omega ^{(u)}\). \({\mathcal {N}}^{(u)}\), \({\mathscr {S}}_{\xi _{e_1}}^{(u)}\) are the same quantities in environment \(\omega ^{(u)}\). Then

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {P}}\otimes \mu _\varepsilon }\bigg ({\mathscr {S}}_{\xi _{e_1}}^{u_\varepsilon } - {\mathscr {S}}^{(u)}_{\xi _{e_1}}\bigg )\le & {} {\mathbb {E}}_{{\mathbb {P}}\otimes \mu _\varepsilon }({\mathcal {N}}^{u_\varepsilon } - {\mathcal {N}}^{(u)} ) \le {\mathbb {E}}_{{\mathbb {P}}\otimes \mu _\varepsilon }\bigg ({\mathscr {S}}_{\xi _{e_1}}^{u_\varepsilon } - {\mathscr {S}}^{(u)}_{\xi _{e_1}}\bigg ) \nonumber \\&+\, C(m,u,p) \varepsilon ^{4/3}. \end{aligned}$$
(4.17)

Similarly, in environments (4.10) and \(\omega ^{(u)}\),

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {P}}\otimes \nu _\varepsilon }\bigg ({\mathscr {S}}_{\xi _{e_2}}^{u_\varepsilon } - {\mathscr {S}}^{(u)}_{\xi _{e_2}}\bigg )\le & {} {\mathbb {E}}_{{\mathbb {P}}\otimes \nu _\varepsilon }({\mathcal {E}}^{u_\varepsilon } - {\mathcal {E}}^{(u)} ) \le {\mathbb {E}}_{{\mathbb {P}}\otimes \nu _\varepsilon }\bigg ({\mathscr {S}}_{\xi _{e_2}}^{u_\varepsilon } - {\mathscr {S}}^{(u)}_{\xi _{e_2}}\bigg ) \nonumber \\&+\, C(n,u,p) \varepsilon ^{4/3}. \end{aligned}$$
(4.18)

Proof

We only prove (4.18) as symmetric arguments work for (4.17). Modify the weights on the vertical axis and create environment \({\mathcal {W}}^{u_\varepsilon }\) given by (4.10).

On the event \(\{ \xi ^{u_{\varepsilon }} = \xi \}\) we have for an appropriate step \(z \in {\mathcal {R}}\) that

$$\begin{aligned} {\mathcal {E}}^{u_\varepsilon } - {\mathcal {E}}^{(u)}&= G^{ {\mathcal {W}}^{u_\varepsilon }}_{m,n} - G^{(u)}_{m,n} = {\mathscr {S}}^{u_\varepsilon }_{\xi } + G_{(\xi +z), (m,n)} - {\mathscr {S}}^{(u)}_{\xi } - G_{(\xi + z), (m,n)} \nonumber \\&= {\mathscr {S}}^{u_\varepsilon }_{\xi } - {\mathscr {S}}^{(u)}_{\xi } = {\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}} - {\mathscr {S}}^{(u)}_{\xi _{e_2}}. \end{aligned}$$
(4.19)

The step z can be taken the same for both environments, since the bulk weights remain the same, and the exit point is the same.

Now consider the event \(\{ \xi ^{u_{\varepsilon }} \ne \xi \}\). We only modify weights on the vertical axis; therefore the exit point \(\xi \) of the original maximal path can differ from \(\xi ^{u_\varepsilon }\) only if the maximal path exits from the vertical axis, i.e. \(\xi = \xi _{e_2}\).

One of two things may happen:

(1) \(\xi ^{u_\varepsilon } \ne \xi \) and \({\mathscr {S}}^{u_\varepsilon }_{\xi ^{u_\varepsilon }} + G_{ (\xi ^{u_\varepsilon } + z),(m,n)} > {\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}} + G_{(1, \xi _{e_2}+z), (m,n)}\), or

(2) \(\xi ^{u_\varepsilon } \ne \xi \) and \({\mathscr {S}}^{u_\varepsilon }_{\xi ^{u_\varepsilon }} + G_{(\xi ^{u_\varepsilon }_{e_2} +z) ,(m,n)} = {\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}} + G_{(1, \xi _{e_2}+z), (m,n)}\).

We use these cases to define two auxiliary events:

$$\begin{aligned} {\mathcal {A}}_{1}&= \big \{ \xi ^{u_\varepsilon } \ne \xi \text { and } {\mathscr {S}}^{u_\varepsilon }_{\xi ^{u_\varepsilon }} + G_{(\xi ^{u_\varepsilon }+z),(m,n)} > {\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}} + G_{(1, \xi _{e_2}), (m,n)} \big \},\\ {\mathcal {A}}_{2}&= \big \{ \xi ^{u_\varepsilon } \ne \xi \text { and } {\mathscr {S}}^{u_\varepsilon }_{\xi ^{u_\varepsilon }} + G_{(\xi ^{u_\varepsilon }+z),(m,n)} = {\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}} + G_{(1, \xi _{e_2}), (m,n)} \big \} \end{aligned}$$

and note that \( \{ \xi ^{u_{\varepsilon }} \ne \xi \} = {\mathcal {A}}_{1}\cup {\mathcal {A}}_{2}\). On \({\mathcal {A}}_2\) we can bound

$$\begin{aligned} {\mathcal {E}}^{u_\varepsilon } - {\mathcal {E}}^{(u)}&= G^{ {\mathcal {W}}^{u_\varepsilon }}_{m,n} - G^{(u)}_{m,n} = {\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}} + G_{(1, \xi _{e_2}+ z), (m,n)} - {\mathscr {S}}^{(u)}_{\xi _{e_2}} - G_{(1, \xi _{e_2}+ z), (m,n)}\nonumber \\&= {\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}} - {\mathscr {S}}^{(u)}_{\xi _{e_2}} . \end{aligned}$$
(4.20)

Then we estimate

$$\begin{aligned} {\mathcal {E}}^{u_\varepsilon } - {\mathcal {E}}^{(u)}&= ({\mathcal {E}}^{u_\varepsilon }-{\mathcal {E}}^{(u)} )\cdot \mathbb {1} \{\xi ^{u_\varepsilon }= \xi \}+({\mathcal {E}}^{u_\varepsilon }-{\mathcal {E}}^{(u)} )\cdot \mathbb {1}\{\xi ^{u_\varepsilon }\ne \xi \}\nonumber \\&=\big ({\mathscr {S}}_\xi ^{u_\varepsilon }-{\mathscr {S}}^{(u)}_\xi \big )\cdot \mathbb {1}\{\xi ^{u_\varepsilon }= \xi \}+({\mathcal {E}}^{u_\varepsilon }-{\mathcal {E}}^{(u)} )\cdot (\mathbb {1}_{{\mathcal {A}}_1} + \mathbb {1}_{{\mathcal {A}}_2} ) \end{aligned}$$
(4.21)
$$\begin{aligned}&= \big ({\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}}-{\mathscr {S}}^{(u)}_{\xi _{e_2}}\big )\cdot \big (\mathbb {1}\{\xi ^{u_\varepsilon }= \xi \}+ \mathbb {1}_{{\mathcal {A}}_2}\big )+({\mathcal {E}}^{u_\varepsilon }-{\mathcal {E}}^{(u)} )\cdot \mathbb {1}_{{\mathcal {A}}_1} \end{aligned}$$
(4.22)

The last line follows from (4.19) and (4.20). We bound expression (4.22) above and below.

For a lower bound, add and subtract in (4.22) the term \(({\mathscr {S}}_{\xi _{e_2}}^{u_\varepsilon }-{\mathscr {S}}^{(u)}_{\xi _{e_2}})\cdot \mathbb {1}_{{\mathcal {A}}_1}\). Then (4.22) becomes

$$\begin{aligned} {\mathcal {E}}^{u_\varepsilon } - {\mathcal {E}}^{(u)}&= {\mathscr {S}}_{\xi _{e_2}}^{u_\varepsilon }-{\mathscr {S}}^{(u)}_{\xi _{e_2}}+\bigg ({\mathcal {E}}^{u_\varepsilon }-{\mathcal {E}}^{(u)} - \bigg ({\mathscr {S}}_{\xi _{e_2}}^{u_\varepsilon }-{\mathscr {S}}^{(u)}_{\xi _{e_2}}\bigg )\bigg )\cdot \mathbb {1}_{{\mathcal {A}}_1} \nonumber \\&= {\mathscr {S}}_{\xi _{e_2}}^{u_\varepsilon }-{\mathscr {S}}^{(u)}_{\xi _{e_2}}+\bigg (G^{ {\mathcal {W}}^{u_\varepsilon }}_{m,n} - G^{(u)}_{m,n} - \bigg ({\mathscr {S}}_{\xi _{e_2}}^{u_\varepsilon }-{\mathscr {S}}^{(u)}_{\xi _{e_2}}\bigg )\bigg )\cdot \mathbb {1}_{{\mathcal {A}}_1} \nonumber \\&\ge {\mathscr {S}}_{\xi _{e_2}}^{u_\varepsilon }-{\mathscr {S}}^{(u)}_{\xi _{e_2}} + \bigg ( G_{(1, \xi _{e_2}+z), (m,n)} - \bigg (G^{(u)}_{m,n} - {\mathscr {S}}^{(u)}_{\xi _{e_2}}\bigg )\bigg )\cdot \mathbb {1}_{{\mathcal {A}}_1} \nonumber \\&= {\mathscr {S}}_{\xi _{e_2}}^{u_\varepsilon }-{\mathscr {S}}^{(u)}_{\xi _{e_2}}. \end{aligned}$$
(4.23)

Take expected values in (4.23) to obtain the left inequality of (4.18). For the last equality in (4.23) we used that \(G^{(u)}_{m,n} = {\mathscr {S}}^{(u)}_{\xi _{e_2}} + G_{(1, \xi _{e_2}+z), (m,n)}\), since \(\xi _{e_2}\) is the exit point of a maximal path in \(\omega ^{(u)}\).

It remains to establish the second inequality in (4.18). For an upper bound, start again from (4.22) and first note that

$$\begin{aligned} {\mathcal {E}}^{u_\varepsilon } - {\mathcal {E}}^{(u)} = G^{{\mathcal {W}}^{u_\varepsilon }}_{m,n} - G^{(u)}_{m,n} \le 0. \end{aligned}$$
(4.24)

Then (4.22) is bounded by

$$\begin{aligned} {\mathcal {E}}^{u_\varepsilon } - {\mathcal {E}}^{(u)}&\le \bigg ({\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}}-{\mathscr {S}}^{(u)}_{\xi _{e_2}}\bigg )\cdot (\mathbb {1}\{\xi ^{u_\varepsilon }= \xi \}+ \mathbb {1}_{{\mathcal {A}}_2})\nonumber \\&= \bigg ({\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}}-{\mathscr {S}}^{(u)}_{\xi _{e_2}}\bigg ) + \bigg ({\mathscr {S}}^{(u)}_{\xi _{e_2}}-{\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}}\bigg ) \cdot \mathbb {1}_{{\mathcal {A}}_1}. \end{aligned}$$
(4.25)

To bound the second term above, we use Hölder’s inequality with exponents \(p = 3, q = 3/2\) to obtain

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {P}}\otimes \nu _\varepsilon }\bigg (\Big ({\mathscr {S}}^{(u)}_{\xi _{e_2}}-{\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}}\Big )\mathbb {1}_{{\mathcal {A}}_1}\bigg )\le {\mathbb {E}}_{{\mathbb {P}}\otimes \nu _\varepsilon }\bigg (\Big ({\mathscr {S}}^{(u)}_{\xi _{e_2}}-{\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}}\Big )^3\bigg )^{1/3}({{\mathbb {P}}\otimes \nu _\varepsilon }\{ {\mathcal {A}}_1\})^{2/3}. \end{aligned}$$
(4.26)

The first expectation on the right is bounded above by \(C(u,p)n\): indeed, \(0 \le {\mathscr {S}}^{(u)}_{\xi _{e_2}}-{\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}} \le {\mathcal {W}}^{(u)}\), and \({\mathcal {W}}^{(u)}\) is a sum of n i.i.d. Bernoulli random variables. We now bound the probability. Define the vector

$$\begin{aligned} \mathbf k = {\left\{ \begin{array}{ll} (0, k), &{}\quad 1 \le k \le n, \\ (-k, 0), &{}\quad -1 \ge k \ge -m. \end{array}\right. } \end{aligned}$$

Consider the equality of events for the appropriate steps \(z_1, z_2\) that the path uses to enter the bulk:

$$\begin{aligned} {\mathcal {A}}_{1}&= \Big \{ {\mathscr {S}}^{u_\varepsilon }_\mathbf{k} + G_{(\mathbf{k} + z_1),(m,n)}> {\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}} + G_{(\xi _{e_2} + z_2), (m,n)} \quad \text { for some }\quad \mathbf{k} \ne \xi _{e_2} \Big \}\\&= \Big \{ {\mathscr {S}}^{u_\varepsilon }_\mathbf{k} - {\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}}> G_{ (\xi _{e_2} + z_2), (m,n)} -G_{(\mathbf{k}+z_1),(m,n)}\quad \text { for some }\quad \mathbf{k} \ne \xi _{e_2} \Big \}\\&= \Big \{ {\mathscr {S}}^{u_\varepsilon }_\mathbf{k} - {\mathscr {S}}^{u_\varepsilon }_{\xi _{e_2}} > G_{(\xi _{e_2} +z_2), (m,n)} -G_{(\mathbf{k} +z_1),(m,n)} \ge {\mathscr {S}}^{(u)}_\mathbf{k} - {\mathscr {S}}^{(u)}_{\xi _{e_2}} \quad \text { for some }\quad \mathbf{k} \ne \xi _{e_2} \Big \}. \end{aligned}$$

Coupling (4.11) implies that the events above are empty when \(\mathbf{k} > \xi _{e_2}\). Therefore, consider the case \(\mathbf{k} < \xi _{e_2}\) on the vertical axis, or \(\mathbf{k} = (-k, 0)\) on the horizontal axis. In either case, since \(\xi _{e_2}\) is the down-most possible exit point, the second inequality in the event above can be taken strict as well. Therefore

$$\begin{aligned} {\mathcal {A}}_1 \subseteq \bigcup _{0 \le i \le n } \bigcup _{\mathbf{k}: k < i} \Big \{ {\mathscr {S}}^{u_\varepsilon }_\mathbf{k} - {\mathscr {S}}^{u_\varepsilon }_{i} > G_{(\xi _{e_2} +z_2), (m,n)} -G_{(\mathbf{k} +z_1),(m,n)} \ge {\mathscr {S}}^{(u)}_\mathbf{k} - {\mathscr {S}}^{(u)}_{i}\Big \}. \end{aligned}$$

From the strict inequalities in the event and the fact that these random variables are integer-valued, we see that on this event \({\mathscr {S}}^{u_\varepsilon }_\mathbf{k} - {\mathscr {S}}^{u_\varepsilon }_{i} - {\mathscr {S}}^{(u)}_\mathbf{k} + {\mathscr {S}}^{(u)}_{i} \ge 2\). If \(k > 0\),

$$\begin{aligned} 2&\le {\mathscr {S}}^{u_\varepsilon }_\mathbf{k} - {\mathscr {S}}^{u_\varepsilon }_{i} - {\mathscr {S}}^{(u)}_\mathbf{k} + {\mathscr {S}}^{(u)}_{i} = - \sum _{j=k+1}^i \omega ^{u_\varepsilon }_{0,j} + \sum _{j=k+1}^i \omega ^{(u)}_{0,j}\nonumber \\&= \sum _{j=k+1}^i \Big ( \omega ^{(u)}_{0,j} - \omega ^{u_\varepsilon }_{0,j} \Big ) \quad \text {by }(4.11)\nonumber \\&\le \sum _{j=1}^n \Big ( \omega ^{(u)}_{0,j} - \omega ^{u_\varepsilon }_{0,j} \Big ) = \sum _{j=1}^n \omega ^{(u)}_{0,j}\big (1 - {\mathscr {V}}^{(\varepsilon )}_j \big ) = {\mathscr {W}}_\varepsilon . \end{aligned}$$
(4.27)

\({\mathscr {W}}_\varepsilon \) is defined by the last equality above. Similarly, if \(k < 0\), \( {\mathscr {S}}^{u_\varepsilon }_\mathbf{k} - {\mathscr {S}}^{u_\varepsilon }_{i} - {\mathscr {S}}^{(u)}_\mathbf{k} + {\mathscr {S}}^{(u)}_{i} = - {\mathscr {S}}^{u_\varepsilon }_{i} + {\mathscr {S}}^{(u)}_{i} \) and we have

$$\begin{aligned} \big \{{\mathscr {S}}^{(u)}_{i} - {\mathscr {S}}^{u_\varepsilon }_{i} \ge 2 \big \} \subseteq \{ {\mathcal {W}}^{(u)} - {\mathcal {W}}^{u_\varepsilon } \ge 2\} = \{{\mathscr {W}}_\varepsilon \ge 2 \}. \end{aligned}$$

Therefore we have just shown that \({\mathcal {A}}_1 \subseteq \{ {\mathscr {W}}_\varepsilon \ge 2 \}\).

The event \(\{ {\mathscr {W}}_\varepsilon \ge 2 \}\) holds if at least 2 indices j satisfy \( \omega ^{(u)}_{0,j}\big (1 - {\mathscr {V}}^{(\varepsilon )}_j \big ) = 1\). By definition (4.27), under \({\mathbb {P}}\otimes \nu _\varepsilon \) the random variable \({\mathscr {W}}_\varepsilon \) is binomially distributed with success probability of order \(C\varepsilon \); therefore, in order to have at least two successes,

$$\begin{aligned} {\mathbb {P}}\otimes \nu _\varepsilon \{ {\mathcal {A}}_1\} \le {{\mathbb {P}}\otimes \nu _\varepsilon }\{ {\mathscr {W}}_\varepsilon \ge 2 \} \le {\tilde{C}}(n, u) \varepsilon ^2. \end{aligned}$$
(4.28)

Combine (4.25), (4.26) and (4.28) to conclude

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {P}}\otimes \nu _\varepsilon }({\mathcal {E}}^{u_\varepsilon } - {\mathcal {E}}^{(u)} ) \le {\mathbb {E}}_{{\mathbb {P}}\otimes \nu _\varepsilon }\Big ({\mathscr {S}}_{\xi _{e_2}}^{u_\varepsilon } - {\mathscr {S}}^{(u)}_{\xi _{e_2}}\Big ) + C(n,u) \varepsilon ^{4/3}. \end{aligned}$$
(4.29)

\(\square \)

Lemma 4.6

Let \(0< u < 1\). Then,

$$\begin{aligned} A_{{\mathcal {N}}^{(u)}} \le \frac{{\mathbb {E}}\big (\xi ^{(u)}_{e_1}\big )}{1-u}, \quad \text { and } \quad A_{{\mathcal {E}}^{(u)}}\ge - \frac{p(1+u(1-p))}{(u+p(1-u))^2}{\mathbb {E}}(\xi ^{(u)}_{e_2}) \end{aligned}$$
(4.30)

Proof

We bound the first term. Compute

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {P}}\otimes \mu _\varepsilon }\Bigg ({\mathscr {S}}_{\xi ^{(u)}_{e_1}}^{u_\varepsilon } - {\mathscr {S}}^{(u)}_{\xi ^{(u)}_{e_1}}\Bigg )&=\sum _{y = 1}^m {\mathbb {E}}\Big [{\mathscr {S}}_{y}^{u_\varepsilon } - {\mathscr {S}}^{(u)}_{y}\Big | \xi ^{(u)}_{e_1} = y\Big ]{\mathbb {P}}\big \{ \xi ^{(u)}_{e_1} = y\big \}\nonumber \\&\le \sum _{y = 1}^m {\mathbb {E}}\Big [\sum _{i=1}^y {\mathscr {H}}_i^{(\varepsilon )} \Big | \xi ^{(u)}_{e_1} = y\Big ]{\mathbb {P}}\big \{ \xi ^{(u)}_{e_1} = y\big \}, \text { from } (4.7),\nonumber \\&=\sum _{y = 1}^m {\mathbb {E}}_{\mu _\varepsilon }\Big [\sum _{i=1}^y {\mathscr {H}}_i^{(\varepsilon )} \Big ]{\mathbb {P}}\big \{ \xi ^{(u)}_{e_1} = y\big \}, \text { since } {\mathscr {H}}_i, \omega ^{(u)}, \text { independent,}\nonumber \\&= \varepsilon \frac{{\mathbb {E}}\big (\xi ^{(u)}_{e_1}\big )}{1-u}. \end{aligned}$$
(4.31)

Now substitute in the right inequality of (4.17), divide through by \(\varepsilon \) and take the limit as \(\varepsilon \rightarrow 0\) to obtain

$$\begin{aligned} A_{{\mathcal {N}}^{(u)}} {\mathop {=}\limits ^{\text {Prop.}\,4.2}} \lim _{\varepsilon \rightarrow 0} \frac{{\mathbb {E}}_{{\mathbb {P}}\otimes \mu _\varepsilon }({\mathcal {N}}^{u_\varepsilon } - {\mathcal {N}}^{(u)})}{\varepsilon } {\mathop {\le }\limits ^{\text {Eqs.}~(4.17), (4.31)}} \frac{{\mathbb {E}}\big (\xi ^{(u)}_{e_1}\big )}{1-u}. \end{aligned}$$

For the second bound, start from the left inequality of (4.18) and write

$$\begin{aligned}&{\mathbb {E}}_{{\mathbb {P}}\otimes \nu _\varepsilon }({\mathcal {E}}^{u_\varepsilon } - {\mathcal {E}}^{(u)}) \ge {\mathbb {E}}_{{\mathbb {P}}\otimes \nu _\varepsilon }\Bigg ({\mathscr {S}}^{u_\varepsilon }_{\xi ^{(u)}_{e_2}} - {\mathscr {S}}^{(u)}_{\xi ^{(u)}_{e_2}}\Bigg ) \\&\quad ={\mathbb {E}}_{{\mathbb {P}}\otimes \nu _\varepsilon }\left( \sum _{j=1}^{\xi ^{(u)}_{e_2}} \omega ^{u_\varepsilon }_{0,j} -\omega ^{(u)}_{0,j}\right) = - {\mathbb {E}}_{{\mathbb {P}}\otimes \nu _\varepsilon }\left( \sum _{j=1}^{\xi ^{(u)}_{e_2}} \omega ^{(u)}_{0,j}\bigg ( 1- {\mathscr {V}}^{(\varepsilon )}_j\bigg )\right) \\&\quad = - \sum _{k=1}^{n} \sum _{j=1}^{k} {\mathbb {E}}_{{\mathbb {P}}\otimes \nu _\varepsilon }\bigg (\omega ^{(u)}_{0,j}\bigg ( 1- {\mathscr {V}}^{(\varepsilon )}_j\bigg ) \mathbb {1}\big \{ \xi _{e_2}^{(u)} = k \big \}\bigg ) \\&\quad = - \sum _{k=1}^{n} \sum _{j=1}^{k} {\mathbb {E}}_{{\mathbb {P}}}\bigg (\omega ^{(u)}_{0,j}\mathbb {1}\big \{ \xi _{e_2}^{(u)} = k \big \}\bigg ) {\mathbb {E}}_{\nu _\varepsilon }\bigg (1- {\mathscr {V}}^{(\varepsilon )}_j\bigg )\\&\quad \ge - \sum _{k=1}^{n} \sum _{j=1}^{k} {\mathbb {P}}\big \{ \xi _{e_2}^{(u)} = k \big \} {\mathbb {E}}_{\nu _\varepsilon }\bigg (1- {\mathscr {V}}^{(\varepsilon )}_j\bigg ) \\&\quad = - \varepsilon {\mathbb {E}}_{{\mathbb {P}}}\big ( \xi _{e_2}^{(u)}\big ) \cdot \frac{1+u(1-p)}{(1-u)(u + p(1-u) +(1-p)\varepsilon )}. \end{aligned}$$

Divide both sides of the inequality by \(\varepsilon \) and let \(\varepsilon \) tend to 0. \(\square \)

4.2 Upper Bound

In this section we prove the upper bound in Theorem 2.3. We begin with two comparison lemmas: one compares the two functions \(A_{{\mathcal {N}}^{(u)}}, A_{{\mathcal {E}}^{(u)}}\) as the parameter u varies, and the other compares variances in environments with different parameters.

Lemma 4.7

Pick two parameters \(0< u_1< u_2 < 1\). Then

$$\begin{aligned} A_{{\mathcal {N}}^{(u_1)}} \le A_{{\mathcal {N}}^{(u_2)}} + m(u_2 - u_1). \end{aligned}$$
(4.32)

Proof of Lemma 4.7

Fix an \(\varepsilon > 0\) small enough so that \(u_1 + \varepsilon < u_2\) and \(u_2 + \varepsilon < 1\). This is not a restriction as we will let \(\varepsilon \) tend to 0 at the end of the proof. We use a common realization of the Bernoulli variables \({\mathscr {H}}^{(\varepsilon )}_i\) and we couple the weights in the \(\omega ^{(u_2)}\) and \(\omega ^{(u_1)}\) environments using common uniforms \(\eta = \{\eta _{i,j}\}\) (with law \({\mathbb {P}}_\eta \)), independent of the \({\mathscr {H}}^{(\varepsilon )}_i\). We then use both bounds from (4.17).

$$\begin{aligned}&{\mathbb {E}}_{\mu _\varepsilon \otimes {\mathbb {P}}_\eta }({\mathcal {N}}^{u_2+\varepsilon }- {\mathcal {N}}^{(u_2)})-{\mathbb {E}}_{\mu _\varepsilon \otimes {\mathbb {P}}_\eta }({\mathcal {N}}^{u_1+\varepsilon } - {\mathcal {N}}^{(u_1)})\\&\quad \ge {\mathbb {E}}_{\mu _\varepsilon \otimes {\mathbb {P}}_\eta }\Bigg ({\mathscr {S}}^{u_2+\varepsilon }_{\xi ^{(u_2)}_{e_1}} - {\mathscr {S}}^{(u_2)}_{\xi ^{(u_2)}_{e_1}}\Bigg )-{\mathbb {E}}_{\mu _\varepsilon \otimes {\mathbb {P}}_\eta }\Bigg ({\mathscr {S}}^{u_1+\varepsilon }_{\xi ^{(u_1)}_{e_1}} - {\mathscr {S}}^{(u_1)}_{\xi ^{(u_1)}_{e_1}}\Bigg ) - o(\varepsilon )\\&\quad ={\mathbb {E}}_{\mu _\varepsilon \otimes {\mathbb {P}}_\eta }\left( \sum _{i=1}^{\xi ^{(u_2)}_{e_1}}\mathbb {1}\big \{ {\mathscr {H}}_i^{(\varepsilon )}=1\big \}\mathbb {1}\{ \eta _{i,0}>u_2\}\right) \\&\qquad -{\mathbb {E}}_{\mu _\varepsilon \otimes {\mathbb {P}}_\eta }\Big (\sum _{i=1}^{\xi ^{(u_1)}_{e_1}}\mathbb {1}\big \{ {\mathscr {H}}_i^{(\varepsilon )}=1\big \}\mathbb {1}\{ \eta _{i,0}>u_1\}\Big )- o(\varepsilon )\\&\quad \ge {\mathbb {E}}_{\mu _\varepsilon \otimes {\mathbb {P}}_\eta }\Big (\sum _{i=1}^{\xi ^{(u_1)}_{e_1}}\mathbb {1}\big \{ {\mathscr {H}}_i^{(\varepsilon )}=1\big \}\Big (\mathbb {1}\{ \eta _{i,0}>u_2\} - \mathbb {1}\{ \eta _{i,0}>u_1\}\Big )\Big )- o(\varepsilon )\\&\quad \ge - m {\mathbb {E}}_{\mu _\varepsilon \otimes {\mathbb {P}}_\eta }\Big (\mathbb {1}\big \{ {\mathscr {H}}_i^{(\varepsilon )}=1\big \}\Big (\mathbb {1}\{ \eta _{i,0}>u_1\} - \mathbb {1}\{ \eta _{i,0}>u_2\}\Big )\Big )- o(\varepsilon )\\&\quad =-m\varepsilon (u_2 - u_1) - o(\varepsilon ). \end{aligned}$$

Divide by \(\varepsilon \), let \(\varepsilon \rightarrow 0\) and use Proposition 4.2 to get the result. \(\square \)

Lemma 4.8

(Variance comparison) Fix \(\delta _0 >0\) and parameters u, r so that \(p< p +\delta _0< u< r <1\). Then, there exists a constant \(C = C(\delta _0, p) > 0\) so that for all admissible values of u and r we have

$$\begin{aligned} \frac{{{\,\mathrm{Var}\,}}\big (G_{m,n}^{(u)}\big )}{u(1-u)} \le \frac{{{\,\mathrm{Var}\,}}\big (G_{m,n}^{(r)}\big )}{r(1-r)}+ C(m+n)(r-u). \end{aligned}$$
(4.33)

Proof

Begin from Eq. (4.5), and bound

$$\begin{aligned} \frac{{{\,\mathrm{Var}\,}}\big (G_{m,n}^{(u)}\big )}{u(1-u)}&= n\frac{p}{[u+p(1-u)]^2}-m+2A_{{\mathcal {N}}^{(u)}}\\&= n\frac{p}{[r+p(1-r)]^2}-m+2A_{{\mathcal {N}}^{(u)}} + np\left( \frac{1}{[u+p(1-u)]^2} -\frac{1}{[r+p(1-r)]^2}\right) \\&\le \frac{{{\,\mathrm{Var}\,}}\big (G_{m,n}^{(r)}\big )}{r(1-r) }+ np\left( \frac{1}{[u+p(1-u)]^2} -\frac{1}{[r+p(1-r)]^2}\right) + 2m(r-u)\\&\le \frac{{{\,\mathrm{Var}\,}}\big (G_{m,n}^{(r)}\big )}{r(1-r) }+ 2np(1-p)\frac{(r- u) }{[u+p(1-u)]^3} + 2m(r-u)\\&\le \frac{{{\,\mathrm{Var}\,}}\big (G_{m,n}^{(r)}\big )}{r(1-r) }+ 2n\frac{p(1-p)}{\delta _0^3}(r-u) + 2m(r-u). \end{aligned}$$

In the third line from the top we used Lemma 4.7. Set \(C = 2\frac{p(1-p)}{\delta _0^3}\vee 2\) to finish the proof. \(\square \)

From this point onwards we proceed by a perturbative argument. We introduce the scaling parameter N, which will eventually go to \(\infty \), and the characteristic shape of the rectangle, given the boundary parameter. We will need to use the previous lemma, so we fix a \(\delta _0 > 0\) with \(p + \delta _0< \lambda < 1\), and we choose a parameter \(u = u(N,b,v) < \lambda \) so that

$$\begin{aligned} \lambda -u = b\frac{v}{N}. \end{aligned}$$

At this point v is free, but b is a constant chosen so that \( p+\delta _0< u< \lambda \). The north-east endpoint of the rectangle with boundary parameter \(\lambda \) is \((m_\lambda (N), n_{\lambda }(N))\), the microscopic characteristic direction corresponding to \(\lambda \) defined in (2.7).

The quantities \(G_{(1,\xi _{e_2}), (m,n)},\xi _{e_2}\) and \(G_{m,n}\) connected to these indices are denoted by \(G_{(1,\xi _{e_2}), (m,n)}(N),\xi _{e_2}(N),G_{m,n}(N)\). In the proof we need to consider different boundary conditions, and this will be indicated by a superscript. When the superscript u is used, the reader should remember that it signifies a change of the boundary conditions and not of the endpoint \((m_{\lambda }(N), n_{\lambda }(N))\), which is always defined by (2.7) for the fixed \(\lambda \).

Since the weights \(\{\omega _{i,j}\}_{i,j\ge 1}\) in the interior are not affected by changes in boundary conditions, neither is the passage time \(G_{(z,1), (m,n)}(N)\), for any \(z < m_{\lambda }(N)\).

Proposition 4.9

Fix \( \lambda \in (0,1)\). Then, there exists a constant \(K = K(\lambda , p)> 0\) so that for any \( b < K \), and N sufficiently large

$$\begin{aligned} {\mathbb {P}}\{\xi _{e_2}^{(\lambda )}(N)>v\}\le C \frac{N^2}{bv^3}\Bigg ( \frac{{\mathbb {E}}(\xi ^{(\lambda )}_{e_2})}{bv}+1\Bigg ), \end{aligned}$$
(4.34)

for all \(v \ge 1\).

Proof

We use an auxiliary parameter \(u < \lambda \) so that

$$\begin{aligned} u = \lambda - b v N^{-1} > 0. \end{aligned}$$

The constant b is under our control. We abbreviate \((m_{\lambda }(N), n_{\lambda }(N)) = \mathbf{t}_N(\lambda )\). Whenever we use auxiliary parameters we mention it explicitly, to alert the reader that the environments are coupled through common realizations of uniform random variables \(\eta \). The measure used in all computations is the background measure \({\mathbb {P}}_\eta \), but to keep the notation simple we omit the subscript \(\eta \).

Since \(G^{(u)}_{\mathbf{t}_N(\lambda )}(N)\) is the maximum over all admissible paths, it dominates the weight of any path that exits the vertical axis at z:

$$\begin{aligned} {\mathscr {S}}^{(u)}_z+G_{(1,z),\mathbf{t}_N(\lambda )}(N)\le G^{(u)}_{\mathbf{t}_N(\lambda )}(N) \end{aligned}$$

for all \(1\le z\le n_{\lambda }(N)\) and all parameters \(p+\delta _0<u< \lambda <1\). Consequently, for integers \(v\ge 0\),

$$\begin{aligned} {\mathbb {P}}\big \{\xi _{e_2}^{(\lambda )}(N)>v\big \}&={\mathbb {P}}\Big \{\exists \, z>v:{\mathscr {S}}^{(\lambda )}_z+G_{(1,z),\mathbf{t}_N(\lambda )}(N)= G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)\Big \}\nonumber \\&\le {\mathbb {P}}\Big \{\exists \,z>v:{\mathscr {S}}^{(\lambda )}_z-{\mathscr {S}}^{(u)}_z+G_{\mathbf{t}_N(\lambda )}^{(u)}(N)\ge G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)\Big \}\nonumber \\&={\mathbb {P}}\Big \{\exists \, z>v:{\mathscr {S}}^{(\lambda )}_z- {\mathscr {S}}^{(u)}_z +G^{(u)}_{\mathbf{t}_N(\lambda )}(N)- G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)\ge 0\Big \}\nonumber \\&\le {\mathbb {P}}\Big \{{\mathscr {S}}^{(\lambda )}_v-{\mathscr {S}}^{(u)}_v+G^{(u)}_{\mathbf{t}_N(\lambda )}(N)- G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)\ge 0\Big \}. \end{aligned}$$
(4.35)

The last line above follows from the fact that \(u < \lambda \), which implies that \({\mathscr {S}}^{(\lambda )}_k-{\mathscr {S}}^{(u)}_k\) is non-positive and decreasing in k when the weights are coupled through common uniforms. The remaining part of the proof goes into bounding the last probability above. For any \(\alpha \in {\mathbb {R}}\) we further bound

$$\begin{aligned}&{\mathbb {P}}\{\xi _{e_2}^{(\lambda )}(N)>v\}\le {\mathbb {P}}\{{\mathscr {S}}^{(\lambda )}_v-{\mathscr {S}}^{(u)}_v\ge -\alpha \} \end{aligned}$$
(4.36)
$$\begin{aligned}&\quad +{\mathbb {P}}\{G^{(u)}_{\mathbf{t}_N(\lambda )}(N)- G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)\ge \alpha \}. \end{aligned}$$
(4.37)

We treat (4.36) and (4.37) separately for

$$\begin{aligned} \alpha =-{\mathbb {E}}[{\mathscr {S}}^{(\lambda )}_v-{\mathscr {S}}^{(u)}_v]-C_0\frac{v^2}{N} \end{aligned}$$
(4.38)

where \(C_0>0\). Restrictions on \(C_0\) will be enforced in the course of the proof.

Probability (4.36): The quantity \({\mathscr {S}}^{(\lambda )}_v-{\mathscr {S}}^{(u)}_v\) is a sum of i.i.d. random variables, so we simply bound (4.36) using Chebyshev's inequality. The variance is estimated by

$$\begin{aligned} {{\,\mathrm{Var}\,}}({\mathscr {S}}_v^{(\lambda )}-{\mathscr {S}}^{(u)}_v)&= \sum _{j= 1}^v {{\,\mathrm{Var}\,}}\big (\omega ^{(\lambda )}_{0,j} - \omega ^{(u)}_{0,j} \big ) \le c_{p, \lambda } v(\lambda -u) = c_{p, \lambda }\frac{b v^2}{N}. \end{aligned}$$

Then by Chebyshev’s inequality we obtain

$$\begin{aligned} {\mathbb {P}}\Big \{{\mathscr {S}}^{(\lambda )}_v-{\mathscr {S}}^{(u)}_v\ge {\mathbb {E}}[{\mathscr {S}}^{(\lambda )}_v-{\mathscr {S}}^{(u)}_v]+C_0\frac{v^2}{N} \Big \}&\le \frac{c_{p, \lambda }}{C_0^2}\cdot b\frac{N}{v^2}. \end{aligned}$$
(4.39)

Probability (4.37): Substitute in the value of \(\alpha \) and subtract \({\mathbb {E}}[G^{(u)}_{\mathbf{t}_N(\lambda )}(N)- G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)]\) from both sides. Then

$$\begin{aligned}&{\mathbb {P}}\bigg \{G^{(u)}_{\mathbf{t}_N(\lambda )}(N)- G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)\ge \alpha \bigg \} \nonumber \\&\quad ={\mathbb {P}}\Bigg \{G^{(u)}_{\mathbf{t}_N(\lambda )}(N)- G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)-{\mathbb {E}}\Big [G^{(u)}_{\mathbf{t}_N(\lambda )}(N)- G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)\Big ]\nonumber \\&\quad \ge v( \lambda - u)\frac{p}{(p + (1 -p)u)(p +(1- p)\lambda )}-C_0\frac{v^2}{N}-{\mathbb {E}}\Big [G^{(u)}_{\mathbf{t}_N(\lambda )}(N)- G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)\Big ]\Bigg \}\nonumber \\&\quad \le {\mathbb {P}}\Bigg \{G^{(u)}_{\mathbf{t}_N(\lambda )}(N)- G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)-{\mathbb {E}}\Big [G^{(u)}_{\mathbf{t}_N(\lambda )}(N)- G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)\Big ]\nonumber \\&\quad \ge v( \lambda - u)C_{\lambda , p}-C_0\frac{v^2}{N}-{\mathbb {E}}\Big [G^{(u)}_{\mathbf{t}_N(\lambda )}(N)- G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)\Big ]\Bigg \}. \end{aligned}$$
(4.40)

where

$$\begin{aligned} C_{\lambda , p} = \frac{p}{(p+(1-p)\lambda )^2}. \end{aligned}$$

We then estimate

$$\begin{aligned}&{\mathbb {E}}\bigg [G^{(u)}_{\mathbf{t}_N(\lambda )}(N)- G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)\bigg ] = m_{\lambda }(N)( u- \lambda ) + n_{\lambda }(N)\Bigg ( \frac{p(1-u)}{u + p(1-u)} - \frac{p(1-\lambda )}{\lambda + p(1-\lambda )} \Bigg )\\&\quad =m_{\lambda }(N)(u - \lambda ) - n_{\lambda }(N)\frac{p}{(p + (1 -p)u)(p +(1- p)\lambda )}( u - \lambda )\\&\quad \le N \frac{1-p}{p+(1-p)u}(\lambda - u)^2\\&\quad \le \frac{D_{u,p}}{N} b^2v^2. \end{aligned}$$

The first inequality above comes from removing the integer parts for \(n_{\lambda }(N)\). The constant \(D_{u,p}\) is defined as

$$\begin{aligned} D_{u,p} = \frac{1-p}{p+(1-p)u}. \end{aligned}$$

It is now straightforward to check that the threshold in line (4.40) is non-negative when

$$\begin{aligned} b < \frac{C_{\lambda ,p}}{ 4 D_{u,p}} \,\, \text { and }\,\,C_0 = b\frac{C_{\lambda , p}}{2}. \end{aligned}$$

Indeed, since \(\lambda - u = bvN^{-1}\), the threshold is bounded below by \(\frac{v^2}{N}\big (b\,C_{\lambda ,p} - C_0 - D_{u,p} b^2\big ) \ge \frac{v^2}{N}\,\frac{b\, C_{\lambda ,p}}{4}\). With the values of \(b, C_0\) as in the display above, for any c smaller than \(b\, C_{\lambda , p} /4\), we have that

$$\begin{aligned} G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)- G^{(u)}_{\mathbf{t}_N(\lambda )}(N) - {\mathbb {E}}\big [G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)- G^{(u)}_{\mathbf{t}_N(\lambda )}(N)\big ] \ge c v^2 N^{-1} > 0. \end{aligned}$$

Using this, we can apply Chebyshev's inequality one more time. In order, we use Chebyshev's inequality, Lemma 4.8 and finally Eq. (4.5):

$$\begin{aligned} \text {Probability}(4.40)&\le {\mathbb {P}}\Bigg \{\Bigg | G^{(u)}_{\mathbf{t}_N(\lambda )}(N)- G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N) \\&\quad - {\mathbb {E}}\Bigg [G^{(u)}_{\mathbf{t}_N(\lambda )}(N)- G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)\Bigg ] \Bigg |\ge c v^2 N^{-1}\Bigg \} \\&\le \frac{N^2}{c^2 v^4}{{\,\mathrm{Var}\,}}\bigg (G^{(u)}_{\mathbf{t}_N(\lambda )}(N)- G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)\bigg )\\&\le \frac{N^2}{c^2 v^4}\Bigg ({{\,\mathrm{Var}\,}}\Big (G^{(u)}_{\mathbf{t}_N(\lambda )}(N)\Big )+{{\,\mathrm{Var}\,}}\Big ( G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)\Big )\Bigg )\\&\le 4\frac{N^2}{c^2 v^4}\Bigg ({{\,\mathrm{Var}\,}}\Big ( G^{(\lambda )}_{\mathbf{t}_N(\lambda )}(N)\Big )+CN(\lambda -u)\Bigg )\\&\le 4\frac{N^2}{c^2v^4}\big |A_{{\mathcal {E}}^{(\lambda )}}\big |+Cb \frac{N^2}{c^2 v^3}. \end{aligned}$$

This, together with the bound in Lemma 4.6, suffices for the conclusion of the proposition. \(\square \)

Proof of Theorem 2.3, upper bound

We first bound the expected exit point for the boundary with parameter \(\lambda \). In what follows, r is a parameter under our control that will be taken sufficiently large.

$$\begin{aligned} {\mathbb {E}}(\xi ^{(\lambda )}_{e_2}(N))&\le rN^{2/3}+\sum _{v=rN^{2/3}}^{n_{\lambda }(N)}{\mathbb {P}}\big \{\xi ^{(\lambda )}_{e_2}(N)>v\big \}\\&\le rN^{2/3} +\sum _{v=rN^{2/3}}^{\infty }C \frac{N^2}{v^3}\Bigg ( \frac{{\mathbb {E}}\big (\xi ^{(\lambda )}_{e_2}\big )}{v}+1\Bigg )\quad \text {by}(4.34)\\&\le rN^{2/3}+\frac{C{\mathbb {E}}\big (\xi ^{(\lambda )}_{e_2}\big )}{r^3}+\frac{C}{r^2}N^{2/3}. \end{aligned}$$

Take r sufficiently large so that \(C/ r^3 < 1\). Then, after rearranging the terms in the inequality above, we conclude

$$\begin{aligned} {\mathbb {E}}\bigg (\xi ^{(\lambda )}_{e_2}(N)\bigg )&\le C N^{2/3}. \end{aligned}$$

The variance bound now readily follows. Since (m, n) follows the characteristic direction \((m_\lambda (N), n_\lambda (N))\) defined in (2.7), Eq. (4.5) gives

$$\begin{aligned} \text {Var}\bigg (G^{(\lambda )}_{m_\lambda (N), n_\lambda (N)}\bigg ) = -2\lambda (1-\lambda )A_{{\mathcal {E}}^{(\lambda )}} + O(1). \end{aligned}$$

Equation (4.24) together with Eq. (4.13) imply \(A_{{\mathcal {E}}^{(\lambda )}} \le 0\), and therefore the right-hand side of the equation above increases if we decrease \(A_{{\mathcal {E}}^{(\lambda )}}\) further. To this end we use the lower bound in Lemma 4.6 and obtain, for a suitable constant \(C_{p, \lambda }\),

$$\begin{aligned} \text {Var}\bigg (G^{(\lambda )}_{m_\lambda (N), n_\lambda (N)}\bigg ) \le C_{p, \lambda } {\mathbb {E}}\big (\xi ^{(\lambda )}_{e_2}(N)\big ) \le C_{p, \lambda } N^{2/3}. \end{aligned}$$

\(\square \)

An immediate corollary is the following bound in probability, obtained directly from expression (4.34).

Corollary 4.10

Fix \( \lambda \in (0,1)\). Then, there exists a constant \(K = K(\lambda , p)> 0\) so that for any \( r > 0 \) and N sufficiently large

$$\begin{aligned} {\mathbb {P}}\Big \{\xi _{e_2}^{(\lambda )}(N)> r N^{2/3}\Big \}\le \frac{K}{r^3}. \end{aligned}$$
(4.41)

5 Lower Bound for the Variance in Characteristic Directions

We show the lower bound for the variance. The main idea is to define a ‘reversed’ environment and passage times, using Burke’s property (this is done in Sect. 5.1.3). In other solvable LPP models, the maximal path in the reversed environment is the competition interface in the forward one. The competition interface has a second interpretation that has been exploited in the literature: it separates the sites according to the first step of the maximal path that leads into them. Both interpretations are needed for the calculations at the end of the section. However, since the weights are discrete, there are several maximal paths to each endpoint and the two interpretations of the competition interface do not coincide; this is why we define two competition interfaces below.

5.1 Down-Most Maximal Path and Competition Interface

In this section we first construct the down-most maximal path and a possible competition interface. Then we identify their properties and relations, which will be crucial for finding the lower bound on the order of the fluctuations of the maximal path.

5.1.1 The Down-Most Maximal Path

Consider the last passage time G in an environment \(\omega \) and for every (m, n) compute \(G_{m,n}\). Define the increments \(I_{i,j}, J_{i,j}\) using Eq. (3.1). Note that the definitions that follow do not require an invariant model; they work irrespective of the environment, so we do not include the superscript (u) unless we want to emphasize that the quantities come from an invariant model.

From a target point (m, n), we construct a maximal path up to (m, n) using backwards steps. Recall that the maximal path in the interior collects a (Bernoulli) weight only when it takes a diagonal step. We define the down-most maximal path \(\hat{\pi }\) starting from the target point (m, n) and going backwards. Given \({\hat{\pi }}_{k+1}\), the previous site \({\hat{\pi }}_{k}\) can be found from \((I_{{\hat{\pi }}_{k+1}}, J_{{\hat{\pi }}_{k+1}}, \omega _{{\hat{\pi }}_{k+1}})\) via

$$\begin{aligned} \hat{\pi }_{k}= {\left\{ \begin{array}{ll} \hat{\pi }_{k+1}-(0,1)&{}\quad \text {if }\, J_{\hat{\pi }_{k+1}}= 0,\\ \hat{\pi }_{k+1}- (1,0) &{}\quad \text {if }\, I_{\hat{\pi }_{k+1}} =0, J_{\hat{\pi }_{k+1}} = 1 \text { and }\omega _{\hat{\pi }_{k+1}}=0,\\ \hat{\pi }_{k+1}-(1,1) &{}\quad \text {if }\, J_{\hat{\pi }_{k+1}}= 1\text { and }\omega _{\hat{\pi }_{k+1}}=1. \end{array}\right. } \end{aligned}$$
(5.1)

The moment \(\hat{\pi }\) hits one of the two axes (or the origin), it remains on the axis it has hit and follows it backwards to the origin. The graphical representation is in Fig. 1.

Fig. 1. One-step backward construction for the down-most maximal path \(\hat{\pi }\)
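
The backward rule (5.1) is mechanical; the following sketch (Python; ours, not part of the paper) traces \(\hat{\pi }\) from (m, n) back to the axes. Since the construction does not require the invariant model, we take zero weights on the axes for simplicity.

```python
import numpy as np

rng = np.random.default_rng(3)
p, m, n = 0.5, 8, 6
omega = (rng.random((m + 1, n + 1)) < p).astype(int)   # bulk weights at (i,j), i,j >= 1

G = np.zeros((m + 1, n + 1), dtype=int)                # zero boundary weights
for i in range(1, m + 1):
    for j in range(1, n + 1):
        G[i, j] = max(G[i-1, j], G[i, j-1], G[i-1, j-1] + omega[i, j])

def backstep(i, j):
    # one step of the backward construction, transcribing Eq. (5.1)
    I = G[i, j] - G[i - 1, j]   # horizontal increment
    J = G[i, j] - G[i, j - 1]   # vertical increment
    if J == 0:
        return (i, j - 1)
    if I == 0 and J == 1 and omega[i, j] == 0:
        return (i - 1, j)
    return (i - 1, j - 1)       # remaining case: J = 1 and omega = 1

v, path = (m, n), [(m, n)]
while v[0] > 0 and v[1] > 0:    # afterwards hat-pi follows the axis to the origin
    v = backstep(*v)
    path.append(v)
print(path[::-1])
```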

5.1.2 The Competition Interface

The competition interface is an infinite path \(\varphi \) which takes only the same admissible steps as the paths we optimise over. \(\varphi = \{ \varphi _0 = (0,0), \varphi _1, \ldots \}\) is completely determined by the values of I, J and \(\omega \). In particular, for any \(k \in {\mathbb {N}}\),

$$\begin{aligned} \varphi _{k+1}= {\left\{ \begin{array}{ll} \varphi _{k}+(0,1) &{}\quad \text {if } \begin{aligned} &{}G(\varphi _k+(0,1))<G(\varphi _k+(1,0))\text { or}\\ &{}G(\varphi _k+(0,1))=G(\varphi _k+(1,0)) \text { and }G(\varphi _k+(0,1))=G(\varphi _k), \end{aligned}\\ \varphi _{k}+(1,0) &{}\quad \text {if }\, G(\varphi _k+(1,0))<G(\varphi _k+(0,1)),\\ \varphi _{k}+(1,1) &{}\quad \text {if }\, G(\varphi _k+(0,1))=G(\varphi _k+(1,0)) \text { and }G(\varphi _k+(0,1))>G(\varphi _k). \end{array}\right. }\nonumber \\ \end{aligned}$$
(5.2)

In words, the path \(\varphi \) always chooses its step according to the smallest of the possible G-values. In case of a tie, the competition interface goes up if the last passage times of the up and right points are equal to each other and also to that of the current point; otherwise it takes a diagonal step.
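
A direct transcription of (5.2) (a Python sketch of ours, with G recomputed so the fragment is self-contained):

```python
import numpy as np

rng = np.random.default_rng(4)
p, m, n = 0.5, 10, 10
omega = (rng.random((m + 1, n + 1)) < p).astype(int)

G = np.zeros((m + 1, n + 1), dtype=int)
for i in range(1, m + 1):
    for j in range(1, n + 1):
        G[i, j] = max(G[i-1, j], G[i, j-1], G[i-1, j-1] + omega[i, j])

# Competition interface, Eq. (5.2): follow the smaller of the two G-values,
# breaking ties between e_2 and e_1+e_2 as in the first and third cases of (5.2).
i, j, phi = 0, 0, [(0, 0)]
while i < m and j < n:
    up, right, here = G[i, j + 1], G[i + 1, j], G[i, j]
    if up < right or (up == right == here):
        j += 1                  # e_2 step
    elif right < up:
        i += 1                  # e_1 step
    else:                       # up == right > here
        i, j = i + 1, j + 1     # e_1 + e_2 step
    phi.append((i, j))
print(phi)
```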

Remark 5.1

Competition interfaces in last passage percolation were introduced in [19], building on the same notion in a first passage percolation problem [38]. The name competition interface comes from the fact that it represents the boundary between two competing growing clusters of points. All sites in the quadrant are separated according to the first step of their maximal path. This interpretation proved useful in various coupling arguments, e.g. in [4, 20, 24, 40], but in all these cases the environment had a continuous distribution.

Since our model is discrete, we have three (rather than two) possible steps and our maximal path is not unique, so our definition of \(\varphi \) depends on the choice of maximal path; here we chose the down-most maximal path to separate the clusters, and then defined the competition interface accordingly, so that we can exploit certain good duality properties in the sequel. \(\square \)

This being said, the partition of the plane into two competing clusters is useful in some parts of the proofs that follow, so we would like to develop it in this setting. Define

$$\begin{aligned} {\mathcal {C}}_{\uparrow , \nearrow }&= \{ v = (v_1, v_2) \in {\mathbb {Z}}^2_+: \text { there exists a maximal path from }0\text { to }v \\&\quad \text {with first step }e_2\text { or }e_1 + e_2\}. \end{aligned}$$

The remaining sites of \({\mathbb {Z}}_+^2\) are sites for which all possible maximal paths to them have to take a horizontal first step. We denote that cluster by \({\mathcal {C}}_{\rightarrow } = {\mathbb {Z}}^2_+ \setminus {\mathcal {C}}_{\uparrow , \nearrow }\).

Some immediate observations follow. First note that the vertical axis \(\{ (0, v_2) \}_{v_2 \in {\mathbb {N}}} \in {\mathcal {C}}_{\uparrow , \nearrow }\) while \(\{ (v_1, 0) \}_{v_1 \in {\mathbb {N}}} \in {\mathcal {C}}_{\rightarrow }\). We include \((0,0)\in {\mathcal {C}}_{\uparrow , \nearrow } \) in a vacuous way.

Then observe that if \((v_1, v_2) \in {\mathcal {C}}_{\uparrow , \nearrow }\), then \((v_1, y) \in {\mathcal {C}}_{\uparrow , \nearrow }\) for all \(y \ge v_2\). This is a consequence of planarity. Assume that for some \(y > v_2\) every maximal path \(\pi _{0, (v_1, y)}\) has to take a horizontal first step. Such a path intersects a maximal path \(\pi _{0, (v_1, v_2)}\) to \((v_1, v_2)\) with a non-horizontal first step. At the point of intersection z, the two passage times are the same, so in fact there exists a maximal path to \((v_1, y)\) with a non-horizontal first step: it is the concatenation of \(\pi _{0, (v_1, v_2)}\) up to site z with \(\pi _{0, (v_1, y)}\) from z onwards.

Finally, note that if \(v \ne 0\) and \(v \in {\mathcal {C}}_{\uparrow , \nearrow }\) and \(v + e_1 \in {\mathcal {C}}_{\rightarrow }\), it must be the case that

$$\begin{aligned} I _{v + e_1} = G_{0, v+e_1} - G_{0, v} = 1. \end{aligned}$$

Assume the contrary, namely \(I_{v+e_1} = 0\). Then the two passage times are the same, and a potential maximal path to \(v +e_1\) is one that goes to v without a horizontal initial step and after v takes an \(e_1\) step. This would imply that \(v +e_1 \in {\mathcal {C}}_{\uparrow , \nearrow }\), which is a contradiction.

These observations allow us to define a boundary between the two clusters as a piecewise linear curve \({\tilde{\varphi }} = \{ 0= {\tilde{\varphi }}_0, {\tilde{\varphi }}_1, \ldots \} \) which takes one of the three admissible steps \(e_1, e_2, e_1+e_2\). We first describe the first step of this curve when all of \(\{\omega , I, J\}\) are known (see Fig. 2).

$$\begin{aligned} {\tilde{\varphi }}_1 = {\left\{ \begin{array}{ll} (1,0), &{} \text { when } (\omega _{1,1}, I_{1,0}, J_{0,1}) \in \{ (1, 0, 1), (0,0,1)\},\\ (1,1), &{} \text { when } (\omega _{1,1}, I_{1,0}, J_{0,1}) \in \{ (1, 0, 0), (0,0,0), (1,1,0), (1,1,1), (0,1,1) \},\\ (0,1), &{} \text { when } (\omega _{1,1}, I_{1,0}, J_{0,1}) \in \{ (0,1,0)\}. \end{array}\right. }\nonumber \\ \end{aligned}$$
(5.3)
Fig. 2. Constructive admissible steps for \(\tilde{\varphi }_1\)
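
Since (5.3) is a finite lookup table, it can be transcribed verbatim (a Python sketch of ours, not part of the paper):

```python
# First step of tilde-phi as a function of (omega_{1,1}, I_{1,0}, J_{0,1}), Eq. (5.3).
FIRST_STEP = {
    (1, 0, 1): (1, 0), (0, 0, 1): (1, 0),                      # e_1 step
    (1, 0, 0): (1, 1), (0, 0, 0): (1, 1), (1, 1, 0): (1, 1),   # e_1 + e_2 step
    (1, 1, 1): (1, 1), (0, 1, 1): (1, 1),
    (0, 1, 0): (0, 1),                                         # e_2 step
}

# tilde-phi stays on the x-axis exactly when I_{1,0} = 0 and J_{0,1} = 1:
assert all(step == (1, 0) for (w, I, J), step in FIRST_STEP.items() if I == 0 and J == 1)
```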

From this definition we see that \({\tilde{\varphi }}_1\) stays on the x-axis only when \(I_{1,0} = 0\) and \(J_{0,1} = 1\). If that is the case, repeat the steps in (5.3) until \({\tilde{\varphi }}\) increases its y-coordinate and changes level. Any time \({\tilde{\varphi }}\) changes level from \(\ell -1\) to \(\ell \), it takes horizontal steps (the number of steps could be 0) until a site \((v_\ell , \ell )\) where \((v_{\ell }, \ell ) \in {\mathcal {C}}_{\uparrow , \nearrow }\) but \((v_{\ell }+1, \ell ) \in {\mathcal {C}}_{\rightarrow }\). In that case, \(I_{v_{\ell }+1, \ell } = 1\), by the second and third observations above, and \({\tilde{\varphi }}\) will change level, again following the steps in (5.3).

From the description of the evolution of \({\tilde{\varphi }}\), starting from (5.3) and evolving as described in the previous paragraph, and from the definition of the competition interface \(\varphi \) in (5.2), we conclude that, as piecewise linear curves,

$$\begin{aligned} \varphi \ge {\tilde{\varphi }}, \end{aligned}$$
(5.4)

i.e. if \((x, y_1) \in \varphi \) and \((x, y_2) \in {\tilde{\varphi }}\) then \(y_1 \ge y_2\). Similarly, if \((x_1, y) \in \varphi \) and \((x_2, y) \in {\tilde{\varphi }}\) then \(x_1 \le x_2\). Moreover, if \(u\in {\mathbb {Z}}^2_+\) does not lie on \({\tilde{\varphi }}\) then it has to belong to one of the clusters: \({\mathcal {C}}_{\rightarrow }\) if u is below \({\tilde{\varphi }}\), and \({\mathcal {C}}_{\uparrow , \nearrow }\) otherwise (see Fig. 3).

Fig. 3. Graphical representation of \(\tilde{\varphi }\) and \(\varphi \). Both curves can be thought of as competition interfaces: \(\tilde{\varphi }\) separates competing clusters, depending on the first step of the right-most maximal path, while \(\varphi \) follows the smallest increment of passage times with a rule to break ties. As curves they are geometrically ordered, \({\tilde{\varphi }} \le \varphi \)

5.1.3 The Reversed Process

Let (m, n) with \(m,n>0\) be the target point. Define

$$\begin{aligned} G^*_{i,j} = G_{m,n} - G_{m-i, n -j},\qquad \text {for }0\le i<m\text { and }0\le j<n. \end{aligned}$$
(5.5)

It represents the time to reach point (i, j) starting from (m, n) for the reversed process. We also define new edge and bulk weights by

$$\begin{aligned} I^*_{i,j}&= I_{m-i+1, n-j}, \quad \text { when } i \ge 1, j \ge 0 \end{aligned}$$
(5.6)
$$\begin{aligned} J^*_{i,j}&= J_{m-i, n-j +1}, \quad \text { when } i \ge 0, j \ge 1 \end{aligned}$$
(5.7)
$$\begin{aligned} \omega ^*_{i,j}&= \alpha _{m-i, n-j}, \quad \text { when } i \ge 1, j \ge 1. \end{aligned}$$
(5.8)

Then we have the reverse identities.

Lemma 5.2

Let \(I^*\) and \(J^*\) be respectively the horizontal and vertical increment for the reversed process. Then, for \(0\le i<m\text { and }0\le j<n\), we have

$$\begin{aligned} I^*_{i,j}&= \omega ^*_{i,j} \vee I^*_{i,j-1} \vee J^*_{i-1,j} - J^*_{i-1,j} = G^*_{i,j} - G^*_{i-1, j} \end{aligned}$$
(5.9)
$$\begin{aligned} J^*_{i,j}&= \omega ^*_{i,j} \vee I^*_{i,j-1} \vee J^*_{i-1,j} - I^*_{i,j-1} = G^*_{i,j} - G^*_{i, j-1}. \end{aligned}$$
(5.10)

Proof

First note that

$$\begin{aligned} I_{m-i+1, n-j}&= G_{m-i+1,n-j} - G_{m-i, n-j} \\&= G_{m-i+1,n-j} - G_{m,n} + G_{m,n} - G_{m-i, n-j} = G^*_{i,j} - G^*_{i-1,j}. \end{aligned}$$

by (5.5). We prove the other identity only for \(I^*_{i,j}\) and leave the proof of the corresponding equation in (5.10) to the reader. A direct substitution into the right-hand side gives

$$\begin{aligned}&\omega ^*_{i,j} \vee I^*_{i,j-1} \vee J^*_{i-1,j} - J^*_{i-1,j} \\&\quad = \alpha _{m-i, n-j}\vee I_{m-i+1, n-j+1} \vee J_{m-i+1, n-j +1} - J_{m-i+1, n-j +1}\\&\quad = (\alpha _{m-i, n-j} - J_{m-i+1, n-j +1})\vee (G_{m-i+1,n-j} - G_{m-i,n-j+1}) \vee 0\\&\quad = (\alpha _{m-i, n-j} - J_{m-i+1, n-j +1})\vee (G_{m-i+1,n-j}\\&\qquad - G_{m-i, n-j}+ G_{m-i, n-j} - G_{m-i,n-j+1}) \vee 0\\&\quad = (\alpha _{m-i, n-j} - (\omega _{m-i+1, n-j +1}\vee I_{m-i+1, n-j}\vee J_{m-i, n-j +1} - I_{m-i+1, n-j}) )\\&\qquad \vee (I_{m-i+1, n-j} - J_{m-i,n -j +1})\vee 0 \\&\quad =I_{m-i+1, n-j} + \Big ( (\alpha _{m-i, n-j} - \omega _{m-i+1, n-j +1}\vee I_{m-i+1, n-j}\vee J_{m-i, n-j +1})\\&\qquad \vee (- J_{m-i,n -j +1})\vee (-I_{m-i+1, n-j}) \Big ). \end{aligned}$$

Focus on the expression in the parenthesis. We will show that it is always 0, and therefore the lemma follows by (5.6). We use Eqs. (3.3) and (3.8). If \((I_{m-i+1, n-j}, J_{m-i, n -j +1})\)\(= (1,1)\) then \(\alpha _{m-i, n-j} =1\) and the first maximum is zero. Similarly, when the triple \((\omega _{m-i+1, n-j +1}, I_{m-i+1, n-j}, J_{m-i, n-j +1}) = (0,0,0)\), \(\alpha _{m-i, n-j} =0\) and the value is zero again. When exactly one of \(I_{m-i+1, n-j}, J_{m-i, n -j +1}\) is zero the overall maximum in the parenthesis is 0, irrespective of the values of \(\alpha _{m-i, n-j}, \omega _{m-i+1, n-j +1}\). Finally, when \(\omega _{m-i+1, n-j +1} =1\) and both the increment variables \((I_{m-i+1, n-j}, J_{m-i, n -j +1}) = (0,0)\), the first term is either 0 or \(-1\) and again the overall maximum is zero. \(\square \)

Throughout the paper quantities defined in the reversed process will be denoted by a superscript \(^*\), and they will always be equal in distribution to their original forward versions.
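
The index bookkeeping in (5.5)–(5.7) is easy to get wrong, so here is a short numerical check (a Python sketch of ours; zero boundary weights, G as in the earlier sketches) of the computation at the start of the proof of Lemma 5.2, namely \(G^*_{i,j}-G^*_{i-1,j} = I_{m-i+1, n-j}\) and its vertical analogue:

```python
import numpy as np

rng = np.random.default_rng(5)
p, m, n = 0.5, 7, 5
omega = (rng.random((m + 1, n + 1)) < p).astype(int)

G = np.zeros((m + 1, n + 1), dtype=int)
for i in range(1, m + 1):
    for j in range(1, n + 1):
        G[i, j] = max(G[i-1, j], G[i, j-1], G[i-1, j-1] + omega[i, j])

def Gstar(i, j):                  # Eq. (5.5)
    return G[m, n] - G[m - i, n - j]

for i in range(1, m):             # I*_{i,j} = I_{m-i+1, n-j}, cf. Eq. (5.6)
    for j in range(0, n):
        assert Gstar(i, j) - Gstar(i - 1, j) == G[m-i+1, n-j] - G[m-i, n-j]
for i in range(0, m):             # J*_{i,j} = J_{m-i, n-j+1}, cf. Eq. (5.7)
    for j in range(1, n):
        assert Gstar(i, j) - Gstar(i, j - 1) == G[m-i, n-j+1] - G[m-i, n-j]
```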

5.1.4 Competition Interface for the Forward Process Versus Maximal Path for the Reversed Process

We want to show that, as piecewise linear curves, the competition interface defined in (5.2) always lies below, or coincides with, the down-most maximal path \(\hat{\pi }^*\) of the reversed process. The steps of the competition interface for the forward process coincide with those of \({\hat{\pi }}^*\) in all cases except when \((I_{i,j},J_{i,j},\omega _{i,j}) = (0,1,1)\). In that case, \({\hat{\pi }}^*\) goes diagonally up, while \(\varphi \) moves horizontally. Thus, as curves, \(\varphi \) is to the right of and below \({\hat{\pi }} ^*\).

Now, define

$$\begin{aligned} \begin{aligned} v(n)&=\inf \{i:(i,n)=\varphi _k\text { for some }k\ge 0\}\\ w(m)&=\inf \{j:(m,j)=\varphi _k\text { for some }k\ge 0\} \end{aligned} \end{aligned}$$
(5.11)

with the convention \(\inf \varnothing =\infty \). In words, the point (v(n), n) is the left-most point of the competition interface on the horizontal line \(j=n\), while (m, w(m)) is the lowest point on the vertical line \(i=m\). This observation implies

$$\begin{aligned} v(n)\ge m\implies w(m)< n\qquad \text {or } \quad w(m)\ge n\implies v(n)<m. \end{aligned}$$
(5.12)

Then, on the event \(\{ w(m) \ge n\}\), we know that \({\hat{\pi }}^*\) will hit the north boundary of the rectangle at a site \(( \ell , n)\) so that

$$\begin{aligned} m - \ell = \xi ^*_{e_1}({{\hat{\pi }}^*}),\quad \ell \le v(n). \end{aligned}$$

We have thus shown the following.

Lemma 5.3

Let \(\varphi \) be the competition interface constructed for the process \(G^{(\lambda )}\) and \({\hat{\pi }}^*\) the down-most maximal path for the process \(G^{*,(\lambda )}\) defined by (5.5) from (m, n) to (0, 0). Then on the event \(\{ w(m) \ge n \}\),

$$\begin{aligned} m - v(n) \le \xi ^{*(\lambda )}_{e_1}({{\hat{\pi }}^*}) \end{aligned}$$
(5.13)

Finally, note that by the definition of the reversed process we have

$$\begin{aligned} \xi _{e_1}^{*(\lambda )}\overset{{\mathcal {D}}}{=}\xi _{e_1}^{(\lambda )}. \end{aligned}$$
(5.14)

5.2 Last Passage Time Under Different Boundary Conditions

In our setting the competition interface is important because it bounds the region where the boundary conditions on the axes are felt. For this reason we give a lemma describing how changes in the boundary conditions are felt by the increments in the interior.

Lemma 5.4

Let \(\{\omega _{i,j}\}\) and \(\{\tilde{\omega }_{i,j}\}\) be two weight configurations satisfying \(\omega _{0,0}=\tilde{\omega }_{0,0}\), \(\omega _{0,j}\ge \tilde{\omega }_{0,j}\), \(\omega _{i,0}\le \tilde{\omega }_{i,0}\) and \(\omega _{i,j}=\tilde{\omega }_{i,j}\) for all \(i,j\ge 1\). Then all increments satisfy \(I_{i,j}\le \tilde{I}_{i,j}\) and \(J_{i,j}\ge \tilde{J}_{i,j}\).

Proof

By following the same corner-flipping inductive argument as in the proof of Lemma 3.3, one can show that the statement holds for all increments between points in \(L_\psi \cup {\mathcal {I}}_\psi \), where \(L_\psi \) and \({\mathcal {I}}_\psi \) are defined in (3.6) and (3.9) respectively, for those paths for which \({\mathcal {I}}_\psi \) is finite. The base case is when \({\mathcal {I}}_\psi \) is empty, and there the statement follows from the assumption made on the weights \(\{\omega _{i,j}\}\) and \(\{\tilde{\omega }_{i,j}\}\) and from the definition of the increments in (3.3). \(\square \)

Lemma 5.5

Assume the setting of Lemma 5.4. Let \(G^{{\mathcal {W}}=0}\) (resp. \(G^{{\mathcal {S}}=0}\)) be the last passage time of a system where we set \(\tilde{\omega }_{0,j}=0\) for all \(j\ge 1\) (resp. \(\omega _{i,0}=0\) for all \(i\ge 1\)), and where the paths are allowed to collect weights while on the boundaries. Let v(n) be given by (5.11).

Then, for \(v(n) < m_1 \le m_2\),

$$\begin{aligned} \begin{aligned} G_{(1,1),(m_2,n)}-G_{(1,1),(m_1,n)}&\le G^{{\mathcal {W}}=0}_{(0,0),(m_2,n)}-G^{{\mathcal {W}}=0}_{(0,0),(m_1,n)}\\&=G_{(0,0),(m_2,n)}-G_{(0,0),(m_1,n)}. \end{aligned} \end{aligned}$$
(5.15)

Alternatively, for \(0\le m_1 \le m_2 < v(n)\),

$$\begin{aligned} \begin{aligned} G_{(1,1),(m_2,n)}-G_{(1,1),(m_1,n)}&\ge G^{{\mathcal {S}}=0}_{(0,0),(m_2,n)}-G^{{\mathcal {S}}=0}_{(0,0),(m_1,n)}\\&=G_{(0,0),(m_2,n)}-G_{(0,0),(m_1,n)}. \end{aligned} \end{aligned}$$
(5.16)

Proof

We prove (5.16); similar arguments prove (5.15). The first inequality in (5.16) follows from Lemma 5.4 in the case \( {\tilde{\omega }}_{0,j}= {\tilde{\omega }}_{i,0}=0\). The subsequent equality comes from the fact that \(m_1 \le m_2 < v(n)\). By (5.11) the target points \((m_1,n)\) and \((m_2,n)\) are above the competition interface \(\varphi \) and therefore, by (5.4), strictly above \({\tilde{\varphi }}\). This implies that \((m_1, n)\) and \((m_2, n)\) belong to the cluster \({\mathcal {C}}_{\uparrow , \nearrow }\), and therefore we can choose the respective maximal paths so that they do not take a horizontal first step. In turn, the maximal paths need not go through the x-axis, and hence they do not see the boundary values \(\omega _{i, 0}\). Thus, \(G^{{\mathcal {S}}=0}_{(0,0),(m_i,n)}=G_{(0,0),(m_i,n)}\) for \(i=1,2\). \(\square \)

5.3 Lower Bound

In this section we prove the lower bound for the order of the variance. Before giving the proof we need two preliminary lemmas. For the rest of this section, whenever we say maximal path we mean the down-most maximal path.

Lemma 5.6

Let \(a,b>0\) be two positive numbers. Then there exist a positive integer \(N_0=N_0(a,b)\) and a constant \(C=C(a,b)\) such that for all \(N>N_0\) we have

$$\begin{aligned}&{\mathbb {P}}\Bigg \{\sup _{0\le z\le aN^{2/3}}\big \{{\mathscr {S}}^{(u)}_z+G_{(z\vee 1,1),(m_u(N),n_u(N))}-G_{(1,1),(m_u(N),n_u(N))}\big \}\ge bN^{1/3}\Bigg \}\nonumber \\&\quad \le Ca^3(b^{-3}+b^{-6}). \end{aligned}$$
(5.17)

Proof

First note that the term corresponding to \(z = 0\) in the supremum is identically 0, so it can never trigger the event in (5.17). Therefore, without loss of generality, we can prove the bound for the supremum over \(1 \le z \le aN^{2/3}\).

Select and fix a parameter \(0< r < b/a\) and let N be large enough. The exact dependence of r on the parameters a and b will be determined later in the proof. Define \(\lambda \) by

$$\begin{aligned} \lambda =u-rN^{-1/3} \end{aligned}$$
(5.18)

and use it to define boundary weights on both axes, independently of the original boundary weights with parameter u. The environment in the bulk is the same for both processes. Let \(\varphi ^{(\lambda )}\) be the competition interface under the environment \(\omega ^{(\lambda )}\) and let \(v^{(\lambda )}\) be as in Eq. (5.11). Restrict to the event \(\{v^{(\lambda )}(n) > m\}\). Define the increment \({\mathcal {V}}^{(\lambda )}_{z-1}=G^{(\lambda )}_{(0,0),(m,n)}-G^{(\lambda )}_{(0,0), (m-z+1,n)}\). Then use Lemma 5.5 to obtain

$$\begin{aligned} G_{(1,1),(m,n)}-G_{(1,1),(m-z+1,n)}\ge {\mathcal {V}}^{(\lambda )}_{z-1}. \end{aligned}$$

Recall that \({\mathcal {V}}^{(\lambda )}_{z-1}\) is a sum of i.i.d. Bernoulli(\(\lambda \)) variables and that it is independent of \({\mathscr {S}}^{(u)}_z\). When \((m,n)\) equals the characteristic direction \((m_u(N), n_u(N))\) corresponding to u,

$$\begin{aligned}&{\mathbb {P}}\Bigg \{\sup _{1\le z\le aN^{2/3}}\big \{{\mathscr {S}}^{(u)}_z +G_{(z,1),(m_u(N),n_u(N))}-G_{(1,1),(m_u(N),n_u(N))}\big \}\ge bN^{1/3}\Bigg \} \nonumber \\&\quad \le {\mathbb {P}}\Big \{v^{(\lambda )}\Big (\Big \lfloor \frac{N}{p}\big (p + (1-p)u\big )^2\Big \rfloor \Big )\le \lfloor N\rfloor \Big \} \nonumber \\&\qquad +{\mathbb {P}}\Big \{\sup _{1\le z\le aN^{2/3}}\{{\mathscr {S}}^{(u)}_z-{\mathcal {V}}^{(\lambda )}_{z-1}\}\ge bN^{1/3}\Big \} \nonumber \\&\quad \le {\mathbb {P}}\Big \{v^{(\lambda )}\Big (\Big \lfloor \frac{N}{p}\big (p + (1-p)u\big )^2\Big \rfloor \Big )\le \lfloor N\rfloor \Big \} \end{aligned}$$
(5.19)
$$\begin{aligned}&\qquad +{\mathbb {P}}\Big \{\sup _{1\le z\le aN^{2/3}}\{{\mathscr {S}}^{(u)}_{z-1}-{\mathcal {V}}^{(\lambda )}_{z-1}\}\ge bN^{1/3}- 1\Big \}. \end{aligned}$$
(5.20)

where the first inequality is obtained from the law of total probability, by decomposing over the event \(\{v^{(\lambda )}\big (\big \lfloor \frac{N}{p}\big (p + (1-p)u\big )^2\big \rfloor \big )\le \lfloor N\rfloor \}\) and its complement; on the complement the comparison with \({\mathcal {V}}^{(\lambda )}_{z-1}\) above is in force. The second inequality uses \({\mathscr {S}}^{(u)}_z\le {\mathscr {S}}^{(u)}_{z-1}+1\). We bound the two probabilities separately. We begin with (5.20). Define the martingale \(M_{z-1}={\mathscr {S}}^{(u)}_{z-1}-{\mathcal {V}}^{(\lambda )}_{z-1}-{\mathbb {E}}[{\mathscr {S}}^{(u)}_{z-1}-{\mathcal {V}}^{(\lambda )}_{z-1}]\), and note that for \(1 \le z\le aN^{2/3}\),

$$\begin{aligned} {\mathbb {E}}[{\mathscr {S}}^{(u)}_{z-1}-{\mathcal {V}}^{(\lambda )}_{z-1}]=(z-1)u-(z-1)\lambda \le raN^{1/3}. \end{aligned}$$
(5.21)

From (5.21) it follows that

$$\begin{aligned} {\mathscr {S}}^{(u)}_{z-1}-{\mathcal {V}}^{(\lambda )}_{z-1}\le M_{z-1}+raN^{1/3}. \end{aligned}$$

Using this result and taking N large enough so that

$$\begin{aligned} b>ra +N^{-1/3} \end{aligned}$$
(5.22)

we get by Doob's inequality, for any \(d\ge 1\),

$$\begin{aligned}&{\mathbb {P}}\Bigg \{\sup _{1\le z\le aN^{2/3}}\big \{{\mathscr {S}}^{(u)}_{z-1}-{\mathcal {V}}^{(\lambda )}_{z-1}\big \} \ge bN^{1/3} -1\Bigg \}\nonumber \\&\quad \le {\mathbb {P}}\Bigg \{\sup _{1\le z\le aN^{2/3}}M_{z-1}\ge N^{1/3}(b-ra-N^{-1/3})\Bigg \}\nonumber \\&\quad \le \frac{C(d)N^{-d/3}}{(b-ra-N^{-1/3})^d}{\mathbb {E}}[|M_{\lfloor aN^{2/3}\rfloor }|^d]\le \frac{C(d,u)a^{d/2}}{(b-ra-N^{-1/3})^d}. \end{aligned}$$
(5.23)
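The moment bound in the last step can be obtained, for instance, from the Burkholder–Davis–Gundy (or Rosenthal) inequality; we sketch this routine estimate, using that \(M\) has i.i.d. centered increments bounded by 2:

$$\begin{aligned} {\mathbb {E}}\big [|M_{\lfloor aN^{2/3}\rfloor }|^{d}\big ]\le C(d)\,{\mathbb {E}}\Bigg [\bigg (\sum _{z=1}^{\lfloor aN^{2/3}\rfloor }(M_z-M_{z-1})^2\bigg )^{d/2}\Bigg ]\le C(d)\big (4aN^{2/3}\big )^{d/2}\le C(d,u)\,a^{d/2}N^{d/3}. \end{aligned}$$

Multiplying by \(N^{-d/3}\) gives the last inequality in (5.23).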

Then for \(N\ge 4^3b^{-3}\) the above bound is further dominated by \( C(d,u)a^{d/2}{\big (\frac{3b}{4}-ra\big )^{-d}} \), which becomes \(C(d,u)a^3b^{-6}\) once we choose

$$\begin{aligned} r=\frac{b}{4a}, \end{aligned}$$
(5.24)

\(d=6\), and properly redefine the constant \(C(d,u)\).
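Indeed, with these choices \(ra=b/4\), so that

$$\begin{aligned} C(d,u)\,a^{d/2}\Big (\frac{3b}{4}-ra\Big )^{-d}\Big |_{d=6}=C(6,u)\,a^{3}\Big (\frac{b}{2}\Big )^{-6}=2^{6}\,C(6,u)\,a^{3}b^{-6}. \end{aligned}$$

This concludes the bound for (5.20).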

For (5.19), we rescale N as

$$\begin{aligned} N'=\Bigg (\frac{p+(1-p)u}{p+(1-p)\lambda }\Bigg )^2N. \end{aligned}$$
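This choice is made precisely so that the characteristic scales match: a direct substitution gives

$$\begin{aligned} \frac{N'}{p}\big (p + (1-p)\lambda \big )^2=\frac{N}{p}\big (p + (1-p)u\big )^2 \qquad \text {and}\qquad \Big (\frac{p+(1-p)\lambda }{p+(1-p)u}\Big )^2N'=N, \end{aligned}$$

so the probability in (5.19) can be rewritten in terms of \(N'\) and \(\lambda \).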

Then we write

$$\begin{aligned} {\mathbb {P}}\Big \{v^{(\lambda )}\Big (\Big \lfloor \frac{N'}{p}\big (p + (1-p)\lambda \big )^2\Big \rfloor \Big )<\Big \lfloor \Big (\frac{p+(1-p)\lambda }{p+(1-p)u}\Big )^2N'\Big \rfloor \Big \} \end{aligned}$$

Since \(u>\lambda \),

$$\begin{aligned} \Big \lfloor \Big (\frac{p+(1-p)\lambda }{p+(1-p)u}\Big )^2N'\Big \rfloor \le \lfloor N' \rfloor . \end{aligned}$$

Thus, by redefining (2.7) and (5.13) with \(N'\) and \(\lambda \), we have that the event \(v^{(\lambda )}(\lfloor \frac{N'}{p}(p + (1-p)\lambda )^2\rfloor )<\lfloor (\frac{p+(1-p)\lambda }{p+(1-p)u})^2N'\rfloor \) is equivalent to

$$\begin{aligned} \xi _{e_1}^{*(\lambda )}(N')&\ge \lfloor N' \rfloor -v^{(\lambda )}\Big (\Big \lfloor \frac{N'}{p}\big (p + (1-p)\lambda \big )^2\Big \rfloor \Big )\\&>\lfloor N' \rfloor -\Big \lfloor \Big (\frac{p+(1-p)\lambda }{p+(1-p)u}\Big )^2N'\Big \rfloor . \end{aligned}$$

By (5.14), we conclude

$$\begin{aligned}&{\mathbb {P}}\Big \{v^{(\lambda )}\Big (\Big \lfloor \frac{N}{p}\big (p + (1-p)u\big )^2\Big \rfloor \Big )<\lfloor N\rfloor \Big \}\nonumber \\&\quad ={\mathbb {P}}\Big \{\xi _{e_1}^{(\lambda )}(N')>\lfloor N' \rfloor -\Big \lfloor \Big (\frac{p+(1-p)\lambda }{p+(1-p)u}\Big )^2N'\Big \rfloor \Big \}. \end{aligned}$$
(5.25)

Utilizing the definitions (5.18) and (5.24) of \(\lambda \) and r, for \(N\ge N_0\) there exists a constant \(C=C(u)\) such that

$$\begin{aligned} \lfloor N' \rfloor -\Big \lfloor \Big (\frac{p+(1-p)\lambda }{p+(1-p)u}\Big )^2N'\Big \rfloor \ge CrN'^{2/3}. \end{aligned}$$
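This follows from a first-order expansion: substituting \(\lambda =u-rN^{-1/3}\) from (5.18),

$$\begin{aligned} 1-\Big (\frac{p+(1-p)\lambda }{p+(1-p)u}\Big )^2=\frac{2(1-p)r}{p+(1-p)u}\,N^{-1/3}+O\big (r^2N^{-2/3}\big ), \end{aligned}$$

and since N and \(N'\) are comparable, the left-hand side above multiplied by \(N'\) is of order \(rN'^{2/3}\), up to integer-part corrections.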

Combining this with Corollary 4.10 and definition (5.24) of r we get the bound

$$\begin{aligned} {\mathbb {P}}\Big \{v^{(\lambda )}\Big (\Big \lfloor \frac{N}{p}\big (p + (1-p)u\big )^2\Big \rfloor \Big )<\lfloor N\rfloor \Big \}&\le {\mathbb {P}}\big [\xi _{e_1}^{(\lambda )}(N')>CrN'^{2/3}\big ]\nonumber \\&\le Cr^{-3}\le C(a/b)^3. \end{aligned}$$
(5.26)

The result now follows. \(\square \)

The next lemma shows that, with high probability, the exit point of the maximal path from the x-axis is at least of order \(N^{2/3}\). We treat the exit point from the y-axis as a corollary of this lemma.

Lemma 5.7

Let \(u \in (0,1)\) and let \((m_u(N), n_u(N))\) be the characteristic direction. Then the exit point of a maximal path from 0 to \((m_u(N), n_u(N))\) satisfies

$$\begin{aligned} \lim _{\delta \rightarrow 0}\varlimsup _{N\rightarrow \infty }{\mathbb {P}}\Big \{0\le {\xi ^{(u)}_{e_1}(N)} \vee {\xi ^{(u)}_{e_2}(N)} \le \delta N^{2/3}\Big \}=0. \end{aligned}$$

Proof

We only show the result for \({\xi ^{(u)}_{e_1}(N)}\); the same result for \({\xi ^{(u)}_{e_2}(N)}\) follows by interchanging the vertical and horizontal directions, using the fact that both boundaries carry Bernoulli variables.

First pick a parameter \(\delta > 0\). Recall that \(\xi ^{(u)}_{e_1}(N)=0\) if the down-most maximal path takes its first step diagonally or up. Also keep in mind that the down-most maximal path has the right-most possible exit point, so all paths that exit the axis later must obtain a smaller passage time. Then, we may bound

$$\begin{aligned} {\mathbb {P}}\big \{0\le \xi ^{(u)}_{e_1}(N)\le \delta N^{2/3}\big \}&\le {\mathbb {P}}\Bigg \{\sup _{\delta N^{2/3}<x\le N^{2/3}}\big \{{\mathscr {S}}^{(u)}_{x}+G_{(x,1),(m_u(N),n_u(N))}\big \}\\&<\sup _{0\le x\le \delta N^{2/3}}\big \{{\mathscr {S}}^{(u)}_{x}+G_{(x\vee 1,1),(m_u(N),n_u(N))}\big \}\Bigg \}. \end{aligned}$$

Then, we subtract the term \(G_{(1,1),(m_u(N),n_u(N))}\) from both sides and bound the resulting probability from above by

$$\begin{aligned}&{\mathbb {P}}\Bigg \{\sup _{\delta N^{2/3}<x\le N^{2/3}}\big \{{\mathscr {S}}^{(u)}_{x}+G_{(x,1),(m_u(N),n_u(N))}-G_{(1,1),(m_u(N),n_u(N))}\big \}\nonumber \\&\quad<\sup _{0\le x\le \delta N^{2/3}}\big \{{\mathscr {S}}^{(u)}_{x}+G_{(x\vee 1,1),(m_u(N),n_u(N))}-G_{(1,1),(m_u(N),n_u(N))}\big \}\Bigg \}\nonumber \\&\quad \le {\mathbb {P}}\Bigg \{\sup _{\delta N^{2/3}<x\le N^{2/3}}\big \{{\mathscr {S}}^{(u)}_{x}+G_{(x,1),(m_u(N),n_u(N))}-G_{(1,1),(m_u(N),n_u(N))}\big \}<bN^{1/3}\Bigg \} \end{aligned}$$
(5.27)
$$\begin{aligned}&\qquad +{\mathbb {P}}\Bigg \{\sup _{0\le x\le \delta N^{2/3}}\big \{{\mathscr {S}}^{(u)}_{x}+G_{(x\vee 1,1),(m_u(N),n_u(N))}-G_{(1,1),(m_u(N),n_u(N))}\big \}>bN^{1/3}\Bigg \}. \end{aligned}$$
(5.28)

Probability (5.28) is bounded from above by \(C\delta ^3(b^{-3}+b^{-6})\), using Lemma 5.6 with \(a=\delta \).

To bound (5.27) we use arguments similar to those employed in the proof of Lemma 5.6. Define an auxiliary parameter \(\lambda \) by

$$\begin{aligned} \lambda =u+rN^{-1/3}, \end{aligned}$$
(5.29)

where conditions on r will be specified in the course of the proof. From Lemma 5.5 the following inequalities hold

$$\begin{aligned} G_{(x,1),(m_u(N),n_u(N))}-G_{(1,1),(m_u(N),n_u(N))}&\ge G^{(\lambda )}_{(0,0), (m_u(N)-x+1,n_u(N))}-G^{(\lambda )}_{(0,0),(m_u(N),n_u(N))}\\&=-{\mathcal {V}}^{(\lambda )}_{x-1}\ge -{\mathcal {V}}^{(\lambda )}_{x} \end{aligned}$$

whenever \(v^{(\lambda )}\Big (\Big \lfloor \frac{N}{p}\big (p + (1-p)u\big )^2\Big \rfloor \Big )\le \lfloor N\rfloor -x.\) Using these, we have

$$\begin{aligned}&{\mathbb {P}}\left\{ \sup _{\delta N^{2/3}<x\le N^{2/3}}\big \{{\mathscr {S}}^{(u)}_{x}+G_{(x,1),(m_u(N),n_u(N))}-G_{(1,1),(m_u(N),n_u(N))}\big \}<bN^{1/3}\right\} \nonumber \\&\quad \le {\mathbb {P}}\left\{ v^{(\lambda )}\Big (\Big \lfloor \frac{N}{p}\big (p + (1-p)u\big )^2\Big \rfloor \Big )>\lfloor N\rfloor -N^{2/3}\right\} \end{aligned}$$
(5.30)
$$\begin{aligned}&\qquad +{\mathbb {P}}\left\{ \sup _{\delta N^{2/3}<x\le N^{2/3}}\big \{{\mathscr {S}}^{(u)}_{x}-{\mathcal {V}}^{(\lambda )}_{x}\big \}<bN^{1/3}\right\} . \end{aligned}$$
(5.31)

We claim that, given \(\eta >0\) and the parameter r, it is possible to fix \(\delta ,b>0\) small enough so that, for some \(N_0<\infty \), the probability in (5.31) satisfies

$$\begin{aligned} {\mathbb {P}}\left\{ \sup _{\delta N^{2/3}<x\le N^{2/3}}\big \{{\mathscr {S}}^{(u)}_{x}-{\mathcal {V}}^{(\lambda )}_{x}\big \}<bN^{1/3}\right\} \le \eta \quad \text {for all }\,N\ge N_0. \end{aligned}$$
(5.32)

In order to prove this, we use a scaling argument: Uniformly over \(y\in [\delta ,1]\) as \(N\rightarrow \infty \),

$$\begin{aligned} N^{-1/3}{\mathbb {E}}\left[ {\mathscr {S}}^{(u)}_{\lfloor yN^{2/3}\rfloor }-{\mathcal {V}}^{(\lambda )}_{\lfloor yN^{2/3}\rfloor }\right] =N^{-1/3}(\lfloor yN^{2/3}\rfloor u-\lfloor yN^{2/3}\rfloor (u+rN^{-1/3}))\rightarrow -ry \end{aligned}$$

and

$$\begin{aligned} N^{-2/3}{{\,\mathrm{Var}\,}}\left( {\mathscr {S}}^{(u)}_{\lfloor yN^{2/3}\rfloor }-{\mathcal {V}}^{(\lambda )}_{\lfloor yN^{2/3}\rfloor }\right)&= y\bigg (\sqrt{u(1-u)}+\sqrt{(u+rN^{-1/3})(1-u-rN^{-1/3})}\bigg )^2\\&\rightarrow 4u(1-u)y=\sigma ^2(u)y \end{aligned}$$
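These two computations identify the limit: by the invariance principle for random walks with bounded i.i.d. increments, together with the uniform convergence of the means above, we have the functional convergence

$$\begin{aligned} \Big \{N^{-1/3}\big ({\mathscr {S}}^{(u)}_{\lfloor yN^{2/3}\rfloor }-{\mathcal {V}}^{(\lambda )}_{\lfloor yN^{2/3}\rfloor }\big )\Big \}_{y\in [\delta ,1]}\Longrightarrow \big \{\sigma (u){\mathfrak {B}}(y)-ry\big \}_{y\in [\delta ,1]} \end{aligned}$$

in the uniform topology, and the supremum is a continuous functional of the path.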

Since we are scaling the supremum of a random walk with bounded increments, the probability in (5.32) converges, as \(N\rightarrow \infty \), to

$$\begin{aligned} {\mathbb {P}}\left\{ \sup _{\delta \le y\le 1}\{\sigma (u){\mathfrak {B}}(y)-ry\}\le b\right\} \end{aligned}$$

where \({\mathfrak {B}}(\cdot )\) is a standard Brownian motion. The random variable

$$\begin{aligned} \sup _{\delta \le y\le 1}\{\sigma (u){\mathfrak {B}}(y)-ry\} \end{aligned}$$

is strictly positive outside an event whose probability can be made smaller than \(\eta /4\) by taking \(\delta \) sufficiently small: by the law of the iterated logarithm at the origin, the supremum over \((0,1]\) is almost surely positive. Therefore, the above probability is less than \(\eta /2\) for a suitably small b. This implies (5.32).

Finally we bound (5.30). Using (5.12) and the transposed environment \({\tilde{\omega }}_{i,j} = \omega _{j, i}\) for \(i,j \ge 0\), under the measure \({\widetilde{{\mathbb {P}}}}\),

$$\begin{aligned}&{\mathbb {P}}\Big \{v^{(\lambda )}\Big (\Big \lfloor \frac{N}{p}\big (p + (1-p)u\big )^2\Big \rfloor \Big )> \lfloor N-N^{2/3}-1\rfloor \Big \}\nonumber \\&\quad \le \widetilde{{\mathbb {P}}}\Big \{v^{\big (\frac{p(1-\lambda )}{ \lambda + p(1-\lambda ) }\big )}(\lfloor N-N^{2/3}-1\rfloor )\le \Big \lfloor \frac{N}{p}\big (p + (1-p)u\big )^2\Big \rfloor \Big \}. \end{aligned}$$
(5.33)

Under the measure \({\widetilde{{\mathbb {P}}}}\) the environment is still i.i.d., and the only change is the interchange of the parameter values on the two boundaries. Moreover, in the transposed environment, the new competition interface \(\varphi ^{(\frac{p(1-\lambda )}{ \lambda + p(1-\lambda ) })}\) constructed using (5.2) lies above (as a curve) the transposed competition interface \(\varphi ^{(\lambda )}\), so it still exits from the north boundary (see Fig. 4). From (5.29), substituting u as a function of \(\lambda \),

Fig. 4

Comparison of various curves in the \( \omega _{i,j}\) and \({\tilde{\omega }}_{i,j} =\omega _{j, i}\) environments. The thick blue curve (color online) in the left figure is the competition interface in the \(\omega _{i,j}\) weights; its reflection is shown in the same color on the right. The green curve is the competition interface in the \({\tilde{\omega }}_{i,j}\) weights, which is higher than the reflected \(\varphi \), and the red curve is the right-most maximal path in the reversed \({\tilde{\omega }}_{i,j}^*\) weights with boundaries on the north and east, which is higher than both of the other curves

$$\begin{aligned}&\text {Probability in }(5.33)= \widetilde{{\mathbb {P}}}\Big \{v^{\big (\frac{p(1-\lambda )}{ \lambda + p(1-\lambda ) }\big )}(\lfloor N-N^{2/3}-1\rfloor )\\&\quad \le \Big \lfloor \frac{N}{p}\big (p + (1-p)\lambda \big )^2-\frac{2}{p}(p + (1-p)\lambda )(1-p)rN^{2/3}+o(N^{2/3})\Big \rfloor \Big \}. \end{aligned}$$

Define \(N'\) as

$$\begin{aligned} N'= N-N^{2/3}-1\implies N=N'+N'^{2/3}+o(N'^{2/3}). \end{aligned}$$

Replace N with \(N'\) in the probability above to obtain

$$\begin{aligned} \text {Probability in }(5.33) \le {\widetilde{{\mathbb {P}}}} \Big \{v^{\big (\frac{p(1-\lambda )}{ \lambda + p(1-\lambda ) }\big )}(\lfloor N'\rfloor )\le \Big \lfloor \frac{N'}{p}(p + (1-p)\lambda )^2\Big \rfloor -KN'^{2/3}\Big \}, \end{aligned}$$

where \(K=p^{-1}(p+(1-p)\lambda )(2(1-p)r-(p+(1-p)\lambda ))\), which is positive for r large enough. Using (5.13) and (5.14),

$$\begin{aligned} \text {Probability in }(5.33)&={\widetilde{{\mathbb {P}}}}\Big \{\xi _{e_1}^{*\big (\frac{p(1-\lambda )}{ \lambda + p(1-\lambda ) }\big )}(N')>KN'^{2/3}\Big \}={\widetilde{{\mathbb {P}}}} \Big \{\xi _{e_1}^{\big (\frac{p(1-\lambda )}{ \lambda + p(1-\lambda ) }\big )}(N')>KN'^{2/3}\Big \}\\&\le CK^{-3}, \end{aligned}$$

where the last inequality follows from Corollary 4.10. We are now ready to prove the lemma. Start with a fixed \(\eta > 0\). Then fix r large enough so that \(CK^{-3}<\eta \), so that probability (5.30) is controlled. This also imposes a restriction on the smallest value of N that we can take, since we must have \(\lambda < 1\). With r fixed, we can select \(\delta ,b\) small enough so that (5.32) holds. Finally, make \(\delta \) smaller if necessary so that \(C\delta ^3(b^{-3}+b^{-6})<\eta \), so that probability (5.28) is also controlled. Combining all these estimates we have

$$\begin{aligned} {\mathbb {P}}\big \{0\le \xi ^{(u)}_{e_1}(N)\le \delta N^{2/3}\big \}\le 2\eta . \end{aligned}$$
(5.34)

Note that by shrinking \(\delta \) while b remains fixed, (5.32) is reinforced. This concludes the proof of the lemma. \(\square \)

Proof of Theorem 2.3, lower bound

We first claim that, writing \(\xi =\xi ^{(u)}_{e_1}\) for brevity,

$$\begin{aligned} A_{{\mathcal {N}}^{(u)}} \ge {\mathbb {E}}\left( \xi - \sum _{i = 1}^{\xi }\omega _{i,0} \right) = {\mathbb {E}}\left( \sum _{i = 1}^{\xi }(1 - \omega _{i,0} )\right) . \end{aligned}$$
(5.35)

Under this claim, we can write

$$\begin{aligned} A_{{\mathcal {N}}^{(u)}}&\ge {\mathbb {E}}\left( \sum _{i = 1}^{\xi _{e_1}^{(u)}}(1 - \omega _{i,0} )\right) \ge {\mathbb {E}}\left( \mathbb {1}\big \{ \xi _{e_1}^{(u)}(N) \ge \delta N^{2/3} \big \}\sum _{i = 1}^{\left\lfloor {\delta N^{2/3}}\right\rfloor }(1 - \omega _{i,0} )\right) \\&\ge \alpha N^{2/3} {\mathbb {P}}\left\{ \xi _{e_1}^{(u)}(N) \ge \delta N^{2/3},\,\, \sum _{i = 1}^{\left\lfloor {\delta N^{2/3}}\right\rfloor }(1 - \omega _{i,0} ) \ge \alpha N^{2/3} \right\} . \end{aligned}$$

Fix \(\eta \) positive and smaller than 1/4. Now, by making \(\delta \) sufficiently small, we can make the event \(\{ \xi _{e_1}^{(u)}(N) \ge \delta N^{2/3} \}\) have probability larger than \(1 -\eta \), by Lemma 5.7, for N sufficiently large. With \(\delta \) fixed, we can make \(\alpha \) small enough so that the event \(\Big \{\sum _{i = 1}^{\left\lfloor {\delta N^{2/3}}\right\rfloor }(1 - \omega _{i,0} ) \ge \alpha N^{2/3} \Big \}\) also has probability larger than \(1 -\eta \). Therefore their intersection has probability greater than \(1 - 2\eta \).

By Eq. (4.5) and the fact that we are in a characteristic direction, the result follows.

It now remains to verify (5.35). From Eq. (4.6), we can write

$$\begin{aligned} {\mathscr {H}}^{(\varepsilon )}_{i,0}\vee \omega _{i,0} - \omega _{i,0} = {\mathscr {H}}^{(\varepsilon )}_{i,0} - {\mathscr {H}}^{(\varepsilon )}_{i,0} \omega _{i,0}. \end{aligned}$$
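This is an instance of the elementary identity for \(\{0,1\}\)-valued variables h and x:

$$\begin{aligned} h\vee x=h+x-hx \qquad \Longrightarrow \qquad h\vee x-x=h(1-x)=h-hx. \end{aligned}$$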

Then begin from (4.17) to bound

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {P}}\otimes \mu _\varepsilon }({\mathcal {N}}^{u_\varepsilon } - {\mathcal {N}}^{(u)} )&\ge {\mathbb {E}}_{{\mathbb {P}}\otimes \mu _\varepsilon }\left( {\mathscr {S}}^{u_\varepsilon }_{\xi ^{(u)}_{e_1}} - {\mathscr {S}}^{(u)}_{\xi ^{(u)}_{e_1}}\right) = {\mathbb {E}}_{{\mathbb {P}}\otimes \mu _\varepsilon }\left( \sum _{i=1}^{\xi ^{(u)}_{e_1}} {\mathscr {H}}^{(\varepsilon )}_{i,0} - {\mathscr {H}}^{(\varepsilon )}_{i,0} \omega _{i,0}\right) \\&=\varepsilon {\mathbb {E}}\big (\xi ^{(u)}_{e_1}\big ) -{\mathbb {E}}_{{\mathbb {P}}\otimes \mu _\varepsilon }\left( \sum _{i=1}^{\xi ^{(u)}_{e_1}}{\mathscr {H}}^{(\varepsilon )}_{i,0} \omega _{i,0}\right) \\&= \varepsilon {\mathbb {E}}\big (\xi ^{(u)}_{e_1}\big ) -{\mathbb {E}}_{{\mathbb {P}}\otimes \mu _\varepsilon }\left( \sum _{y = 1}^{m_{u}(N)} \sum _{i=1}^{y}{\mathscr {H}}^{(\varepsilon )}_{i,0} \omega _{i,0}\mathbb {1}\big \{ \xi ^{(u)}_{e_1}=y\big \}\right) \\&= \varepsilon {\mathbb {E}}\big (\xi ^{(u)}_{e_1}\big ) -{\mathbb {E}}_{{\mathbb {P}}\otimes \mu _\varepsilon }\left( \sum _{i = 1}^{m_{u}(N)} {\mathscr {H}}^{(\varepsilon )}_{i,0} \omega _{i,0}\mathbb {1}\big \{ \xi ^{(u)}_{e_1} \ge i \big \}\right) \\&= \varepsilon {\mathbb {E}}\big (\xi ^{(u)}_{e_1}\big ) - \varepsilon {\mathbb {E}}\left( \sum _{i = 1}^{m_{u}(N)} \omega _{i,0}\mathbb {1}\big \{ \xi ^{(u)}_{e_1} \ge i \big \}\right) \\&= \varepsilon {\mathbb {E}}\big (\xi ^{(u)}_{e_1}\big ) - \varepsilon {\mathbb {E}}\left( \sum _{i = 1}^{ \xi ^{(u)}_{e_1} } \omega _{i,0}\right) . \end{aligned}$$

Combine the expectations and divide by \(\varepsilon \). Then take a limit as \(\varepsilon \rightarrow 0\) to finish the proof. \(\square \)

6 Variance in Off-Characteristic Directions

In this section we want to deduce the central limit theorem for rectangles that do not have characteristic shape.

Proof of Theorem 2.4

We prove the theorem in the case \(c<0\); analogous arguments follow for \(c>0\). Set \(m^*_u(N)=m_u(N)+\left\lfloor {c N^\alpha }\right\rfloor \). Now, the point \((m^*_u(N), n_u(N)+\left\lfloor {c N^\alpha }\right\rfloor )\) is in the characteristic direction. Thus

$$\begin{aligned} G^{(u)}_{m_u(N), n_u(N)+\left\lfloor {c N^\alpha }\right\rfloor }=G^{(u)}_{m^*_u(N), n_u(N)+\left\lfloor {c N^\alpha }\right\rfloor }+\sum _{i=m^*_u(N)+1}^{m_u(N)}I_{i, n_u(N)+\left\lfloor {c N^\alpha }\right\rfloor }. \end{aligned}$$
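The decomposition is just a telescoping of the horizontal increments \(I_{i,j}\) from (3.3) along the row \(j=n_u(N)+\left\lfloor {c N^\alpha }\right\rfloor \):

$$\begin{aligned} G^{(u)}_{m_u(N), j}-G^{(u)}_{m^*_u(N), j}=\sum _{i=m^*_u(N)+1}^{m_u(N)}\big (G^{(u)}_{i,j}-G^{(u)}_{i-1,j}\big )=\sum _{i=m^*_u(N)+1}^{m_u(N)}I_{i,j}. \end{aligned}$$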

Note that the second term on the right-hand side is a sum of \(m_u(N)-m_u^*(N)=|\left\lfloor {c N^\alpha }\right\rfloor |\) i.i.d. Bernoulli(\(\lambda \)) random variables. We center each random variable by its mean and indicate the centered variables with a bar. Multiply both sides by \(N^{-\alpha /2}\) to obtain

$$\begin{aligned} N^{-\alpha /2}{\bar{G}}^{(u)}_{m_u(N), n_u(N)+\left\lfloor {c N^\alpha }\right\rfloor }=N^{-\alpha /2}\left( {\bar{G}}^{(u)}_{m^*_u(N), n_u(N)+\left\lfloor {c N^\alpha }\right\rfloor }+\sum _{i=m^*_u(N)+1}^{m_u(N)}{\bar{I}}_{i, n_u(N)+\left\lfloor {c N^\alpha }\right\rfloor }\right) . \end{aligned}$$

The first term on the right-hand side is stochastically \(O(N^{1/3-\alpha /2})\). Since \(\alpha >2/3\), this term converges to zero in probability. On the other hand, the second term satisfies a CLT.
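Indeed, the centered sum consists of \(|\left\lfloor {c N^\alpha }\right\rfloor |\) i.i.d. terms, each of variance \(\lambda (1-\lambda )\), so the classical CLT gives

$$\begin{aligned} N^{-\alpha /2}\sum _{i=m^*_u(N)+1}^{m_u(N)}{\bar{I}}_{i, n_u(N)+\left\lfloor {c N^\alpha }\right\rfloor }\Longrightarrow {\mathcal {N}}\big (0,\,|c|\,\lambda (1-\lambda )\big ) \qquad \text {as } N\rightarrow \infty . \end{aligned}$$

\(\square \)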

Note that for any \(\varepsilon >0\) the endpoint \((N,pN-\varepsilon N)\) (resp. \((N,N/p + \varepsilon N)\)) will always be the north-east corner of an off-characteristic rectangle, no matter what the value of \(\lambda \in (0,1)\) is.

7 Variance Without Boundary

In this section we prove some results for the last passage time in the model without boundaries, but still with a fixed endpoint. The flat-edge case is discussed separately, since the order of the variance changes there.

We begin by recalling that the last passage time of the model without boundaries to reach a point in the characteristic direction (2.7) is \(G_{(1,1),(m_u(N),n_u(N))}\), while the last passage time of the model with boundaries to reach the same point is \(G^{(u)}_{(0,0),(m_\lambda (N),n_\lambda (N))}\). We want to prove another version of Lemma 5.6.

Lemma 7.1

Fix \(0<\alpha <1\). Then there exist a positive integer \(N_0=N_0(b,u)\) and constants \(C_0=C_0(\alpha ,u)\) and \(C=C(\alpha ,u)\) such that for all \(N\ge N_0\) and \(b\ge C_0\) we have

$$\begin{aligned} {\mathbb {P}}\big \{G^{(u)}_{(0,0),(m_\lambda (N),n_\lambda (N))}-G_{(1,1),(m_u(N),n_u(N))}\ge bN^{1/3}\big \}\le Cb^{-3\alpha /2}. \end{aligned}$$

Proof

We prove only the case where the maximal path exits from the x-axis. Similar arguments give the same bound when the maximal path exits from the y-axis.

Note that

$$\begin{aligned}&{\mathbb {P}}\bigg \{G^{(u)}_{(0,0),(m_\lambda (N),n_\lambda (N))}-G_{(1,1),(m_u(N),n_u(N))}\ge bN^{1/3}\bigg \}\nonumber \\&\quad \le {\mathbb {P}}\left\{ \sup _{1\le z\le aN^{2/3}}\big \{{\mathscr {S}}^{(u)}_z+G_{(z,1),(m_u(N),n_u(N))}-G_{(1,1),(m_u(N),n_u(N))}\big \}\ge bN^{1/3}\right\} \end{aligned}$$
(7.1)
$$\begin{aligned}&\qquad +{\mathbb {P}}\left\{ \sup _{1\le z\le aN^{2/3}}\big \{{\mathscr {S}}^{(u)}_z+G_{(z,1),(m_u(N),n_u(N))}\big \}\not = G_{(1,1),(m_u(N),n_u(N))}\right\} . \end{aligned}$$
(7.2)

For (7.2) using Corollary 4.10, there exists a \(C=C(u)\) such that

$$\begin{aligned} \begin{aligned}&{\mathbb {P}}\left\{ \sup _{1\le z\le aN^{2/3}}\big \{{\mathscr {S}}^{(u)}_z+G_{(z,1),(m_u(N),n_u(N))}\big \}\not = G_{(1,1),(m_u(N),n_u(N))}\right\} \\&\quad \le {\mathbb {P}}[\xi _{e_1}^{(u)}(N)\ge aN^{2/3}]\le Ca^{-3}. \end{aligned} \end{aligned}$$
(7.3)

For (7.1) we use the results from the proof of Lemma 5.6. Define

$$\begin{aligned} \lambda =u-rN^{-1/3} \end{aligned}$$

From (5.20) and (5.23), where we choose \(a=b^{\alpha /2}\), \(d=2\), and \(r=b^{\alpha /2}\), we have the upper bound

$$\begin{aligned} {\mathbb {P}}\left\{ \sup _{1\le z\le aN^{2/3}}\big \{{\mathscr {S}}^{(u)}_{z-1}-{\mathcal {V}}^{(\lambda )}_{z-1}\big \}\ge bN^{1/3}- 1\right\} \le \frac{C(\alpha , u)b^{\alpha /2}}{(b-b^\alpha -N^{-1/3})^2} \end{aligned}$$
(7.4)

where \(C(\alpha ,u)>0\) is large enough so that, for \(b\ge C\), (5.22) is satisfied and the denominator in (7.4) is at least b/2. Then we can claim that for all \(b\ge C\) and \(N\ge N_0=4^3b^{-3}\),

$$\begin{aligned}&{\mathbb {P}}\left\{ \sup _{1\le z\le aN^{2/3}}\big \{{\mathscr {S}}^{(u)}_z+G_{(z,1),(m_u(N),n_u(N))}-G_{(1,1),(m_u(N),n_u(N))}\big \}\ge bN^{1/3}\right\} \\&\quad \le {\mathbb {P}}\Big \{v^{(\lambda )}\Big (\Big \lfloor \frac{N}{p}\big (p + (1-p)u\big )^2\Big \rfloor \Big )\le \lfloor N\rfloor \Big \}+Cb^{\alpha /2-2}. \end{aligned}$$

Since \(N\ge N_0\) we can use (5.26) and, remembering that \(r=b^{\alpha /2}\) in this case, we obtain

$$\begin{aligned} \begin{aligned}&{\mathbb {P}}\left\{ \sup _{1\le z\le aN^{2/3}}\big \{{\mathscr {S}}^{(u)}_z+G_{(z,1),(m_u(N),n_u(N))}-G_{(1,1),(m_u(N),n_u(N))}\big \}\ge bN^{1/3}\right\} \\&\quad \le Cb^{-3\alpha /2}+Cb^{\alpha /2-2}. \end{aligned} \end{aligned}$$
(7.5)

Combining (7.5) and (7.3) we obtain the final result. \(\square \)

All the constants defined in this section depend on the values x, y and p.

Proof of Theorem 2.6

By Chebyshev's inequality, the upper bound of Theorem 2.3, and Lemma 7.1,

$$\begin{aligned}&{\mathbb {P}}\left\{ |G_{(1,1),(\lfloor Nx\rfloor ,\lfloor Ny\rfloor )}-Ng_{pp}(x,y)|\ge bN^{1/3}\right\} \\&\quad \le {\mathbb {P}}\left\{ \left| G_{(1,1),(m_u(N),n_u(N))}-G^{(u)}_{(0,0),(m_\lambda (N),n_\lambda (N))}\right| \ge \frac{1}{2}bN^{1/3}\right\} \\&\qquad +{\mathbb {P}}\left[ \left| G^{(u)}_{(0,0),(m_\lambda (N),n_\lambda (N))}-Ng_{pp}(x,y)\right| \ge \frac{1}{4}bN^{1/3}\right] \\&\quad \le Cb^{-3\alpha /2}+Cb^{-2}\le Cb^{-3\alpha /2}. \end{aligned}$$

To get the moment bound,

$$\begin{aligned}&{\mathbb {E}}\bigg [\bigg |\frac{G_{(1,1),(\lfloor Nx\rfloor ,\lfloor Ny\rfloor )}-Ng_{pp}(x,y)}{N^{1/3}}\bigg |^r\bigg ]\\&\quad =\int _0^\infty {\mathbb {P}}\bigg [\bigg |\frac{G_{(1,1),(\lfloor Nx\rfloor ,\lfloor Ny\rfloor )}-Ng_{pp}(x,y)}{N^{1/3}}\bigg |^r\ge b\bigg ]db. \end{aligned}$$

At this point we use (2.10), where b is replaced by \(b^{1/r}\):

$$\begin{aligned}\int _0^\infty {\mathbb {P}}\bigg [\bigg |\frac{G_{(1,1),(\lfloor Nx\rfloor ,\lfloor Ny\rfloor )}-Ng_{pp}(x,y)}{N^{1/3}}\bigg |^r\ge b\bigg ]db\le C_0 + \int _{C_0}^\infty Cb^{\frac{-3\alpha }{2r}}db<\infty ,\end{aligned}$$

and the last integral converges iff \(1\le r <3\alpha /2\). \(\square \)

7.1 Variance in Flat-Edge Directions Without Boundary

We treat explicitly only the case \(y\le px\). Since our model is symmetric, the same arguments can be repeated to prove the case \(y\ge \frac{1}{p}x\).

We force a macroscopic distance from the critical line, i.e. we assume that we can find \(\varepsilon > 0\) so that the sequence of endpoints \((N, n(N))\) satisfies

$$\begin{aligned} \varlimsup _{n \rightarrow \infty } \frac{ n(N) }{ N } \le p - \varepsilon . \end{aligned}$$
(7.6)

Proof of Theorem 2.7

Consider the following naive strategy: we construct an approximate maximal path \(\pi \) for \(G_{N, n(N)}\) without using the boundaries, using that \(n(N)< \left\lfloor {(p - \varepsilon /2)N}\right\rfloor \) for large N. The path \(\pi \) enters the bulk immediately and moves right until it finds a weight that it can collect with a diagonal step; after that the procedure repeats. At each iteration, the horizontal length of the path increases by an independent Geometric(1/p) amount.

The probability that \(\pi \) takes more than N steps before reaching level n(N) is the probability that the sum of n(N) independent \(X_i \sim \) Geometric(1/p) random variables exceeds N, which is a large deviation event. In symbols,

$$\begin{aligned} {\mathbb {P}}\{ G_{N, n(N)}(\pi ) < n(N)\} = {\mathbb {P}}\left\{ \sum _{i=1}^{n(N)}X_i> N \right\} \le {\mathbb {P}}\left\{ \sum ^{\left\lfloor {(p - \varepsilon /2)N}\right\rfloor }_{i=1}X_i > N \right\} \le e^{-cN}. \end{aligned}$$
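The last inequality is a standard Chernoff bound: for \(\theta >0\) small enough that \({\mathbb {E}}\, e^{\theta X_1}<\infty \),

$$\begin{aligned} {\mathbb {P}}\left\{ \sum ^{\left\lfloor {(p - \varepsilon /2)N}\right\rfloor }_{i=1}X_i > N \right\} \le e^{-\theta N}\big ({\mathbb {E}}\, e^{\theta X_1}\big )^{\left\lfloor {(p - \varepsilon /2)N}\right\rfloor }\le \exp \Big (-N\big (\theta -(p-\varepsilon /2)\log {\mathbb {E}}\, e^{\theta X_1}\big )\Big ), \end{aligned}$$

and since \(\log {\mathbb {E}}\, e^{\theta X_1}=\theta /p+O(\theta ^2)\) as \(\theta \downarrow 0\), the exponent is at least cN with \(c=\frac{\varepsilon \theta }{2p}+O(\theta ^2)>0\) for \(\theta \) small.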

Now, let \({\mathcal {A}} = \{ G_{N, n(N)}(\pi ) = n(N) \}\). Then

$$\begin{aligned} \text {Var}(G_{N, n(N)})&= {\mathbb {E}}(G_{N, n(N)}^2) - ({\mathbb {E}}(G_{N, n(N)}))^2 \\&\le (n(N))^2 - \big ({\mathbb {E}}(G_{N, n(N)}\mathbb {1}_{{\mathcal {A}}})\big )^2 = (n(N))^2 - (n(N))^2 {\mathbb {P}}\{ {\mathcal {A}}\}^2\\&\le (n(N))^2( 1 - (1 - e^{-cN})^2) \le C N^2 e^{-cN} \rightarrow 0. \end{aligned}$$

\(\square \)

8 Fluctuations of the Maximal Path in the Boundary Model

In this last section we prove the path fluctuations in the characteristic direction in the model with boundaries. The idea is to study how long the maximal path spends on any horizontal (or vertical) level, and to bound the distance between the maximal path and the straight line joining the starting and ending points, which corresponds to the macroscopic maximal path.

Fix a boundary parameter \(\lambda \); for this section the characteristic direction \((m_{\lambda }(N),n_{\lambda }(N))\) is abbreviated by \((m,n)\), and it is the endpoint for the maximal path. Consider two rectangles \({\mathcal {R}}_{(k,\ell ),(m,n)}\subset {\mathcal {R}}_{(0,0),(m,n)}\) with \(0<k<m_{\lambda }(N)\) and \(0<\ell <n_{\lambda }(N)\). On the smaller rectangle \({\mathcal {R}}_{(k,\ell ),(m_{\lambda }(N),n_{\lambda }(N))}\) impose boundary conditions on the south and west edges given by the distributions defined in Lemma 3.5:

$$\begin{aligned} I_{i,\ell }\overset{{\mathcal {D}}}{=}I_{i,0}\quad J_{k,j}\overset{{\mathcal {D}}}{=}J_{0,j}\text { with } i\in \{k+1,\dots ,m\},j\in \{\ell +1,\dots , n\}. \end{aligned}$$
(8.1)

Recall that (2.13) and (2.14) define, respectively, the i coordinates where the maximal path enters and exits a fixed horizontal level j. Since we are interested in both the horizontal and the vertical fluctuations, we also define the j coordinates where the maximal path enters and exits a fixed vertical level i as

$$\begin{aligned} w_0(i)=\min \{j\in \{0,\dots ,n\}:\exists k\text { such that } \pi _k=(i,j)\}, \end{aligned}$$
(8.2)

and

$$\begin{aligned} w_1(i)=\max \{j\in \{0,\dots ,n\}:\exists k\text { such that } \pi _k=(i,j)\}. \end{aligned}$$
(8.3)

To make our notation clearer, we distinguish the exit point of the path which starts from (0, 0) from that of the path which starts from \((k,\ell )\) by adding the superscript (0, 0) or \((k,\ell )\). We define the exit point from the south edge of the rectangle \({\mathcal {R}}_{(k,\ell ),(m,n)}\) as

$$\begin{aligned} \xi _{e_1}^{(k,\ell )}=\max _{\pi \in \Pi _{(k,\ell ),(m,n)}}\{r\ge 0 : (k+i,\ell ) \in \pi \text { for }0\le i\le r, \pi \text { is the right-most maximal}\}.\nonumber \\ \end{aligned}$$
(8.4)

Observe from (8.1) that \(\xi _{e_1}^{(k,\ell )}\) and \(v_1(\ell )-k\) have the same distribution, i.e.

$$\begin{aligned} {\mathbb {P}}\big \{\xi _{e_1}^{(k,\ell )}=r\big \}={\mathbb {P}}\{v_1(\ell )=k+r\}. \end{aligned}$$
(8.5)

Proof of Theorem 2.9

Note that if \(\tau =0\), then (2.15) and (2.16) are already contained in (4.34) and (5.34).

For \(0<\tau <1\) set \(v=\lfloor bN^{2/3}\rfloor \) and \((k,\ell )=(\lfloor \tau m\rfloor ,\lfloor \tau n\rfloor )\). We add a superscript \({\mathbb {P}}^{(\cdot ,\cdot )}\{\cdot \}\) when we want to emphasise the target point for which we are computing the probability. Remember that the rectangle \({\mathcal {R}}_{(k,\ell ),(m,n)}\) has boundary condition (8.1). By Lemma 3.5

$$\begin{aligned} {\mathbb {P}}^{(m,n)}\big \{v_1(\lfloor \tau n\rfloor )>\lfloor \tau m\rfloor +v\big \}={\mathbb {P}}^{(m-k,n-\ell )}\big \{\xi ^{(0,0)}_{e_1}>v\big \}. \end{aligned}$$
(8.6)

Note that \((m-k,n-\ell )\) is still in the characteristic direction since \((m-k,n-\ell )=(1-\tau )(m,n)\). Therefore, from (8.6) and Corollary 4.10

$$\begin{aligned} {\mathbb {P}}^{(m,n)}\big \{v_1(\lfloor \tau n\rfloor )>\tau m+bN^{2/3}\big \}\le C_2b^{-3}. \end{aligned}$$

To prove the other part of (2.15) notice that

$$\begin{aligned} {\mathbb {P}}^{(m,n)}\{v_0(\lfloor \tau n\rfloor )<\lfloor \tau m\rfloor -v\}\le {\mathbb {P}}^{(m,n)}\{w_1(\lfloor \tau m\rfloor -v)\ge \lfloor \tau n\rfloor \}. \end{aligned}$$
(8.7)

Let \(k=\lfloor \tau m\rfloor -v\) and \(\ell =\lfloor \tau n\rfloor -\lfloor nv/m\rfloor \). Then, up to integer-part corrections, \(k/\ell =m/n\). For a constant \(C_{\lambda }>0\) and N sufficiently large, \(\lfloor \tau n\rfloor \ge \ell +C_{\lambda }bN^{2/3}\). Note that from (8.1) we can write

$$\begin{aligned} {\mathbb {P}}^{(m,n)}\big \{\xi _{e_2}^{(k,\ell )}=r\big \}={\mathbb {P}}^{(m,n)}\{w_1(k)=\ell +r\}. \end{aligned}$$
(8.8)

The vertical analogue of (8.6) for \(w_1\) is

$$\begin{aligned} {\mathbb {P}}^{(m,n)}\big \{w_1(k)\ge \ell +C_{\lambda }bN^{2/3}\big \}={\mathbb {P}}^{(m-k,n-\ell )}\big \{\xi ^{(0,0)}_{e_2}\ge C_{\lambda }bN^{2/3}\big \}. \end{aligned}$$

Combining this last result with (8.7) and applying Corollary 4.10 to \(\xi _{e_2}\), (2.15) follows.

Finally, we prove (2.16). We want to compute

$$\begin{aligned} {\mathbb {P}}\big \{\exists \,k\text { such that } |{\hat{\pi }}_k-(\tau m,\tau n)|\le \delta N^{2/3}\big \}. \end{aligned}$$

If the path \({\hat{\pi }}\) comes within \(\ell _{\infty }\) distance \(\delta N^{2/3}\) of \((\tau m,\tau n)\), then it necessarily enters through the south or west side of the rectangle \({\mathcal {R}}_{(k+1,\ell +1),(k+4\lfloor \delta N^{2/3}\rfloor ,\ell + 4\lfloor c\delta N^{2/3}\rfloor )}\) (or via a diagonal step from the south-west corner), where \((k,\ell )=(\lfloor \tau m\rfloor -2\lfloor \delta N^{2/3}\rfloor ,\lfloor \tau n\rfloor -2\lfloor c\delta N^{2/3}\rfloor )\) and the constant \(c>m/n\) for N large enough. The constant c is there to make the rectangle of characteristic shape.

From the perspective of the rectangle \({\mathcal {R}}_{(k,\ell ), (m,n)}\) this event is equivalent to having either \(0<\xi _{e_1}^{(k,\ell )}\le 4\delta N^{2/3}\) or \(0<\xi _{e_2}^{(k,\ell )}\le 4c\delta N^{2/3}\). For these reasons we have

$$\begin{aligned}&{\mathbb {P}}^{(m,n)}\left\{ \exists k \text { such that } |{\hat{\pi }}_k-(\tau m,\tau n)|\le \delta N^{2/3}\right\} \\&\quad \le {\mathbb {P}}^{(m,n)}\left\{ 0<\xi _{e_1}^{(k,\ell )}\le 4\delta N^{2/3}\text { or }0<\xi _{e_2}^{(k,\ell )}\le 4c\delta N^{2/3}\right\} \\&\quad = {\mathbb {P}}^{(m-k,n-l)}\left\{ 0\le \xi ^{(0,0)}_{e_1}\le 4\delta N^{2/3}\text { or }0 \le \xi ^{(0,0)}_{e_2}\le 4c\delta N^{2/3}\right\} . \end{aligned}$$

We get the result using Eq. (5.34) for both exit points. \(\square \)