1 Introduction

In a nutshell, stochastic homogenization deals with (mostly physical) problems in an environment that varies on a very small scale according to a spatially heterogeneous, random distribution. These fine scales can enter for instance through oscillating x-dependent coefficients of a PDE or integrands of an integral functional. As the scale of the random heterogeneous oscillations becomes smaller and smaller or, equivalently, as the rescaled domain invades the whole space, one aims to derive an effective, averaged model in a spirit similar to the law of large numbers. In the context of linear elliptic PDEs of the form

$$\begin{aligned} -\text {div}(A(x/\varepsilon ,\omega )\nabla u)=f, \end{aligned}$$
(1.1)

where \(\varepsilon \) denotes the scale of the fine oscillations and \(\omega \) belongs to the sample space \(\Omega \) of an underlying probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\), the first qualitative results date back to the work of Kozlov [34] and Papanicolaou and Varadhan [37]. In a variational context, first qualitative results in the nonlinear setting were obtained by Dal Maso and Modica [22, 23], where the authors derive an effective, averaged model for integral functionals of the form

$$\begin{aligned} \int _{D}L(x/\varepsilon ,\omega ,\nabla u)\,\mathrm {d}x\end{aligned}$$
(1.2)

defined for Sobolev functions. Starting from these qualitative results, the interest grew in deriving error estimates for the homogenization approximation, which led to the development of a quantitative theory for stochastic homogenization. In the context of linear elliptic PDEs as in (1.1), first quantitative convergence results are already contained in [42] and an unpublished preprint by Naddaf and Spencer [36], the latter being optimal in the regime of a small ellipticity ratio. Then, in a non-perturbative regime there has been enormous progress in recent years starting with the works of Gloria and Otto [29,30,31] and Gloria, Neukamm and Otto [28] (partially for discrete equations on a lattice). In terms of stochastic integrability of the error estimates, a breakthrough came with the work of Armstrong and Smart [9], which also covers the behavior of minimizers of functionals as in (1.2) under the assumption of uniform convexity, thus giving the first quantitative version of the results in [22, 23]. Further quantitative results in this nonlinear setting were obtained for instance in [8, 26]. Finally, in the linear elliptic setting (1.1) essentially optimal results in terms of scaling and stochastic integrability of weighted averages of the first order corrector were obtained in [7, 32]. This list is by no means exhaustive as different assumptions on the statistics of the medium lead to different results.

In the last decade, qualitative stochastic homogenization has been extended to variational models involving discontinuities, namely, to functionals defined on the space of functions of bounded variation. In particular, models for interfacial energies were studied first in discrete environments where the underlying medium is either a periodic lattice with random interactions [17] or a random point set [3, 16, 18]. The latter results then motivated the qualitative analysis carried out in [11, 40] for random discrete approximations of free-discontinuity functionals (see also [27] for a discrete approximation of the total variation on point clouds). In a continuum environment, the qualitative theory covers by now, among others, the stochastic homogenization of free-discontinuity functionals with randomly oscillating integrands [20, 21] or defined on randomly perforated domains [38], of diffuse interface problems [35], and of singularly-perturbed elliptic approximations of free-discontinuity functionals [12].

In this paper we derive first quantitative results for interfacial energies in a continuum medium. Namely, we consider energies acting on finite partitions,  i.e., functions of bounded variation taking values in a finite set. More precisely, given an open set \(D\subset {\mathbb {R}}^d\) with Lipschitz boundary and \({\mathcal {M}}\subset {\mathbb {R}}^m\) finite, we consider energies of the form

$$\begin{aligned} E_{g,\varepsilon }(u,D)=\int _{D\cap S_u}g(\omega ,x/\varepsilon ,u^+-u^-,\nu _u)\,\mathrm {d}{\mathcal {H}}^{d-1},\qquad u\in BV(D;{\mathcal {M}}), \end{aligned}$$
(1.3)

where \(S_u\), \(u^\pm \) and \(\nu _u\) denote the jump set of u, the traces of u on both sides of \(S_u\), and the generalized normal to \(S_u\), respectively, while g is a jointly measurable and uniformly bounded function (see Definition 2.1). From a deterministic point of view, i.e., when g is independent of \(\omega \), functionals as in (1.3) have been studied by Ambrosio and Braides in [4, 5] and we refer to those two papers for more details. In particular, in [5] the authors prove a periodic homogenization result for functionals as in (1.3). Although a corresponding stochastic homogenization result for stationary random integrands g is, to the best of our knowledge, not available in the literature, it can be obtained by methods that are by now standard. Indeed, the following result can be proved essentially along the lines of [4, Theorem 3.2] (using the integral representation result [15]) and [20, Theorem 3.12] (restricting to functions taking only finitely many values):

Theorem 0.1

Let g be an admissible random surface tension in the sense of Definition 2.1 satisfying Assumption 1 (see Sect. 2). As \(\varepsilon \searrow 0\) the sequence of functionals \(E_{g,\varepsilon }\) defined in (1.3) \(\Gamma \)-converges with respect to the strong \(L^1(D)\)-convergence to the functional

$$\begin{aligned} E_{g_{\mathrm{hom}}}(u,D)=\int _{D\cap S_u}g_\mathrm{hom}(\omega ,u^+,u^-,\nu _u)\,\mathrm {d}{\mathcal {H}}^{d-1}, \end{aligned}$$
(1.4)

where \(g_{\mathrm{hom}}\) is as in (1.5). Moreover, if g is ergodic, then \(g_{\mathrm{hom}}\) is deterministic.

The effective integrand \(g_{\mathrm{hom}}\) in (1.4) does not depend on the spatial variable (but, depending on the set \({\mathcal {M}}\), it can lose the structural dependence on \(u^+-u^-\)). Moreover, it is given by an asymptotic minimization problem involving boundary conditions on larger and larger cubes. More precisely, denoting by \(u^{a,b,\nu }\in BV_{\mathrm{loc}}({\mathbb {R}}^d,{\mathcal {M}})\) the function

$$\begin{aligned} u^{a,b,\nu }(x)={\left\{ \begin{array}{ll} b &{}\text{ if }\ \langle x,\nu \rangle >0,\\ a &{}\text{ otherwise, } \end{array}\right. } \end{aligned}$$

the integrand \(g_{\mathrm{hom}}(\omega ,a,b,\nu )\) is given by the following limit, which in particular exists almost surely (see also Theorem 3.1):

$$\begin{aligned} g_{\mathrm{hom}}(\omega ,a,b,\nu ){=}\lim _{t\rightarrow {+}\infty }\frac{1}{t^{d-1}}\inf \left\{ E_{g,1}(u,tQ^{\nu }):\,u{=}u^{a,b,\nu }\text { in a neighborhood of }\partial tQ^{\nu }\right\} ,\nonumber \\ \end{aligned}$$
(1.5)

where \(Q^{\nu }\) is a unit cube with two faces orthogonal to \(\nu \).

The aim of the present paper is to analyze the asymptotic formula in (1.5) and give some quantitative error estimates when the size of the box grows. Clearly, in order to obtain quantitative information, mere stationarity does not suffice and we have to restrict to the ergodic setting, so from now on we assume in this introduction that \(g_{\mathrm{hom}}\) is deterministic. Denoting the value of the normalized infimum in (1.5) for fixed t by \(X_{t,t}^{a,b,\nu }(g)(\omega )\) (the meaning of the double index t, t will become clear shortly), the error \(X_{t,t}^{a,b,\nu }(g)(\omega )-g_{\mathrm{hom}}(a,b,\nu )\) naturally splits into two parts, a deterministic error and the fluctuations of \(X_{t,t}^{a,b,\nu }(g)\). More precisely, we can write

$$\begin{aligned} X_{t,t}^{a,b,\nu }(g)(\omega ){-}g_{\mathrm{hom}}(a,b,\nu ){=}\underbrace{X_{t,t}^{a,b,\nu }(g)(\omega ){-}{\mathbb {E}}[X_{t,t}^{a,b,\nu }(g)]}_{\text {fluctuations}}{+}\underbrace{{\mathbb {E}}[X_{t,t}^{a,b,\nu }(g)]{-}g_{\mathrm{hom}}(a,b,\nu )}_{\text {deterministic error}}.\nonumber \\ \end{aligned}$$
(1.6)

In this paper we give quantitative estimates for the fluctuations assuming that the integrand g in (1.3) satisfies a multiscale functional inequality [24, 25] of the form

$$\begin{aligned} \mathrm{Var}(X(g))\le C\,{\mathbb {E}}\left[ \int _0^{\infty }\int _{{\mathbb {R}}^d}\left( \partial ^{\mathrm{osc}}_{g,B_{s+1}(x)}X(g)\right) ^2\,\mathrm {d}x\,(s+1)^{-d}\pi (s)\,\mathrm {d}s\right] , \end{aligned}$$

for all measurable functions X, where \(\pi \) is a non-negative weight and \(\partial ^{\mathrm{osc}}_{g,B_{s+1}(x)}X(g)\) denotes the so-called oscillation. The latter measures the sensitivity of X(g) with respect to local perturbations of g on \(B_{s+1}(x)\) (see Assumption 2 for the details and Remark 2.4 for further comments). Under this additional assumption, one can show that

$$\begin{aligned} {\mathbb {E}}\big [\big |X_{t,t}^{a,b,\nu }(g)-{\mathbb {E}}[X_{t,t}^{a,b,\nu }(g)]\big |^{2p}\big ]\le (C p^2)^p t^{p(2-d)}\int _0^{+\infty }(s+1)^{2p(d-1)}\pi (s)\,\mathrm {d}s\quad \text {for any}\ p\ge 1. \end{aligned}$$

However, this estimate does not imply any decay in t in dimension 2. For this reason, we first take a step back and characterize \(g_{\mathrm{hom}}\) in a way which is more convenient for our purpose. Namely, we prove that for any fixed \(\nu \in {\mathbb {S}}^{d-1}\), in (1.5) one can reduce the height of the cubes \(tQ^{\nu }\) (i.e., the side length along the direction \(\nu \)) to an arbitrarily slowly diverging sequence \(\ell _t\). Denoting the corresponding value of the minimization problem by \(X_{t,\ell _t}^{a,b,\nu }(g)(\omega )\) (see Sect. 2.4 for a precise definition), we show that as long as \(\ell _t\le t\) diverges, it holds that

$$\begin{aligned} g_{\mathrm{hom}}(\omega ,a,b,\nu )=\lim _{t\rightarrow +\infty }X_{t,\ell _t}^{a,b,\nu }(g)(\omega ) \end{aligned}$$
(1.7)

almost surely. Note that this result only requires \({\mathbb {R}}^d\)-stationarity (the case of \({\mathbb {Z}}^d\)-stationarity causes problems along irrational directions; see Sect. 4.1 for how to circumvent this issue under additional assumptions). With the formula using the flat hyperrectangles (we also refer to it as the almost plane-like formula) we can improve the estimate on the fluctuations to

$$\begin{aligned} {\mathbb {E}}\big [\big |X_{t,\ell _t}^{a,b,\nu }(g)-{\mathbb {E}}[X_{t,\ell _t}^{a,b,\nu }(g)]\big |^{2p}\big ]\le (C p^2)^p t^{p(1-d)}\ell _t^p\int _0^{+\infty }(s+1)^{2p(d-1)}\pi (s)\,\mathrm {d}s,\quad \end{aligned}$$
(1.8)

which decays in any dimension \(d\ge 2\) when \(\ell _t\) grows much slower than t. Depending on the decay of the weight this bound can be summed with respect to p and we obtain strong concentration estimates for the fluctuations. We present them in detail for an exponentially decaying weight in Corollary 3.4.

Characterizing \(g_{\mathrm{hom}}\) via the almost plane-like formula in (1.7) is of interest in its own right, in particular in comparison with the periodic setting in the case of two phases. Namely, when the function g is deterministic and 1-periodic in the spatial variable and \({\mathcal {M}}\) contains only two values, Caffarelli and de la Llave have shown in [19], under a mild continuity and ellipticity assumption, that there always exist so-called plane-like minimizers, meaning that the jump set stays in a uniformly bounded neighborhood of the hyperplane orthogonal to \(\nu \). As a consequence, in this setting the (deterministic) limit in (1.7) still holds when the height \(\ell _t\) does not diverge, but is uniformly bounded. However, in the random setting such a property is not expected to hold. Indeed, in Sect. 4.2 we construct an example of a stationary, ergodic integrand g such that for any fixed \(\ell \in {\mathbb {N}}\) the corresponding limit of \(X_{t,\ell }^{a,b,\nu }(g)(\omega )\) as \(t\rightarrow +\infty \) is strictly larger than \(g_{\mathrm{hom}}\). Moreover, for first passage percolation in dimension two (which is equivalent to a lattice-based version of the problem defining \(X_{t,t}^{a,b,\nu }(g)(\omega )\) with two phases) it is expected that the maximal deviation from the straight line connecting 0 and \(n\nu ^{\perp }\) is of the order \(n^{2/3}\) (see [10, Section 4.2] and references therein). Note that our result, which extends to discrete models without significant changes, does not contradict this conjecture, since we only consider the minimal energy value rather than actual minimizers.

We close this introduction by briefly commenting on our result and our choice of methods, as well as its limitations and possible future problems. The strong concentration estimates for the fluctuations obtained in the present paper allow us to control the probabilistic error in (1.6). By contrast, a quantitative estimate for the deterministic error in (1.6) is beyond the scope of this paper. In fact, such a control seems to be a rather difficult issue for subadditive processes, and one of the few general methods seems to be the theory developed in [2]. However, the assumptions therein do not adapt well to random interfaces, except in dimension two where the duality to paths can be used. Let us also mention that in the case of two phases there is a problem similar to the minimization problem defining \(X_{t,\ell _t}^{a,b,\nu }\), which consists of finding the maximal flow/minimal cut between the upper and lower parts of the boundary of the cube with iid weighted edges given by nearest neighbors in the integer lattice \({\mathbb {Z}}^d\) (the problems are slightly different since the minimal cut does not have to be the discontinuity set of a function). Under much weaker moment conditions on the weights than uniform boundedness from above and below, fluctuation estimates on the energy of a minimal cut were obtained for instance in [39, 41]. While parts of the analysis seem to be adaptable to our continuum model assuming a finite range of dependence, some estimates use the independence assumption through a logarithmic Sobolev inequality relying on [14]. For a continuum model, requiring a logarithmic Sobolev inequality and finite range of dependence seems quite restrictive. Moreover, as shown in [25], many physical models satisfy a weighted functional inequality since they are transformations of product structures. Finally, note that in contrast to uniformly convex problems, quantitative results on the energy of minimizers do not imply any quantitative estimates on the convergence rate of the minimizers of interfacial-type energies. The latter seems to be a very challenging problem for the future.

2 Preliminaries and notation

2.1 General notation

We first introduce some notation that will be used in this paper. Given a measurable set \(A\subset {\mathbb {R}}^d\) we denote by |A| its d-dimensional Lebesgue measure, and by \({\mathcal {H}}^{k}(A)\) its k-dimensional Hausdorff measure. For \(x\in {\mathbb {R}}^d\) we denote by |x| the Euclidean norm, and \(B_\rho (x_0)\) denotes the open ball with radius \(\rho >0\) centered at \(x_0\in {\mathbb {R}}^d\). If \(x_0=0\) we simply write \(B_\rho \). Given \(x_0\in {\mathbb {R}}^d\) and \(\nu \in {\mathbb {S}}^{d-1}\) we let \(H^\nu (x_0)\) be the hyperplane orthogonal to \(\nu \) and passing through \(x_0\), and for every \((a,b)\in {\mathbb {R}}^m\times {\mathbb {R}}^m\) the piecewise constant function taking the values a and b and jumping across \(H^\nu (x_0)\) is denoted by \(u_{x_0}^{a,b,\nu }:{\mathbb {R}}^{d}\rightarrow {\mathbb {R}}^{m}\), i.e.,

$$\begin{aligned} u_{x_0}^{a,b,\nu }(x):={\left\{ \begin{array}{ll} b &{}\text{ if }\ \langle x-x_0,\nu \rangle >0, \\ a &{}\text{ otherwise, } \end{array}\right. } \end{aligned}$$
(2.1)

where the brackets \(\langle \cdot ,\cdot \rangle \) denote the standard scalar product. If \(x_0=0\) we write \(H^\nu \) and \(u^{a,b,\nu }\) in place of \(H^\nu (0)\) and \(u_{0}^{a,b,\nu }\), respectively. Let \(\{e_1,\ldots ,e_d\}\) be the standard basis of \({\mathbb {R}}^d\). Then \(O_{\nu }\) is the orthogonal matrix induced by the linear mapping

$$\begin{aligned} x\mapsto {\left\{ \begin{array}{ll} \displaystyle {2\frac{\langle x,\nu +e_d\rangle }{|\nu +e_d|^2}(\nu +e_d)-x} &{}\text{ if }\ \nu \in {\mathbb {S}}^{d-1}\setminus \{-e_d\}, \\ -x &{}\text{ otherwise. } \end{array}\right. } \end{aligned}$$
(2.2)

In this way, \(O_\nu e_d=\nu \) and the set \(\{\nu _j:=O_\nu e_j:j=1,\ldots , d-1\}\) is an orthonormal basis for \(H^\nu \). Setting \(\nu _d=\nu \), we define the cube \(Q^\nu \) as

$$\begin{aligned} Q^\nu =\left\{ x\in {\mathbb {R}}^d:|\langle x,\nu _j\rangle | <1/2\text { for }j=1,\ldots ,d\right\} , \end{aligned}$$
(2.3)

and we set \(Q^\nu _\rho (x_0)=x_0+\rho Q^\nu \).
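For readers who wish to check this construction numerically, the following short sketch (in Python/NumPy; the function name O_nu and the example direction are ours and not part of the paper) assembles \(O_\nu \) from the reflection in (2.2) and verifies that it is orthogonal, that \(O_\nu e_d=\nu \), and that its first \(d-1\) columns are orthogonal to \(\nu \), i.e., that they span \(H^\nu \).

```python
import numpy as np

def O_nu(nu):
    """Orthogonal matrix induced by the mapping in (2.2); it maps e_d to nu."""
    nu = np.asarray(nu, dtype=float)
    d = nu.size
    e_d = np.zeros(d)
    e_d[-1] = 1.0
    w = nu + e_d
    if np.allclose(w, 0.0):                  # the exceptional case nu = -e_d
        return -np.eye(d)
    # x -> 2 <x, w> / |w|^2 * w - x, written as a matrix
    return 2.0 * np.outer(w, w) / np.dot(w, w) - np.eye(d)

nu = np.array([3.0, 0.0, 4.0]) / 5.0         # an example direction in S^2
O = O_nu(nu)
assert np.allclose(O[:, -1], nu)             # O_nu e_d = nu
assert np.allclose(O.T @ O, np.eye(3))       # orthogonality
assert np.allclose(O[:, :-1].T @ nu, 0.0)    # the columns nu_j = O_nu e_j lie in H^nu
```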

Finally, the letter C stands for a generic positive constant that may change every time it appears.

2.2 BV-functions

The relevant function space in this paper is the space of finite partitions, i.e., the space of functions of bounded variation taking only finitely many values. More precisely, we denote by \(BV(D;{\mathbb {R}}^m)\) the space of all functions \(u\in L^1(D;{\mathbb {R}}^m)\) whose distributional derivative Du is a matrix-valued Radon measure. Moreover, given \({\mathcal {M}}\subset {\mathbb {R}}^m\), we set \(BV(D;{\mathcal {M}}):=\{u\in BV(D;{\mathbb {R}}^m):u(x)\in {\mathcal {M}}\text { a.e. in }D\}\). If \({\mathcal {M}}\) is finite, then Du can be represented as \(Du(B)=\int _{B\cap S_u}(u^+(x)-u^-(x))\otimes \nu _u(x)\,\mathrm {d}{\mathcal {H}}^{d-1}\) for any Borel set \(B\subset D\). Here \(S_u\) is the so-called jump set of u, which is \({\mathcal {H}}^{d-1}\)-rectifiable and coincides \({\mathcal {H}}^{d-1}\)-a.e. with the complement in D of the set of Lebesgue points of u. Moreover, \(\nu _u(x)\) is the measure-theoretic normal to \(S_u\) and \(u^+(x),u^-(x)\) are the traces of u on both sides of \(S_u\). We refer the reader to [6] for more details on functions of bounded variation.

2.3 Boundedness and probabilistic assumptions

In this subsection we give the precise assumptions we make on the random integrand. We start by fixing some notation in the deterministic setting. Namely, for a given parameter \(c\ge 1\) we denote by \({\mathcal {A}}_c\) the class of all Borel measurable functions \(g:{\mathbb {R}}^d\times {\mathbb {R}}^m\times {\mathbb {S}}^{d-1}\rightarrow [0,+\infty )\) satisfying

$$\begin{aligned} \frac{1}{c}\le g(x,\zeta ,\nu )\le c,\quad \text {and}\quad g(x,\zeta ,\nu )=g(x,-\zeta ,-\nu )\,, \end{aligned}$$
(2.4)

for every \((x,\zeta ,\nu )\in {\mathbb {R}}^d\times {\mathbb {R}}^m\times {\mathbb {S}}^{d-1}\). To any \(g\in {\mathcal {A}}_c\) and \(D\subset {\mathbb {R}}^d\) open with Lipschitz boundary we associate a functional \(E_g(\cdot ,D)\) defined on partitions by setting \(E_g(\cdot ,D):L^1_\mathrm{loc}({\mathbb {R}}^d;{\mathbb {R}}^m)\rightarrow [0,+\infty ]\),

$$\begin{aligned} E_g(u,D):={\left\{ \begin{array}{ll} \displaystyle \int _{D\cap S_u} g(x,u^+-u^-,\nu _u)\,\mathrm {d}{\mathcal {H}}^{d-1}&{}\text {if }u\in BV(D;{\mathcal {M}}),\\ +\infty &{}\text {otherwise in }L^1_{\mathrm{loc}}({\mathbb {R}}^d;{\mathbb {R}}^m). \end{array}\right. } \end{aligned}$$
(2.5)

Here \({\mathcal {M}}\subset {\mathbb {R}}^m\) is a finite set that we fix throughout this paper. Note that \(E_g\) is well-defined for \(g\in {\mathcal {A}}_c\) thanks to the second condition in (2.4) and the fact that the triple \((u^+(x),u^-(x),\nu _u(x))\) is uniquely defined up to a permutation in \((u^+(x),u^-(x))\) and a simultaneous change of sign in \(\nu _u(x)\).

We are now in a position to rigorously introduce the random setting. Throughout the paper \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) denotes a complete probability space.

Definition 2.1

(Admissible surface tensions) We say that a function \(g:\Omega \times {\mathbb {R}}^d\times {\mathbb {R}}^m\times {\mathbb {S}}^{d-1}\rightarrow [0,+\infty )\) is an admissible random surface tension, if it is jointly measurable and there exists \(c\ge 1\) such that for every \(\omega \in \Omega \) the function \(g(\omega ):{\mathbb {R}}^d\times {\mathbb {R}}^m\times {\mathbb {S}}^{d-1}\rightarrow [0,+\infty )\), \(g(\omega ):=g(\omega ,\cdot ,\cdot ,\cdot )\) belongs to \({\mathcal {A}}_c\).

For any admissible random surface tension g and for \(\omega \in \Omega \) we set \(E_g(\omega ):=E_{g(\omega )}\), where \(E_{g(\omega )}\) is defined according to (2.5),  i.e., \(E_g(\omega )(u,D)=\int _{D\cap S_u}g(\omega ,x,u^+-u^-,\nu _u)\,\mathrm {d}{\mathcal {H}}^{d-1}\) for \(D\subset {\mathbb {R}}^d\) open, \(u\in BV(D;{\mathcal {M}})\).

We now introduce two further probabilistic assumptions. The first one concerns spatial stationarity of the integrand, while the second one is a multi-scale functional inequality (or weighted spectral gap). We start by recalling the notion of measure-preserving group actions.

Definition 2.2

(Measure-preserving group action) Let \(k\in {\mathbb {N}}\), \(k\ge 1\). A measure-preserving additive group action on \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) is a family \(\{\tau _z\}_{z\in {\mathbb {R}}^k}\) of mappings \(\tau _z:\Omega \rightarrow \Omega \) satisfying the following properties:

(1) (measurability) \(\tau _z\) is \({\mathcal {F}}\)-measurable for every \(z\in {\mathbb {R}}^k\);

(2) (invariance) \({\mathbb {P}}(\tau _z A)={\mathbb {P}}(A)\), for every \(A\in {\mathcal {F}}\) and every \(z\in {\mathbb {R}}^k\);

(3) (group property) \(\tau _0=\mathrm id_\Omega \) and \(\tau _{z_1+z_2}=\tau _{z_2}\circ \tau _{z_1}\) for every \(z_1,z_2\in {\mathbb {R}}^k\).

If, in addition, \(\{\tau _z\}_{z\in {\mathbb {R}}^k}\) satisfies the implication

$$\begin{aligned} {\mathbb {P}}(\tau _zA\Delta A)=0\quad \forall \, z\in {\mathbb {R}}^k\implies {\mathbb {P}}(A)\in \{0,1\}, \end{aligned}$$

then it is called ergodic.

Remark 2.3

(Discrete measure-preserving group action) If in Definition 2.2 the space \({\mathbb {R}}^k\) is replaced by \({\mathbb {Z}}^k\), we say that the corresponding family \(\{\tau _z\}_{z\in {\mathbb {Z}}^k}\) of mappings \(\tau _z:\Omega \rightarrow \Omega \) satisfying (1)–(3) is a discrete measure-preserving group action.

We are now in a position to state our probabilistic assumptions on the random surface tension g.

Assumption 1

The admissible random surface tension g is \({\mathbb {R}}^d\)-stationary, i.e., there exists a measure-preserving group action \(\{\tau _{z}\}_{z\in {\mathbb {R}}^d}\) such that for all \(\omega \in \Omega \) and for all \(z\in {\mathbb {R}}^d\) it holds that

$$\begin{aligned} g(\tau _z\omega ,x,\zeta ,\nu )=g(\omega ,x+z,\zeta ,\nu )\quad \quad \forall \, (x,\zeta ,\nu )\in {\mathbb {R}}^d\times {\mathbb {R}}^m\times {\mathbb {S}}^{d-1}. \end{aligned}$$

It is called ergodic, if it is stationary and the group action \(\{\tau _z\}_{z\in {\mathbb {R}}^d}\) is ergodic. We refer to Assumption 1(E) if ergodicity holds.

Assumption 2

Let \(\pi \in L^1((0,+\infty ))\) be non-negative. The admissible random surface tension g satisfies a multiscale functional inequality with weight \(\pi \) and with respect to the oscillation, i.e., for any function \(X:{\mathcal {A}}_c\rightarrow {\mathbb {R}}\) such that \(\omega \mapsto (X(g))(\omega ):=X(g(\omega ))\) is measurable we have

$$\begin{aligned} \mathrm{Var}(X(g))\le C\,{\mathbb {E}}\left[ \int _0^{\infty }\int _{{\mathbb {R}}^d}\left( \partial ^\mathrm{osc}_{g,B_{s+1}(x)}X(g)\right) ^2\,\mathrm {d}x\,(s+1)^{-d}\pi (s)\,\mathrm {d}s\right] , \end{aligned}$$

where the oscillation of X with respect to g on \(U\subset {\mathbb {R}}^d\) is formally defined as

$$\begin{aligned} \begin{aligned} \partial ^{\mathrm{osc}}_{g,U}X(g)(\omega ):=&\,\mathrm{ess}\,\sup \{X(g'):g'\in {\mathcal {A}}_c,\,g'|_{({\mathbb {R}}^d\setminus U)\times {\mathbb {R}}^m\times {\mathbb {S}}^{d-1}}=g(\omega )|_{({\mathbb {R}}^d\setminus U)\times {\mathbb {R}}^m\times {\mathbb {S}}^{d-1}}\} \\&-\mathrm{ess}\,\inf \{X(g'):g'\in {\mathcal {A}}_c,\,g'|_{({\mathbb {R}}^d\setminus U)\times {\mathbb {R}}^m\times {\mathbb {S}}^{d-1}}=g(\omega )|_{({\mathbb {R}}^d\setminus U)\times {\mathbb {R}}^m\times {\mathbb {S}}^{d-1}}\}. \end{aligned}\nonumber \\ \end{aligned}$$
(2.6)

Remark 2.4

Our definition of a multiscale functional inequality differs from [24, 25], since we consider functions g that do not depend only on x. However, as a concrete example one can consider surface tensions of the form \(g(\omega ,x,\zeta ,\nu )=a(\omega ,x)\phi (\zeta ,\nu )\) with a satisfying a multiscale functional inequality in the spirit of [25]. We chose our framework to allow for more general dependencies that are not present in the homogenization of linear elliptic PDEs.
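Purely for illustration, the following sketch (Python; all helper names and parameter values are hypothetical and ours, not taken from [25]) generates one realization of such a product-form surface tension, with a a two-valued coefficient field built from Poisson inclusions and \(\phi \equiv 1\). The code only produces an admissible element of \({\mathcal {A}}_c\); whether a given coefficient field satisfies a multiscale functional inequality is a separate matter, for which we refer to [25].

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_inclusion_centers(box=20.0, intensity=1.0, d=2):
    """One realization: centers of a Poisson point process of the given intensity in [0, box]^d."""
    n = rng.poisson(intensity * box ** d)
    return rng.uniform(0.0, box, size=(n, d))

def make_g(centers, radius=0.5, a_in=2.0, a_out=1.0):
    """Product-form tension g(omega, x, zeta, nu) = a(omega, x) * phi(zeta, nu) with phi == 1;
    a takes the values a_in on the inclusions and a_out outside, so (2.4) holds with c = a_in."""
    def g(x, zeta, nu):
        x = np.asarray(x, dtype=float)
        inside = len(centers) > 0 and bool(np.any(np.linalg.norm(centers - x, axis=1) < radius))
        return a_in if inside else a_out
    return g

g = make_g(sample_inclusion_centers())
print(g([1.0, 2.0], None, None), g([10.0, 10.0], None, None))
```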

2.4 Relevant quantities

We now introduce the quantities which are relevant for the analysis carried out in the present paper.

For \(\nu \in {\mathbb {S}}^{d-1}\) let \(O_\nu \) be the orthogonal matrix introduced in (2.2), \(\nu _j:=O_\nu e_j\), \(j=1,\ldots , d-1\), and for every \(t,\ell >0\) set

$$\begin{aligned} R_{t,\ell }^\nu :=\{x\in {\mathbb {R}}^d:|\langle x,\nu \rangle |<\ell /2,\ |\langle x,\nu _j\rangle |<t/2, j=1,\ldots ,d-1\}. \end{aligned}$$
(2.7)

Let g be an admissible random surface tension; for \(t,\ell >0\), \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\), \(\nu \in {\mathbb {S}}^{d-1}\) and for every \(\omega \in \Omega \) we introduce the quantity

$$\begin{aligned} X_{t,\ell }^{a,b,\nu }(g)(\omega ):=t^{1-d}\inf \big \{E_g(\omega )(u,R_{t,\ell }^\nu ):u\in {\mathscr {A}}(u^{a,b,\nu },R_{t,\ell }^\nu )\big \}, \end{aligned}$$
(2.8)

where for every \(D\subset {\mathbb {R}}^d\) open with Lipschitz boundary and \({\bar{u}}\in BV(D;{\mathcal {M}})\) we set

$$\begin{aligned} {\mathscr {A}}({{\bar{u}}},D):=\{u\in BV(D;{\mathcal {M}}):\ u={\bar{u}}\text { in a neighborhood of }\partial D\}. \end{aligned}$$
(2.9)
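To fix ideas, here is a small companion sketch (Python; the helper names u_jump and in_R are ours) of the two geometric ingredients entering (2.8)-(2.9): the pure-jump boundary datum from (2.1) and a membership test for the hyperrectangle \(R_{t,\ell }^\nu \) from (2.7), written in the coordinates given by the columns of \(O_\nu \) (which could be produced, e.g., by the O_nu sketch in Sect. 2.1).

```python
import numpy as np

def u_jump(x, a, b, nu, x0=None):
    """The pure-jump function u_{x0}^{a,b,nu} from (2.1)."""
    x = np.asarray(x, dtype=float)
    x0 = np.zeros_like(x) if x0 is None else np.asarray(x0, dtype=float)
    return b if np.dot(x - x0, np.asarray(nu, dtype=float)) > 0 else a

def in_R(x, t, ell, frame):
    """Membership test for R_{t,ell}^nu from (2.7); `frame` is the matrix O_nu
    from (2.2), whose columns are (nu_1, ..., nu_{d-1}, nu)."""
    y = np.asarray(frame, dtype=float).T @ np.asarray(x, dtype=float)
    return bool(abs(y[-1]) < ell / 2 and np.all(np.abs(y[:-1]) < t / 2))

# Example for d = 2 and nu = e_2, where (2.2) gives O_nu = diag(-1, 1):
O = np.diag([-1.0, 1.0])
print(u_jump([0.3, 0.1], a=0, b=1, nu=[0.0, 1.0]), in_R([4.0, 0.2], t=10, ell=1, frame=O))
```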

Remark 2.5

Clearly, for \(\ell =t\) we have \(R_{t,t}^\nu =tQ^\nu \), so that in particular

$$\begin{aligned} X_{t,t}^{a,b,\nu }(g)(\omega )=t^{1-d}\inf \big \{E_g(\omega )(u,tQ^\nu ):u\in {\mathscr {A}}(u^{a,b,\nu },tQ^\nu )\big \}. \end{aligned}$$
(2.10)

Moreover, it is immediate to see that for every \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\), \(t>0\) and \(\omega \in \Omega \) the mapping \(\ell \mapsto X_{t,\ell }^{a,b,\nu }(g)(\omega )\) is decreasing in \(\ell \). In fact, if \(\ell '\ge \ell >0\), then any competitor \(u\in {\mathscr {A}}(u^{a,b,\nu },R_{t,\ell }^\nu )\) can be extended to a competitor \(u\in {\mathscr {A}}(u^{a,b,\nu },R_{t,\ell '}^\nu )\) by setting \(u:=u^{a,b,\nu }\) in \(R_{t,\ell '}^\nu \setminus R_{t,\ell }^\nu \). Since \(S_{u^{a,b,\nu }}\cap (R_{t,\ell '}^\nu \setminus R_{t,\ell }^\nu )=\emptyset \), by definition of \(E_g(\omega )\) we have \(E_g(\omega )(u,R_{t,\ell '}^\nu )=E_g(\omega )(u,R_{t,\ell }^\nu )\) and we conclude by minimization that

$$\begin{aligned} X_{t,\ell '}^{a,b,\nu }(g)(\omega )\le X_{t,\ell }^{a,b,\nu }(g)(\omega )\quad \text {for every}\ \ell '\ge \ell >0. \end{aligned}$$
(2.11)

The same extension argument, applied for fixed \(\ell \) but with different parameters \(t\le t'\), shows that the mapping \(t\mapsto X_{t,\ell }^{a,b,\nu }(g)(\omega )\) is almost decreasing. Namely, using (2.4) we obtain

$$\begin{aligned} X_{t',\ell }^{a,b,\nu }(g)(\omega ){\le } \bigg (\frac{t}{t'}\bigg )^{d-1}X_{t,\ell }^{a,b,\nu }(g)(\omega )+c\frac{(t'-t)(t')^{d-2}}{(t')^{d-1}}{\le } X_{t,\ell }^{a,b,\nu }(g)(\omega ){+}\frac{c(t'-t)}{t'}\quad \text {for every}\ t'\ge t.\nonumber \\ \end{aligned}$$
(2.12)

Finally, note that by inserting the function \(u=u^{a,b,\nu }\) as a competitor in the infimum problem, we deduce from the boundedness of g that

$$\begin{aligned} 0\le X_{t,\ell }^{a,b,\nu }(g)(\omega )\le c \end{aligned}$$
(2.13)

uniformly in all parameters. Moreover, thanks to [20, Proposition A.1], for any admissible random surface tension g the mapping \(\omega \mapsto X_{t,\ell }^{a,b,\nu }(g)(\omega )\) is measurable. Thus, we can define the oscillation of \(X_{t,\ell }^{a,b,\nu }\) according to (2.6) and use (2.13) with \(g'\) in place of g to obtain the immediate bound

$$\begin{aligned} \partial _{g,B_{s+1}(x)}^{\mathrm{osc}}X_{t,\ell }^{a,b,\nu }(g)(\omega )\le 2c, \end{aligned}$$
(2.14)

for any \(s>0\) and \(x\in {\mathbb {R}}^d\).

3 Statement of the main results

In this section we present our main results. The corresponding proofs are postponed to Sect. 5. Our first result provides a simpler formula to compute the asymptotic surface tension \(g_{\mathrm{hom}}\). More precisely, we show that instead of the full cube \(tQ^{\nu }\), we can reduce the size in the direction \(\nu \), taking an arbitrarily slowly diverging sequence \(\ell _t\) instead of t. This result is also interesting from a numerical point of view.

Theorem 3.1

Let \(g:\Omega \times {\mathbb {R}}^d\times {\mathbb {R}}^m\times {\mathbb {S}}^{d-1}\rightarrow [0,+\infty )\) be an admissible random surface tension satisfying Assumption 1. Then there exists a function \(g_\mathrm{hom}:\Omega \times {\mathcal {M}}\times {\mathcal {M}}\times {\mathbb {S}}^{d-1}\rightarrow [0,+\infty )\) such that almost surely for all \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\), \(\nu \in {\mathbb {S}}^{d-1}\) we have

$$\begin{aligned} \lim _{t\rightarrow +\infty } X_{t,t}^{a,b,\nu }(g)(\omega )=g_\mathrm{hom}(\omega ,a,b,\nu ). \end{aligned}$$
(3.1)

In particular, the limit in (3.1) exists. Moreover, \(g_{\mathrm{hom}}\) is \(\{\tau _z\}_{z\in {\mathbb {R}}^d}\) invariant. If, in addition, \(\{\tau _z\}_{z\in {\mathbb {R}}^d}\) is ergodic, then \(g_{\mathrm{hom}}\) is deterministic. Finally, let \(\nu \in {\mathbb {S}}^{d-1}\) be fixed; then almost surely, for every \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\) we have

$$\begin{aligned} \lim _{t\rightarrow +\infty }X_{t,\ell _t}^{a,b,\nu }(g)(\omega )=g_\mathrm{hom}(\omega ,a,b,\nu ), \end{aligned}$$
(3.2)

under the assumption that \(0<\ell _t\le t\) satisfies \(\displaystyle \lim _{t\rightarrow +\infty }\ell _t=+\infty \).

Remark 3.2

(i) The exceptional set where the convergence in (3.2) might fail may depend on \(\nu \), but not on the sequence \(\ell _t\). In the particular case \(\ell _t=t\) it is possible to find a set of full probability where the convergence in (3.1) holds for all directions \(\nu \). Indeed, in this case it is sufficient to establish (3.1) on a countable dense subset of \({\mathbb {S}}^{d-1}\) and extend the convergence to all directions via a deterministic continuity argument (see, e.g., [20, Lemma 5.5]). Choosing the flat hyperrectangles rules out this possibility. This poses additional problems when the medium satisfies only discrete stationarity (as for discrete environments on a lattice), since in this case we are only able to prove (3.2) for rational directions. However, under Assumption 2 we obtain a slightly weaker version of (3.2) also in the case of \({\mathbb {Z}}^d\)-stationarity (see Sect. 4.1 and Corollary 4.3). The limit in (3.1) holds without any extra assumption for \({\mathbb {Z}}^d\)-stationary models thanks to the above-mentioned continuity argument.

(ii) Due to (2.13) the almost sure convergence implies convergence in \(L^p(\Omega )\) for any \(1\le p<+\infty \). Denote by \({\mathcal {F}}_{\mathrm{inv}}\) the \(\sigma \)-algebra of \(\{\tau _z\}_{z\in {\mathbb {R}}^d}\)-invariant sets; then by subadditivity and stationarity (cf. Lemma 5.2 and its proof) the conditional expectations satisfy \({\mathbb {E}}[X_{t,t}^{a,b,\nu }|{\mathcal {F}}_{\mathrm{inv}}](\omega )\ge g_{\mathrm{hom}}(\omega ,a,b,\nu )\). Moreover, thanks to the monotonicity property (2.11) we have

    $$\begin{aligned}{\mathbb {E}}[X_{t,\ell _t}^{a,b,\nu }|{\mathcal {F}}_{\mathrm{inv}}](\omega )\ge {\mathbb {E}}[X_{t,t}^{a,b,\nu }|{\mathcal {F}}_{\mathrm{inv}}](\omega )\ge g_{\mathrm{hom}}(\omega ,a,b,\nu ). \end{aligned}$$

In the ergodic case the conditional expectations above reduce to plain expectations. In this sense, the formula with the flat hyperrectangles produces a larger deterministic error, but at the same time allows us to obtain concentration estimates for the fluctuations (cf. Corollary 3.4).

Our next result gives a control of the variance (and higher moments) under the additional Assumption 2. Note that in dimension 2 the flatness of the hyperrectangles is crucial to obtain a decay rate.

Theorem 3.3

Let g be an admissible random surface tension satisfying Assumptions 1 and 2. Then there exists a constant \(c_d>0\) such that for all \(p\ge 1\) and every \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\), \(\nu \in {\mathbb {S}}^{d-1}\) the estimate

$$\begin{aligned} {\mathbb {E}}\big [\big (X_{t,\ell _t}^{a,b,\nu }(g)-{\mathbb {E}}[X_{t,\ell _t}^{a,b,\nu }(g)]\big )^{2p}\big ]\le (c_d p^2)^p t^{p(1-d)}\ell _t^p\int _0^{+\infty }(s+1)^{2p(d-1)}\pi (s)\,\mathrm {d}s\quad \, \end{aligned}$$
(3.3)

holds whenever \(t\ge \ell _t\ge 1\). In particular,

$$\begin{aligned} \mathrm{Var}\left( X_{t,\ell _t}^{a,b,\nu }(g)\right) \le c_d t^{1-d}\ell _t\int _0^{+\infty }(s+1)^{2(d-1)}\pi (s)\,\mathrm {d}s. \end{aligned}$$
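To illustrate how this bound behaves, the following sketch (Python; the helper variance_bound and the choices \(\pi (s)=e^{-s}\) and \(\ell _t=\log t\) are ours) evaluates the right-hand side of the variance estimate, ignoring the dimensional constant \(c_d\).

```python
import numpy as np
from scipy.integrate import quad

def variance_bound(t, ell, d, pi=lambda s: np.exp(-s)):
    """Evaluates t^(1-d) * ell * int_0^infty (s+1)^(2(d-1)) pi(s) ds, i.e. the
    right-hand side of the variance estimate above, up to the constant c_d."""
    weight_integral, _ = quad(lambda s: (s + 1.0) ** (2 * (d - 1)) * pi(s), 0.0, np.inf)
    return t ** (1 - d) * ell * weight_integral

d = 2
for t in (10.0, 100.0, 1000.0):
    ell_t = np.log(t)                      # an admissible, slowly diverging height
    print(f"t = {t:6.0f}, ell_t = {ell_t:5.2f}, bound ~ {variance_bound(t, ell_t, d):.3e}")
# With ell_t = log t the bound decays like log(t)/t even in d = 2, whereas the
# full-cube choice ell_t = t would only give t^(2-d), i.e. no decay for d = 2.
```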

We deduce asymptotic exponential concentration estimates for the fluctuations in the case of an exponentially decaying weight \(\pi \). Such a decay holds for many physical models of random heterogeneities (cf. [25, Section 3]).

Corollary 3.4

Let g be an admissible random surface tension satisfying Assumptions 1(E) and 2. Assume that the weight \(\pi \) satisfies \(\pi (s)\le C\exp (-s/C)\) for some \(C>0\). Then there exists \(C_d>0\) such that for all \(t\ge \ell _t\ge 1\) and \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\), \(\nu \in {\mathbb {S}}^{d-1}\) we have

$$\begin{aligned} {\mathbb {E}}\left[ \exp \left( \frac{1}{C_d}\left| \frac{X_{t,\ell _t}^{a,b,\nu }(g)-{\mathbb {E}}[X_{t,\ell _t}^{a,b,\nu }(g)]}{\sqrt{t^{1-d}\ell _t}}\right| ^{\frac{1}{d}}\right) \right] \le 4. \end{aligned}$$

In particular, for every \(\eta >0\) we have

$$\begin{aligned} \limsup _{t\rightarrow +\infty }\bigg ((t^{1-d}\ell _t)^{\tfrac{1}{2d}} \log \Big ({\mathbb {P}}\big (|X_{t,\ell _t}^{a,b,\nu }(g)-g_\mathrm{hom}(a,b,\nu )|>\eta \big )\Big )\bigg )<0. \end{aligned}$$

Remark 3.5

(On the (non)-optimality of the concentration estimates) In the proof of Theorem 3.3 we estimate the oscillation of the process \(X_{t,\ell _t}^{a,b,\nu }(g)\) for all balls \(B_{s+1}(x)\) that intersect the hyperrectangle \(R_{t,\ell _t}^{\nu }\), which then leads by integration to the factor \(\ell _t\). It would suffice to consider all balls that intersect the jump set of a minimizer for \(X_{t,\ell _t}^{a,b,\nu }\). However, there are two problems: in general, minimizers do not exist; moreover, for an almost minimizer one then has to estimate the measure of the \(2(s+1)\)-neighborhood of its jump set for all \(s>0\). If one assumes that the weight \(\pi \) has compact support, then one could try to use the theory of Minkowski content. However, this needs to be done in a quantitative way since (almost) minimizers depend on t and \(s>0\) is not infinitesimal, but finite. For the moment this seems to be out of reach for non-smooth x-dependent integrands g. We remark that in a discrete setting this approach seems more plausible, since there one just has to estimate the number of edges used in a minimal interface (which is proportional to \(t^{d-1}\) when the weights are uniformly bounded from above and below). Moreover, even with the best possible assumption that the jump set of an almost minimizer is flat, the improvement would be minor, since the factor \(\ell _t\) can diverge arbitrarily slowly. Finally, the exponent \(\frac{1}{2d}\) in Corollary 3.4 is due to two facts: the factor 2 can be avoided if we use a functional derivative instead of the oscillation, together with a logarithmic Sobolev inequality instead of the variance control via the spectral gap (see also [24, Proposition 1.10 i)]). The factor d disappears when we assume that \(\pi \) is bounded and has compact support.

4 Further remarks

In this section we discuss how our strategy has to be adapted if one assumes only stationarity with respect to integer translations in \({\mathbb {Z}}^d\) (cf. Remark 2.3). This is particularly relevant for discrete interface models, which are often defined on the lattice \({\mathbb {Z}}^d\), where integer translations provide a natural framework. Moreover, we give an example of a stationary, ergodic integrand which shows that, in general, the assumption \(\ell _t\rightarrow +\infty \) in Theorem 3.1 cannot be dropped.

4.1 On \({\mathbb {Z}}^d\)-stationary integrands

Assuming that stationarity of the random surface tension g is only satisfied for a discrete measure-preserving group action \(\{\tau _z\}_{z\in {\mathbb {Z}}^d}\), our strategy of proof establishes the limit (3.2) in Theorem 3.1 only for rational directions, i.e., for \(\nu \in {\mathbb {S}}^{d-1}\cap {\mathbb {Q}}^{d}\). In contrast to the process \(X_{t,t}^{a,b,\nu }(g)\), the hyperrectangles in the definition of \(X_{t,\ell _t}^{a,b,\nu }\) do not allow us to use deterministic continuity arguments to extend the convergence to irrational directions (cf. Remark 3.2 (i)), since a small but fixed rotation does not lead to a uniformly small perturbation of the thin hyperrectangles when t grows. However, under Assumption 2 with a certain decay of the weight \(\pi \) we can extend the convergence to all directions using Theorem 3.3. Since Assumption 2 only allows us to control the variance of random variables, we first need to prove the convergence of the expectation of the random variables \(X_{t,\ell _t}^{a,b,\nu }(g)\). This will be achieved again under the sole assumption of stationarity. A similar approach has been used in [39, Proposition 3.5] for a maximal flow model on \({\mathbb {Z}}^d\).

Proposition 4.1

Let \(g:\Omega \times {\mathbb {R}}^d\times {\mathbb {R}}^m\times {\mathbb {S}}^{d-1}\rightarrow [0,+\infty )\) be an admissible random surface tension satisfying Assumption 1 with a measure preserving group action \(\{\tau _z\}_{z\in {\mathbb {Z}}^d}\). Then for every \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\) and every \(\nu \in {\mathbb {S}}^{d-1}\) it holds that

$$\begin{aligned} \lim _{t\rightarrow +\infty }{\mathbb {E}}[X_{t,\ell _t}^{a,b,\nu }(g)]={\mathbb {E}}[g_\mathrm{hom}(\cdot ,a,b,\nu )] \end{aligned}$$

under the assumption that \(0<\ell _t\le t\) satisfies \(\lim _{t\rightarrow +\infty }\ell _t=+\infty \). If g is ergodic, then \({\mathbb {E}}[g_\mathrm{hom}(\cdot ,a,b,\nu )]=g_{\mathrm{hom}}(a,b,\nu )\) since \(g_{\mathrm{hom}}\) is deterministic.

Remark 4.2

Since the condition \(\ell _t\le t\) implies that \(X_{t,\ell _t}^{a,b,\nu }(g)\ge X_{t,t}^{a,b,\nu }(g)\) (cf. Remark 2.5), the convergence of the expectations to the same limit also implies convergence in \(L^1(\Omega )\) and therefore almost sure convergence along a subsequence; indeed, the difference \(X_{t,\ell _t}^{a,b,\nu }(g)-X_{t,t}^{a,b,\nu }(g)\) is non-negative and its expectation vanishes as \(t\rightarrow +\infty \). From this one can deduce the convergence in \(L^p(\Omega )\) for any \(1\le p<+\infty \) by the dominated convergence theorem.

Combining the above result with the concentration estimate in Theorem 3.3, we obtain the almost sure convergence along all directions also for \({\mathbb {Z}}^d\)-stationary models.

Corollary 4.3

Let g be an admissible random surface tension satisfying Assumption 1(E) with a measure-preserving group action \(\{\tau _z\}_{z\in {\mathbb {Z}}^d}\) and Assumption 2 with a weight \(\pi \) that satisfies

$$\begin{aligned} \int _{0}^{+\infty }(s+1)^{r}\pi (s)\,\mathrm {d}s<+\infty \quad \text {for some}\ r>2(d-1). \end{aligned}$$

Fix \(\nu \in {\mathbb {S}}^{d-1}\) and let \({\bar{\ell }}_n\rightarrow +\infty \) be an arbitrary diverging sequence. Then a.s. for all \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\) it holds that

$$\begin{aligned} \lim _{n\rightarrow +\infty }X_{t_n,\ell _n}^{a,b,\nu }(g)(\omega )=g_\mathrm{hom}(a,b,\nu ), \end{aligned}$$
(4.1)

whenever \({\bar{\ell }}_n\le \ell _n\le t_n\) for every \(n\in {\mathbb {N}}\).

Remark 4.4

(Choice of \({\bar{\ell }}_n\)) The sequence \({\bar{\ell }}_n\) in Corollary 4.3 is introduced only for technical reasons, namely to avoid that the exceptional set where (4.1) might fail depends on the diverging sequences \(\ell _n,t_n\). Fixing an arbitrarily slowly diverging sequence \({\bar{\ell }}_n\) instead allows us to work with an exceptional set that depends only on \(\nu \) and the fixed sequence \({\bar{\ell }}_n\). As already observed in Remark 3.2 (i), this procedure is not necessary for rational directions.

4.2 Non-existence of plane-like minimizing sequences in the ergodic setting

In light of Theorem 3.1 one might try to prove that the height of the flat hyperrectangles can be taken to be bounded, still obtaining the same limit. Below we show that this is in general not possible. To be more precise, let us introduce a notion of plane-like minimizing sequences. In what follows we focus on the ergodic setting to simplify the formulas.

Definition 4.5

(Plane-like minimizing sequence) Let g be an admissible random surface tension satisfying Assumption 1(E) and let \(g_{\mathrm{hom}}\) be as in Theorem 3.1 and \(\nu \in {\mathbb {S}}^{d-1}\). Given a realization \(\omega \in \Omega \) we say that a sequence \((u_t^\omega )_t\) is a minimizing sequence for \(g_\mathrm{hom}(a,b,\nu )\), if \(u_t^\omega \in {\mathscr {A}}(u^{a,b,\nu },tQ^\nu )\) for every \(t>0\) and

$$\begin{aligned} g_{\mathrm{hom}}(a,b,\nu )=\lim _{t\rightarrow +\infty } t^{1-d}E_g(\omega )(u_t^\omega ,tQ^\nu ). \end{aligned}$$
(4.2)

We say that a minimizing sequence for \(g_{\mathrm{hom}}(a,b,\nu )\) is plane-like, if there exist \(\ell \in {\mathbb {N}}\) and \(t_\ell >0\) such that \(u_t^\omega \in {\mathscr {A}}(u^{a,b,\nu },R_{t,\ell }^\nu )\) for all \(t>t_\ell \).

We will provide an example of a stationary, ergodic integrand g in dimension two such that the probability of finding a plane-like minimizing sequence for the direction \(\nu =(0,1)\) is zero.

Example 4.6

Let \(d=2\) and \({\mathcal {M}}=\{0,1\}\). There exists an admissible random surface tension g satisfying Assumption 1(E) such that with probability 1 there exists no plane-like minimizing sequence for \(g_{\mathrm{hom}}(0,1,e_2)\).

Remark 4.7

The restriction to \({\mathcal {M}}=\{0,1\}\) is only for convenience. In fact, we will construct an integrand depending only on x and \(\omega \), so that the same construction works for any finite set \({\mathcal {M}}\). Moreover, it will be clear from the construction that it can be extended to any dimension at the expense of heavier notation and replacing \(e_2\) by \(e_d\).

Below we prove the claim made in Example 4.6 and construct an admissible ergodic random surface tension g for which \(g_\mathrm{hom}(0,1,e_2)\) has no plane-like minimizing sequences in the sense of Definition 4.5. We let \(u^{0,1}\) be as in (2.1) with \((a,b)=(0,1)\) and \(\nu =e_2\).

Step 1. In this step we construct a suitable random surface integrand g. To this end, we let \(\{X_i\}_{i\in {\mathbb {Z}}}\) be a sequence of independent and [1, 2]-uniformly distributed random variables on a suitable probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) equipped with a measure-preserving, ergodic map \(\tau :\Omega \rightarrow \Omega \) satisfying the following properties:

i) \(\tau \) is bijective and the inverse map \(\tau ^{-1}:\Omega \rightarrow \Omega \) is again \({\mathcal {F}}\)-measurable;

ii) \(X_i(\omega )=X_0(\tau ^i\omega )\) for every \(i\in {\mathbb {Z}}\), where \(\tau ^i\) denotes the i-times iterated composition of the map \(\tau \) for \(i\ge 0\) (with the convention \(\tau ^0:=\mathrm{id}\)), respectively the \(-i\)-times iterated composition of \(\tau ^{-1}\) for \(i<0\).

This setting can be realized on the product space \(\Omega =[1,2]^{{\mathbb {Z}}}\) with the shift operator (see [33, Section 7.3]). We now define a random surface integrand \(g:\Omega \times {\mathbb {R}}^2\times {\mathbb {S}}^1\rightarrow [0,+\infty )\) by setting

$$\begin{aligned} g(\omega ,x,\nu ):=X_{\lceil x_2\rceil }(\omega )\quad \text {for every}\ x=(x_1,x_2)\in {\mathbb {R}}^2,\ \omega \in \Omega ,\ \nu \in {\mathbb {S}}^{1}, \end{aligned}$$
(4.3)

where \(\lceil x_2\rceil \) denotes the upper integer part of \(x_2\). In this way, g is measurable, and since each \(X_i\) takes values in [1, 2] it satisfies \(1\le g(\omega ,x,\nu )\le 2\), i.e., \(g(\omega )\in {\mathcal {A}}_2\). Moreover, both \(\tau \) and \(\tau ^{-1}\) are measure-preserving and ergodic. Thus, the family of maps \(\tau _z:=\tau ^{z_2}\), \(z\in {\mathbb {Z}}^2\) defines a measure-preserving ergodic group action. Thanks to ii) it is immediate to see that g is stationary with respect to \(\{\tau _z\}_{z\in {\mathbb {Z}}^2}\) as above, and we set

$$\begin{aligned} E_g(\omega )(u,D):=\int _{S_u\cap D}g(\omega ,x,\nu _u)\,\mathrm {d}{\mathcal {H}}^1=\int _{S_u\cap D}X_{\lceil x_2\rceil }(\omega )\,\mathrm {d}{\mathcal {H}}^1\quad \text {for all}\ u\in BV(D;\{0,1\}). \end{aligned}$$
(4.4)

Finally, for every \(\ell \in {\mathbb {N}}\) and \(t>0\) we let \(X_{t,\ell }^{e_2}\) be as in (2.8) with \(\nu =e_2\), \(a=0\), \(b=1\), and \(E_g\) as in (4.4).

Let us introduce the random variables \(Y_\ell :\Omega \rightarrow [1,2]\) given by \(Y_\ell (\omega ):=\min _{i\in [-\ell +1,\ell ]} X_i(\omega )\), which clearly satisfy \(Y_{\ell +1}(\omega )\le Y_\ell (\omega )\) for every \(\ell \in {\mathbb {N}}\). Moreover, for every \(\ell \in {\mathbb {N}}\) there exists \(\Omega _\ell \in {\mathcal {F}}\) with \({\mathbb {P}}(\Omega _\ell )>0\) such that

$$\begin{aligned} Y_\ell (\omega )> X_{\ell +1}(\omega )=Y_{\ell +1}(\omega )\quad \text {for every}\ \omega \in \Omega _\ell . \end{aligned}$$
(4.5)

Indeed, since all \(X_i\) are independent and uniformly distributed on the interval [1, 2], for any \(s\in (1,2)\) we have \( {\mathbb {P}}\big (X_{-\ell }>s,\ldots ,X_{\ell }>s,X_{\ell +1}\le s\big )=(2-s)^{2\ell +1}(s-1)>0, \) which implies (4.5).

Step 2. In this step we show that almost surely we have

1) \(\lim _{t\rightarrow +\infty }X_{t,2\ell }^{e_2}(g)(\omega )=Y_\ell (\omega )\) for every \(\ell \in {\mathbb {N}}\);

2) \(\lim _{\ell \rightarrow +\infty }Y_\ell (\omega )=1\).

We start proving 1): Since \(-\ell +1\le \lceil x_2\rceil \le \ell \) for every \(x\in R_{t,2\ell }^{e_2}=(-\frac{t}{2},\frac{t}{2})\times (-\ell ,\ell )\), we clearly have that \( E_g(\omega )(u, R_{t,2\ell }^{e_2})\ge Y_\ell (\omega ){\mathcal {H}}^1(S_u\cap R_{t,2\ell }^{e_2}) \) for every \(u\in BV(R_{t,2\ell }^{e_2};\{0,1\})\), which in particular yields

$$\begin{aligned} X_{t,2\ell }^{e_2}(g)(\omega )\ge & {} Y_\ell (\omega )\frac{1}{t}\min \big \{{\mathcal {H}}^1(S_u\cap R_{t,2\ell }^{e_2}):u\in {\mathscr {A}}(u^{0,1}, R_{t,2\ell }^{e_2})\big \}\nonumber \\&=Y_{\ell }(\omega )\frac{1}{t}{\mathcal {H}}^1 (H^{\nu }\cap R_{t,2\ell }^{e_2})=Y_\ell (\omega ). \end{aligned}$$
(4.6)

To estimate \(X_{t,2\ell }^{e_2}(g)(\omega )\) from above, let \(i_\ell \in (-\ell ,\ell ]\) (depending also on \(\omega \)) be such that \(X_{i_\ell }(\omega )\le X_i(\omega )\) for every \(i\in (-\ell ,\ell ]\). Without loss of generality we assume that \(i_\ell >0\). Then the function \(u_\ell \) defined as the characteristic function of the set \(R_{t,2\ell }^{e_2}\setminus R_{t-1,2i_\ell -1}^{e_2}\setminus \{x_2<0\}\) (see Fig. 1) is admissible for \(X_{t,2\ell }^{e_2}(g)(\omega )\) and satisfies

$$\begin{aligned} \begin{aligned} E_g(\omega )(u_\ell ,R_{t,2\ell }^{e_2})&\le \int _{\{x_2=i_\ell -\frac{1}{2},\ |x_1|<\frac{t-1}{2}\}}X_{\lceil x_2\rceil }(\omega )\,\mathrm {d}{\mathcal {H}}^1\\&+2{\mathcal {H}}^1\Big (\big \{|x_1|=\tfrac{t-1}{2},\ x_2\in \big (0,i_\ell -\tfrac{1}{2}\big )\big \}\cup \big \{|x_2|=0,\ |x_1|\in \big (\tfrac{t-1}{2},\tfrac{t}{2}\big )\big \}\Big )\\&= (t-1)X_{i_\ell }(\omega )+4i_\ell \le t Y_\ell (\omega )+4\ell . \end{aligned} \end{aligned}$$
Fig. 1: The rectangles \(R_{t,2\ell }^{e_2}\), \(R_{t-1,2i_\ell -1}^{e_2}\) and, in gray, the set \(R_{t,2\ell }^{e_2}\setminus R_{t-1,2i_\ell -1}^{e_2}\setminus \{x_2<0\}\) where \(u_\ell =1\)

Dividing the above inequality by t and using (4.6) we thus obtain

$$\begin{aligned} X_{t,2\ell }^{e_2}(g)(\omega )\le \frac{1}{t}E_g(\omega )(u_\ell ,R_{t,2\ell }^{e_2})\le Y_\ell (\omega )+\frac{4\ell }{t}\le X_{t,2\ell }^{e_2}(g)(\omega )+\frac{4\ell }{t}. \end{aligned}$$

Passing to the limsup on the left-hand side and to the liminf on the right-hand side of the above inequality yields that almost surely there exists

$$\begin{aligned} g_{\ell }(\omega ,e_2):=\lim _{t\rightarrow +\infty }X_{t,2\ell }^{e_2}(g)(\omega )=Y_{\ell }(\omega ). \end{aligned}$$
(4.7)

We now come to prove 2); by construction we have \({\mathbb {P}}(Y_\ell >s)=(2-s)^{2\ell }\) for every \(s\in (1,2)\), hence \(\sum _{\ell }{\mathbb {P}}(Y_\ell >s)<+\infty \), since \(2-s<1\). As a consequence, the monotone continuity of \({\mathbb {P}}\) together with the Borel-Cantelli Lemma imply that

$$\begin{aligned} {\mathbb {P}}\big (\limsup _{\ell \rightarrow +\infty } Y_\ell>1\big )=\lim _{s\searrow 1}{\mathbb {P}}\big (\limsup _{\ell \rightarrow +\infty } Y_\ell >s\big )=0. \end{aligned}$$

Thus, 2) follows immediately from the existence of \(\lim _\ell Y_\ell (\omega )=\inf _\ell Y_\ell (\omega )\ge 1\).
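As a purely numerical sanity check of 1) and 2) (a minimal sketch, not part of the proof; the sample sizes are arbitrary), one can sample the layer weights and observe that the values \(Y_\ell \), which by 1) are the limits of the strip quantities \(X_{t,2\ell }^{e_2}(g)\), stay strictly above 1 for each fixed \(\ell \) but approach 1 as \(\ell \) grows, with tail \({\mathbb {P}}(Y_\ell >s)=(2-s)^{2\ell }\) as used above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_Y(ell, n_samples=50_000):
    """Monte Carlo samples of Y_ell = min of the 2*ell iid layer weights
    X_{-ell+1}, ..., X_ell, each uniform on [1, 2]."""
    X = rng.uniform(1.0, 2.0, size=(n_samples, 2 * ell))
    return X.min(axis=1)

s = 1.5
for ell in (1, 2, 5, 10, 50):
    Y = sample_Y(ell)
    print(f"ell = {ell:2d}: E[Y_ell] ~ {Y.mean():.4f} "
          f"(exact: {1 + 1 / (2 * ell + 1):.4f}), "
          f"P(Y_ell > {s}) ~ {np.mean(Y > s):.4f} vs (2-s)^(2*ell) = {(2 - s) ** (2 * ell):.4f}")
# By 1) these are the limiting strip values lim_t X_{t,2*ell}^{e_2}(g); for every fixed ell
# they exceed g_hom(e_2) = 1 with probability 1, and they approach 1 only as ell grows,
# in accordance with 2).
```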

Step 3. Conclusion and final remarks.

As a first consequence of 1) and 2) we deduce that almost surely there exists

$$\begin{aligned} g_\mathrm{hom}(e_2):=\lim _{t\rightarrow +\infty }X_{t,t}^{e_2}(g)(\omega )=1\le g_{\ell }(\omega ,e_2), \end{aligned}$$
(4.8)

where \(g_\ell \) is as in (4.7). Indeed, arguing as in (4.6) we obtain \(1\le \liminf _t X_{t,t}^{e_2}(g)(\omega )\), while (2.11) together with 1) implies that \(\limsup _t X_{t,t}^{e_2}(g)(\omega )\le Y_\ell (\omega )\) for every \(\ell \in {\mathbb {N}}\). Hence, (4.8) follows by letting \(\ell \rightarrow +\infty \) and using 2).

As a consequence, with probability 1 minimizing sequences are not plane-like. In fact, assume that for \(\omega \in \Omega \) there exists a plane-like minimizing sequence \((u_t^\omega )_t\) in the sense of Definition 4.5 with parameter \(2\ell =2\ell (\omega )\in {\mathbb {N}}\). Then by definition

$$\begin{aligned} 1= & {} g_{\mathrm{hom}}(e_2)=\lim _{t\rightarrow +\infty }\frac{1}{t}E_g(\omega )(u_t^{\omega },tQ^{\nu })=\lim _{t\rightarrow +\infty }\frac{1}{t}E_g(\omega )(u_t^{\omega },R_{t,2\ell }^{\nu })\nonumber \\&\ge \lim _{t\rightarrow +\infty }X_{t,2\ell }^{e_2}(g)(\omega )=Y_{\ell }(\omega )\ge 1, \end{aligned}$$

where we used (4.7). Hence \(Y_{\ell }(\omega )=1\), but we clearly have \({\mathbb {P}}(\exists \ell \in {\mathbb {N}}:\,Y_{\ell }=1)=0\).

We conclude this example by observing that for every \(\ell \in {\mathbb {N}}\) the function \(g_\ell (\cdot ,e_2)\) is neither deterministic nor invariant under the group action \(\{\tau _z\}_{z\in {\mathbb {Z}}^2}\). In fact, for any interval \((s_1,s_2)\subset (1,2)\) we have by definition that \({\mathbb {P}}(Y_\ell \in (s_1,s_2))=(2-s_1)^{2\ell }-(2-s_2)^{2\ell }>0\), which implies that \(g_\ell (\cdot ,e_2)\) still depends on the realization \(\omega \). Since the group action is ergodic, this already implies that \(g_\ell (\cdot , e_2)\) cannot be invariant under \(\{\tau _z\}_{z\in {\mathbb {Z}}^2}\). This can also be seen directly by observing that, by construction, for any \(z=(z_1,z_2)\in {\mathbb {Z}}^2\) it holds that \(g_\ell (\tau _z\omega ,e_2)=\min _{i\in (-\ell +z_2,\ell +z_2]}X_i(\omega )\). Thus, assuming that \(0<z_2\le 2\ell \), this set of indices contains \(\ell +1\), and from (4.7) and (4.5) we deduce that

$$\begin{aligned} g_\ell (\tau _z\omega ,e_2)\le X_{\ell +1}(\omega )<Y_\ell (\omega )=g_\ell (\omega ,e_2)\quad \text {for every}\ \omega \in \Omega _\ell . \end{aligned}$$

\(\square \)

5 Proofs

5.1 Almost plane-like formulas in the stationary setting: proof of Theorem 3.1

As a preliminary step towards the proof of Theorem 3.1, for \(\ell \in {\mathbb {N}}\) fixed we analyze the asymptotic behavior of \(X_{t,\ell }^{a,b,\nu }(g)(\omega )\) as \(t\rightarrow +\infty \). This will be done by relating \(X_{t,\ell }^{a,b,\nu }(g)\) in a suitable way to a subadditive stochastic process in dimension \((d-1)\). We recall here the notion of such a process for the readers’ convenience.

For every \(p=(p_1,\dots ,p_{d-1}),\,q=(q_1,\dots ,q_{d-1})\in {\mathbb {R}}^{d-1}\) with \(p_i<q_i\) for all \(i\in \{1,\ldots ,d-1\}\) we consider the \((d-1)\)-dimensional half-open intervals

$$\begin{aligned}{}[p,q):=\{x\in {\mathbb {R}}^{d-1}:p_i\le x_i<q_i\,\,\text {for}\,\,i=1,\ldots ,d-1\} \end{aligned}$$

and we set

$$\begin{aligned} {\mathcal {I}}:=\{[p,q):p,q\in {\mathbb {R}}^{d-1}\,,\, p_i<q_i\,\,\text {for}\,\,i=1,\ldots ,d-1\}\,. \end{aligned}$$

Definition 5.1

(Subadditive process) A subadditive process with respect to a measure-preserving additive group action \(\{\tau _z\}_{z\in {\mathbb {R}}^{d-1}}\) is a function \(\mu :{\mathcal {I}}\times \Omega \rightarrow {\mathbb {R}}\) satisfying the following properties:

(1) (measurability) for every \(I\in {\mathcal {I}}\) the function \(\omega \mapsto \mu (I,\omega )\) is \({\mathcal {F}}\)-measurable;

(2) (stationarity) for every \(\omega \in \Omega \), \(I\in {\mathcal {I}}\), and \(z\in {\mathbb {R}}^{d-1}\) we have \(\mu (I+z,\omega )=\mu (I,\tau _z(\omega ))\);

(3) (subadditivity) for every \(I\in {\mathcal {I}}\) and for every finite partition \((I_i)_{i=1}^k\) of I, we have

    $$\begin{aligned} \mu (I,\omega )\le \sum _{i=1}^k\mu (I_i,\omega )\quad \text {for every}\,\,\omega \in \Omega \,; \end{aligned}$$
(4) (boundedness) there exists \(M>0\) such that \(0\le \mu (I,\omega )\le M{\mathcal {L}}^{d-1}(I)\) for every \(\omega \in \Omega \) and \(I\in {\mathcal {I}}\).

In order to relate \(X_{t,\ell }^{a,b,\nu }(g)\) to a subadditive process, we associate to each hyperrectangle \(R_{t,\ell }^\nu \) a set \(I\in {\mathcal {I}}\) (and vice versa) as follows: for fixed \(\nu \in {\mathbb {S}}^{d-1}\) we let \(O_\nu \) be the orthogonal matrix induced by (2.2). For every \(I=[p_1,q_1)\times \cdots \times [p_{d-1},q_{d-1})\in {\mathcal {I}}\) we denote by \(s_{\mathrm{max}}(I):=\max _{i}|q_i-p_i|\) its maximal side length and define the open set \(Q^{\ell ,\nu }(I)\subset {\mathbb {R}}^d\) as

$$\begin{aligned} Q^{\ell ,\nu }(I):=O_\nu \Big (\mathrm{int}\, I\times \min \{\ell , s_\mathrm{max}(I)\}(-1/2,1/2)\Big ), \end{aligned}$$
(5.1)

where \(\mathrm{int}\) denotes the \((d-1)\)-dimensional interior. For \(\ell =+\infty \) we clearly have \(Q^{\infty ,\nu }(I)=O_\nu \big (\mathrm{int}\, I\times s_{\mathrm{max}}(I)(-1/2,1/2)\big )\). Then we define a function \(\mu _{\ell }^{a,b,\nu }:{\mathcal {I}}\times \Omega \rightarrow {\mathbb {R}}\) by setting

$$\begin{aligned} \mu _{\ell }^{a,b,\nu }(I,\omega ):=\inf \{E_g(\omega )(u,Q^{\ell ,\nu }(I)):u\in {\mathscr {A}}(u^{a,b,\nu },Q^{\ell ,\nu }(I))\}. \end{aligned}$$
(5.2)

In this way, we have

$$\begin{aligned} \frac{\mu _{\ell }^{a,b,\nu }\big ([-\tfrac{t}{2},\tfrac{t}{2})^{d-1},\omega \big )}{t^{d-1}}{=}X_{t,\ell }^{a,b,\nu }(g)(\omega )\;\text { if }\;\ell {\le } t,\quad \frac{\mu _{\ell }^{a,b,\nu }\big ([-\tfrac{t}{2},\tfrac{t}{2})^{d-1},\omega \big )}{t^{d-1}}{=}X_{t,t}^{a,b,\nu }(g)(\omega )\;\text { if }\;\ell {\ge } t.\nonumber \\ \end{aligned}$$
(5.3)

Lemma 5.2

Let \(g:\Omega \times {\mathbb {R}}^d\times {\mathbb {R}}^m\times {\mathbb {S}}^{d-1}\rightarrow [0,+\infty )\) be an admissible random surface tension satisfying Assumption 1. For every \(\ell \in {\mathbb {N}}\cup \{+\infty \}\), \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\), and \(\nu \in {\mathbb {S}}^{d-1}\) let \(\mu _{\ell }^{a,b,\nu }(I,\omega )\) be as in (5.2). Then there exists a measure-preserving group action \(\{\tau _z^\nu \}_{z\in {\mathbb {R}}^{d-1}}\) such that \(\mu _{\ell }^{a,b,\nu }\) is a subadditive process with respect to \(\{\tau _{z}^\nu \}_{z\in {\mathbb {R}}^{d-1}}\) satisfying

$$\begin{aligned} \mu _{\ell }^{a,b,\nu }(I,\omega )\le E_g(\omega )(u^{a,b,\nu },Q^{\ell ,\nu }(I))\le c {\mathcal {L}}^{d-1}(I) \end{aligned}$$
(5.4)

for every \(\omega \in \Omega \). Moreover, there exist \(\Omega _{\ell }^{a,b,\nu }\subset \Omega \) with \({\mathbb {P}}(\Omega _\ell ^{a,b,\nu })=1\) and a function \(g_\ell (\cdot ,a,b,\nu ):\Omega \rightarrow {\mathbb {R}}\) such that for every \(\omega \in \Omega _{\ell }^{a,b,\nu }\) we have

$$\begin{aligned} g_{\ell }(\omega ,a,b,\nu )= {\left\{ \begin{array}{ll} \displaystyle \lim _{t\rightarrow +\infty }X_{t,\ell }^{a,b,\nu }(g)(\omega )&{}\text {if}\ \ell \in {\mathbb {N}},\\ \displaystyle \lim _{t\rightarrow +\infty }X_{t,t}^{a,b,\nu }(g)(\omega ) &{}\text {if}\ \ell =+\infty . \end{array}\right. } \end{aligned}$$
(5.5)

Finally, \(\Omega _\ell ^{a,b,\nu }\) and \(g_\ell (\cdot ,a,b,\nu )\) are invariant under the group action \(\{\tau _z^\nu \}_{z\in {\mathbb {R}}^{d-1}}\), i.e., \(\tau _z^\nu (\Omega _{\ell }^{a,b,\nu })=\Omega _\ell ^{a,b,\nu }\) for every \(z\in {\mathbb {R}}^{d-1}\) and

$$\begin{aligned} g_{\ell }(\tau _z^\nu \omega ,a,b,\nu )=g_{\ell }(\omega ,a,b,\nu )\quad \text {for every}\ \omega \in \Omega _{\ell }^{a,b,\nu }. \end{aligned}$$
(5.6)

Remark 5.3

For \(\ell =+\infty \) the corresponding function \(g_\infty (\cdot ,a,b,\nu )\) given by (5.5) is invariant under the whole group action \(\{\tau _z\}_{z\in {\mathbb {R}}^d}\) associated to g (the invariance can be proven by a deterministic argument similar to (5.11), arguing as in [20, Theorem 6.2]). This is, in general, not true for the functions \(g_\ell (\cdot ,a,b,\nu )\) with \(\ell \in {\mathbb {N}}\). In fact, Example 4.6 provides an admissible random surface tension g for which (5.6) fails if \(\tau _z^\nu \) is replaced by \(\tau _z\). As a consequence, \(g_\infty (\cdot ,a,b,\nu )\) is deterministic if g is ergodic, while this is in general not true for \(g_\ell \).

If g is only \({\mathbb {Z}}^d\)-stationary and \(\nu \in {\mathbb {S}}^{d-1}\cap {\mathbb {Q}}^{d\times d}\), then \(\mu _{\ell }^{a,b,\nu }\) defines a subadditive process with respect to a discrete measure-preserving group action. In particular, the limits in (5.5) still exist for rational directions. The existence of the second limit can then be extended to irrational directions by continuity (cf. Remark 3.2 (i)).

The sets of full probability for which (5.5) holds depend on \(\ell ,a,b,\nu \); for fixed \(\nu \in {\mathbb {S}}^{d-1}\) we can define a set \(\Omega ^\nu \) of full probability by taking the countable intersection of \(\Omega _\ell ^{a,b,\nu }\) over \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\), \(\ell \in {\mathbb {N}}\cup \{+\infty \}\). Then for every \(\omega \in \Omega ^\nu \) the limits in (5.5) exist for all \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\) and every \(\ell \in {\mathbb {N}}\cup \{+\infty \}\). In particular, the monotonicity property in (2.11) implies that for every \(\omega \in \Omega ^\nu \) and every \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\) we have

$$\begin{aligned} g_\infty (\omega ,a,b,\nu )\le g_{\ell +1}(\omega ,a,b,\nu )\le g_\ell (\omega ,a,b,\nu )\quad \text {for every}\ \ell \in {\mathbb {N}}. \end{aligned}$$
(5.7)

Example 4.6 also shows that for every \(\ell \in {\mathbb {N}}\) the above inequality can be strict on a set of positive probability (depending again on \(\ell ,a,b,\nu \)).

Proof of Lemma 5.2

Throughout this proof we fix \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\) and \(\nu \in {\mathbb {S}}^{d-1}\). In order not to overburden notation, for every \(I\in {\mathcal {I}}\) and \(\ell \in {\mathbb {N}}\cup \{+\infty \}\) we write \(Q^\ell (I)=Q^{\ell ,\nu }(I)\) for the d-dimensional interval introduced in (5.1) and for every \(\omega \in \Omega \) we set \(\mu _\ell (I,\omega ):=\mu _\ell ^{a,b,\nu }(I,\omega )\) with \(\mu _\ell ^{a,b,\nu }\) as in (5.2).

Step 1. Stationarity and subadditivity of \(\mu _{\ell }\)

The fact that \(\mu _{\ell }(I,\cdot )\) is measurable follows from [20, Proposition A.1]. Moreover, since \(g(\omega )\in {\mathcal {A}}_c\) for every \(\omega \in \Omega \), we obtain the uniform bound (5.4) by taking \(u^{a,b,\nu }\) as a candidate in the minimization problem defining \(\mu _\ell (I,\omega )\) as in Remark 2.5.

We next prove stationarity of the process. To this end, given \(z\in {\mathbb {R}}^{d-1}\) we set \(z^\nu :=O_\nu (z,0)\in {\mathbb {R}}^d\) and define a measure-preserving group action \(\{\tau _z^\nu \}_{z\in {\mathbb {R}}^{d-1}}\) by setting \(\tau _z^\nu :=\tau _{-z^\nu }\), where \(\{\tau _z\}_{z\in {\mathbb {R}}^d}\) is the group action associated to g. Note that for every \(I\in {\mathcal {I}}\) and every \(z\in {\mathbb {R}}^{d-1}\) we have \(Q^{\ell }(I-z)=Q^{\ell }(I)-z^\nu \). Thus, for any \(u\in BV(Q^\ell (I-z);{\mathcal {M}})\) the function \(u_z:=u(\cdot -z^\nu )\) belongs to \(BV(Q^\ell (I);{\mathcal {M}})\). Moreover, \(z^{\nu }\in H^\nu \) due to the properties of \(O_{\nu }\), which implies that \(u^{a,b,\nu }(\cdot -z^\nu )=u^{a,b,\nu }\). Hence, for every \(u\in BV(Q^\ell (I-z);{\mathcal {M}})\) we have

$$\begin{aligned} u=u^{a,b,\nu }\text { near }\partial Q^\ell (I-z)\ \Longleftrightarrow u_z=u^{a,b,\nu }\text { near }\partial Q^\ell (I), \end{aligned}$$

while the stationarity of g with respect to \(\{\tau _z\}_{z\in {\mathbb {R}}^d}\) together with a change of variables yields

$$\begin{aligned} E_g(\omega )(u,Q^\ell (I-z))=E_g(\tau _z^\nu \omega )(u_z,Q^\ell (I)). \end{aligned}$$

Thus, we conclude by minimization that \(\mu _{\ell }(I-z,\omega )=\mu _{\ell }(I,\tau _z^\nu \omega )\), which implies the stationarity of the process with respect to the lower-dimensional group action \(\{\tau _z^\nu \}_{z\in {\mathbb {R}}^{d-1}}\).
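For clarity, the minimization step can be spelled out as follows: since \(u\mapsto u_z\) is a bijection between the two classes of competitors,

$$\begin{aligned} \mu _{\ell }(I-z,\omega )=\inf _{u\in {\mathscr {A}}(u^{a,b,\nu },Q^\ell (I-z))}E_g(\omega )(u,Q^\ell (I-z))=\inf _{u_z\in {\mathscr {A}}(u^{a,b,\nu },Q^\ell (I))}E_g(\tau _z^\nu \omega )(u_z,Q^\ell (I))=\mu _{\ell }(I,\tau _z^\nu \omega ). \end{aligned}$$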

We conclude this step by showing that \(\mu _\ell \) is subadditive. Let \(I\in {\mathcal {I}}\) and let \((I^i)_{i=1}^k\subset {\mathcal {I}}\) be pairwise disjoint and such that \(I=\bigcup _{i=1}^k I^i\). Fix \(\eta >0\) and for any \(i\in \{1,\ldots , k\}\) let \(u^i\in {\mathscr {A}}(u^{a,b,\nu },Q^\ell (I^i))\) be such that

$$\begin{aligned} E_g(\omega )(u^i, Q^\ell (I^i))\le \mu _{\ell }(I^i,\omega )+k^{-1}\eta . \end{aligned}$$
(5.8)

Note that the d-dimensional cuboids \(Q^{\ell }(I^i)\) are also pairwise disjoint, so that we can define a function \(u\in BV(Q^{\ell }(I);{\mathcal {M}})\) by setting

$$\begin{aligned} u(x):= {\left\{ \begin{array}{ll} u^i(x) &{}\text {if}\ x\in Q^{\ell }(I^i)\ \text {for some}\ 1\le i\le k,\\ u^{a,b,\nu }(x) &{}\text {otherwise}. \end{array}\right. } \end{aligned}$$

Since \(s_{\mathrm{max}}(I^i)\le s_{\mathrm{max}}(I)\) for all \(1\le i\le k\), all cuboids \(Q^{\ell }(I^i)\) are contained in \(Q^{\ell }(I)\), so that the function u satisfies \(u=u^{a,b,\nu }\) near \(\partial Q^\ell (I)\). Moreover, since \(S_{u^{a,b,\nu }}=H^\nu \), thanks to the boundary conditions satisfied by each \(u^i\) and the equality \(I=\bigcup _{i=1}^kI^i\) we have \(S_u\cap Q^\ell (I)=S_u\cap \big (\bigcup _{i=1}^k\overline{Q^\ell (I^i)}\big )=\bigcup _{i=1}^kS_{u^i}\). Thus, using the additivity of \(E_g(\omega )\) as a set function, from (5.8) we infer

$$\begin{aligned} \mu _\ell (I,\omega )\le E_g(\omega )(u,Q^{\ell }(I))=\sum _{i=1}^k E_g(\omega )(u^i,Q^\ell (I^i))\le \sum _{i=1}^k\mu _\ell (I^i,\omega )+\eta . \end{aligned}$$

We then obtain the subadditivity of the process by the arbitrariness of \(\eta >0\).

Step 2. Existence of the limit

Suppose first that \(\ell \in {\mathbb {N}}\); for every \(t>0\) consider the sets \(I_t:=[-\frac{t}{2},\frac{t}{2})^{d-1}\). Thanks to the first equality in (5.3) and Step 1 we can apply the multi-parameter subadditive ergodic theorem (cf. [20, Theorem 3.11] which is a slightly improved version of [1, Theorem 2.4]) to deduce the existence of a set \(\Omega _\ell ^{a,b,\nu }\) of full probability and a function \(g_{\ell }(\cdot ,a,b,\nu ):\Omega \rightarrow [0,+\infty )\) such that for every \(\omega \in \Omega _\ell ^{a,b,\nu }\) we have

$$\begin{aligned} g_{\ell }(\omega ,a,b,\nu )=\lim _{t\rightarrow +\infty }\frac{\mu _\ell (I_t,\omega )}{t^{d-1}}=\lim _{t\rightarrow +\infty }X_{t,\ell }^{a,b,\nu }(g)(\omega ). \end{aligned}$$
(5.9)

If instead \(\ell =+\infty \), then the second equality in (5.3) is valid for all \(t>0\) and we obtain a set \(\Omega _\infty ^{a,b,\nu }\) of full probability and a function \(g_\infty (\cdot ,a,b,\nu ):\Omega \rightarrow [0,+\infty )\) such that for every \(\omega \in \Omega _\infty ^{a,b,\nu }\) we have

$$\begin{aligned} g_\infty (\omega ,a,b,\nu )=\lim _{t\rightarrow +\infty }X_{t,t}^{a,b,\nu }(g)(\omega ). \end{aligned}$$
(5.10)

Gathering (5.9) and (5.10) we get (5.5), and it remains to show the shift invariance. To this end, fix \(\omega \in \Omega _{\ell }^{a,b,\nu }\) and \(z\in {\mathbb {R}}^{d-1}\), and for \(t>0\) set \(t^\pm :=t\pm 2|z^\nu |\). Then, thanks to the stationarity of g, by an extension argument as in (2.12) one can show that

$$\begin{aligned} X_{t^+,\ell }^{a,b,\nu }(g)(\omega )\le X_{t,\ell }^{a,b,\nu }(g)(\tau _z^\nu \omega )+\frac{c(t^+-t)}{t^+}\le X_{t^-,\ell }^{a,b,\nu }(g)(\omega )+\frac{c(t^+-t)}{t^+}+\frac{c(t-t^-)}{t}.\nonumber \\ \end{aligned}$$
(5.11)

Thus, from (5.9) and (5.10), respectively, we deduce that

$$\begin{aligned} \lim _{t\rightarrow +\infty }X_{t,\ell }^{a,b,\nu }(g)(\tau _z^\nu \omega )=g_\ell (\omega ,a,b,\nu ),\quad \lim _{t\rightarrow +\infty }X_{t,t}^{a,b,\nu }(g)(\tau _z^\nu \omega )=g_\infty (\omega ,a,b,\nu ), \end{aligned}$$

that is, \(\tau _z^\nu \omega \in \Omega _{\ell }^{a,b,\nu }\) and (5.6) holds true. \(\square \)

Based on Lemma 5.2 we now prove Theorem 3.1. Namely, having at hand the almost sure existence of the limits in (5.5), we aim to show that the limits as \(t\rightarrow +\infty \) and \(\ell \rightarrow +\infty \) can be interchanged, at least in conditional expectation, to obtain (3.2). This will be done by exploiting suitable monotonicity properties of \(g_\ell \) and \(\mu _\ell \) together with the group-invariance of conditional expectations with respect to the \(\sigma \)-algebra of \(\tau \)-invariant sets. The latter property might be well known in probability theory, but we include the short proof for the sake of completeness. Recall that the conditional expectation of a random variable \(X\in L^1(\Omega )\) with respect to a \(\sigma \)-algebra \({\mathcal {F}}'\subset {\mathcal {F}}\) is the (almost surely) uniquely defined \({\mathcal {F}}'\)-measurable function \({\mathbb {E}}[X|{\mathcal {F}}']\) such that for any \(F\in {\mathcal {F}}'\) we have

$$\begin{aligned} \int _FX\,\mathrm {d}{\mathbb {P}}=\int _{F}{\mathbb {E}}[X|{\mathcal {F}}']\,\mathrm {d}{\mathbb {P}}. \end{aligned}$$

Lemma 5.4

Let \(X\in L^1(\Omega )\) be a random variable and \({\widetilde{\tau }}:\Omega \rightarrow \Omega \) be a measurable, measure-preserving map. Let \({\mathcal {F}}_1\subset {\mathcal {F}}\) be a \(\sigma \)-algebra containing only \({\widetilde{\tau }}\)-invariant sets, i.e., \({\mathbb {P}}({\widetilde{\tau }}(F)\Delta F)=0\) for all \(F\in {\mathcal {F}}_1\). Then

$$\begin{aligned} {\mathbb {E}}[X\circ {\widetilde{\tau }}|{\mathcal {F}}_1]={\mathbb {E}}[X|{\mathcal {F}}_1]. \end{aligned}$$

Proof

First note that since \({\widetilde{\tau }}\) is measure preserving it follows that \(X\circ {\widetilde{\tau }}\in L^1(\Omega )\). Hence \({\mathbb {E}}[X\circ {\widetilde{\tau }}|{\mathcal {F}}_1]\) is well-defined and in particular \({\mathcal {F}}_1\)-measurable. Next fix \(F\in {\mathcal {F}}_1\). By a change of variables and \({\widetilde{\tau }}\)-invariance of F we have

$$\begin{aligned} \int _F{\mathbb {E}}[X\circ {\widetilde{\tau }}|{\mathcal {F}}_1](\omega )\,\mathrm {d}{\mathbb {P}}=\int _{F}X({\widetilde{\tau }}(\omega ))\,\mathrm {d}{\mathbb {P}}=\int _{{\widetilde{\tau }} (F)}X(\omega )\,\mathrm {d}{\mathbb {P}}=\int _FX(\omega )\,\mathrm {d}{\mathbb {P}}, \end{aligned}$$

where the first equality follows from the definition of the conditional expectation. This proves the claim. \(\square \)

Proof of Theorem 3.1

The first part of the proof is an immediate consequence of Lemma 5.2 in the case \(\ell =+\infty \), Remark 5.3, and the fact that for every \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\) and \(\omega \in \Omega \) the restrictions of the mappings \(\nu \mapsto \liminf _{t} X_{t,t}^{a,b,\nu }(g)(\omega )\), \(\nu \mapsto \limsup _{t} X_{t,t}^{a,b,\nu }(g)(\omega )\) to \({\mathbb {S}}^{d-1}\setminus \{-e_d\}\) are continuous (the latter can be shown arguing word for word as in [20, Lemma 5.5]). Namely, by setting

$$\begin{aligned} {\widehat{\Omega }}:=\bigcap _{\begin{array}{c} (a,b)\in {\mathcal {M}}\times {\mathcal {M}}\\ \nu \in {\mathbb {S}}^{d-1}\cap {\mathbb {Q}}^{d\times d} \end{array}}\Omega _{\infty }^{a,b,\nu }\quad \text {and}\quad g_\mathrm{hom}(\cdot ,a,b,\nu ):=g_\infty (\cdot ,a,b,\nu ), \end{aligned}$$

we clearly have that \({\mathbb {P}}({\widehat{\Omega }})=1\), while (5.5) together with the above-mentioned continuity ensures that (3.1) holds true for every \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\), \(\nu \in {\mathbb {S}}^{d-1}\) and \(\omega \in {\widehat{\Omega }}\). Finally, in view of Remark 5.3, \(g_{\mathrm{hom}}=g_\infty \) satisfies the required invariance property and hence is deterministic in the case that g is ergodic.

We now prove the second part of Theorem 3.1. Let \(\nu \in {\mathbb {S}}^{d-1}\) be fixed, let \(\Omega ^\nu \) be as in Remark 5.3, and for every \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\) and \(\ell \in {\mathbb {N}}\cup \{+\infty \}\) let \(g_\ell (\omega ,a,b,\nu )\) be as in Lemma 5.2. The main part of the proof consists in showing that there exists a set \({\widehat{\Omega }}^{\nu }\subset \Omega ^\nu \) of full probability such that

$$\begin{aligned} \lim _{\ell \rightarrow +\infty }g_\ell (\omega ,a,b,\nu )=g_{\infty }(\omega ,a,b,\nu )\quad \text {for every}\ \omega \in {\widehat{\Omega }}^{\nu }. \end{aligned}$$
(5.12)

To this end, we fix \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\) and compute \(g_{\ell }(\omega ,a,b,\nu )\) using \(\mu _\ell ([-2^k,2^k)^{d-1},\omega )\), where we use the shorthand \(\mu _\ell :=\mu _\ell ^{a,b,\nu }\). First note that (5.6) implies that \(g_{\ell }(\cdot ,a,b,\nu )\) is measurable with respect to the \(\sigma \)-algebra of \(\{\tau ^\nu _{z}\}_{z\in {\mathbb {R}}^{d-1}}\)-invariant sets defined by

$$\begin{aligned} {\mathcal {F}}^\nu =\{F\in {\mathcal {F}}:\,{\mathbb {P}}((\tau ^\nu _zF)\Delta F)=0\quad \forall \, z\in {\mathbb {R}}^{d-1}\}. \end{aligned}$$

Hence, \(g_{\ell }(\omega ,a,b,\nu )={\mathbb {E}}[g_{\ell }(\cdot ,a,b,\nu )|{\mathcal {F}}^\nu ](\omega )\) for a.e. \(\omega \). Together with the uniform bound (5.4) and the dominated convergence theorem for the conditional expectation this ensures that almost surely we have

$$\begin{aligned} g_{\ell }(\omega ,a,b,\nu )={\mathbb {E}}[g_{\ell }(\cdot ,a,b,\nu )|{\mathcal {F}}^\nu ](\omega )=\lim _{k\rightarrow +\infty }\frac{{\mathbb {E}}[\mu _{\ell }([-2^k,2^k)^{d-1},\cdot )|{\mathcal {F}}^\nu ](\omega )}{(2^{k+1})^{(d-1)}}.\nonumber \\ \end{aligned}$$
(5.13)

Thanks to (2.11) this in turn implies that almost surely

$$\begin{aligned} \lim _{\ell \rightarrow +\infty }g_{\ell }(\omega ,a,b,\nu )=\inf _{\ell \in {\mathbb {N}}}\lim _{k\rightarrow +\infty }\frac{{\mathbb {E}}[\mu _{\ell }([-2^k,2^k)^{d-1},\cdot )|{\mathcal {F}}^\nu ](\omega )}{(2^{k+1})^{(d-1)}}. \end{aligned}$$
(5.14)

We now show that the limit in k in (5.14) is also an infimum, which will then allow us to interchange the infima in \(\ell \) and k. To this end, we notice that for every \(k\in {\mathbb {N}}\) the cube \([-2^{k+1},2^{k+1})^{d-1}\) can be partitioned into \(n_d:=2^{d-1}\) integer-translated disjoint copies of \([-2^k,2^k)^{d-1}\), i.e., there exist \(z_1,\ldots ,z_{n_d}\in {\mathbb {Z}}^{d-1}\) with

$$\begin{aligned}{}[-2^{k+1},2^{k+1})^{d-1}{=}\bigcup _{n=1}^{n_d}\big (z_n{+}[-2^k,2^k)^{d-1}\big ),\quad \left( z_n{+}[-2^k,2^k)^{d-1}\right) \cap \left( z_m{+}[-2^k,2^k)^{d-1}\right) {=}\emptyset \; \text { for }\; n{\ne } m. \end{aligned}$$

Hence, using the subadditivity of the stochastic process \(\mu _{\ell }\) and its \(\{\tau _z^\nu \}_{z\in {\mathbb {R}}^{d-1}}\)-stationarity we can write

$$\begin{aligned} \mu _{\ell }([-2^{k+1},2^{k+1})^{d-1},\omega )\le \sum _{n=1}^{n_d}\mu _{\ell }([-2^k,2^k)^{d-1},\tau ^\nu _{z_n}\omega ). \end{aligned}$$

Taking the conditional expectation with respect to the \(\sigma \)-algebra \({\mathcal {F}}^\nu \) and using that it is linear and order preserving, by Lemma 5.4 we find that almost surely

$$\begin{aligned} {\mathbb {E}}[\mu _{\ell }([-2^{k+1},2^{k+1})^{d-1},\cdot )|{\mathcal {F}}^\nu ](\omega )\le&\sum _{n=1}^{n_d}{\mathbb {E}}[\mu _\ell ([-2^k,2^k)^{d-1},\cdot )\circ \tau _{z_n}^\nu |{\mathcal {F}}^{\nu }](\omega ) \\ =&n_d{\mathbb {E}}[\mu _{\ell }([-2^k,2^k)^{d-1},\cdot )|{\mathcal {F}}^\nu ](\omega ). \end{aligned}$$

Dividing this estimate by \((2^{k+2})^{(d-1)}\) we see that the map \(k\mapsto \frac{{\mathbb {E}}[\mu _{\ell }([-2^{k},2^{k})^{d-1},\cdot )|{\mathcal {F}}^\nu ](\omega )}{(2^{k+1})^{(d-1)}}\) is almost surely decreasing. Together with (5.13) this implies that almost surely

$$\begin{aligned} g_\ell (\omega ,a,b,\nu )=\inf _{k\in {\mathbb {N}}}\frac{{\mathbb {E}}[\mu _{\ell }([-2^k,2^k)^{d-1},\cdot )|{\mathcal {F}}^\nu ](\omega )}{(2^{k+1})^{(d-1)}}\quad \text {for every}\ \ell \in {\mathbb {N}}\cup \{+\infty \}. \end{aligned}$$
(5.15)
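For the reader's convenience, the monotonicity in k used above can be made explicit: since \(n_d=2^{d-1}\), dividing the conditional-expectation estimate above by \((2^{k+2})^{(d-1)}=n_d\,(2^{k+1})^{(d-1)}\) gives, almost surely,

$$\begin{aligned} \frac{{\mathbb {E}}[\mu _{\ell }([-2^{k+1},2^{k+1})^{d-1},\cdot )|{\mathcal {F}}^\nu ](\omega )}{(2^{k+2})^{(d-1)}}\le \frac{n_d\,{\mathbb {E}}[\mu _{\ell }([-2^k,2^k)^{d-1},\cdot )|{\mathcal {F}}^\nu ](\omega )}{n_d\,(2^{k+1})^{(d-1)}}=\frac{{\mathbb {E}}[\mu _{\ell }([-2^k,2^k)^{d-1},\cdot )|{\mathcal {F}}^\nu ](\omega )}{(2^{k+1})^{(d-1)}}. \end{aligned}$$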

Finally, from (2.11) and (5.3) we infer that the map \(\ell \mapsto \mu _\ell ([-2^k,2^k)^{d-1},\omega )\) is also almost surely decreasing. Since the monotonicity is preserved by the conditional expectation, gathering (5.14) and (5.15) we obtain that

$$\begin{aligned} \lim _{\ell \rightarrow +\infty }g_\ell (\omega ,a,b,\nu ){=}\inf _{ k,\ell \in {\mathbb {N}}}\frac{{\mathbb {E}}[\mu _{\ell }([-2^k,2^k)^{d-1},\cdot )|{\mathcal {F}}^\nu ](\omega )}{(2^{k+1})^{(d-1)}}{=}\inf _{k\in {\mathbb {N}}}\lim _{\ell \rightarrow +\infty }\frac{{\mathbb {E}}[\mu _{\ell }([-2^k,2^k)^{d-1},\cdot )|{\mathcal {F}}^\nu ](\omega )}{(2^{k+1})^{(d-1)}}\nonumber \\ \end{aligned}$$
(5.16)

almost surely. For fixed \(k\in {\mathbb {N}}\) we have \(\mu _{\ell }([-2^k,2^k)^{d-1},\omega )=\mu _\infty ([-2^k,2^k)^{d-1},\omega )\) for \(\ell > 2^{k+1}\). Thus, on the right-hand side of (5.16) we can pass to the limit with respect to \(\ell \) inside the conditional expectation, using again the corresponding dominated convergence theorem, to deduce that almost surely

$$\begin{aligned} \lim _{\ell \rightarrow +\infty }g_{\ell }(\omega ,a,b,\nu )=\inf _{k\in {\mathbb {N}}}\frac{{\mathbb {E}}[\mu _{\infty }([-2^k,2^k)^{d-1},\cdot )|{\mathcal {F}}^\nu ](\omega )}{(2^{k+1})^{(d-1)}}=g_\infty (\omega ,a,b,\nu )=g_\mathrm{hom}(\omega ,a,b,\nu ), \end{aligned}$$

where the second equality follows from (5.15) applied with \(\ell =+\infty \). Hence, there exists \({\widehat{\Omega }}^{\nu }\) of full probability such that (5.12) is satisfied.
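The identity \(\mu _{\ell }([-2^k,2^k)^{d-1},\omega )=\mu _\infty ([-2^k,2^k)^{d-1},\omega )\) for \(\ell >2^{k+1}\) used above can be traced back to (5.1): since \(s_{\mathrm{max}}([-2^k,2^k)^{d-1})=2^{k+1}\), we have

$$\begin{aligned} Q^{\ell ,\nu }\big ([-2^k,2^k)^{d-1}\big )=O_\nu \Big (\mathrm{int}\,[-2^k,2^k)^{d-1}\times \min \{\ell ,2^{k+1}\}(-\tfrac{1}{2},\tfrac{1}{2})\Big )=Q^{\infty ,\nu }\big ([-2^k,2^k)^{d-1}\big )\quad \text {whenever}\ \ell \ge 2^{k+1}, \end{aligned}$$

so that the two minimization problems in (5.2) coincide.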

Thanks to (5.12) we are now able to conclude as follows. We fix sequences \(0<\ell _t\le t\) such that \(\ell _t\rightarrow +\infty \) as \(t\rightarrow +\infty \). Then for any fixed \(\ell \in {\mathbb {N}}\) and t large (2.11) implies that

$$\begin{aligned} X_{t,t}^{a,b,\nu }(g)(\omega )\le X_{t,\ell _t}^{a,b,\nu }(g)(\omega )\le X_{t,\ell }^{a,b,\nu }(g)(\omega ). \end{aligned}$$

Passing to the limit as \(t\rightarrow +\infty \), the left-hand side converges to \(g_{\mathrm{hom}}(\omega ,a,b,\nu )\) for \(\omega \in \Omega ^\nu \), while for the right-hand side we use (5.5), so that

$$\begin{aligned} g_{\mathrm{hom}}(\omega ,a,b,\nu )\le \liminf _{t\rightarrow +\infty }X_{t,\ell _t}^{a,b,\nu }(g)(\omega )\le \limsup _{t\rightarrow +\infty }X_{t,\ell _t}^{a,b,\nu }(g)(\omega )\le g_{\ell }(\omega ,a,b,\nu ), \end{aligned}$$

for every \(\omega \in \Omega ^\nu \). Thus, letting \(\ell \rightarrow +\infty \) and using (5.12) we deduce that for all \(\omega \in {{\widehat{\Omega }}}^{\nu }\) we have (3.2), hence Theorem 3.1 is proved. \(\square \)

5.2 Estimating the oscillation and concentration inequalities: proof of Theorem 3.3

Proof of Theorem 3.3

Throughout the proof \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\) and \(\nu \in {\mathbb {S}}^{d-1}\) will be fixed, so we write \(X_{t,\ell _t}(g)=X_{t,\ell _t}^{a,b,\nu }(g)\) to reduce notation. Moreover, for every \(U\subset {\mathbb {R}}^d\) let the quantity \(\partial ^\mathrm{osc}_{g,U}X_{t,\ell _t}(g)\) be as in (2.6) with \(X=X_{t,\ell _t}\). Let \(t\ge \ell _t\ge 1\) be fixed; thanks to [24, Proposition 1.10] and Assumption 2 there exists a constant \(C>0\) such that for every \(p\ge 1\) we have the estimate

$$\begin{aligned} {\mathbb {E}}[(X_{t,\ell _t}(g)-{\mathbb {E}}[X_{t,\ell _t}(g)])^{2p}]\le (Cp^2)^p{\mathbb {E}}\Bigg [\int _0^{+\infty }\Bigg (\int _{{\mathbb {R}}^d}\Big (\partial ^\mathrm{osc}_{g, B_{2(s+1)}(x)}X_{t,\ell _t}(g)\Big )^2\,\mathrm {d}x\Bigg )^{p}(s+1)^{-dp}\pi (s)\,\mathrm {d}s\Bigg ]. \end{aligned}$$
(5.17)

Thus, to obtain (3.3) we fix \(\omega \in \Omega \) and we suitably bound the term

$$\begin{aligned} \int _0^{+\infty }\Bigg (\int _{{\mathbb {R}}^d}\Big (\partial ^{\mathrm{osc}}_{g, B_{2(s+1)}(x)}X_{t,\ell _t}(g) (\omega ) \Big )^2\,\mathrm {d}x\Bigg )^{p}(s+1)^{-dp}\pi (s)\,\mathrm {d}s. \end{aligned}$$
(5.18)

We first show that in (5.18) we can reduce the domain of integration in x. In fact, for given \(s>0\) suppose that \(x\in {\mathbb {R}}^d\) is such that

$$\begin{aligned} B_{2(s+1)}(x)\cap R^\nu _{t,\ell _t}=\emptyset \end{aligned}$$
(5.19)

and let \(g'\in {\mathcal {A}}_c\) with \(g'=g(\omega )\) on \(\big ({\mathbb {R}}^d\setminus B_{2(s+1)}(x)\big )\times {\mathbb {R}}^m\times {\mathbb {S}}^{d-1}\). Then clearly \(X_{t,\ell _t}(g')=X_{t,\ell _t}(g)(\omega )\), which implies that \(\partial ^{\mathrm{osc}}_{g,B_{2(s+1)}(x)} X_{t,\ell _t}(g)(\omega )=0\). Thus, since (5.19) is satisfied for all \(x\in {\mathbb {R}}^d\setminus R^\nu (s,t)\) with

$$\begin{aligned} R^\nu (s,t):=R^\nu _{t+4(s+1),\ell _t +4(s+1)}, \end{aligned}$$

we have

$$\begin{aligned} \int _{{\mathbb {R}}^d}\Big (\partial ^{\mathrm{osc}}_{g, B_{2(s+1)}(x)}X_{t,\ell _t}(g) (\omega ) \Big )^2\,\mathrm {d}x=\int _{R^\nu (s,t)}\Big (\partial ^{\mathrm{osc}}_{g, B_{2(s+1)}(x)}X_{t,\ell _t}(g)(\omega )\Big )^2\,\mathrm {d}x. \end{aligned}$$
(5.20)

We show that there exists a dimensional constant \(C=C_d>0\) such that for all \(t\ge \ell _t\ge 1\) and for all \(s>0\) and \(x\in R^\nu (s,t)\) we have

$$\begin{aligned} \partial ^{\mathrm{osc}}_{g,B_{2(s+1)}(x)}X_{t,\ell _t}(g)(\omega )\le C\Big (\frac{s+1}{t}\Big )^{d-1}. \end{aligned}$$
(5.21)

We distinguish the following two exhaustive cases:

  1. (a)

    \(s>0\) and \(x\in R^\nu (s,t)\) are such that \(R^\nu _{t,\ell _t}\subset B_{2(s+1)}(x)\);

  2. (b)

    \(s>0\) and \(x\in R^\nu (s,t)\) are such that \(R^\nu _{t,\ell _t}\cap ({\mathbb {R}}^d\setminus B_{2(s+1)}(x))\ne \emptyset \).

Suppose that we are in case (a) and let \(g'\in {\mathcal {A}}_c\) be such that \(g'=g(\omega )\) on \(\big ({\mathbb {R}}^d\setminus B_{2(s+1)}(x)\big )\times {\mathbb {R}}^m\times {\mathbb {S}}^{d-1}\). Then, to obtain (5.21) it suffices to use (2.14), since the inclusion \(R^\nu _{t,\ell _t}\subset B_{2(s+1)}(x)\) implies that \(2(s+1)\ge t/2\), from which we readily deduce that

$$\begin{aligned} \partial ^{\mathrm{osc}}_{g,B_{2(s+1)}(x)}X_{t,\ell _t}(g)(\omega )\le 2c\le 4^{d-1}2c\Big (\frac{s+1}{t}\Big )^{d-1}. \end{aligned}$$
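Here the inequality \(2(s+1)\ge t/2\) can be seen by comparing diameters: recalling that \(R^\nu _{t,\ell _t}\) has \(d-1\) sides of length t and one of length \(\ell _t\le t\), the inclusion \(R^\nu _{t,\ell _t}\subset B_{2(s+1)}(x)\) gives

$$\begin{aligned} t\le \mathrm{diam}\big (R^\nu _{t,\ell _t}\big )\le \mathrm{diam}\big (B_{2(s+1)}(x)\big )=4(s+1), \end{aligned}$$

and hence \(1\le 4^{d-1}\big (\tfrac{s+1}{t}\big )^{d-1}\).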

We now prove (5.21) in case (b), where the above argument is not sufficient. Instead, we choose \(u\in BV(R^\nu _{t,\ell _t};{\mathcal {M}})\) such that \(u=u^{a,b,\nu }\) near \(\partial R^\nu _{t,\ell _t}\) and

$$\begin{aligned} \int _{S_u\cap R^\nu _{t,\ell _t}}g(\omega ,y,u^+-u^-,\nu _u)\,\mathrm {d}{\mathcal {H}}^{d-1}\le t^{d-1}X_{t,\ell _t}(g)(\omega )+1. \end{aligned}$$
(5.22)

Then we define a new function \({\tilde{u}}\in BV(R^\nu _{t,\ell _t};{\mathcal {M}})\) by setting

$$\begin{aligned} {\tilde{u}}:= {\left\{ \begin{array}{ll} u &{}\text {in }R^\nu _{t,\ell _t}\setminus {\overline{B}}_{2(s+1)}(x),\\ u^{a,b,\nu }&{}\text {in } R^\nu _{t,\ell _t}\cap {\overline{B}}_{2(s+1)}(x). \end{array}\right. } \end{aligned}$$

By construction \({\tilde{u}}=u^{a,b,\nu }\) near \(\partial R^\nu _{t,\ell _t}\), hence for every \(g'\in {\mathcal {A}}_c\) with \(g'=g(\omega )\) on \(\big ({\mathbb {R}}^d\setminus B_{2(s+1)}(x)\big )\times {\mathbb {R}}^m\times {\mathbb {S}}^{d-1}\) we have

$$\begin{aligned} t^{d-1}X_{t,\ell _t}(g')&\le \int _{S_{{\tilde{u}}}\cap R^\nu _{t,\ell _t}} g'(y,{\tilde{u}}^+-{\tilde{u}}^-,\nu _{{\tilde{u}}})\,\mathrm {d}{\mathcal {H}}^{d-1}\nonumber \\&\le \int _{S_u\cap R^\nu _{t,\ell _t}}g(\omega ,y,u^+-u^-,\nu _u)\,\mathrm {d}{\mathcal {H}}^{d-1}+c{\mathcal {H}}^{d-1}\big (S_{{\tilde{u}}}\cap {\overline{B}}_{2(s+1)}(x)\big )\,. \end{aligned}$$
(5.23)

By construction, we have

$$\begin{aligned} {\mathcal {H}}^{d-1}\big (S_{{\tilde{u}}}\cap {\overline{B}}_{2(s+1)}(x)\big )&\le {\mathcal {H}}^{d-1}(H^\nu \cap {\overline{B}}_{2(s+1)})+{\mathcal {H}}^{d-1}(\partial B_{2(s+1)})\nonumber \\&\le \mathrm{diam}(B_{2(s+1)})^{d-1}+{\mathcal {H}}^{d-1}(\partial B_{2(s+1)})\le C(s+1)^{d-1}, \end{aligned}$$
(5.24)

for some \(C=C_d>0\). Thus, gathering (5.22), (5.23), and (5.24), and dividing by \(t^{d-1}\), we obtain

$$\begin{aligned} X_{t,\ell _t}(g')\le X_{t,\ell _t}(g)(\omega )+C\Big (\frac{s+1}{t}\Big )^{d-1} \end{aligned}$$

with C depending only on d. Then, (5.21) follows by passing to the supremum over \(g'\) and using the triangle inequality.

Using (5.21) we can estimate the right-hand side in (5.20) via

$$\begin{aligned} \int _{R^\nu (s,t)}\Big (\partial ^{\mathrm{osc}}_{g, B_{2(s+1)}(x)}X_{t,\ell _t}(g)(\omega )\Big )^2\,\mathrm {d}x\le C\Big (\frac{s+1}{t}\Big )^{2(d-1)}|R^\nu (s,t)|. \end{aligned}$$
(5.25)

Moreover, since \(1\le \ell _t\le t\), we can bound the volume \(|R^\nu (s,t)|=(t+4(s+1))^{d-1}(\ell _t+4(s+1))\) via

$$\begin{aligned} |R^\nu (s,t)|\le C(s+1)^dt^{d-1}\ell _t \,, \end{aligned}$$

hence the integral in (5.25) can be further estimated via

$$\begin{aligned} \int _{R^\nu (s,t)}\Big (\partial ^{\mathrm{osc}}_{g, B_{2(s+1)}(x)}X_{t,\ell _t}(g)(\omega )\Big )^2\,\mathrm {d}x\le Ct^{1-d}\ell _t (s+1)^{3d-2}. \end{aligned}$$
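For the reader's convenience, we record the exponent bookkeeping when this bound is raised to the power p and multiplied by the weight \((s+1)^{-dp}\) appearing in (5.17):

$$\begin{aligned} \big (Ct^{1-d}\ell _t (s+1)^{3d-2}\big )^{p}\,(s+1)^{-dp}=C^p\,t^{p(1-d)}\ell _t^{p}\,(s+1)^{2p(d-1)}. \end{aligned}$$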

Finally, combining the above inequality with (5.20) and (5.17), and integrating over \(s\in (0,+\infty )\) and \(\omega \in \Omega \), we obtain

$$\begin{aligned} {\mathbb {E}}[|X_{t,\ell _t}(g)-{\mathbb {E}}[X_{t,\ell _t}(g)]|^{2p}]\le (c_d p^2)^p t^{p(1-d)}\ell _t^p\int _0^{+\infty }(s+1)^{2p(d-1)}\pi (s)\,\mathrm {d}s\end{aligned}$$

with \(c_d>0\). \(\square \)

Finally, we derive strong concentration estimates in the case of an exponentially decaying weight \(\pi \).

Proof of Corollary 3.4

We first bound the integral appearing in Theorem 3.3. Without loss of generality we can assume that \(\pi (s)=C\exp (-\tfrac{s}{C})\) with \(C\ge 1\). Using two changes of variables we have

$$\begin{aligned} \int _0^{+\infty }(s+1)^{2p(d-1)}\pi (s)\,\mathrm {d}s\overset{s+1=y}{=}&C\int _1^{+\infty }y^{2p(d-1)}\exp (-\tfrac{y}{C})\exp (\tfrac{1}{C})\,\mathrm {d}y \\ \overset{y/C=x}{\le }&C^{2p(d-1)+2}\exp (\tfrac{1}{C}) \int _{0}^{+\infty }x^{2p(d-1)}\exp (-x)\,\mathrm {d}x{=} C_d^p\,\Gamma (2p(d{-}1){+}1). \end{aligned}$$

In what follows the constant \(C_d\) may change, but will only depend on d. We use the following elementary bound on the \(\Gamma \)-function: \(\Gamma (x+1)\le 2\left( \frac{2x}{e}\right) ^x\) for all \(x>0\). Hence we can estimate the last factor by

$$\begin{aligned} \Gamma (2p(d-1)+1)\le 2\left( \frac{4p(d-1)}{ e}\right) ^{2p(d-1)}=C_d^p p^{2p(d-1)}. \end{aligned}$$

Hence from Theorem 3.3 we infer that (upon increasing the dimensional constant \(C_d\))

$$\begin{aligned} {\mathbb {E}}\left[ \left( \frac{1}{C_d}\left| X_{t,\ell _t}^{a,b,\nu }(g)-{\mathbb {E}}[X_{t,\ell _t}^{a,b,\nu }(g)]\right| ^{2}\right) ^p\right] \le p^{2pd}t^{p(1-d)}\ell _t^p\quad \text {for all}\ p\ge 1. \end{aligned}$$
(5.26)
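In more detail, (5.26) follows by inserting the two bounds above into Theorem 3.3 (with the convention that the dimensional constant \(C_d\) may increase from line to line):

$$\begin{aligned} {\mathbb {E}}\big [|X_{t,\ell _t}^{a,b,\nu }(g)-{\mathbb {E}}[X_{t,\ell _t}^{a,b,\nu }(g)]|^{2p}\big ]\le (c_d\,p^2)^p\, t^{p(1-d)}\ell _t^p\;C_d^p\,p^{2p(d-1)}\le C_d^p\,p^{2pd}\,t^{p(1-d)}\ell _t^p, \end{aligned}$$

and then dividing by \(C_d^p\).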

We also need an estimate for \(p\in [0,1)\). Since the function \(x^x\) is bounded uniformly away from zero on [0, 1], we have \(t^{p(1-d)}\ell _t^p\le C_d p^{2pd}t^{p(1-d)}\ell _t^p\) upon further increasing \(C_d\). Thus, applying Jensen’s inequality with \(p\in [0,1)\) and using (5.26) with \(p=1\) leads to

$$\begin{aligned} {\mathbb {E}}\left[ \left( \frac{1}{C_d}\left| X_{t,\ell _t}^{a,b,\nu }(g)-{\mathbb {E}}[X_{t,\ell _t}^{a,b,\nu }(g)]\right| ^{2}\right) ^p\right] \le&{\mathbb {E}}\left[ \frac{1}{C_d}\left| X_{t,\ell _t}^{a,b,\nu }(g)-{\mathbb {E}}[X_{t,\ell _t}^{a,b,\nu }(g)]\right| ^{2}\right] ^p \\ \le&t^{p(1-d)}\ell _t^p \le C_d p^{2pd}t^{p(1-d)}\ell _t^p. \end{aligned}$$

Therefore we conclude that

$$\begin{aligned} {\mathbb {E}}\left[ \left( \frac{1}{C_d}\left| X_{t,\ell _t}^{a,b,\nu }(g)-{\mathbb {E}}[X_{t,\ell _t}^{a,b,\nu }(g)]\right| ^{2}\right) ^p\right] \le C_dp^{2pd}t^{p(1-d)}\ell _t^p\quad \forall \, p\ge 0. \end{aligned}$$

For \(n\in {\mathbb {N}}\) we set \(p_n=\tfrac{n}{2d}\). Then the above estimate implies

$$\begin{aligned} {\mathbb {E}}\left[ \left( \frac{1}{C_d}\left| X_{t,\ell _t}^{a,b,\nu }(g)-{\mathbb {E}}[X_{t,\ell _t}^{a,b,\nu }(g)]\right| ^{\frac{1}{d}}\right) ^n\right] {\le } C_d\left( \frac{n}{2d}\right) ^{n}t^{n\frac{(1-d)}{2d}}\ell _t^{\frac{n}{2d}}{\le } C_d\, n!\,\left( \frac{3}{2d}\right) ^n\left( t^{1-d}\ell _t\right) ^{\tfrac{n}{2d}} \quad \forall \, n\in {\mathbb {N}}, \end{aligned}$$

where we used the estimate \(n^n\le 3^n n!\) (valid for all \(n\in {\mathbb {N}}\)). For \(n\in {\mathbb {N}}\) we can absorb the factor \(C_d\) into the left-hand side upon further increasing \(C_d\). Dividing by \(n!(t^{1-d}\ell _t)^\frac{n}{2d}\), summing the resulting estimate over \(n\in {\mathbb {N}}\), and exchanging summation and expectation we deduce that

$$\begin{aligned} {\mathbb {E}}\left[ \exp \left( \frac{1}{C_d}\left| \frac{X_{t,\ell _t}^{a,b,\nu }(g)-{\mathbb {E}}[X_{t,\ell _t}^{a,b,\nu }(g)]}{\sqrt{t^{1-d}\ell _t}}\right| ^{\frac{1}{d}}\right) \right] \le \sum _{n\ge 0} \left( \frac{3}{2d}\right) ^n=\frac{1}{1-\left( \frac{3}{2d}\right) }\le 4.\nonumber \\ \end{aligned}$$
(5.27)

This proves the first estimate in Corollary 3.4. To prove the second one we observe that (3.2) together with (2.13) and the dominated convergence theorem implies that \({\mathbb {E}}[X_{t,\ell _t}^{a,b,\nu }(g)]\rightarrow g_{\mathrm{hom}}(a,b,\nu )\). Thus, for any \(\eta >0\) we have

$$\begin{aligned} {\mathbb {P}}\left( |X_{t,\ell _t}^{a,b,\nu }(g)-g_\mathrm{hom}(a,b,\nu )|>\eta \right) \le {\mathbb {P}}\left( |X_{t,\ell _t}^{a,b,\nu }(g)-{\mathbb {E}}[X_{t,\ell _t}^{a,b,\nu }(g)]|>\eta /2\right) ,\nonumber \\ \end{aligned}$$
(5.28)

for t sufficiently large. Moreover, applying Markov’s inequality and using (5.27) we infer

$$\begin{aligned} \begin{aligned} {\mathbb {P}}\Big (|X_{t,\ell _t}^{a,b,\nu }(g)&-{\mathbb {E}}[X_{t,\ell _t}^{a,b,\nu }(g)]|>\eta /2\Big )\\&={\mathbb {P}}\Bigg (\exp \bigg (\frac{1}{C_d}\bigg |\frac{X_{t,\ell _t}^{a,b,\nu }(g)-{\mathbb {E}}[X_{t,\ell _t}^{a,b,\nu }(g)]}{\sqrt{t^{1-d}\ell _t}}\bigg |^{\frac{1}{d}}\bigg )>\exp \bigg (\frac{1}{C_d}\bigg (\frac{\eta }{2\sqrt{t^{1-d}\ell _t}}\bigg )^{\tfrac{1}{d}}\bigg )\Bigg ) \\&\le 4 \exp \Bigg (-\frac{1}{C_d}\left( \frac{\eta }{2\sqrt{t^{1-d}\ell _t}}\right) ^{\tfrac{1}{d}}\Bigg ). \end{aligned}\nonumber \\ \end{aligned}$$
(5.29)

Hence, gathering (5.28)–(5.29), taking the logarithm, and passing to the limsup in t we deduce that

$$\begin{aligned}&\limsup _{t\rightarrow +\infty }\left( (t^{1-d}\ell _t)^{\tfrac{1}{2d}} \log \left( {\mathbb {P}}\left( |X_{t,\ell _t}^{a,b,\nu }(g)-g_\mathrm{hom}(a,b,\nu )|>\eta \right) \right) \right) \\&\quad \le \limsup _{t\rightarrow +\infty }\,(t^{1-d}\ell _t)^{\tfrac{1}{2d}}\log (4)-\frac{1}{C_d}\left( \frac{\eta }{2}\right) ^{\frac{1}{d}}. \end{aligned}$$

Note that \(t^{1-d}\ell _t\le t^{2-d}\le 1\) for \(t\ge \ell _t\ge 1\). If \(t^{1-d}\ell _t\rightarrow 0\) along the sequence realizing the limsup, then we conclude by the above estimate. If instead the limsup is realized by a sequence such that \(t^{1-d}\ell _t\ge c>0\), then the estimate in Corollary 3.4 is trivial since \(X_{t,\ell _t}^{a,b,\nu }(g)\) converges in probability to \(g_\mathrm{hom}(a,b,\nu )\), so that the logarithmic term is negative for t large enough. \(\square \)

5.3 Almost plane-like formulas for \({\mathbb {Z}}^d\)-stationary integrands

In this subsection we show how to extend Theorem 3.1 to models with \({\mathbb {Z}}^d\)-stationarity, assuming quantitative concentration inequalities in the form of a multi-scale functional inequality.

Proof of Proposition 4.1

Fix \(\nu \in {\mathbb {S}}^{d-1}\), \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\) and consider sequences \(t_k,t_n'\rightarrow +\infty \) and \(\ell _k,\ell _n'\rightarrow +\infty \) such that \(0<\ell _k\le t_k\) and \(0<\ell _n'\le t_n'\). For \(n\in {\mathbb {N}}\) choose \(K_0=K_0(n)\in {\mathbb {N}}\) with \(K_0\ge n\) such that for all \(k\ge K_0\) we have \(\ell _k\ge \ell '_n+2\sqrt{d}\) and \(t_k\ge t_n'+2\sqrt{d}\). Consider the collection of integer vertices

$$\begin{aligned} I_{n,k}=\{z\in {\mathbb {Z}}^{d-1}:\,Q_{n,z}:=t'_nz+(-\tfrac{t'_n}{2},\tfrac{t'_n}{2})^{d-1}\subset (-\tfrac{t_k-2\sqrt{d}}{2},\tfrac{t_k-2\sqrt{d}}{2})^{d-1}\}. \end{aligned}$$

Note that with the orthogonal matrix \(O_{\nu }\) introduced in (2.2), for every \(z\in I_{n,k}\) it holds that

$$\begin{aligned} O_{\nu }(Q_{n,z}\times \{0\})=t'_nO_{\nu }(z,0)+t'_n(Q^{\nu }\cap H^{\nu })\subset (t_k-2\sqrt{d})(Q^{\nu }\cap H^{\nu }). \end{aligned}$$
(5.30)

Moreover, for \(t'_n\ge 1\) we also infer that

$$\begin{aligned} \bigcup _{z\in I_{n,k}}O_{\nu }(Q_{n,z}\times \{0\})\supset (t_k-4\sqrt{d}t'_n)(Q^{\nu }\cap H^{\nu }). \end{aligned}$$
(5.31)

For each \(z\in I_{n,k}\) we decompose the vector \(t'_nO_{\nu }(z,0)\in H^{\nu }\) into its integer part and a remainder writing

$$\begin{aligned} t'_nO_{\nu }(z,0)=z_n(z)-y_n(z), \end{aligned}$$
(5.32)

with \(z_n(z)\in {\mathbb {Z}}^d\), \(y_n(z)\in {\mathbb {R}}^d\) and \(|y_n(z)|\le \sqrt{d}\). Then, combining (5.30) and (5.32) we obtain that

$$\begin{aligned} O_{\nu }(Q_{n,z}\times \{0\})+y_n(z)=z_n(z)+t_n'(Q^{\nu }\cap H^{\nu }). \end{aligned}$$
(5.33)

Since \(|y_n(z)|\le \sqrt{d}\), it follows from (5.30) that \(O_{\nu }(Q_{n,z}\times \{0\})+y_n(z)\subset t_k Q^{\nu }\) for all \(z\in I_{n,k}\). In particular, since \(\ell _k\ge \ell '_n+2\sqrt{d}\) we conclude that

$$\begin{aligned} A_{n,\nu }(z):=O_{\nu }\left( Q_{n,z}\times \Big (-\tfrac{\ell _n'}{2},\tfrac{\ell _n'}{2}\Big )\right) +y_n(z)\subset \subset R^{\nu }_{t_k,\ell _k}. \end{aligned}$$
(5.34)

Moreover, (5.33) implies that \(A_{n,\nu }(z)=z_n(z)+R_{t'_n,\ell '_n}^{\nu }\) is an integer translate of \(R^{\nu }_{t_n',\ell _n'}\), so that by stationarity of g the random variables \(Y_{n,z}\) defined by setting

$$\begin{aligned} Y_{n,z}(\omega ):=\inf \Big \{E_{g}(\omega )(u,A_{n,\nu }(z)):u\in {\mathscr {A}}(u_{y_n(z)}^{a,b,\nu },A_{n,\nu }(z))\Big \} \end{aligned}$$

have the same distribution as \((t'_n)^{d-1}X_{t_n',\ell _n'}^{a,b,\nu }(g)\) for all \(z\in I_{n,k}\) (and are measurable). Note that the boundary value has changed since in general \(y_n(z)\notin H^{\nu }\). We enumerate these random variables by enumerating the finitely many elements of \(I_{n,k}\), i.e., we write \(I_{n,k}=\{z^1,\ldots ,z^r\}\) with \(r=r(n,k)\). Since the cubes \(Q_{n,z}\) are pairwise disjoint, it follows that

$$\begin{aligned} r\le \left( \frac{t_k}{t'_n}\right) ^{d-1}. \end{aligned}$$
(5.35)
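One way to see (5.35) is a volume comparison in the hyperplane: the cubes \(Q_{n,z^i}\) are pairwise disjoint and, by the definition of \(I_{n,k}\), contained in \((-\tfrac{t_k}{2},\tfrac{t_k}{2})^{d-1}\), so that

$$\begin{aligned} r\,(t'_n)^{d-1}=\sum _{i=1}^r{\mathcal {L}}^{d-1}(Q_{n,z^i})\le {\mathcal {L}}^{d-1}\big ((-\tfrac{t_k}{2},\tfrac{t_k}{2})^{d-1}\big )=t_k^{d-1}. \end{aligned}$$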

For \(i=1,\ldots ,r\) let \(u_{n}^i\) be a candidate for the optimization problem defining \(Y_{n,z^i}(\omega )\). We now define \(v\in BV(R_{t_k,\ell _k}^\nu ;{\mathcal {M}})\) by setting

$$\begin{aligned} v(x):={\left\{ \begin{array}{ll} u_{n}^i(x)&{}\text{ if } x\in A_{n,\nu }(z^i)\setminus \bigcup _{j=1}^{i-1}A_{n,\nu }(z^j) \text{ for } \text{ some } i=1,\ldots ,r\text{, } \\ u^{a,b,\nu }(x) &{}\text{ otherwise. } \end{array}\right. } \end{aligned}$$

Thanks to (5.34) the function v is admissible for the minimum problem defining \(X_{t_k,\ell _k}^{a,b,\nu }(g)\). In order to estimate its energy, we split the jump set into three parts: the portion inside the sets \(A_{n,\nu }(z^i)\), the portion on the boundary of some \(A_{n,\nu }(z^i)\), and the portion in the complement of \(\bigcup _{i=1}^r \overline{A_{n,\nu }(z^i)}\). From the definition of v we infer that

$$\begin{aligned} E_{g}(\omega )(v, R_{t_k,\ell _k}^{\nu })\le&\sum _{i=1}^r E_{g}(\omega )(u_{n}^i,A_{n,\nu }(z^i))\nonumber \\&+c\sum _{i=1}^r{\mathcal {H}}^{d-1}(\partial A_{n,\nu }(z^i)\cap S_v)+c{\mathcal {H}}^{d-1}\bigg (\Big (R_{t_k,\ell _k}^{\nu }\setminus \bigcup _{i=1}^r\overline{A_{n,\nu }(z^i)}\Big )\cap H^{\nu }\bigg ). \end{aligned}$$
(5.36)

We argue that the terms in the second line are asymptotically negligible. We start with the last term, which can be estimated using a purely geometrical argument. Note that for \(x\in \left( R_{t_k,\ell _k}^{\nu }\setminus \bigcup _{i=1}^r\overline{A_{n,\nu }(z^i)}\right) \cap H^{\nu }\) there are two exhaustive cases:

$$\begin{aligned}&i)\quad x\in t_k (Q^{\nu }\cap H^{\nu })\setminus \bigcup _{i=1}^r O_{\nu }(Q_{n,z^i}\times \{0\}), \\&ii)\quad x\in O_\nu (Q_{n,z^i}\times \{0\})\setminus \overline{A_{n,\nu }(z^i)}\quad \text {for some }i. \end{aligned}$$

In the first case we can use (5.31) and deduce that

$$\begin{aligned} {\mathcal {H}}^{d-1}\left( t_k (Q^{\nu }\cap H^{\nu })\setminus \bigcup _{i=1}^r O_{\nu }(Q_{n,z^i}\times \{0\})\right) \le t_k^{d-1}-(t_k-4\sqrt{d}t'_n)^{d-1}\le C t_k^{d-2}t'_n.\nonumber \\ \end{aligned}$$
(5.37)
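The last inequality in (5.37) follows, for instance, from the mean value theorem applied to \(s\mapsto s^{d-1}\) (for k large enough so that \(t_k\ge 4\sqrt{d}\,t'_n\)):

$$\begin{aligned} t_k^{d-1}-(t_k-4\sqrt{d}\,t'_n)^{d-1}\le (d-1)\,t_k^{d-2}\cdot 4\sqrt{d}\,t'_n\le C\, t_k^{d-2}t'_n. \end{aligned}$$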

In the second case, note that there exists a point y on the segment \([0,y_n(z^i)]\) such that \(x+y\in \partial A_{n,\nu }(z^i)=z_n(z^i)+\partial R_{t'_n,\ell '_n}^{\nu }\). In view of (5.32) we have \(z_n(z^i)-y_n(z^i)\in H^\nu \). Since also \(x\in H^{\nu }\), we infer that

$$\begin{aligned} |\langle x+y-z_n(z^i),\nu \rangle |=|\langle y-z_n(z^i),\nu \rangle |=|\langle y-y_n(z^i),\nu \rangle |\le |\langle y_n(z^i),\nu \rangle |\le \sqrt{d}. \end{aligned}$$

Thus, for \(t'_n\) large enough the condition \(x+y-z_n(z^i)\in \partial R_{t'_n,\ell '_n}^{\nu }\) implies that there exists \(j\in \{1,\ldots ,d-1\}\) such that

$$\begin{aligned} |\langle x+y-z_n(z^i),O_{\nu }e_j\rangle |=\frac{t'_n}{2}. \end{aligned}$$

Since \(|y|\le |y_n(z^i)|\le \sqrt{d}\) this gives

$$\begin{aligned} \frac{t'_n}{2}-2\sqrt{d}\le |\langle x-z_n(z^i),O_{\nu }e_j\rangle |\le \frac{t'_n}{2}+2\sqrt{d}. \end{aligned}$$

In particular,

$$\begin{aligned} {\mathcal {H}}^{d-1}(O_{\nu }(Q_{n,z^i}\times \{0\})\setminus \overline{A_{n,\nu }(z^i)})\le C (t'_n)^{d-2}. \end{aligned}$$

Taking into account the bound (5.35), we conclude that

$$\begin{aligned} \sum _{i=1}^r{\mathcal {H}}^{d-1}(O_{\nu }(Q_{n,z^i}\times \{0\})\setminus \overline{A_{n,\nu }(z^i)})\le C \frac{t_k^{d-1}}{t'_n}. \end{aligned}$$
(5.38)

Gathering (5.37) and (5.38) we finally obtain

$$\begin{aligned} {\mathcal {H}}^{d-1}\bigg (\Big (R_{t_k,\ell _k}^{\nu }\setminus \bigcup _{i=1}^r\overline{A_{n,\nu }(z^i)}\Big )\cap H^{\nu }\bigg )\le C \bigg (t_k^{d-2}t'_n+\frac{t_k^{d-1}}{t'_n}\bigg ). \end{aligned}$$
(5.39)

Next we treat the term \({\mathcal {H}}^{d-1}(\partial A_{n,\nu }(z^i)\cap S_v)\). Consider \(x\in \partial A_{n,\nu }(z^i)\cap S_v\) lying in the relative interior of \(\partial A_{n,\nu }(z^i)\) (the measure of the remaining part is negligible). Then without loss of generality we can assume that

$$\begin{aligned}&v^+(x)=(u_{n}^i)^+(x)=u^{a,b,\nu }_{y_n(z^i)}(x)=b, \\&v^-(x)= u^{a,b,\nu }_{y_n(z^j)}(x)=a\; \text { for some }\; j\ne i\quad \text {or}\quad v^-(x)=u^{a,b,\nu }(x)=a. \end{aligned}$$

This implies that

$$\begin{aligned}&\langle x-y_n(z^i),\nu \rangle >0, \\&\langle x-y_n(z^j),\nu \rangle \le 0\quad \text {or}\quad \langle x,\nu \rangle \le 0. \end{aligned}$$

If \(\langle x,\nu \rangle \le 0\), we have \(0<\langle x-y_n(z^i),\nu \rangle \le \langle -y_n(z^i),\nu \rangle \le \sqrt{d}\). If instead \(\langle x-y_n(z^j),\nu \rangle \le 0\), we deduce that

$$\begin{aligned} 0<\langle x-y_n(z^i),\nu \rangle =\langle x-y_n(z^j),\nu \rangle +\langle y_n(z^j)-y_n(z^i),\nu \rangle \le \langle y_n(z^j)-y_n(z^i),\nu \rangle \le 2\sqrt{d}. \end{aligned}$$

Since \(z_n(z^i)-y_n(z^i)\in H^{\nu }\), the above estimates yield as well

$$\begin{aligned} 0<\langle x-z_n(z^i),\nu \rangle \le 2\sqrt{d}. \end{aligned}$$

If instead \(v^+(x)=a\) and \(v^-(x)=b\), we deduce by the same argument that \(-2\sqrt{d}\le \langle x-z_n(z^i),\nu \rangle \le 0\). Thus we obtain for \(\ell '_n\) large enough that

$$\begin{aligned} {\mathcal {H}}^{d-1}(\partial A_{n,\nu }(z^i)\cap S_v)\le&{\mathcal {H}}^{d-1}(\partial A_{n,\nu }(z^i)\cap \{x:\,|\langle x-z_n(z^i),\nu \rangle |\le 2\sqrt{d}\}) \\ =&{\mathcal {H}}^{d-1}(\partial R_{t'_n,\ell '_n}^{\nu }\cap \{y:\,|\langle y,\nu \rangle |\le 2\sqrt{d}\})\le C (t'_n)^{d-2}, \end{aligned}$$

where we used a change of variables taking into account that \(\partial A_{n,\nu }(z^i)-z_n(z^i)=\partial R_{t'_n,\ell '_n}^{\nu }\). Summing over all \(i=1,\ldots ,r\) we deduce from (5.35) that

$$\begin{aligned} \sum _{i=1}^r{\mathcal {H}}^{d-1}(\partial A_{n,\nu }(z^i)\cap S_v)\le C\frac{t_k^{d-1}}{t'_n}. \end{aligned}$$
(5.40)

Inserting (5.39) and (5.40) in (5.36) we infer that

$$\begin{aligned} E_{g}(\omega )(v,R_{t_k,\ell _k}^{\nu })\le \sum _{i=1}^r E_{g}(\omega )(u_{n}^i,A_{n,\nu }(z^i))+C\left( t_k^{d-2}t'_n+\frac{t_k^{d-1}}{t'_n}\right) . \end{aligned}$$

Note that thanks to (5.34), v is admissible for the minimization problem defining \(X_{t_k,\ell _k}^{a,b,\nu }(g)\). Thus, since the competitors \(u_{n}^i\) were arbitrary, we may pass to the infimum over them and, using that the \(Y_{n,z^i}\) have the same distribution as \((t'_n)^{d-1}X_{t'_n,\ell '_n}^{a,b,\nu }(g)\), take the expectation of the above estimate to deduce that

$$\begin{aligned} {\mathbb {E}}[X_{t_k,\ell _k}^{a,b,\nu }(g)]\le&\frac{1}{t_k^{d-1}}{\mathbb {E}}[E_{g(\cdot )}(v,R_{t_k,\ell _k}^{\nu })]\le \underbrace{r\left( \frac{t'_n}{t_k}\right) ^{d-1}}_{\le 1}{\mathbb {E}}[X_{t'_n,\ell '_n}^{a,b,\nu }(g)]+\frac{C}{t_k^{d-1}}\left( t_k^{d-2}t'_n+\frac{t_k^{d-1}}{t'_n}\right) \\ \le&{\mathbb {E}}[X_{t'_n,\ell '_n}^{a,b,\nu }(g)]+C\left( \frac{t'_n}{t_k}+\frac{1}{t'_n}\right) . \end{aligned}$$

Now letting first \(k\rightarrow +\infty \) and then \(n\rightarrow +\infty \) we obtain

$$\begin{aligned} \limsup _{k\rightarrow +\infty }{\mathbb {E}}[X_{t_k,\ell _k}^{a,b,\nu }(g)]\le \liminf _{n\rightarrow +\infty }{\mathbb {E}}[X_{t'_n,\ell '_n}^{a,b,\nu }(g)]. \end{aligned}$$

Since the sequences \(t'_n,\ell '_n\) and \(t_k,\ell _k\) were arbitrary, the limit of the expectations exists and does not depend on the chosen sequences. Choosing \(\ell _k=t_k\), we then identify the limit by means of Theorem 3.1 and the dominated convergence theorem, which yields the claim. \(\square \)

Now we can prove the almost sure convergence of the process \(X_{t,\ell _t}^{a,b,\nu }(g)\) towards \(g_{\mathrm{hom}}(a,b,\nu )\) under Assumption 2 when the weight \(\pi \) has higher integrability.

Proof of Corollary 4.3

Let us fix \(\nu \in {\mathbb {S}}^{d-1}\), \((a,b)\in {\mathcal {M}}\times {\mathcal {M}}\), and let \({\bar{\ell }}_n\rightarrow +\infty \) be an arbitrary diverging sequence. We first show that almost surely we have

$$\begin{aligned} \lim _{n\rightarrow +\infty }\left( X_{n,{\bar{\ell }}_n}^{a,b,\nu }(g)(\omega )-{\mathbb {E}}[X_{n,{\bar{\ell }}_n}^{a,b,\nu }(g)]\right) =0. \end{aligned}$$
(5.41)

To this end, let \(r>2(d-1)\) be as in the assumptions and let \(p:=\frac{r}{2(d-1)}>1\), so that \(2p(d-1)=r\). Moreover, let \(\alpha >0\) be sufficiently small such that \(p\,\alpha \in (0,p\,(d-1)-1)\) (this interval is nonempty since \(p\,(d-1)=\tfrac{r}{2}>1\)); upon decreasing \({\bar{\ell }}_n\) and taking its integer part it is not restrictive to assume that \({\bar{\ell }}_n\le n^\alpha \) and \({\bar{\ell }}_n\in {\mathbb {N}}\) for every \(n\in {\mathbb {N}}\), so that

$$\begin{aligned} ({\overline{\ell }}_n n^{1-d})^p\le n^{p\,\alpha -p\,(d-1)}. \end{aligned}$$
(5.42)

The choice of \(\alpha \) ensures that \(p\,\alpha -p\,(d-1)<-1\). Hence, for every \(\delta >0\) an application of Chebyshev’s inequality together with Theorem 3.3 and (5.42) gives

$$\begin{aligned} \sum _{n\in {\mathbb {N}}}{\mathbb {P}}\left( |X_{n,{\bar{\ell }}_n}^{a,b,\nu }(g)(\omega )-{\mathbb {E}}[X_{n,{\bar{\ell }}_n}^{a,b,\nu }(g)]|\ge \delta \right) \le&\sum _{n\in {\mathbb {N}}}\frac{1}{\delta ^{2p}}{\mathbb {E}}\big [\big |X_{n,{\bar{\ell }}_n}^{a,b,\nu }(g)-{\mathbb {E}}[X_{n,{\bar{\ell }}_n}^{a,b,\nu }(g)]\big |^{2p}\big ] \\ \le&C(\delta ,p) \sum _{n\in {\mathbb {N}}}n^{p\,\alpha -p\,(d-1)} \int _0^{+\infty }(s+1)^{r}\pi (s)\,\mathrm {d}s<+\infty . \end{aligned}$$

Thus, the sequence \(X_{n,{\bar{\ell }}_n}^{a,b,\nu }(g)(\omega )-{\mathbb {E}}[X_{n,{{\bar{\ell }}}_n}^{a,b,\nu }(g)]\) converges completely and hence almost surely to 0, i.e.,  (5.41) follows. As a consequence, using Proposition 4.1 and Remark 3.2 (i) we find a set \(\Omega '\subset \Omega \) of full probability such that

$$\begin{aligned} \lim _{n\rightarrow +\infty }X_{n,{\bar{\ell }}_n}^{a,b,\nu }(g)(\omega )=\lim _{n\rightarrow +\infty }X_{n,n}^{a,b,\nu }(g)(\omega )=g_\mathrm{hom}(a,b,\nu )\quad \text {for every}\ \omega \in \Omega '. \end{aligned}$$
(5.43)

Note that for any \(n\in {\mathbb {N}}\) and any \(\ell \in [{\bar{\ell }}_n,n]\) the monotonicity property (2.11) implies that

$$\begin{aligned} X_{n,n}^{a,b,\nu }(g)(\omega )\le X_{n,\ell }^{a,b,\nu }(g)(\omega )\le X_{n,{\bar{\ell }}_n}^{a,b,\nu }(g)(\omega )\quad \text {for every}\ \omega \in \Omega . \end{aligned}$$

In particular, in view of (5.43), for any \(\delta >0\) and any \(\omega \in \Omega '\) there exists \(n_0=n_0(\omega ,\delta )\in {\mathbb {N}}\) such that

$$\begin{aligned} |X_{n,\ell }^{a,b,\nu }(g)(\omega )-g_\mathrm{hom}(a,b,\nu )|<\delta \quad \text {for all}\ n\ge n_0\ \text {and all}\ \ell \in [{\bar{\ell }}_n,n]. \end{aligned}$$

From this we immediately deduce that

$$\begin{aligned} \lim _{\begin{array}{c} t_n\rightarrow +\infty \\ t_n\in {\mathbb {N}} \end{array}}X_{t_n,\ell _n}^{a,b,\nu }(g)(\omega )= g_{\mathrm{hom}}(a,b,\nu )\quad \text {for every}\ \omega \in \Omega ', \end{aligned}$$
(5.44)

provided \(t_n\ge \ell _n\ge {\bar{\ell }}_n\) for every \(n\in {\mathbb {N}}\).

The case of arbitrary sequences \(t_n\rightarrow +\infty ,\ell _n\rightarrow +\infty \) with \(t_n\ge \ell _n\ge {\bar{\ell }}_n\) for every \(n\in {\mathbb {N}}\) can be treated by combining (2.11) and (2.12). Namely, we consider the auxiliary sequences \(t_n^-:=\lfloor t_n\rfloor \), \(t_n^+:=\lceil t_n\rceil \). Then (2.12) yields

$$\begin{aligned} X_{t_n^+,\ell _n}^{a,b,\nu }(g)(\omega )\le X_{t_n,\ell _n}^{a,b,\nu }(g)(\omega )+\frac{c(t_n^+-t_n)}{t_n^+}. \end{aligned}$$

Since \(\ell _n\le t_n^+\), we deduce from (5.44) that

$$\begin{aligned} \liminf _{n\rightarrow +\infty }X_{t_n,\ell _n}^{a,b,\nu }(g)(\omega )\ge \liminf _{n\rightarrow +\infty }X_{t_n^+,\ell _n}^{a,b,\nu }(g)(\omega )\ge g_{\mathrm{hom}}(a,b,\nu ). \end{aligned}$$

To prove the reverse inequality for the limit superior, we combine (2.12) with (2.11) to deduce that

$$\begin{aligned} X_{t_n,\ell _n}^{a,b,\nu }(g)(\omega )\le X_{t_n,{\bar{\ell }}_n}^{a,b,\nu }(g)(\omega )\le X_{t_n^-,{\bar{\ell }}_n}^{a,b,\nu }(g)(\omega )+\frac{c(t_n-t_n^-)}{t_n}. \end{aligned}$$

By assumption \({\bar{\ell }}_n\in {\mathbb {N}}\), so that \({\bar{\ell }}_n\le t_n^-\). Thus, applying (5.44) yields

$$\begin{aligned} \limsup _{n\rightarrow +\infty }X_{t_n,\ell _n}^{a,b,\nu }(g)(\omega )\le \limsup _{n\rightarrow +\infty }X_{t_n^-,{\bar{\ell }}_n}^{a,b,\nu }(g)(\omega )\le g_{\mathrm{hom}}(a,b,\nu ), \end{aligned}$$

which concludes the proof. \(\square \)