The Choice of Representative Volumes in the Approximation of Effective Properties of Random Materials
Abstract
The effective large-scale properties of materials with random heterogeneities on a small scale are typically determined by the method of representative volumes: a sample of the random material is chosen—the representative volume—and its effective properties are computed by the cell formula. Intuitively, for a fixed sample size it should be possible to increase the accuracy of the method by choosing a material sample which captures the statistical properties of the material particularly well; for example, for a composite material consisting of two constituents, one would select a representative volume in which the volume fraction of the constituents matches closely with their volume fraction in the overall material. Inspired by similar attempts in materials science, Le Bris, Legoll and Minvielle have designed a selection approach for representative volumes which performs remarkably well in numerical examples of linear materials with moderate contrast. In the present work, we provide a rigorous analysis of this selection approach for representative volumes in the context of stochastic homogenization of linear elliptic equations. In particular, we prove that the method essentially never performs worse than a random selection of the material sample and may perform much better if the selection criterion for the material samples is chosen suitably.
1 Introduction
For materials with random heterogeneities on small scales, the approximation of the effective material coefficient by the method of representative volumes is a random quantity itself, as the outcome depends on the sample of the material. In the setting of linear elliptic PDEs with random coefficient fields—which corresponds to the setting of heat conduction, electrical currents, or electrostatics in a material with random microstructure—Gloria and Otto [48, 53, 54] have investigated the structure of the error of the approximation of the effective material coefficient by the method of representative volumes: the leading-order contribution to the error (with respect to the size of the representative volume element, RVE) consists of random fluctuations; in expectation the approximation of effective coefficients by the method of representative volumes is accurate to higher order, that is, the systematic error of the RVE method is of higher order. For a given size of the RVE—which corresponds to a fixed computational effort—the accuracy of the RVE method may therefore be increased significantly by reducing the variance of the approximations of the effective coefficient. It is precisely such a reduction of the variance by which the selection approach for representative volumes of Le Bris et al. [64] achieves its gain in accuracy.
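In the mean-square sense, this error structure can be summarized by a bias–variance decomposition; the following display is our schematic summary of the cited results (constants and the precise logarithmic exponents are suppressed into a generic \(C\)):

```latex
% Mean-square error of the RVE approximation, split into a fluctuation
% part and a (much smaller) squared systematic error:
\begin{align*}
  \mathbb{E}\big[\big|a^{\mathrm{RVE}} - a_{\mathsf{hom}}\big|^2\big]
    &= \underbrace{\operatorname{Var} a^{\mathrm{RVE}}}_{\sim\, L^{-d}}
     + \underbrace{\big|\mathbb{E}\big[a^{\mathrm{RVE}}\big]
        - a_{\mathsf{hom}}\big|^2}_{\lesssim\, L^{-2d} (\log L)^{C}}.
\end{align*}
```

As the squared systematic error \(L^{-2d}(\log L)^C\) decays to zero much faster than the variance \(L^{-d}\), reducing the variance improves the overall accuracy at leading order.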
For linear elliptic PDEs with random coefficients and moderate ellipticity contrast, the reduction of the variance by the ansatz of Le Bris et al. [64] is particularly remarkable: by selecting the representative volume according to the criterion that the averaged coefficient in the RVE should be particularly close to the averaged coefficient in the overall material, in numerical examples with ellipticity contrast \(\sim 5\) they observed a variance reduction by a factor of \(\sim 10\). Going beyond this simple selection criterion, they devised a criterion based on an expansion of the effective coefficient in the regime of small ellipticity contrast, which numerically achieves a remarkable variance reduction factor of \(\sim 60\) even for a moderate ellipticity contrast \(\sim 5\). Note that this basically corresponds to a gain of about one order of magnitude in accuracy at negligible additional computational cost and implementation effort.
However, the analysis of the selection approach for representative volumes has so far been restricted to the one-dimensional setting [64], in which the homogenization of linear elliptic PDEs is linear in the inverse coefficient and therefore independent of the geometry of the material. Besides the highly nonlinear dependence of the effective coefficient on the heterogeneous coefficient field in dimensions \(d\geqq 2\), one of the main challenges in the analysis of the selection method for representative volumes is the fact that it is only expected to increase the accuracy by a constant factor (though often a very large one), at least for a fixed set of statistical quantities by which the selection is performed. At the same time, the available error estimates for the representative volume element method in stochastic homogenization are only optimal up to constant factors. For this reason, the analysis of the selection approach for representative volumes necessitates a fine-grained analysis of the structure of fluctuations in stochastic homogenization.
1.1 Stochastic Homogenization of Linear Elliptic PDEs: A Brief Outline
1.2 Informal Summary of Our Main Results

essentially never performs worse than a completely random selection of the representative volume element, but may perform much better for suitable selection criteria,

basically maintains the order of the systematic error of the approximation for the effective coefficient, and

reduces also the error in the approximation for the effective coefficient that may occur with a given low probability, that is, it also reduces the “outliers” of the approximation for the effective coefficient.
 The systematic error of the approximation \(a^{{\text {selRVE}}}\) is essentially (up to powers of \(\log L\) and some prefactors) of the same order as the systematic error of the standard representative volume element method \(a^{{\text {RVE}}}\): we have$$\begin{aligned} \big |\mathbb {E}\big [a^{{\text {selRVE}}}\big ]-a_{\mathsf {hom}}\big | \leqq \frac{C\kappa ^{3/2}}{\delta } L^{-d} (\log L)^C. \end{aligned}$$The quantity \(\kappa \) will be discussed below.
 The fluctuations of the approximation \(a^{{\text {selRVE}}}\) are reduced by the fraction of the variance of \(a^{{\text {RVE}}}\) that is explained by \(\mathcal {F}(a)\). More precisely, we derive the estimate$$\begin{aligned} \frac{{{\text {Var}}~}a^{{\text {selRVE}}}}{{{\text {Var}}~}a^{{\text {RVE}}}} \leqq&1-(1-\delta ^2)\rho _{\mathcal {F}(a),a^{{\text {RVE}}}}^2 +\frac{C \kappa ^{3/2} r_{{\text {Var}}}}{\delta } L^{-d/2} (\log L)^C, \end{aligned}$$where \(\rho _{\mathcal {F}(a),a^{{\text {RVE}}}} \in [-1,1]\) denotes the correlation coefficient of \(\mathcal {F}(a)\) and \(a^{{\text {RVE}}}\), given by$$\begin{aligned} \rho _{\mathcal {F}(a),a^{{\text {RVE}}}}:= \frac{{\text {Cov}}[a^{{\text {RVE}}},\mathcal {F}(a)]}{\sqrt{{{\text {Var}}~}\mathcal {F}(a)\, {{\text {Var}}~}a^{{\text {RVE}}}}}, \end{aligned}$$and where \(r_{{\text {Var}}}:=\frac{L^{-d}}{{{\text {Var}}~}a^{{\text {RVE}}}}\) denotes the ratio between the expected order of fluctuations of \(a^{{\text {RVE}}}\) and the actual magnitude of fluctuations. Note that the last term in the estimate on \({{\text {Var}}~}a^{{\text {selRVE}}}\) converges to zero as the size L of the representative volume increases.
 The probability of “outliers” is reduced by the selection method just as suggested by the variance reduction, at least in an “intermediate” region between the “bulk” and the “outer tail” of the probability distribution: one has a moderate-deviations-type estimate of the form$$\begin{aligned}&\mathbb {P}\Bigg [\frac{\big |a^{{\text {selRVE}}}_{ij}-a_{{\mathsf {hom}},ij}\big |}{\sqrt{\big (1-\rho _{\mathcal {F}(a),a^{{\text {RVE}}}}^2+\delta ^2\big ){{\text {Var}}~}a^{{\text {RVE}}}_{ij}+L^{-d/2-\beta }}}\geqq s\Bigg ]\\&\quad \leqq \Big (1+\frac{C\delta }{\sqrt{1-\rho ^2}\, s}+\frac{C}{\delta L^\beta }\Big )\mathbb {P}\big [\mathcal {N}_1\geqq s\big ]+\frac{C}{\delta }\exp (-L^{2\beta }) \end{aligned}$$for any \(s\geqq C\max \{(1-\rho ^2)^{1/2} \delta ^{-1},\delta (1-\rho ^2)^{-1/2}\}\) and some \(\beta =\beta (d)>0\), where \(\mathcal {N}_1\) denotes the centered normal distribution with unit variance.
 In the above bounds, \(\kappa :=(1-\rho _{\mathcal {F}(a),a^{{\text {RVE}}}}^2)^{-1}\) denotes (essentially) the condition number of the covariance matrix \({{\text {Var}}~}(a^{{\text {RVE}}},\mathcal {F}(a))\). For the case that the correlation \(\rho _{\mathcal {F}(a),a^{{\text {RVE}}}}\) is close to one, we derive bounds which are independent of \(\kappa \) but come at the cost of a lower rate of convergence in L, namely$$\begin{aligned} \big |\mathbb {E}\big [a^{{\text {selRVE}}}\big ]-a_{\mathsf {hom}}\big | \leqq \frac{C}{\delta } L^{-d/2-d/8} (\log L)^C \end{aligned}$$and$$\begin{aligned} \frac{{{\text {Var}}~}a^{{\text {selRVE}}}}{{{\text {Var}}~}a^{{\text {RVE}}}} \leqq&1-(1-\delta ^2)\big |\rho _{\mathcal {F}(a),a^{{\text {RVE}}}}\big |^2 +\frac{C r_{{\text {Var}}}}{\delta } L^{-d/8} (\log L)^C. \end{aligned}$$
This raises the question whether such a degeneracy of the correlation coefficient can occur for “natural” choices of the statistical quantity \(\mathcal {F}(a)\). In Theorem 4, we shall prove that even for a “natural” choice like the spatial average \(\mathcal {F}(a)=\fint a\,\mathrm{d}x\) there is a priori no guarantee that there is a nonzero correlation between \(a^{{\text {RVE}}}\) and \(\mathcal {F}(a)\): we construct an example of a probability distribution of a for which the covariance of \(a^{{\text {RVE}}}\) and the average \(\fint a\,\mathrm{d}x\) of the coefficient field in fact vanishes, while the variances \({{\text {Var}}~}\fint a\,\mathrm{d}x\) and \({{\text {Var}}~}a^{{\text {RVE}}}\) are nondegenerate.
1.3 Outline of Our Strategy
The basic idea underlying our analysis of the selection approach for representative volumes is the observation that the joint probability distribution of the approximation for the effective coefficient \(a^{{\text {RVE}}}\) and one or more statistical quantities \(\mathcal {F}(a)\)—like the spatial average \(\fint a\,\mathrm{d}x\) of the coefficient field—is close to a multivariate Gaussian, up to an error of the order \(L^{-d} (\log L)^C\) in a suitable notion of distance between probability measures. The selection of representative volumes by the criterion (7)—which amounts to conditioning on the event \(|\mathcal {F}(a)-\mathbb {E}[\mathcal {F}(a)]|\leqq \delta \sqrt{{{\text {Var}}~}\mathcal {F}(a)}\)—then reduces the variance of the probability distribution of \(a^{{\text {RVE}}}\) by the variance explained by the statistical quantity \(\mathcal {F}(a)\), up to error terms due to the deviation of the probability distribution from a multivariate Gaussian and the imperfection of the conditioning for \(\delta >0\), see Fig. 2. Note that for an ideal multivariate Gaussian distribution, the expected value of the approximation \(a^{{\text {RVE}}}\) would be left unchanged under conditioning, since the criterion (7) is symmetric around \(\mathbb {E}[\mathcal {F}(a)]\); that is, the conditioning would not introduce a bias. As a consequence, for our approximately multivariate Gaussian \((a^{{\text {RVE}}},\mathcal {F}(a))\) the expectation of \(a^{{\text {RVE}}}\) is changed under conditioning only by the distance of our probability distribution to a multivariate Gaussian, which is a higher-order term. Note that both the reduction of the variance by conditioning and the estimate on the bias introduced by the conditioning rely crucially on the fact that our probability distribution is close to a multivariate Gaussian (and not another probability distribution); it is obvious from the picture in Fig. 2 that a probability distribution other than a multivariate Gaussian could introduce a large bias under conditioning and even an increase in variance. Our analysis of the selection approach for representative volumes by Le Bris et al. [64] is a first practical application of the beautiful theory of fluctuations in stochastic homogenization, which has been developed in recent years and which our work both draws ideas from and contributes to.
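To make the variance reduction mechanism transparent, here is a short computation of ours for an exactly bivariate Gaussian pair \((X,Y)\) standing in for \((a^{{\text {RVE}}},\mathcal {F}(a))\); it shows where a factor of the form \(1-(1-\delta ^2)\rho ^2\) originates.

```latex
% A bivariate Gaussian (X, Y) with correlation rho can be decomposed as
%   X = E[X] + rho (sigma_X / sigma_Y) (Y - E[Y]) + Z,
% where Z is independent of Y with Var Z = (1 - rho^2) sigma_X^2.
% Conditioning on the selection event A := {|Y - E[Y]| <= delta sigma_Y}:
\begin{align*}
  \operatorname{Var}[X \mid A]
    &= \rho^2 \frac{\sigma_X^2}{\sigma_Y^2} \operatorname{Var}[Y \mid A]
     + (1-\rho^2)\,\sigma_X^2 \\
    &\leqq \rho^2 \delta^2 \sigma_X^2 + (1-\rho^2)\,\sigma_X^2
     = \big(1-(1-\delta^2)\rho^2\big)\operatorname{Var} X,
\end{align*}
% using that E[Y | A] = E[Y] by the symmetry of A, so that
% Var[Y | A] = E[(Y - E[Y])^2 | A] <= delta^2 sigma_Y^2.
```

For \(\delta \rightarrow 0\) this bound saturates at the unexplained fraction \(1-\rho ^2\) of the variance; the error terms in our actual estimates account for the deviation of \((a^{{\text {RVE}}},\mathcal {F}(a))\) from Gaussianity.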
The underlying reason for the convergence of the joint probability distribution of \(a^{{\text {RVE}}}\) and one or more functionals \(\mathcal {F}(a)\) towards a multivariate Gaussian is a central limit theorem for suitable collections of vector-valued random variables. We show that the approximation \(a^{{\text {RVE}}}\) for the effective coefficient \(a_{\mathsf {hom}}\)—and also the functionals \(\mathcal {F}(a)\) that are used in the work of Le Bris et al. [64]—may be written as a sum of random variables with a local dependence structure with multiple levels, see Definition 6 and Proposition 7. For such sums of vector-valued random variables with multilevel local dependence, a proof of quantitative normal approximation is provided in the companion article [43] (see also Theorem 9 below). To the best of our knowledge, such quantitative normal approximation results were previously known only for sums of random variables with local dependence structure [33, 34, 80] (corresponding more or less to just the lowest level of random variables in Fig. 4 below), a framework into which the approximation for the effective coefficient \(a^{{\text {RVE}}}\) does not fit. Note that the sharp boundaries of the region defined by the selection criterion (7) (see also the sharp boundaries in Fig. 2) necessitate the use of a rather strong (though standard) distance between probability measures for our quantitative normal approximation result (see Definition 8); in particular, a stronger notion of distance between probability measures than the 1-Wasserstein distance must be used.
As a byproduct, our work also provides a proof of quantitative normal approximation for \(a^{{\text {RVE}}}\) in a different setting than available in the literature so far. To the best of our knowledge, the results on quantitative normal approximation for \(a^{{\text {RVE}}}\) in the literature always rely on the assumption that the coefficient field a is obtained as a function of iid random variables [39, 52, 77] or that the probability distribution of a is subject to a second-order Poincaré inequality as in [38]. In contrast, our result holds under the assumption of finite range of dependence, in which to the best of our knowledge only a qualitative normal approximation result had been known [6].
The companion article [43] also provides a result on moderate deviations in the sense of Kramers for sums of random variables with multilevel local dependence structure, see Theorem 10. Our result on the reduction of the error by the selection approach for representative volumes in the case of unlikely events (Theorem 3) is based on this moderate deviations theorem.
Our counterexample for the variance reduction—which shows that even “natural” statistical quantities like the spatial average \(\fint a\,\mathrm{d}x\) do not necessarily explain a positive fraction of the variance of \(a^{{\text {RVE}}}\)—is based on the nonlinear dependence of the effective coefficient in periodic homogenization on the underlying coefficient field. More precisely, our counterexample consists of an interpolation between a standard random checkerboard and a random checkerboard with two types of tiles, one tile type being a constant coefficient field and one tile type being a second-order laminate microstructure; see Section 6 for details of the construction.
1.4 Computation of Effective Properties of Random Materials: A More Detailed Look
Generally speaking, in the method of representative volumes the equation for the homogenization corrector may be solved by any numerical algorithm that is feasible for the given size of the representative volume; for example, standard finite element methods may be employed for representative volumes of moderate size, while for very large representative volumes one may use appropriate instances of modern computational homogenization methods like the multiscale finite element method, heterogeneous multiscale methods, and related approaches (see for example [1, 14, 29, 40, 60, 61, 71]) or the local orthogonal decomposition method by Målqvist and Peterseim [70].
Note that besides the modern numerical homogenization methods—which are in principle applicable to any elliptic PDE involving a heterogeneous coefficient field—there have been numerous numerical works on the more specific problem of the approximation of effective coefficients in stochastic homogenization, see for example [13, 32, 41, 42, 62, 72, 79].
1.5 The Selection Approach for Representative Volumes by Le Bris, Legoll and Minvielle
On a numerical level, such a selection approach typically provides an increase in computational efficiency if the accuracy is indeed increased by conditioning on the event (9): usually, the most expensive step in the computation of the approximations \(a^{{\text {RVE}}}\) is the computation of the homogenization corrector as the solution to the PDE (3). In contrast, the generation of random coefficient fields a and the evaluation of the average of a is typically cheap. Therefore it is often worth generating about \(\frac{1}{\delta }\) independent realizations of a to obtain on average one realization of a which satisfies (9); for this single realization, the corrector equation (3) is solved numerically and the approximation \(a^{{\text {RVE}}}\) for the effective coefficient is computed. This strategy is also applicable to situations in which the probability distribution of the coefficient field is not known, but one has only access to a large number of samples of the coefficient field, like in applications in which one has access to data from actual material samples.
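The workflow just described can be sketched in a few lines of code. The following toy example is ours (not from [64]) and uses the one-dimensional two-phase setting, in which the effective coefficient of a sample is exactly the harmonic mean of the coefficient; cheap samples are drawn, and the corrector problem would only be solved for a sample satisfying the selection criterion. Since in one dimension the spatial average essentially determines \(a^{{\text {RVE}}}\), the variance reduction observed here is far stronger than what can be expected in dimensions \(d\geqq 2\).

```python
# Toy illustration of the selection approach in 1d (two-phase material).
# In 1d, homogenization is exact: the effective coefficient of a sample
# is its harmonic mean, so no PDE solve is needed for this demonstration.
import random
import statistics

random.seed(0)
ALPHA, BETA = 1.0, 5.0   # conductivities of the two phases (contrast 5)
CELLS = 100              # number of cells per representative volume
SAMPLES = 10000
DELTA = 0.2              # tolerance delta in the selection criterion

def draw_sample():
    return [random.choice((ALPHA, BETA)) for _ in range(CELLS)]

def a_rve(sample):
    # 1d effective coefficient: harmonic mean of the cell coefficients
    return len(sample) / sum(1.0 / a for a in sample)

samples = [draw_sample() for _ in range(SAMPLES)]
rve = [a_rve(s) for s in samples]
fs = [statistics.fmean(s) for s in samples]  # selection statistic: average of a

mu_f = statistics.fmean(fs)
sd_f = statistics.stdev(fs)
# keep only samples whose averaged coefficient is close to the ensemble mean
selected = [r for r, f in zip(rve, fs) if abs(f - mu_f) <= DELTA * sd_f]

ratio = statistics.variance(selected) / statistics.variance(rve)
print(f"kept {len(selected)} of {SAMPLES} samples; variance ratio {ratio:.3f}")
```

The printed ratio plays the role of \({{\text {Var}}~}a^{{\text {selRVE}}}/{{\text {Var}}~}a^{{\text {RVE}}}\); on average a fraction of roughly \(\delta \) of the samples passes the criterion, matching the cost estimate of about \(\frac{1}{\delta }\) cheap sample generations per corrector solve.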
Another remarkable feature of the selection approach for representative volumes by Le Bris, Legoll, and Minvielle is its compatibility with the vast majority of numerical homogenization methods: as the selection approach for representative volumes operates at the level of the choice of the coefficient field a, it may be combined with essentially any numerical discretization method for the corrector problem (59). Note that there exist many numerical homogenization methods that are particularly well-adapted to certain geometries of the microstructure; the selection approach for representative volumes may be employed in most of these methods to achieve a further speedup.
The selection approach for representative volumes is only one out of several variance reduction concepts in the context of stochastic homogenization: Blanc et al. [22, 23, 25] have succeeded in reducing the variance by the method of antithetic variables; note, however, that for this approach the achievable variance reduction factor is much more limited. The method of control variates has also been demonstrated to be successful in the context of the computation of effective coefficients in stochastic homogenization [25, 65].
1.6 A Brief Overview of Quantitative Stochastic Homogenization
Higher-order approximation results in terms of homogenized problems have been derived in [19, 20, 21, 56, 69], relying on the concept of higher-order correctors, which was first used in the stochastic homogenization context in [44] to establish Liouville principles of arbitrary order in the spirit of Avellaneda and Lin’s result in periodic homogenization [11]. Further works in quantitative stochastic homogenization include the analysis of non-divergence form equations [7], a regularity theory up to the boundary [45], degenerate elliptic equations [2, 46], and the homogenization of parabolic equations [3, 66]. Recently, Armstrong and Dario [4] and Dario [36] succeeded in establishing quantitative homogenization for supercritical Bernoulli bond percolation on the standard lattice.
The fluctuations of the mathematical objects arising in the stochastic homogenization of linear elliptic PDEs have been the subject of a beautiful series of works, starting with the work of Nolen [77] and a subsequent work of Gloria and Nolen [52] on quantitative normal approximation for (a single component of) the approximation of the effective conductivity \(a^{{\text {RVE}}}\), and a work of Mourrat and Otto [74] on the correlation structure of fluctuations in the homogenization corrector \(\phi _i\). Mourrat and Nolen [73] have shown a quantitative normal approximation result for the fluctuations of the corrector. Gu and Mourrat [57] have derived a description of fluctuations in the solutions to the equation with random coefficient field (1). Recently, a pathwise description of fluctuations of the solutions to the equation with random coefficient field (1)—namely, in terms of deterministic linear functionals of the so-called homogenization commutator \(\Xi :=(a-a_{\mathsf {hom}})({\text {Id}}+\nabla \phi )\), a random field converging (for \(\varepsilon \rightarrow 0\)) towards white noise—was developed by Duerinckx et al. [39]. The scaling limit of certain energetic quantities—related to the homogenization commutator—as well as the scaling limit of the homogenization corrector has been identified in the setting of finite range of dependence by Armstrong et al. [5]. As far as quantitative normal approximation results are concerned, all of these works operate under the assumption of i.i.d. coefficients (in the discrete setting) or second-order Poincaré inequalities. To the best of our knowledge, the present work provides the first quantitative description of fluctuations (though so far limited to the approximation of the effective conductivity \(a^{{\text {RVE}}}\)) when the decorrelation in the coefficient field is quantified by the assumption of finite range of dependence instead of functional inequalities.
Note that despite its long history [35, 63, 67, 78], the qualitative theory of stochastic homogenization has also been a very active area of research in the past years, see for example [10, 27, 58, 59]; however, due to the substantial length of the present manuscript we shall not provide a more detailed discussion and refer the reader to these references instead.
Notation Throughout the paper, we shall use standard notation for Sobolev spaces and weak derivatives; for a space-time function v(x, s), we denote by \(\nabla v\) its spatial gradient (in the weak sense) and by \(\partial _s v\) its (weak) time derivative. The notation \(\fint _B\) is used for the average integral over a set B of positive but finite Lebesgue measure. The space of measurable functions f with \(\Vert f\Vert _{L^p}:=(\int _{\mathbb {R}^d} |f|^p \,\mathrm{d}x)^{1/p}<\infty \) will be denoted by \(L^p\). By \(L^p_{loc}\) we denote the space of functions f with \(f\chi _{\{|x|\leqq R\}}\in L^p\) for all \(R<\infty \). We shall also use the weighted space \(L^p_{h}\) of functions with \(\Vert f\Vert _{L^p_h}:=(\int _{\mathbb {R}^d} |f(x)|^p h(x) \,\mathrm{d}x)^{1/p}<\infty \) for a nonnegative measurable weight function h. By \(H^1(\mathbb {R}^d)\) we denote as usual the Sobolev space of functions \(v\in L^2(\mathbb {R}^d)\) with \(\nabla v\in L^2(\mathbb {R}^d)\); similarly, \(H^1_{loc}(\mathbb {R}^d)\) is the space of functions v with \(v\in L^2_{loc}(\mathbb {R}^d)\) and \(\nabla v\in L^2_{loc}(\mathbb {R}^d)\). For a Banach space X we denote by \(L^p([0,T];X)\) the usual Lebesgue–Bochner space.
As usual, we shall denote by C and c constants whose value may change from occurrence to occurrence. We are going to use the notation \(\mathcal {C}(a)\) and similar expressions to denote a random constant subject to suitable moment bounds; again, the precise value of \(\mathcal {C}(a)\) may change from occurrence to occurrence.
For a vector \(v\in \mathbb {R}^m\) we denote by \(|v|\) its Euclidean norm. We denote the identity matrix in \(\mathbb {R}^{N\times N}\) by \({\text {Id}}\) or \({\text {Id}}_N\). For a matrix \(A\in \mathbb {R}^{m\times m}\) we shall denote by \(|A|\) its natural norm \(|A|:=\max _{v,w\in \mathbb {R}^m,|v|=|w|=1} v\cdot A w\) and by \(A^*\) its transpose (as all our matrices are real). For \(x\in \mathbb {R}^d\) we denote by \(|x|_\infty =\max _i |x_i|\) its supremum norm. By \(|x-y|_{{\text {per}}}\) respectively (for sets) \({\text {dist}}_{{\text {per}}}(U,V)\), we denote the periodicity-adjusted distance (in the context of the torus \([0,L\varepsilon ]^d\)). By \(|x-y|_\infty ^{{{\text {per}}}}\) and \({\text {dist}}^{{\text {per}}}_\infty (x,y)\), we denote the corresponding distances associated with the maximum norm. For a positive definite matrix A, we denote by \(\kappa (A)\) its condition number.
For a map \(f:\mathbb {R}^N\rightarrow V\) into a normed vector space V, we denote for any \(r>0\) by \({{\text {osc}}}_r f(x_0):=\sup _{x,y\in \{|x-x_0|\leqq r\}} \Vert f(x)-f(y)\Vert _V\) its oscillation in the ball of radius r around \(x_0\).
The conditional expectation of a random variable X given Y is denoted by \(\mathbb {E}[X|Y]\).
2 Main Results
 (A1)
Uniform ellipticity of a coefficient field a as usual means that there exists a positive real number \(\lambda >0\) such that almost surely we have \(a(x)v\cdot v \geqq \lambda |v|^2\) for almost every \(x\in \mathbb {R}^d\) and every \(v\in \mathbb {R}^d\). Furthermore, we assume uniform boundedness in the sense that almost surely \(|a(x)v|\leqq \frac{1}{\lambda }|v|\) holds for almost every \(x\in \mathbb {R}^d\) and every \(v\in \mathbb {R}^d\).
 (A2)
Stationarity means that the law of the shifted coefficient field \(a(\cdot +x)\) must coincide with the law of \(a(\cdot )\) for every \(x\in \mathbb {R}^d\). On a heuristic level, this means that “the probability distribution of a is everywhere the same” or, in other words, that the material is spatially statistically homogeneous.
 (A3)
Finite range of dependence \(\varepsilon \) means that for any two Borel sets \(A,B\subset \mathbb {R}^d\) with \({\text {dist}}(A,B)\geqq \varepsilon \) the restrictions \(a|_A\) and \(a|_B\) must be stochastically independent. In particular, this assumption restricts the correlations in the coefficient field to the scale \(\varepsilon \ll 1\).

(\(\hbox {A3}_a\)) The coefficient field a is almost surely \(L \varepsilon \mathbb {Z}^d\)-periodic.

(\(\hbox {A3}_b\)) There exists a finite range of dependence \(\varepsilon >0\) such that for any two measurable \(L \varepsilon \mathbb {Z}^d\)-periodic sets \(A,B\subset \mathbb {R}^d\) with \({\text {dist}}(A,B)\geqq \varepsilon \) the restrictions \(a|_A\) and \(a|_B\) are stochastically independent.

(\(\hbox {A3}_c\)) For any \(x_0\in \mathbb {R}^d\) the law of the restriction \(a|_{x_0+[-\frac{L\varepsilon }{4},\frac{L\varepsilon }{4}]^d}\) coincides with the corresponding law for some (non-periodic) ensemble of coefficient fields \(a^{\mathbb {R}^d}\) satisfying (A1)–(A3).
 (A2’)
We say that our probability distribution of coefficient fields a satisfies discrete stationarity if the law of the shifted coefficient field \(a(\cdot +x)\) coincides with the law of \(a(\cdot )\) for every shift \(x\in \varepsilon \mathbb {Z}^d\).
Assumption 1
Under the above assumptions, the selection approach for representative volumes—which, as proposed by Le Bris et al. [64], aims to capture certain statistical properties of the material particularly well in the representative volume—leads to the following increase in accuracy of the computed material coefficients:
Theorem 2
 (a) The systematic error of the approximation \(a^{{\text {selRVE}}}\) satisfies the estimate$$\begin{aligned} \big |\mathbb {E}\big [a^{{\text {selRVE}}}\big ]-a_{\mathsf {hom}}\big | \leqq \frac{C \kappa _{ij}^{3/2}}{\delta ^N} L^{-d} (\log L)^{C(d,\gamma )}. \end{aligned}$$(15)
 (b) The variance of the approximation \(a^{{\text {selRVE}}}\) is estimated from above by$$\begin{aligned} \frac{{{\text {Var}}~}a^{{\text {selRVE}}}_{ij}}{{{\text {Var}}~}a^{{\text {RVE}}}_{ij}} \leqq 1-(1-\delta ^2) \rho ^2 + \frac{C \kappa _{ij}^{3/2}r_{{\text {Var}},ij}}{\delta ^N} L^{-d/2} (\log L)^{C(d,\gamma )}, \end{aligned}$$(16)where \(\rho ^2\) is the fraction of the variance of \(a^{{\text {RVE}}}_{ij}\) explained by the \(\mathcal {F}(a)\), that is, \(\rho ^2\) is the maximum of the squared correlation coefficient between \(a^{{\text {RVE}}}_{ij}\) and any linear combination of the \(\mathcal {F}_n(a)\). The explained fraction of the variance is given by the formula$$\begin{aligned} \rho ^2 := \frac{{\text {Cov}}[a^{{\text {RVE}}}_{ij},\mathcal {F}(a)] \cdot ({{\text {Var}}~}\mathcal {F}(a))^{-1} {\text {Cov}}[\mathcal {F}(a),a^{{\text {RVE}}}_{ij}]}{{{\text {Var}}~}a^{{\text {RVE}}}_{ij}}. \end{aligned}$$(17)
 (c) The probability that a randomly chosen coefficient field a satisfies the selection criterion (14) is at least$$\begin{aligned} \mathbb {P}\big [|\mathcal {F}(a)-\mathbb {E}\big [\mathcal {F}(a)\big ]|\leqq \delta L^{-d/2}\big ] \geqq c(N) \delta ^N. \end{aligned}$$(18)
 (d) The systematic error and the variance of \(a^{{\text {selRVE}}}\) may be estimated independently of \(\kappa _{ij}\), at the price of a lower rate of convergence in L:$$\begin{aligned} \big |\mathbb {E}\big [a^{{\text {selRVE}}}\big ]-a_{\mathsf {hom}}\big | \leqq \frac{C}{\delta ^N} L^{-d/2-d/8} (\log L)^{C(d,\gamma )} \end{aligned}$$(19)and$$\begin{aligned} \frac{{{\text {Var}}~}a^{{\text {selRVE}}}_{ij}}{{{\text {Var}}~}a^{{\text {RVE}}}_{ij}} \leqq 1-(1-\delta ^2) \rho ^2 + \frac{Cr_{{\text {Var}},ij}}{\delta ^N} L^{-d/8} (\log L)^{C(d,\gamma )}. \end{aligned}$$(20)
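As a concrete illustration of formula (17), the following sketch (our own; the data-generating model is synthetic and not from the paper) estimates the explained fraction of the variance \(\rho ^2\) from samples of a scalar quantity X together with \(N=2\) statistical quantities \(\mathcal {F}=(\mathcal {F}_1,\mathcal {F}_2)\), solving the \(2\times 2\) system for \(({{\text {Var}}~}\mathcal {F})^{-1}{\text {Cov}}[\mathcal {F},X]\) explicitly.

```python
# Toy sketch: estimating rho^2 = Cov[X,F] (Var F)^{-1} Cov[F,X] / Var X
# from samples, for N = 2 statistical quantities F = (F1, F2).
import random
import statistics

random.seed(1)
n = 20000
f1 = [random.gauss(0.0, 1.0) for _ in range(n)]
f2 = [random.gauss(0.0, 1.0) for _ in range(n)]
noise = [random.gauss(0.0, 0.5) for _ in range(n)]
# synthetic "a^RVE": a linear combination of the statistics plus unexplained noise
x = [0.8 * u + 0.3 * v + w for u, v, w in zip(f1, f2, noise)]

def cov(p, q):
    mp, mq = statistics.fmean(p), statistics.fmean(q)
    return sum((a - mp) * (b - mq) for a, b in zip(p, q)) / (len(p) - 1)

c1, c2 = cov(x, f1), cov(x, f2)                        # Cov[X, F]
s11, s12, s22 = cov(f1, f1), cov(f1, f2), cov(f2, f2)  # Var F (2x2 matrix)
det = s11 * s22 - s12 * s12
# w = (Var F)^{-1} Cov[F, X], solved explicitly in the 2x2 case
w1 = (s22 * c1 - s12 * c2) / det
w2 = (s11 * c2 - s12 * c1) / det
rho2 = (c1 * w1 + c2 * w2) / cov(x, x)
print(f"explained fraction of variance: {rho2:.3f}")
```

By construction, the fraction \(0.73/0.98\approx 0.74\) of the variance of the synthetic X is explained by linear combinations of \(\mathcal {F}_1,\mathcal {F}_2\), and the estimate recovers roughly this value.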
The previous theorem states that the approximation of effective coefficients by the selection approach for representative volumes is essentially at least as accurate as a random selection of samples (except for a possible additional relative error of the order \(C L^{-d/2} (\log L)^C\), which however converges to zero quickly as L increases), at least when measuring the mean-square error. If the selection is based on a statistical quantity \(\mathcal {F}(a)\) which is capable of explaining a large part of the variance of \(a^{{\text {RVE}}}_{ij}\), the selection approach achieves a much better accuracy than a random selection of samples (namely, by a factor of about \(\sqrt{1-\rho ^2}\)).
However, the previous theorem only provides a statement about the reduction of the mean-square error by the selection approach for representative volumes. A natural question is whether this reduction of the error also applies to rare events: more precisely, if we fix a small probability \(p>0\), is the bound on the error \(|a^{{\text {selRVE}}}_{ij}-a_{{\mathsf {hom}},ij}|\) which holds with probability \(1-p\) also improved as suggested by the variance reduction estimate (16)? The following theorem shows that this is in fact true for “moderate deviations”, that is, basically for probabilities \(p\gtrsim \exp (-L^\beta )\) for some \(\beta >0\). More precisely, the theorem is to be read as follows: up to error terms that converge to zero as \(L\rightarrow \infty \) and \(s\rightarrow \infty \), the probability of \(a^{{\text {selRVE}}}_{ij}\) deviating from \(a_{{\mathsf {hom}},ij}\) by more than s times the ideally reduced standard deviation \(\sqrt{(1-\rho ^2){{\text {Var}}~}a^{{\text {RVE}}}_{ij}}\) behaves like the probability of a normal distribution deviating from its mean by more than s standard deviations, at least in some regime \(s\leqq L^{\beta /3}\).
Theorem 3
We have shown in the preceding two theorems that the selection approach for representative volumes by Le Bris et al. essentially does not increase the error; it succeeds in reducing the fluctuations of the approximations as soon as the functionals \(\mathcal {F}(a)\) and the approximation \(a^{{\text {RVE}}}\) have a nonzero covariance.
Theorem 4
Let us note that it is presumably not too difficult to replace the random checkerboard in our construction of the counterexample featuring (23) by random spherical inclusions distributed according to a Poisson point process (with overlaps of the inclusions). This would yield a counterexample subject to the continuous stationarity (A2).
The next theorem suggests that the failure of effective variance reduction is atypical and may be limited to rather artificial examples. For a large class of random coefficient fields—namely for coefficient fields that are obtained from a collection of iid random variables \(\xi _{k}\), \(k\in \varepsilon \mathbb {Z}^d\), by applying a stationary monotone map with finite range of dependence—the correlation coefficient between \(a^{{\text {RVE}}}\) and the average \(\fint a\,\mathrm{d}x\) is bounded from below by a positive number. Therefore, for such (ensembles of) coefficient fields both the method of special quasirandom structures and the method of control variates in fact reduce the variance by some factor \(\tau <1\) when applied with the choice \(\mathcal {F}(a)=\fint a\,\mathrm{d}x\).
Proposition 5
(Reduction of the Variance for a Large Class of Coefficient Fields) Let \(\varepsilon >0\), let \(L\geqq 2\) be an integer, and let V denote some measure space. Let \((\Gamma _k)\), \(k\in \varepsilon \mathbb {Z}^d\cap [0,L\varepsilon )^d\), be a collection of independent identically distributed V-valued random variables, and denote by \(({\tilde{\Gamma }}_k)\) an independent copy. Extend \(\Gamma _k\) to \(k\in \varepsilon \mathbb {Z}^d\) by \(L\varepsilon \)-periodicity. For \(k\in \varepsilon \mathbb {Z}^d\) and \(z\in V\), denote by \(\Delta _{k,z} \Gamma \) the collection obtained from \((\Gamma _j)\) by replacing the k-th entry by z, that is, \((\Delta _{k,z}\Gamma )_k:=z\) and \((\Delta _{k,z}\Gamma )_j:=\Gamma _j\) for all \(j\ne k\).
Let \(a=a(x,\Gamma )\) be a measurable map into the uniformly elliptic \(L\varepsilon \)-periodic symmetric coefficient fields with the property that \(a(x,\Gamma )\) depends only on the \(\Gamma _k\) with \(|x-k|_{{\text {per}}}\leqq K\varepsilon \) for some \(K\geqq 1\) (in a measurable way). Suppose that the map is stationary in the sense that \(a(x+y,\Gamma )=a(x,\Gamma _{\cdot +y})\) for any \(y\in \varepsilon \mathbb {Z}^d\).
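The variance reduction asserted here can be illustrated by a minimal Monte Carlo sketch (a toy jointly Gaussian model of ours, not the setting of the proposition): subtracting from \(X\) the fitted multiple of a correlated control \(F\) reduces the variance by the factor \(1-\rho ^2\), which is the mechanism behind the method of control variates.

```python
# Toy illustration (ours, not from the paper): a control variate F with
# correlation rho to X reduces Var X by the factor 1 - rho^2.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
f = rng.standard_normal(n)                    # the "statistical quantity" F
x = 0.6 * f + 0.8 * rng.standard_normal(n)    # Var X = 1, corr(X, F) = 0.6

c = np.cov(x, f)[0, 1] / np.var(f)            # fitted control-variate weight
x_cv = x - c * (f - np.mean(f))               # control-variate estimator

print(np.var(x), np.var(x_cv))                # approx. 1.0 and 1 - 0.6^2 = 0.64
```

The same factor \(1-\rho ^2\) appears in the ideally reduced standard deviation of the selection approach; the two methods exploit the identical covariance information.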
In the statements of our main theorems, we have made use of the following notion of “multilevel local dependence decomposition”; this structure will also be at the heart of the proof of our main results (an illustration of this decomposition is provided in Fig. 4):
Definition 6
(Sums of Random Variables with Multilevel Local Dependence Structure) Let \(d\geqq 1\), \(N\in \mathbb {N}\), \(\varepsilon >0\), and \(L\geqq 2\). Consider a probability distribution of coefficient fields a on \(\mathbb {R}^d\) subject to the assumptions of ellipticity and boundedness, stationarity, and finite range of dependence \(\varepsilon \) (A1), (A2), and (A3), or the periodization of such an ensemble subject to the conditions (A1), (A2), and (A3\(_a\))–(A3\(_c\)). Let \(X=X(a)\) be an \(\mathbb {R}^N\)-valued random variable.

The random variable \(X_y^m(a)\) only depends on \(a|_{y+K \log L \, [-2^m \varepsilon ,2^m \varepsilon ]^d}\). More precisely, \(X_y^m(a)\) is a measurable function of \(a|_{y+K \log L \, [-2^m \varepsilon ,2^m \varepsilon ]^d}\), equipped with the topology of H-convergence.
 We have$$\begin{aligned} X=\sum _{m=0}^{1+\log _2 L} \sum _{y\in 2^m \varepsilon \mathbb {Z}^d\cap [0,L\varepsilon )^d} X_y^m. \end{aligned}$$
 The random variables \(X_y^m\) satisfy the bound$$\begin{aligned} \big \Vert X_y^m\big \Vert _{\exp ^\gamma } \leqq B L^{-d}. \end{aligned}$$(26)
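To see the scaling encoded in the bound (26), consider the following toy bookkeeping (our own sketch; it assumes the summands are independent, which the definition does not require, but independence makes the variance computation exact): summing \((L/2^m)^d\) variables of size \(B L^{-d}\) over all dyadic levels yields \({{\text {Var}}~}X\) of order \(L^{-d}\), the CLT scaling of the RVE error.

```python
# Toy variance bookkeeping (ours; the actual X_y^m are only local, not
# independent): independent summands bounded by B L^{-d} on dyadic grids of
# spacing 2^m give Var X of order L^{-d}.
import numpy as np

def multilevel_variance(L, d=2, B=1.0):
    """Exact Var X for independent X_y^m ~ Uniform[-B L^-d, B L^-d]."""
    var_summand = (B * float(L) ** (-d)) ** 2 / 3.0
    M = int(np.log2(L))
    total = 0.0
    for m in range(M + 2):                # levels m = 0, ..., 1 + log2 L
        count = max(1, L // 2 ** m) ** d  # dyadic grid 2^m Z^d in [0, L)^d
        total += count * var_summand      # independence: variances add
    return total

# L^d * Var X stays of order one as L grows, i.e. Var X ~ L^{-d}
scaled = [L ** 2 * multilevel_variance(L) for L in (8, 16, 32, 64)]
print(scaled)
```

The coarse levels contribute a geometric series \(\sum _m 2^{-md}\), which is why the multilevel structure costs only a constant factor compared with a single-level local dependence structure.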
The next proposition shows that the approximation \(a^{{\text {RVE}}}\) of the effective coefficient by the method of representative volumes may indeed be rewritten as a sum of random variables with a multilevel local dependence structure. We establish the same result for the spatial average \(\mathcal {F}_{avg}(a)\) of the coefficient field and the second-order term \(\mathcal {F}_{2\mathrm{point}}(a)\) in the low ellipticity contrast expansion of \(\smash {a^{{\text {RVE}}}}\) given by (10).
Furthermore, the last result of the next proposition shows that the fraction of the variance of \(a^{{\text {RVE}}}\) that is explained by the statistical quantities \(\mathcal {F}_{avg}(a)\) and \(\mathcal {F}_{2\mathrm{point}}(a)\)—that is, the gain in accuracy achieved by the selection approach for representative volumes when employing these statistical quantities—stabilizes as the size L of the representative volume increases; more precisely, it converges to some limit with rate \(L^{-d/2}(\log L)^C\).
Proposition 7
It is interesting to compare our approach to the quantitative normal approximation of \(a^{{\text {RVE}}}\) with concepts employed in the derivation of optimal error estimates in stochastic homogenization [5, 6, 55]. A central theme in [5] is the approximate additivity of certain energetic quantities: the energy quantity on a certain scale may approximately be written as a sum of the energy quantities on smaller scales, allowing for an application of the central limit theorem. In [55], the application of the central limit theorem is facilitated by the homogenization of the flux propagation in the parabolic semigroup associated with the random elliptic operator. In our context, while we also introduce an additive decomposition of \(a^{{\text {RVE}}}\), we do not require the summands to be of the same structure as \(a^{{\text {RVE}}}\) and allow for a multilevel structure. This enables us to derive an optimal-order normal approximation result for the fluctuations.
Note that in [5, 6] a certain localization property of the considered energy quantity has been established. In principle, sufficiently strong localization properties of a random field allow for a multilevel decomposition of (linear functionals of) the random field in the sense of Definition 6 and therefore for an application of our quantitative normal approximation result in Theorem 9; see, in particular, the proof of [43, Theorem 2] for such a construction. However, the locality of the energy quantity established in [5, 6] is non-optimal and in general not sufficient for our purposes. In the forthcoming work [37], an optimal-order localization result for (linear functionals of) the homogenization commutator \(\Xi :=(a-a_{\mathsf {hom}})({\text {Id}}+\nabla \phi )\) will be provided, implying an optimal-order normal approximation result.
3 Strategy of the Proof and Intermediate Results
Our main result relies on a quantitative normal approximation result for the joint probability distribution of the approximation of the effective conductivity \(a^{{\text {RVE}}}\) and auxiliary random variables \(\mathcal {F}(a)\) like the spatial average \(\mathcal {F}_{avg}(a)\). The distance of the probability distribution to a multivariate Gaussian will be quantified through the following notion of distance between probability measures. Note that this distance is a standard choice in the theory of multivariate normal approximation, see for example [33] and the references therein.
Definition 8

\(\phi \) is smooth and its first derivative is bounded in the sense \(|\nabla \phi (x)| \leqq {\bar{L}}\) for all \(x\in \mathbb {R}^N\).
 For any \(r>0\) and any \(x_0\in \mathbb {R}^N\), we have$$\begin{aligned} \int _{\mathbb {R}^N} {{\text {osc}}}_r \phi (x) ~\mathcal {N}_{\Lambda }(x-x_0) \,\mathrm{d}x \leqq r, \end{aligned}$$(29)where \({{\text {osc}}}_r \phi (x)\) is the oscillation of \(\phi \) defined as$$\begin{aligned} {{\text {osc}}}_r\phi (x):=\sup _{|z|\leqq r}\phi (x+z)-\inf _{|z|\leqq r} \phi (x+z) \end{aligned}$$and where$$\begin{aligned} \mathcal {N}_{\Lambda }(x):=\frac{1}{(2\pi )^{N/2}\sqrt{\det \Lambda }} \exp \bigg (-\frac{1}{2}\Lambda ^{-1} x \cdot x\bigg ). \end{aligned}$$
It is well-known that Stein’s method of normal approximation allows one to establish a quantitative result on normal approximation for sums of random variables with local dependence structure, see for example [33, 34, 80] and the references therein. However, the approximation of the effective coefficient \(a^{{\text {RVE}}}\)—that is, the random variable \(a^{{\text {RVE}}}\) as defined by (4)—features global dependencies. It is shown in Proposition 7 that \(a^{{\text {RVE}}}\) may nevertheless be approximated by a sum of random variables with a multilevel local dependence structure. We then employ the following quantitative central limit theorem for sums of vector-valued random variables with a multilevel local dependence structure, which is not covered by the normal approximation results for sums of random variables with a given dependency graph in the literature and which is established in the companion article [43]:
Theorem 9
Our result on moderate deviations of the probability distribution of \(a^{{\text {selRVE}}}\) is based on the following simple general moderate deviations result for sums of random variables with multilevel local dependence structure:
Theorem 10
([43, Theorem 5]) Consider an ensemble of coefficient fields a on \(\mathbb {R}^d\), \(d\geqq 1\), or its periodization for some \(L\geqq 1\), subject to the conditions (A1)–(A3) respectively (A1), (A2), and (A3\(_a\))–(A3\(_c\)). Let \(X=X(a)\) be a random variable that may be written as a sum of random variables with multilevel local dependence structure \(X=\sum _{m=0}^{1+\log _2 L} \sum _{i\in 2^m \varepsilon \mathbb {Z}^d \cap [0,L\varepsilon )^d} X_i^m\) in the sense of Definition 6.
4 Justification of the Selection Approach for Representative Volumes
We now provide the proof of our main result—the error estimates for the selection approach for representative volumes by Le Bris et al. [64]—which is stated in Theorems 2 and 3.
The idea for the proof of all statements of Theorem 2 is that Theorem 9, in conjunction with Proposition 7, enables us to approximate the joint probability distribution of \(a^{{\text {RVE}}}\) and \(\mathcal {F}(a)\) by a multivariate Gaussian with the same covariance matrix. The probability distribution of \(a^{{\text {selRVE}}}\) arises as the probability distribution of \(a^{{\text {RVE}}}\) conditioned on the event (14). As a consequence, the probability distribution of \(a^{{\text {selRVE}}}\) may be approximated by the marginal of the conditional probability distribution of an ideal multivariate Gaussian. The results of Theorem 2 on the probability distribution of \(a^{{\text {selRVE}}}\) are then a consequence of corresponding properties of multivariate normal distributions.
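The conditioning mechanism for multivariate Gaussians can be checked by a minimal Monte Carlo sketch (a toy bivariate model of ours, not the actual joint distribution of \(a^{{\text {RVE}}}\) and \(\mathcal {F}(a)\)): conditioning a bivariate Gaussian \((X,F)\) with correlation \(\rho \) on the selection event \(|F-\mathbb {E}[F]|\leqq \delta \) shrinks the variance of \(X\) towards \((1-\rho ^2){{\text {Var}}~}X\).

```python
# Toy check (ours) of the Gaussian conditioning heuristic: selecting samples
# with |F - E F| <= delta reduces Var X towards (1 - rho^2) Var X.
import numpy as np

rng = np.random.default_rng(1)
rho, delta = 0.8, 0.05
n = 2_000_000
f = rng.standard_normal(n)                                  # selection functional F
x = rho * f + np.sqrt(1 - rho**2) * rng.standard_normal(n)  # Var X = 1

x_sel = x[np.abs(f) <= delta]    # the "selected representative volumes"
print(np.var(x), np.var(x_sel))  # approx. 1.0 and 1 - 0.8^2 = 0.36
```

For positive \(\delta \) the conditional variance exceeds the ideal value \((1-\rho ^2){{\text {Var}}~}X\) by a term of order \(\delta ^2\), matching the role of the selection tolerance in the main theorems.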
Proof of Theorem 2
For the proof of the theorem we may assume without loss of generality that \(\mathbb {E}[\mathcal {F}(a)]=0\). Throughout the proof, the constants c and C may depend on d, \(\lambda \), N, \(\gamma \), \(c_0\), and \(C_0\), if not otherwise stated.
Note that the lower bound (22) on the variance given in Theorem 4 follows also from the estimates (43) and (15) and the lower bound \(\int (x-\mathbb {E}[a^{{\text {RVE}}}_{ij}])^2 \mathcal {M}^\delta (x) \,\mathrm{d}x\geqq (1-\rho ^2){{\text {Var}}~}a^{{\text {RVE}}}_{ij}\), the latter of which is derived analogously to the upper bound \(\int (x-\mathbb {E}[a^{{\text {RVE}}}_{ij}])^2 \mathcal {M}^\delta (x) \,\mathrm{d}x\leqq (1-(1-\delta ^2)\rho ^2){{\text {Var}}~}a^{{\text {RVE}}}_{ij}\).
The estimate (36b) is a consequence of the estimate on \({{\text {Var}}~}(a^{{\text {RVE}}},\mathcal {F}(a))\) which follows from (36a), (13), and the exponential moment bounds for Gaussians. The bound (36a) is a consequence of Lemma 12 (note that by Proposition 7, Lemma 12 is indeed applicable).
Plugging in the estimate (48), the lower bound (18), and the estimate (49) as well as the assumption (38a) into (46), we deduce (37). The estimate (39) follows by repeating the above steps, but appealing in the proof of (48) to the bound (33) instead of (32) and choosing \(\Lambda :={{\text {Var}}~}(a^{{\text {RVE}}}_{ij},\mathcal {F}(a))+L^{-d/2-d/8}{\text {Id}}\) (which ensures by (13) that \(\kappa (\Lambda )\leqq CL^{d/8}\)). \(\quad \square \)
We now turn to the proof of the moderatedeviationstype result for the selection approach for representative volumes stated in Theorem 3.
Proof of Theorem 3
5 The Multilevel Local Dependence Structure of the Approximation for the Effective Conductivity
We now prove that the approximation \(a^{{\text {RVE}}}\) for the effective conductivity obtained by the representative volume element method may indeed be written as a sum of a family of random variables with multilevel local dependence structure in the sense of Definition 6. Furthermore, we show that the same is true for the spatial average \(\mathcal {F}_{avg}(a)\) of the coefficient field and also for the second-order correction \(\mathcal {F}_{2\mathrm{point}}(a)\) to \(a^{{\text {RVE}}}\) in the setting of small ellipticity contrast.
Proof of Proposition 7
Part 2: The approximation \(a^{{\text {RVE}}}\) for the effective coefficient. Next, let us show that \(a^{{\text {RVE}}}\) is approximately the sum of a family of random variables with multilevel local dependence structure. For simplicity of notation, let us assume that \(\varepsilon =1\).
Theorem 11
Note that the second inequality (62b) is actually not contained in [55, Corollary 4]. However, it is an easy consequence of (62a) (the proof is provided below).
The proof of the other cases is analogous. \(\quad \square \)
Proof of Theorem 11
In the previous proofs, we have made use of the following elementary concentration estimate for sums of random variables with multilevel local dependence:
Lemma 12
6 Failure and Success of the Variance Reduction Approaches
We now establish our theorems on the failure and the success of the variance reduction approaches in stochastic homogenization. We start with the counterexample that shows that in general there is no guarantee that the variance reduction techniques provide an effective reduction of the variance, even for “natural” choices of the statistical quantity \(\mathcal {F}(a)\) like the spatial average \(\mathcal {F}_{avg}(a)\).
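To make the objects of this section concrete, here is a minimal numerical sketch of the cell formula on a discrete analogue (our own toy implementation, not the authors' code): a periodic random conductance network in \(d=2\), for which the corrector problem is a small sparse linear system. The discretization and all function names are ours.

```python
# Minimal sketch (ours) of the cell formula for a periodic conductance network.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def rve_coefficient(cx, cy, e):
    """Return the energy average a_RVE e . e for the n x n periodic network
    with conductance cx[i, j] on the edge (i, j) -> (i+1, j) and cy[i, j]
    on the edge (i, j) -> (i, j+1) (indices modulo n)."""
    n = cx.shape[0]
    N = n * n
    idx = lambda i, j: (i % n) * n + (j % n)
    rows, cols, vals = [], [], []
    b = np.zeros(N)
    for i in range(n):
        for j in range(n):
            p = idx(i, j)
            for di, dj, c, comp in ((1, 0, cx[i, j], e[0]), (0, 1, cy[i, j], e[1])):
                q = idx(i + di, j + dj)
                rows += [p, p, q, q]      # graph Laplacian of the edge {p, q}
                cols += [p, q, q, p]
                vals += [c, -c, c, -c]
                b[p] += c * comp          # right-hand side: discrete -div(a e)
                b[q] -= c * comp
    A = sp.csr_matrix((vals, (rows, cols)), shape=(N, N)).tolil()
    A[0, :] = 0.0   # pin the corrector at node 0 to fix the additive constant
    A[0, 0] = 1.0
    b[0] = 0.0
    phi = spla.spsolve(A.tocsr(), b)      # periodic corrector problem
    # cell formula: average the energy a (e + grad phi) . (e + grad phi)
    energy = 0.0
    for i in range(n):
        for j in range(n):
            p = idx(i, j)
            for di, dj, c, comp in ((1, 0, cx[i, j], e[0]), (0, 1, cy[i, j], e[1])):
                q = idx(i + di, j + dj)
                energy += c * (comp + phi[q] - phi[p]) ** 2
    return energy / N

# horizontally layered field: arithmetic mean along, harmonic mean across
lam = 0.5
v = np.where(np.arange(8) % 2 == 0, 1.0, lam)
cx = np.tile(v, (8, 1))     # cx[i, j] = v[j]: stripes of constant j
cy = cx.copy()
print(rve_coefficient(cx, cy, (1.0, 0.0)))  # approx. (1 + lam) / 2 = 0.75
print(rve_coefficient(cx, cy, (0.0, 1.0)))  # approx. 2 lam / (1 + lam) = 2/3
```

For this layered test case the output reproduces the arithmetic and harmonic means appearing in the stripe computations below; sampling cx and cy per tile instead gives a discrete random checkerboard, for which one may estimate the covariance of \(a^{{\text {RVE}}}\) with the spatial average empirically.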
Proof of Theorem 4
Before turning to the main result of Theorem 4, the failure of the spatial average \(\mathcal {F}_{avg}(a)\) to explain a fraction of the variance of \(a^{{\text {RVE}}}\) (inequality (23)), let us first show (22). The estimate (22) is in fact a consequence of the estimate (43) in the proof of Theorem 2 in combination with (41) and the lower bound for the variance of \(\mathcal {M}^\delta \), which is a straightforward consequence of the formula (34) and the definition \({{\text {Var}}}_{{\text {unexpl}}}~a^{{\text {RVE}}}_{ij}=(1-\rho ^2)\,{{\text {Var}}~}a_{ij}^{{\text {RVE}}}\).
Note that the derivation of (24) from (23) requires the estimate (22) under the assumption (A2’) instead of (A2). However, the only place where the assumption (A2) entered in our analysis is in Proposition 7, where it was used to apply the result of [55] on the decay of the semigroup. The arguments of [55] may, however, be modified to yield the corresponding estimate under the assumption of discrete stationarity (A2’).

The approximation \(a^{{\text {RVE}}}\) for the effective coefficient depends in a uniformly continuous way on a as a map \(L^\infty ([0,L\varepsilon ]^d;\mathbb {R}^{d\times d})\rightarrow \mathbb {R}^{d\times d}\), as long as a is uniformly elliptic and bounded.
 Consider a probability distribution of coefficient fields a for which a is almost surely almost everywhere a multiple of the identity matrix. If in addition the law of a is invariant under reflections of coordinate axes and invariant under exchange of coordinate axes (that is, invariant under diagonal reflections), the covariance is a multiple of \({\text {Id}}\otimes {\text {Id}}\). For a proof of this fact, see Lemma 13, below.
 Consider the “periodized random checkerboard” with the set of tiles \(\mathcal {T}:=\{x_0+[0,\varepsilon )^d: x_0\in \varepsilon \mathbb {Z}^d\cap [0,L\varepsilon )^d\}\). On each tile \(T\in \mathcal {T}\), choose at random (and independently from the other tiles) \(a(x)={\text {Id}}\) with probability 0.5 and \(a(x)=\frac{1}{2} {\text {Id}}\) with probability 0.5. By Proposition 5 and the preceding considerations, for this probability distribution the covariance is a positive multiple of \({\text {Id}}\otimes {\text {Id}}\); in fact, one has a lower bound of the form \(\gtrsim L^{-d}\,{\text {Id}}\otimes {\text {Id}}\).
 We now consider a “periodized random checkerboard with microstructure” with the set of tiles \(\mathcal {T}:=[0,\varepsilon )^d+ (\varepsilon \mathbb {Z}^d\cap [0,L\varepsilon )^d)\): Fix some \(\tau \ll 1\) with \(1/\tau \in 2\mathbb {N}\). On each tile \(T=\varepsilon k +[0,\varepsilon )^d \in \mathcal {T}\), choose at random (and independently from the other tiles) \(a_\tau (x)=\sigma {\text {Id}}\) with probability 0.5 (where \(\sigma >0\) is to be chosen below) and \(a_\tau (x)=A_{\tau }((x\varepsilon k)/\varepsilon )\) with probability 0.5, where \(A_{\tau }:[0,1]^2\rightarrow \mathbb {R}^{2\times 2}\) is the tile described in Fig. 5, rotated and reflected at random (with equal probability for all 8 orientations and independently on all such tiles; see Fig. 6 for an illustration). The probability distribution of a satisfies the same isotropy properties as in the case of the periodized random checkerboard. Thus, by Lemma 13 the covariance is a multiple of \({\text {Id}}\otimes {\text {Id}}\).
 We shall argue below that for suitable \(\sigma ,\lambda ,\mu >0\) and for \(\tau \ll 1\) small enough the covariance is negative; in fact, one has an upper bound of the form \(\lesssim -L^{-d}\, {\text {Id}}\otimes {\text {Id}}\).
 Linearly interpolating between \(a_\tau \) and a, that is, considering for \(\kappa \in [0,1]\) the coefficient field$$\begin{aligned} a_{\tau ,\kappa }:=(1-\kappa ) a + \kappa a_\tau \end{aligned}$$defined on the product probability space (that is, for independent \(a_\tau \) and a), we find a probability distribution of coefficient fields \({\tilde{a}}\) for which the covariance vanishes. This is possible by the continuous dependence of \(a^{{\text {RVE}}}\) and \(\mathcal {F}_{avg}\) on a (and hence the continuous dependence on \(\kappa \in [0,1]\) in the case of the family \(a_{\tau ,\kappa }\)), by the fact that for all \(\kappa \in [0,1]\) the covariance is a multiple of \({\text {Id}}\otimes {\text {Id}}\) (this latter property holds again by the isotropy properties of the probability distribution and Lemma 13, below), and by the intermediate value theorem, since the covariance has opposite signs at \(\kappa =0\) and \(\kappa =1\).
 For any \(\kappa \in (0,1)\) the variances \({{\text {Var}}~}\mathcal {F}_{avg}(a_{\tau ,\kappa })\) and \({{\text {Var}}~}a^{{\text {RVE}}}_{\tau ,\kappa }\) are nondegenerate in the sense \(\gtrsim L^{-d}\,{\text {Id}}\otimes {\text {Id}}\). For the spatial average \(\mathcal {F}_{avg}(a_{\tau ,\kappa })\) this nondegeneracy is an easy consequence of the formula \({{\text {Var}}~}\mathcal {F}_{avg}(a_{\tau ,\kappa })=(1-\kappa )^2\,{{\text {Var}}~}\mathcal {F}_{avg}(a)+\kappa ^2\,{{\text {Var}}~}\mathcal {F}_{avg}(a_\tau )\) (which follows from the definition of \(a_{\tau ,\kappa }\) and the independence of a and \(a_\tau \)) and the fact that the latter two variances satisfy such a lower bound (note that the spatial average of the coefficient field on a tile with microstructure \(A_{\tau }\) does not equal \(\sigma {\text {Id}}\)). The nondegeneracy of \({{\text {Var}}~}a_{\tau ,\kappa }^{{\text {RVE}}}\) is shown as follows: first, a new coefficient field \(a_{\tau ,\kappa ,{\text {eff}}}\) is introduced by letting \(a_{\tau ,\kappa ,{\text {eff}}}=a_{\tau ,\kappa }\) on each tile without microstructure but replacing the values of \(a_{\tau ,\kappa }\) by the effective coefficient from periodic homogenization on each tile with microstructure. Note that \(a_{\tau ,\kappa ,{\text {eff}}}\) corresponds to a standard random checkerboard. Denote by \(a_{\tau ,\kappa ,{\text {eff}}}^{{\text {RVE}}}\) the approximation for the effective coefficient associated with the coefficient field \(a_{\tau ,\kappa ,{\text {eff}}}\) (that is, the result of formula (8) for the coefficient field \(a_{\tau ,\kappa ,{\text {eff}}}\)). The nondegeneracy of \({{\text {Var}}~}a_{\tau ,\kappa }^{{\text {RVE}}}\) now follows from the nondegeneracy \({{\text {Var}}~}a_{\tau ,\kappa ,{\text {eff}},ii}^{{\text {RVE}}}\gtrsim L^{-d}\) and the convergence \(a_{\tau ,\kappa }^{{\text {RVE}}}-a_{\tau ,\kappa ,{\text {eff}}}^{{\text {RVE}}}\rightarrow 0\) for \(\tau \rightarrow 0\) (uniformly in \(\kappa \), see below). 
Note that \(a_{\tau ,\kappa ,{\text {eff}}}^{{\text {RVE}}}\) corresponds to a random checkerboard with tiles \((\kappa \sigma + (1-\kappa )){\text {Id}}\), \(\kappa \sigma + (1-\kappa )\cdot \frac{1}{2}{\text {Id}}\), \(\kappa A_{\tau } + (1-\kappa ){\text {Id}}\), and \(\kappa A_{\tau } + (1-\kappa ) \cdot \frac{1}{2}{\text {Id}}\), each tile chosen with probability \(\frac{1}{4}\) (and the microscopic tiles rotated and reflected at random). Thus the nondegeneracy of \({{\text {Var}}~}a_{\tau ,\kappa ,{\text {eff}},ii}^{{\text {RVE}}}\) for \(1\leqq i\leqq d\) follows from the covariance estimate of Proposition 5 and the corresponding quantitative upper bound.
 Consider our (sub)pattern of periodic horizontal stripes of equal height (that is, the red-and-blue subpattern in Fig. 5), in which the coefficient field a alternatingly takes the values \({\text {Id}}\) and \(\lambda {\text {Id}}\). Then the (large-scale) effective coefficient for this pattern is given by$$\begin{aligned} \begin{pmatrix} \frac{1+\lambda }{2}&{}0\\ 0&{}\frac{2\lambda }{1+\lambda } \end{pmatrix}, \end{aligned}$$that is, by the arithmetic mean in the horizontal direction and by the harmonic mean in the vertical direction.
 Consider now the pattern of periodic vertical stripes of equal width, in which the coefficient alternatingly takes the value \(\mu {\text {Id}}\) and, respectively, is given by the pattern of horizontal stripes from the previous step. The effective coefficient for this (second-order laminate) pattern is (at least in the limit of an infinitesimally fine horizontal pattern) given by the arithmetic mean of the effective coefficients in the vertical direction and the harmonic mean of the effective coefficients in the horizontal direction, that is, by$$\begin{aligned} \begin{pmatrix} \frac{2\mu (1+\lambda )}{2\mu +1+\lambda }&{}\quad 0\\ 0&{}\quad \frac{\lambda }{1+\lambda }+\frac{\mu }{2} \end{pmatrix}. \end{aligned}$$Choosing \(\mu :=\frac{3\lambda ^2+(1-\lambda )\sqrt{9\lambda ^2+14\lambda +9}+2\lambda +3}{4(\lambda +1)}\) (which is positive for any \(\lambda \in (0,1]\)), the effective coefficient becomes a multiple of the identity matrix. Note that the spatial average of the coefficient field on a tile is given by$$\begin{aligned} \frac{\mu +\frac{\lambda +1}{2}}{2} {\text {Id}}. \end{aligned}$$
 Consider the coefficient field \(a_{\tau ,{\text {eff}}}\) that is obtained from our random checkerboard with microstructure \(a_\tau \) by replacing \(a_\tau \) on the tiles with microstructure with the effective coefficient \((\frac{\lambda }{1+\lambda }+\frac{\mu }{2}){\text {Id}}\). The coefficient field \(a_{\tau ,{\text {eff}}}\) is now just a usual random checkerboard; by Lemma 13 and Proposition 5, the covariance is a positive multiple of \({\text {Id}}\otimes {\text {Id}}\), and we have a lower bound of the form \(\geqq cL^{-d}\, {\text {Id}}\otimes {\text {Id}}\) for the choice of \(\lambda \), \(\mu \), and \(\tau \) to be made below. Note that \(a_{\tau ,{\text {eff}}}\)—and hence also the preceding covariance—is actually independent of \(\tau \) (we just keep the \(\tau \) to emphasize that \(a_{\tau ,{\text {eff}}}\) is the coefficient field obtained from \(a_\tau \) in the homogenization limit \(\tau \rightarrow 0\)). We shall prove below that \(a_\tau ^{{\text {RVE}}}\) is (quantitatively) close to \(a_{\tau ,{\text {eff}}}^{{\text {RVE}}}\) for \(\tau \ll 1\) small enough, which implies that the covariance \({\text {Cov}}\big [a_\tau ^{{\text {RVE}}},\mathcal {F}_{avg}(a_{\tau ,{\text {eff}}})\big ]\) is close to a positive multiple of \({\text {Id}}\otimes {\text {Id}}\) (again with a lower bound of the form \(\geqq c L^{-d}\, {\text {Id}}\otimes {\text {Id}}\)).
 The average \(\mathcal {F}_{avg}(a_\tau )\) is an affine function of \(\mathcal {F}_{avg}(a_{\tau ,{\text {eff}}})\): The coefficient field \(a_{\tau ,{\text {eff}}}\) is constant on each tile and may only take the values \(\sigma {\text {Id}}\) or \((\frac{\lambda }{1+\lambda }+\frac{\mu }{2}){\text {Id}}\). On the tiles on which the value of \(a_{\tau ,{\text {eff}}}\) is \(\sigma {\text {Id}}\), \(a_\tau \) also takes the constant value \(\sigma {\text {Id}}\). However, on the tiles on which \(a_{\tau ,{\text {eff}}}\) is given by \((\frac{\lambda }{1+\lambda }+\frac{\mu }{2}){\text {Id}}\) (that is, on the tiles on which \(a_\tau \) features a microstructure), the average of \(a_\tau \) is \(\frac{2\mu +\lambda +1}{4}{\text {Id}}\). Choosing \(\sigma \) such that \(\sigma >\frac{\lambda }{1+\lambda }+\frac{\mu }{2}\) but \(\sigma <\frac{2\mu +\lambda +1}{4}\) (which is possible for \(\lambda >0\) small enough), we obtain a relation of the form \(\mathcal {F}_{avg}(a_\tau )=A\,{\text {Id}}-B\,\mathcal {F}_{avg}(a_{\tau ,{\text {eff}}})\) for suitable positive constants A and B. Thus, the sign of the covariance flips upon replacing the \(a_{\tau ,{\text {eff}}}\) by \(a_\tau \) in the spatial average; that is, \({\text {Cov}}\big [a_\tau ^{{\text {RVE}}},\mathcal {F}_{avg}(a_\tau )\big ]\) must be a negative multiple of \({\text {Id}}\otimes {\text {Id}}\), with an upper bound of the form \(\leqq -cL^{-d}\, {\text {Id}}\otimes {\text {Id}}\).
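The laminate algebra used in the preceding steps can be verified numerically; the following sketch (our own, with our own function names) checks that the stated choice of \(\mu \) renders the second-order laminate isotropic and that the window for \(\sigma \) in the last step is nonempty.

```python
# Sketch (ours) verifying the laminate computations above.
import math

def mu_isotropic(lam):
    """The root of 2(1+lam) mu^2 - (3 lam^2 + 2 lam + 3) mu + 2 lam (1+lam) = 0
    singled out in the text."""
    return (3 * lam**2 + (1 - lam) * math.sqrt(9 * lam**2 + 14 * lam + 9)
            + 2 * lam + 3) / (4 * (lam + 1))

def laminate_effective(lam, mu):
    """Diagonal of the effective coefficient of the second-order laminate:
    harmonic mean across the vertical stripes, arithmetic mean along them."""
    axx = 2 * mu * (1 + lam) / (2 * mu + 1 + lam)
    ayy = lam / (1 + lam) + mu / 2
    return axx, ayy

for lam in (0.05, 0.1, 0.3, 0.9):
    mu = mu_isotropic(lam)
    axx, ayy = laminate_effective(lam, mu)
    nu = lam / (1 + lam) + mu / 2       # effective value on a micro-tile
    w = (2 * mu + lam + 1) / 4          # spatial average on a micro-tile
    assert mu > 0 and abs(axx - ayy) < 1e-12 and nu < w
```

The isotropy condition is equivalent to the quadratic equation noted in the first docstring; the displayed \(\mu \) is its larger root, which is the one positive for all \(\lambda \in (0,1]\).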
For the remainder of the proof, we shall fix without loss of generality \(\varepsilon :=1\) to avoid even more cumbersome notation. For the same reason, we only give the proof in the case that all tiles with microstructure have the same orientation as in Fig. 5.
Lemma 13
Proof
We now turn to the proof of our theorem on successful variance reduction for random coefficient fields that are obtained by applying “monotone” functions to a collection of iid random variables.
Proof of Proposition 5
Without loss of generality (by rescaling), we may consider the case \(\varepsilon =1\).
In the previous proof, we have used the following standard estimate for covariances of nonlinear functions of a finite number of independent random variables:
Lemma 14
Proof
The proof proceeds similarly to the proof of the standard form of this lemma which provides the weaker assertion \({\text {Cov}}[f(X),g(X)]\geqq 0\); see for example [68, page 24] or [23, Lemma 2.1].
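The mechanism can be illustrated numerically (a toy example of ours, not part of the proof): for coordinatewise nondecreasing functions of independent random variables, the empirical covariance is indeed nonnegative, and strictly positive unless one of the functions is degenerate.

```python
# Toy check (ours) of the Chebyshev/FKG-type inequality: coordinatewise
# nondecreasing functions f, g of independent variables satisfy Cov >= 0.
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((200_000, 3))   # three independent coordinates
f = X.sum(axis=1)                       # nondecreasing in each coordinate
g = np.maximum(X, 0.0).sum(axis=1)      # also coordinatewise nondecreasing
cov = np.cov(f, g)[0, 1]
print(cov)                              # approx. 3 * E[x^2 1_{x>0}] = 1.5
```

Replacing g by a nonincreasing function flips the sign of the covariance, which is exactly the effect exploited in the counterexample of Theorem 4.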
Footnotes
 1.
Note that for one-dimensional linear elliptic PDEs—a case in which homogenization is linear in the inverse of the coefficient and thus independent of the geometry of the material—an analysis has directly been provided in [64].
 2.
At least if a suitable periodization of the probability distribution of the coefficient field is available, see below for an explanation of this concept.
 3.
This limit is to be read in an almost sure sense: By ergodicity, for almost every realization of a this limit exists and is equal to a matrix which is independent of the realization.
Acknowledgements
Open access funding provided by Institute of Science and Technology (IST Austria).
References
1. Abdulle, A.: On a priori error analysis of fully discrete heterogeneous multiscale FEM. Multiscale Model. Simul. 4(2), 447–459, 2005
2. Andres, S., Neukamm, S.: Berry–Esseen theorem and quantitative homogenization for the random conductance model with degenerate conductances. Preprint arXiv:1706.09493, 2017
3. Armstrong, S., Bordas, A., Mourrat, J.C.: Quantitative stochastic homogenization and regularity theory of parabolic equations. Preprint arXiv:1705.07672, 2017
4. Armstrong, S., Dario, P.: Elliptic regularity and quantitative homogenization on percolation clusters. To appear in Commun. Pure Appl. Math. arXiv:1609.09431, 2018
5. Armstrong, S., Kuusi, T., Mourrat, J.C.: The additive structure of elliptic homogenization. Invent. Math. 208(3), 999–1154, 2017
6. Armstrong, S., Kuusi, T., Mourrat, J.C.: Quantitative stochastic homogenization and large-scale regularity. Lecture Notes. Preprint arXiv:1705.05300, 2017
7. Armstrong, S., Lin, J.: Optimal quantitative estimates in stochastic homogenization for elliptic equations in nondivergence form. Arch. Ration. Mech. Anal. 225(2), 937–991, 2017
8. Armstrong, S.N., Mourrat, J.C.: Lipschitz regularity for elliptic equations with random coefficients. Arch. Ration. Mech. Anal. 219(1), 255–348, 2016
9. Armstrong, S.N., Smart, C.K.: Quantitative stochastic homogenization of convex integral functionals. Ann. Sci. Éc. Norm. Supér. (4) 49(2), 423–481, 2016
10. Armstrong, S.N., Souganidis, P.E.: Stochastic homogenization of Hamilton–Jacobi and degenerate Bellman equations in unbounded environments. J. Math. Pures Appl. (9) 97(5), 460–504, 2012
11. Avellaneda, M., Lin, F.: Un théorème de Liouville pour des équations elliptiques à coefficients périodiques. C. R. Acad. Sci. Paris Sér. I Math. 309, 245–250, 1989
12. Avellaneda, M., Lin, F.H.: Compactness methods in the theory of homogenization. Commun. Pure Appl. Math. 40(6), 803–847, 1987
13. Ayoul-Guilmard, Q., Nouy, A., Binetruy, C.: Tensor-based numerical method for stochastic homogenisation. Preprint arXiv:1805.00902, 2018
14. Babuška, I., Caloz, G., Osborn, J.E.: Special finite element methods for a class of second order elliptic problems with rough coefficients. SIAM J. Numer. Anal. 31(4), 945–981, 1994
15. Balzani, D., Brands, D., Schröder, J.: Construction of statistically similar representative volume elements. In: Plasticity and Beyond, Vol. 550 (Eds. Schröder, J. and Hackl, K.) Springer, Berlin, 355–412, 2014
16. Balzani, D., Brands, D., Schröder, J., Carstensen, C.: Sensitivity analysis of statistical measures for the reconstruction of microstructures based on the minimization of generalized least-square functionals. Tech. Mech. 30, 297–315, 2010
17. Balzani, D., Scheunemann, L., Brands, D., Schröder, J.: Construction of two- and three-dimensional statistically similar RVEs for coupled micro-macro simulations. Comput. Mech. 54, 1269–1284, 2014
18. Balzani, D., Schröder, J.: Some basic ideas for the reconstruction of statistically similar microstructures for multiscale simulations. PAMM 8(1), 10533–10534, 2009
19. Bella, P., Fehrman, B., Fischer, J., Otto, F.: Stochastic homogenization of linear elliptic equations: higher-order error estimates in weak norms via second-order correctors. SIAM J. Math. Anal. 49(6), 4658–4703, 2017
20. Bella, P., Giunti, A., Otto, F.: Effective multipoles in random media. Preprint arXiv:1708.07672, 2017
21. Benoit, A., Gloria, A.: Long-time homogenization and asymptotic ballistic transport of classical waves. Preprint arXiv:1701.08600, 2017
22. Blanc, X., Costaouec, R., Le Bris, C., Legoll, F.: Variance reduction in stochastic homogenization: the technique of antithetic variables. In: Engquist, B., Runborg, O., Tsai, Y.H. (eds.) Numerical Analysis of Multiscale Computations, Volume 82 of Lect. Notes Comput. Sci. Eng. Springer, Heidelberg, 47–70, 2012
23. Blanc, X., Costaouec, R., Le Bris, C., Legoll, F.: Variance reduction in stochastic homogenization using antithetic variables. Markov Process. Relat. Fields 18(1), 31–66, 2012
24. Blanc, X., Le Bris, C.: Improving on computation of homogenized coefficients in the periodic and quasi-periodic settings. Netw. Heterog. Media 5, 1–29, 2010
25. Blanc, X., Le Bris, C., Legoll, F.: Some variance reduction methods for numerical stochastic homogenization. Philos. Trans. A 374(2066), 20150168, 2016
26. Boucheron, S., Lugosi, G., Massart, P.: Concentration inequalities using the entropy method. Ann. Probab. 31(3), 1583–1614, 2003
27. Braides, A., Cicalese, M., Ruf, M.: Continuum limit and stochastic homogenization of discrete ferromagnetic thin films. Anal. PDE 11(2), 499–553, 2018
28. Brands, D., Balzani, D., Scheunemann, L., Schröder, J., Richter, H., Raabe, D.: Computational modeling of dual-phase steels based on representative three-dimensional microstructures obtained from EBSD data. Arch. Appl. Mech. 86(3), 575–598, 2016
29. Brezzi, F., Franca, L., Hughes, T., Russo, A.: \(b=\int g\). Comput. Methods Appl. Mech. Eng. 145, 329–339, 1997
30. Burkholder, D.L.: Distribution function inequalities for martingales. Ann. Probab. 1(1), 19–42, 1973
31. Caffarelli, L.A., Souganidis, P.E.: Rates of convergence for the homogenization of fully nonlinear uniformly elliptic PDE in random media. Invent. Math. 180(2), 301–360, 2010
32. Cancès, É., Ehrlacher, V., Legoll, F., Stamm, B.: An embedded corrector problem to approximate the homogenized coefficients of an elliptic equation. C. R. Math. 353(9), 801–806, 2015
33. Chen, L.H.Y., Goldstein, L., Shao, Q.M.: Normal Approximation by Stein’s Method. Probability and Its Applications (New York). Springer, Heidelberg, 2011
34. Chen, L.H.Y., Shao, Q.M.: Normal approximation under local dependence. Ann. Probab. 32(3A), 1985–2028, 2004
35. Dal Maso, G., Modica, L.: Nonlinear stochastic homogenization and ergodic theory. J. Reine Angew. Math. 368, 28–42, 1986
36. Dario, P.: Optimal corrector estimates on percolation clusters. Preprint arXiv:1805.00902, 2018
37. Duerinckx, M., Fischer, J., Gloria, A., Otto, F.: The structure of fluctuations in stochastic homogenization: the case of finite range of dependence, 2019 (in preparation)
38. Duerinckx, M., Gloria, A.: Weighted second-order Poincaré inequalities: application to RSA models. Preprint arXiv:1711.03158, 2017
39. Duerinckx, M., Gloria, A., Otto, F.: The structure of fluctuations in stochastic homogenization. Preprint arXiv:1602.01717, 2016
40. E, W., Engquist, B.: The heterogeneous multiscale methods. Commun. Math. Sci. 1(1), 87–132, 2003
41. Efendiev, Y., Kronsbein, C., Legoll, F.: Multilevel Monte Carlo approaches for numerical homogenization. Multiscale Model. Simul. 13(4), 1107–1135, 2015
42. Eigel, M., Peterseim, D.: Simulation of composite materials by a network FEM with error control. Comput. Methods Appl. Math. 15(1), 21–37, 2015
43. Fischer, J.: Quantitative normal approximation for sums of random variables with multilevel local dependence. Preprint arXiv:1905.10273, 2018
44. Fischer, J., Otto, F.: A higher-order large-scale regularity theory for random elliptic operators. Commun. Partial Differ. Equ. 41(7), 1108–1148, 2016
45. Fischer, J., Raithel, C.: Liouville principles and a large-scale regularity theory for random elliptic operators on the half-space. SIAM J. Math. Anal. 49(1), 82–114, 2017
46. Giunti, A., Mourrat, J.C.: Quantitative homogenization of degenerate random environments. Ann. Inst. Henri Poincaré Probab. Stat. 54(1), 22–50, 2018
 47.Gloria, A.: Reduction of the resonance error. part 1: approximation of homogenized coefficients. Math. Models Methods Appl. Sci. 21(08), 1601–1630, 2011MathSciNetzbMATHGoogle Scholar
 48.Gloria, A.: Numerical approximation of effective coefficients in stochastic homogenization of discrete elliptic equations. ESAIM: M2AN 46(1), 1–38, 2012MathSciNetzbMATHGoogle Scholar
 49.Gloria, A., Neukamm, S., Otto, F.: An optimal quantitative twoscale expansion in stochastic homogenization of discrete elliptic equations. ESAIM Math. Model. Numer. Anal. 48(2), 325–346, 2014MathSciNetzbMATHGoogle Scholar
 50.Gloria, A., Neukamm, S., Otto, F.: A regularity theory for random elliptic operators. Preprint arXiv:1409.2678, 2014
 51.Gloria, A., Neukamm, S., Otto, F.: Quantification of ergodicity in stochastic homogenization: optimal bounds via spectral gap on Glauber dynamics. Invent. Math. 199(2), 455–515, 2015ADSMathSciNetzbMATHGoogle Scholar
 52.Gloria, A., Nolen, J.: A quantitative central limit theorem for the effective conductance on the discrete torus. Commun. Pure Appl. Math. 69(12), 2304–2348, 2016MathSciNetzbMATHGoogle Scholar
 53.Gloria, A., Otto, F.: An optimal variance estimate in stochastic homogenization of discrete elliptic equations. Ann. Probab. 39(3), 779–856, 2011MathSciNetzbMATHGoogle Scholar
 54.Gloria, A., Otto, F.: An optimal error estimate in stochastic homogenization of discrete elliptic equations. Ann. Appl. Probab. 22(1), 1–28, 2012MathSciNetzbMATHGoogle Scholar
 55.Gloria, A., Otto, F.: The corrector in stochastic homogenization: optimal rates, stochastic integrability, and fluctuations. Preprint arXiv:1510.08290, 2015
 56.Gu, Y.: High order correctors and twoscale expansions in stochastic homogenization. Probab. Theory Relat. Fields 169(3), 1221–1259, 2017MathSciNetzbMATHGoogle Scholar
 57.Gu, Y., Mourrat, J.C.: Scaling limit of fluctuations in stochastic homogenization. Multiscale Model. Simul. 14(1), 452–481, 2016MathSciNetzbMATHGoogle Scholar
 58.Heida, M., Schweizer, B.: Stochastic homogenization of plasticity equations. ESAIM Control Optim. Calc. Var. 24(1), 153–176, 2018MathSciNetzbMATHGoogle Scholar
 59.Hornung, P., Pawelczyk, M., Velčić, I.: Stochastic homogenization of the bending plate model. J. Math. Anal. Appl. 458(2), 1236–1273, 2018MathSciNetzbMATHGoogle Scholar
 60.Hou, T.Y., Wu, X.H.: A multiscale finite element method for elliptic problems in composite materials and porous media. J. Comput. Phys. 134(1), 169–189, 1997ADSMathSciNetzbMATHGoogle Scholar
 61.Hughes, T.J., Feijóo, G.R., Mazzei, L., Quincy, J.B.: The variational multiscale method—a paradigm for computational mechanics. Comput. Methods Appl. Mech. Eng. 166(1), 3–24, 1998. (Advances in Stabilized Methods in Computational Mechanics) ADSMathSciNetzbMATHGoogle Scholar
 62.Khoromskaia, V., Khoromskij, B., Otto, F.: A numerical primer in 2D stochastic homogenization: CLT scaling in the representative volume element, 2017. PreprintGoogle Scholar
 63.Kozlov, S.M.: The averaging of random operators. Mat. Sb. (N.S.) 109(151), 188–202, 1979. 327MathSciNetGoogle Scholar
 64.Le Bris, C., Legoll, F., Minvielle, W.: Special quasirandom structures: a selection approach for stochastic homogenization. Monte Carlo Methods Appl. 22(1), 25–54, 2016MathSciNetzbMATHGoogle Scholar
 65.Legoll, F., Minvielle, W.: A control variate approach based on a defecttype theory for variance reduction in stochastic homogenization. Multiscale Model. Simul. 13(2), 519–550, 2015MathSciNetzbMATHGoogle Scholar
 66.Lin, J., Smart, C.K.: Algebraic error estimates for the stochastic homogenization of uniformly parabolic equations. Anal. PDE 8(6), 1497–1539, 2015MathSciNetzbMATHGoogle Scholar
 67.Lions, P.L., Souganidis, P.E.: Correctors for the homogenization of Hamilton–Jacobi equations in the stationary ergodic setting. Commun. Pure Appl. Math. 56(10), 1501–1524, 2003MathSciNetzbMATHGoogle Scholar
 68.Liu, J.S.: MonteCarlo Strategies in Scientific Computing. Springer Series in Statistics. Springer, New York 2001Google Scholar
 69.Lu, J., Otto, F.: Optimal artificial boundary condition for random elliptic media. Preprint arXiv:1803.09593, 2018
 70.Målqvist, A., Peterseim, D.: Localization of elliptic multiscale problems. Math. Comput. 83, 2583–2603, 2014MathSciNetzbMATHGoogle Scholar
 71.Matache, A.M., Schwab, C.: Twoscale FEM for homogenization problems. M2AN Math. Model. Numer. Anal. 36(4), 537–572, 2002MathSciNetzbMATHGoogle Scholar
 72.Mourrat, J.C.: Efficient methods for the estimation of homogenized coefficients. Preprint arXiv:1609.06674, 2016
 73.Mourrat, J.C., Nolen, J.: Scaling limit of the corrector in stochastic homogenization. Ann. Appl. Probab. 27(2), 944–959, 2017MathSciNetzbMATHGoogle Scholar
 74.Mourrat, J.C., Otto, F.: Correlation structure of the corrector in stochastic homogenization. Ann. Probab. 44(5), 3207–3233, 2016MathSciNetzbMATHGoogle Scholar
 75.Murat, F., Tartar, L.: HConvergence. Progress in Nonlinear Differential Equations and Their Applications, vol. 31. Birkhäuser Boston Inc, Boston 1997zbMATHGoogle Scholar
 76.Naddaf, A., Spencer, T.: Estimates on the variance of some homogenization problems, 1998. Unpublished preprintGoogle Scholar
 77.Nolen, J.: Normal approximation for the net flux through a random conductor. Stoch. Partial Differ. Equ. Anal. Comput. 4(3), 439–476, 2016MathSciNetzbMATHGoogle Scholar
 78.Papanicolaou, G.C., Varadhan, S.R.S.: Boundary value problems with rapidly oscillating random coefficients. In: Random Fields, Vol. I, II (Esztergom, 1979), volume 27 of Colloquia Mathematica Societatis János Bolyai, pp. 835–873. NorthHolland, Amsterdam, 1981Google Scholar
 79.Peterseim, D., Carstensen, C.: Finite element network approximation of conductivity in particle composites. Numer. Math. 124(1), 73–97, 2013MathSciNetzbMATHGoogle Scholar
 80.Rinott, Y., Rotar, V.: A multivariate CLT for local dependence with \(n^{1/2}\log n\) rate and applications to multivariate graph related statistics. J. Multivariate Anal. 56(2), 333–350, 1996MathSciNetzbMATHGoogle Scholar
 81.Schröder, J., Balzani, D., Brands, D.: Approximation of random microstructures by periodic statistically similar representative volume elements based on linealpath functions. Arch. Appl. Mech. 81(7), 975–997, 2011ADSzbMATHGoogle Scholar
 82.von Pezold, J., Dick, A., Friák, M., Neugebauer, J.: Generation and performance of special quasirandom structures for studying the elastic properties of random alloys: application to alti. Phys. Rev. B 81, 094203, 2010ADSGoogle Scholar
 83.Wei, S.H., Ferreira, L.G., Bernard, J.E., Zunger, A.: Electronic properties of random alloys: special quasirandom structures. Phys. Rev. B 42, 9622–9649, 1990ADSGoogle Scholar
 84.Yue, X., E, W.: The local microscale problem in the multiscale modeling of strongly heterogeneous media: effects of boundary conditions and cell size. J. Comput. Phys. 222(2), 556–572, 2007Google Scholar
 85.Yurinskiĭ, V.V.: Averaging of symmetric diffusion in a random medium. Sibirsk. Mat. Zh. 27, 167–180, 1986. 215MathSciNetGoogle Scholar
 86.Zunger, A., Wei, S.H., Ferreira, L.G., Bernard, J.E.: Special quasirandom structures. Phys. Rev. Lett. 65, 353–356, 1990ADSGoogle Scholar
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.