Circulant embedding with QMC: analysis for elliptic PDE with lognormal coefficients

In a previous paper (Graham et al. in J Comput Phys 230:3668–3694, 2011), the authors proposed a new practical method for computing expected values of functionals of solutions for certain classes of elliptic partial differential equations with random coefficients. This method was based on combining quasi-Monte Carlo (QMC) methods for computing the expected values with circulant embedding methods for sampling the random field on a regular grid. It was found capable of handling fluid flow problems in random heterogeneous media with high stochastic dimension, but no convergence theory was provided. This paper provides a convergence analysis for the method in the case when the QMC method is a specially designed randomly shifted lattice rule. The convergence result depends on the eigenvalues of the underlying nested block circulant matrix and can be independent of the number of stochastic variables under certain assumptions. In fact the QMC analysis applies to general factorisations of the covariance matrix to sample the random field. The error analysis for the underlying fully discrete finite element method allows for locally refined meshes (via interpolation from a regular sampling grid of the random field). Numerical results on a non-regular domain with corner singularities in two spatial dimensions and on a regular domain in three spatial dimensions are included.


Introduction
In the paper [12], the present authors proposed a new practical algorithm for solving a class of elliptic partial differential equations with coefficients given by statistically homogeneous lognormal random fields, and in particular for computing expected values of spatial functionals of such solutions. In this algorithm, the required expected value is written as a multidimensional integral of (possibly) high dimension, which is then approximated by a quasi-Monte Carlo (QMC) method. Each evaluation of the integrand is obtained by using a fully discrete finite element (FE) method to approximate the PDE. A key original feature of the method in [12] was the procedure for sampling the random field: instead of sampling the continuous random field by a truncated Karhunen-Loève (KL) expansion, the field was sampled discretely on a regular grid covering the domain and then interpolated at the (irregularly spaced) quadrature points. This completely eliminated the problem of truncation error from the KL expansion, but requires the factorisation of a dense matrix of dimension equal to the number of sample points. In [12] this was done using a circulant embedding technique. The method was found to be effective even for problems with high stochastic dimension, but [12] did not contain a convergence analysis of the algorithm.
The main purpose of the present paper is to provide an analysis for a method closely related to that of [12], with an error bound that is independent of stochastic dimension, and a convergence rate faster than that of a simple Monte Carlo method. The setting differs in two ways from [12]: first, the FE method considered here is the standard nodal FE method for elliptic problems, whereas in [12] the mixed FE method was used; and second, the QMC method considered here is a specially designed randomly shifted lattice rule (see (1.12) below), instead of the Sobol points used in [12]. (We expect the present analysis can be extended to mixed FEs using results in [14], but do not attempt this here.) Thus our PDE model, written initially in strong form, is

    -∇ · (a(x, ω) ∇u(x, ω)) = f(x),   for x ∈ D.   (1.1)

Given a functional G of u with respect to the spatial variable x, our aim here (as in [12]) is to compute efficiently and accurately E[G(u)], the expected value of G(u(·, ω)). The (spatial) domain D ⊂ R^d (d = 1, 2, 3) in (1.1) is assumed to be a bounded interval (d = 1), polygon (d = 2) or Lipschitz polyhedron (d = 3), while Ω is the set of events in a suitable probability space (Ω, A, P). The solution u is required to satisfy the homogeneous Dirichlet condition u = 0 on the boundary ∂D of D.
The spatial domain D is allowed to be irregular, but we assume for convenience that it can be embedded in the d-dimensional unit cube; this is always possible after a suitable affine scaling. (The length-scale of our random field is therefore always considered with respect to the unit cube.) The driving term f is for simplicity taken to be deterministic. We consider the lognormal case where

    a(x, ω) = exp(Z(x, ω)),   (1.2)

with Z(x, ω) a Gaussian random field with prescribed mean Z̄(x) and covariance

    r_cov(x, x') := E[(Z(x, ·) − Z̄(x)) (Z(x', ·) − Z̄(x'))],   (1.3)

where the expectation is with respect to the Gaussian measure. Lognormal random fields are commonly used in applications, for example in hydrology (see, e.g., [18] and the references there). Throughout we will assume that Z is stationary (see, e.g., [1, p. 24]), i.e., its covariance function satisfies

    r_cov(x, x') = ρ(x − x').   (1.4)

Strictly speaking, ρ only needs to be defined on a sufficiently large ball B(0, diam(D)) for the prescription above, but, as in many applications, we assume that it is defined on all of R^d. A particular case that will be discussed extensively is the Matérn covariance, where ρ is isotropic, i.e., ρ(x) depends on x only through its Euclidean norm ‖x‖₂.
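For concreteness, the isotropic Matérn family can be sketched in a few lines. This is an illustrative aside, not part of the paper's algorithm; the function name `rho_matern` and the parameter names (variance `sigma2`, length-scale `lam`, smoothness `nu`) are our own.

```python
# Sketch of the isotropic Matern covariance rho as a function of r = ||x - x'||_2.
import numpy as np
from scipy.special import gamma, kv


def rho_matern(r, sigma2=1.0, lam=0.5, nu=1.5):
    """Matern covariance; nu = 1/2 recovers the exponential covariance."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    scaled = np.sqrt(2.0 * nu) * r / lam
    out = np.full_like(r, sigma2)            # rho(0) = sigma^2 (the variance)
    pos = scaled > 0
    out[pos] = (sigma2 * 2.0 ** (1.0 - nu) / gamma(nu)
                * scaled[pos] ** nu * kv(nu, scaled[pos]))
    return out
```

For `nu = 1/2` this reduces to `sigma2 * exp(-r / lam)`, the exponential covariance, while larger `nu` gives smoother realisations.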
In the present paper, (1.1) is discretised by piecewise linear finite elements in space, using simplicial meshes with maximum diameter h, and a simple low-order quadrature rule for suitable approximation of the associated stiffness matrix. In consequence, the only values of the stochastic coefficient Z(x, ω) that enter the FE computation are its values at the quadrature points. However, the FE quadrature points will in general be irregularly distributed and (for refined meshes) very large in number, typically rendering a direct evaluation of the field at the quadrature points prohibitively expensive. It is much more efficient, as explained below, to instead evaluate exactly the realisation of the field at a uniform grid of M = (m_0 + 1)^d points x_1, x_2, ..., x_M on the d-dimensional unit cube [0, 1]^d (containing the domain D), with a fixed integer m_0 and with grid spacing h_0 := 1/m_0. We assume further that h_0 ∼ h. (The extension to general tensor product grids with different mesh sizes in the different coordinate directions is straightforward and not discussed here.) We use multilinear interpolation to obtain a sufficiently good approximation of the field Z(x, ω) at any other spatial point x ∈ D, i.e., we use repeated linear interpolation in each coordinate direction with respect to the vertices of the surrounding grid cell. At this stage of the algorithm, the output is the approximate FE solution u_h(x, ω), which inherits randomness from the input data Z(ω) := (Z(x_1, ω), ..., Z(x_M, ω))^T.
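The repeated-1D interpolation just described can be sketched as follows for d = 2, assuming a uniform grid on [0, 1]^2; the function name `bilinear_interp` is ours and this is a minimal sketch, not the paper's implementation.

```python
# Bilinear interpolation on a uniform (m0+1) x (m0+1) grid over [0, 1]^2.
import numpy as np


def bilinear_interp(G, x):
    """Interpolate grid values G[i, j] = g(i*h0, j*h0) at a point x in [0, 1]^2."""
    m0 = G.shape[0] - 1                      # grid spacing h0 = 1/m0
    i = min(int(x[0] * m0), m0 - 1)          # indices of the surrounding grid cell
    j = min(int(x[1] * m0), m0 - 1)
    tx, ty = x[0] * m0 - i, x[1] * m0 - j    # local coordinates in [0, 1]
    # repeated linear interpolation: first in the x-direction, then in y
    low = (1 - tx) * G[i, j] + tx * G[i + 1, j]
    high = (1 - tx) * G[i, j + 1] + tx * G[i + 1, j + 1]
    return (1 - ty) * low + ty * high
```

The result is a convex combination of the four surrounding vertex values, and the interpolant reproduces the grid values exactly (and, in fact, any globally linear function).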
The M-vector Z(ω) is a Gaussian random vector with a covariance structure inherited from the continuous field Z. Thus it has mean Z̄ := (Z̄(x_1), ..., Z̄(x_M))^T and a positive definite covariance matrix

    R := ( r_cov(x_i, x_j) )_{i,j=1}^M.   (1.5)

Because of its finite length, Z(ω) can be expressed exactly (but not uniquely) as a linear combination of a finite number of i.i.d. standard normal random variables Y(ω) = (Y_1(ω), ..., Y_s(ω))^T, i.e., as

    Z(ω) = Z̄ + B Y(ω),   (1.6)

for some real M × s matrix B with s ≥ M satisfying

    B B^T = R.   (1.7)

To see this, note that (1.6) and (1.7) imply that Z(ω) is Gaussian with mean Z̄ and covariance E[(Z − Z̄)(Z − Z̄)^T] = B E[Y Y^T] B^T = B B^T = R. An efficient computation of a suitable factorisation (1.7), using the extension of R to a nested block circulant matrix and then diagonalisation by FFT (the "circulant embedding method"), is described in detail in [5, 6, 9, 12]. It is essential for that approach that the random field is sampled on a uniform grid of points. In a related paper [13], we have analysed certain key properties of the circulant extension and its factorisation, which will be crucial for efficiency and for the dimension independence of the QMC convergence analysis in this paper. Other approaches, such as Cholesky factorisation or direct spectral decomposition, could also be used to find a factorisation of the form (1.7). These alternative approaches have the advantage that they do not require the sample grid to be uniform, but when M is large they are likely to be prohibitively expensive. Some ideas of how to overcome this problem using a pivoted Cholesky factorisation or hierarchical matrices can be found in [15] or [4, 10], respectively.
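As a toy illustration of sampling via a factorisation of the covariance matrix, the following sketch uses a Cholesky factor for B (the dense alternative mentioned above, not the circulant embedding method) on a small 1D grid; the exponential covariance and all names are illustrative assumptions.

```python
# Toy sample of a lognormal field on a 1D grid via Z = Zbar + B*y with B B^T = R.
import numpy as np

rng = np.random.default_rng(0)
m0 = 8
pts = np.linspace(0.0, 1.0, m0 + 1)                     # uniform grid, M = m0 + 1 points
R = np.exp(-np.abs(pts[:, None] - pts[None, :]) / 0.3)  # illustrative exponential covariance
B = np.linalg.cholesky(R)                               # R = B B^T, here with s = M

y = rng.standard_normal(B.shape[1])                     # i.i.d. N(0, 1) variables
Zbar = np.zeros(m0 + 1)                                 # prescribed mean (zero here)
Z = Zbar + B @ y                                        # one Gaussian sample on the grid
a = np.exp(Z)                                           # lognormal coefficient a = exp(Z)
```

For large M the O(M^3) Cholesky factorisation becomes prohibitive, which is exactly the motivation for the FFT-based circulant embedding approach.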
From now on, realisations of the random vector Y(ω) are denoted by y. Thus y contains s independent realisations of N(0, 1). Hence, if F : R^s → R is any Lebesgue measurable function, then the expected value of F(Y(ω)) may be written as

    E[F] = ∫_{R^s} F(y) ∏_{j=1}^s φ(y_j) dy = ∫_{(0,1)^s} F(Φ_s^{−1}(w)) dw,   (1.8)

where φ is the one-dimensional standard normal probability density, and Φ_s^{−1} is the inverse of the cumulative normal distribution function applied componentwise on (0, 1)^s. Since u_h(·, ω) is derived from Y(ω), we make the notational convention

    u_h(·, y) := u_h(·, ω),  where y is the realisation of Y(ω),   (1.9)

and so our approximation to E[G(u)] is

    E[G(u_h)] = ∫_{R^s} G(u_h(·, y)) ∏_{j=1}^s φ(y_j) dy.   (1.10)

Because s ≥ M, the integral (1.8) can have very high dimension. However, two important empirical findings in [12] were that (for all the applications considered) the accuracy of the QMC cubature rule based on Sobol points did not appear to be affected by the size of s, and that it was always superior to classical Monte Carlo (MC) methods. Successful computations with s ∼ 4 × 10^6 were reported in [12]. One aim of the present work is to provide a rigorous proof of the independence of the QMC error of the dimension s (under appropriate conditions) for a specially designed randomly shifted lattice rule. Furthermore, we will also prove here the superior asymptotic convergence rate of QMC over MC in this setting.
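The change of variables in (1.8) can be checked numerically: the sketch below uses plain Monte Carlo on the unit cube with a toy integrand F (our own choice), mapping the uniform points through the componentwise inverse CDF.

```python
# Verify E[F(Y)] = integral over (0,1)^s of F(Phi^{-1}(w)) dw by Monte Carlo.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
s, N = 3, 200_000
F = lambda y: np.exp(0.1 * y.sum(axis=-1))      # toy integrand F : R^s -> R

w = rng.random((N, s))                          # uniform points in (0, 1)^s
y = norm.ppf(w)                                 # Phi^{-1} applied componentwise
estimate = F(y).mean()

exact = np.exp(s * 0.1 ** 2 / 2)                # E[exp(0.1 sum Y_j)] for i.i.d. N(0,1)
```

The mapped points `y` are exactly i.i.d. standard normal, so `estimate` converges to `exact` at the usual O(N^{-1/2}) Monte Carlo rate.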
The accuracy of the approximation to E[G(u)] depends on the FE mesh diameter h (through the FE error and the interpolation error), as well as on the number n of QMC points. We analyse convergence with respect to both these parameters. Our first set of theoretical results concerns the accuracy with respect to h. In particular, for bounded linear functionals G (with respect to the spatial variable x) and under suitable assumptions, one result, obtained in §2.2, is the estimate (1.11), which holds for some parameter t ∈ (0, 1] determined by the smoothness of realisations of a(x, ω), with a constant C independent of h. This result differs from that of [21] through the use of interpolation to approximate the random vector Z.
Then, a substantial part of the paper is concerned with the convergence of the QMC rules for (1.8). In particular, we consider randomly shifted lattice rules of the form

    Q(z; Δ) F := (1/n) ∑_{k=0}^{n−1} F( Φ_s^{−1}( frac( k z / n + Δ ) ) ),   (1.12)

where z ∈ N^s is some suitably chosen generating vector, Δ ∈ [0, 1]^s is a uniformly distributed random shift, and "frac" denotes taking the fractional part of every component in a vector. For the particular integrand F(y) := G(u_h(·, y)), we provide in §3 upper bounds on the error of approximating the integral (1.10) by (1.12). In particular, in Theorem 8 we give sufficient conditions on the matrix B appearing in (1.7) for the root-mean-square error to satisfy an estimate of the form

    √( E_Δ | E[G(u_h)] − Q(z; Δ) F |² ) ≤ C_δ n^{−1+δ},   (1.13)

for arbitrary δ > 0. Here, E_Δ denotes expectation with respect to the random shift Δ. Moreover, we also provide conditions under which the constant C_δ in (1.13) is independent of s. Our proof of Theorem 8 differs from the corresponding result in [11] because of the use of multilinear interpolation of the random field in space, and because of the different meaning of the parameters y_j: the problem is no longer one of truncating an infinite Karhunen-Loève expansion, but rather of dealing with a sequence of matrix problems of ever increasing size s. Finally, the overall error estimate follows by combining (1.11) and (1.13). Although the algorithm in [12] applies to both linear and nonlinear functionals, our theory at present is restricted to the linear case. In the numerical experiments, we will use the average of q different random shifts as our final QMC estimator, bringing the total number of integrand evaluations (i.e., PDE solves) to N = q n. We compare with a classical Monte Carlo (MC) method for which we use N i.i.d. random samples w_k ∼ U[0, 1)^s, i.e., the estimator (1/N) ∑_{k=1}^N F(Φ_s^{−1}(w_k)). The layout of the paper is as follows. The PDE with random coefficient and its FE approximation with quadrature are described in §2.1. The estimate (1.11) is proved in §2.2. The QMC theory is given in §3. In particular, one of the key results proved in §3.3 is the upper bound (1.13). A sufficient condition on B for this result in the case of circulant embedding is identified in §3.4. The circulant embedding algorithm is summarised briefly in §3.4, and we refer to [13] for its theoretical analysis. Numerical experiments are given in §4, illustrating the performance of the algorithm on PDE problems on an irregular domain with corners and holes in two space dimensions, as well as on the unit cube in three dimensions.
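To fix ideas, a randomly shifted lattice rule of the form (1.12), averaged over q shifts, can be sketched in a few lines; the generating vector z below is an ad hoc illustrative choice (not a CBC-constructed one), the integrand is a toy, and all names are ours.

```python
# Randomly shifted rank-1 lattice rule: points frac(k*z/n + Delta), q shifts.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, s, q = 127, 4, 8                             # n prime, s dimensions, q random shifts
z = np.array([1, 47, 101, 59])                  # ad hoc generating vector (illustration only)
F = lambda y: np.exp(0.05 * y.sum(axis=-1))     # toy integrand F : R^s -> R

k = np.arange(n)[:, None]
estimates = []
for _ in range(q):
    Delta = rng.random(s)                       # uniform random shift in [0, 1)^s
    pts = np.mod(k * z / n + Delta, 1.0)        # frac of every component, shape (n, s)
    estimates.append(F(norm.ppf(pts)).mean())   # map to R^s via Phi^{-1}, then average
Q = np.mean(estimates)                          # final estimator: average over q shifts

exact = np.exp(s * 0.05 ** 2 / 2)               # E[exp(0.05 sum Y_j)], Y_j i.i.d. N(0,1)
```

The total number of integrand evaluations is N = q n, matching the estimator described above; the spread of the q single-shift estimates also yields a practical error indicator.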

Finite element implementation and analysis
In §2.1, we first give the algorithmic details of our practical finite element method, before proving error estimates for this method in §2.2, in particular Theorem 3 and Corollary 4.

Model formulation and implementation
We start with (1.1) written pathwise in weak form: seek u(·, ω) ∈ V := H^1_0(D) such that

    A(ω; u(·, ω), v) = ⟨f, v⟩,  for all v ∈ V,   (2.1)

where A(ω; w, v) := ∫_D a(x, ω) ∇w(x) · ∇v(x) dx and a(x, ω) is given by (1.2)–(1.4). The norm in V is ‖v‖_V := ‖∇v‖_{L²(D)}. For simplicity we assume that f ∈ L²(D), so that ⟨f, v⟩ reduces to the L²(D) inner product in (2.1). In general, it denotes the duality pairing between V and its dual space V' := H^{−1}(D).
To discretise (2.1) in the physical domain D, let {T_h}_{h>0} denote a family of conforming, simplicial meshes on D, parametrised by the maximum mesh diameter h := max_{τ ∈ T_h} diam(τ), with diam(τ) := max_{x, x' ∈ τ} ‖x − x'‖₂. On this mesh, we let V_h ⊂ V denote the usual finite element space of continuous piecewise linear functions that vanish on ∂D. We assume that dim(V_h) = O(h^{−d}). This includes many locally refined mesh families, including anisotropic refinement in 3D (e.g., [2], [3]). Since any function in V_h has a piecewise constant gradient, we have

    A(ω; v_h, w_h) = ∑_{τ ∈ T_h} a_τ(ω) (∇v_h|_τ) · (∇w_h|_τ),  for v_h, w_h ∈ V_h,   (2.2)

where

    a_τ(ω) := ∫_τ a(x, ω) dx.   (2.3)

We approximate the required integrals (2.3) using a two-stage interpolation/quadrature process as follows. Recall the uniform grid on the cube containing D, with points x_j for j = 1, ..., M and grid spacing h_0 ∼ h, defined in §1. Let x ∈ D and let {t_{i,x}}_{i=1}^{2^d} ⊂ {x_1, ..., x_M} be the vertices of the surrounding grid cell, labelled in arbitrary order. Since multilinear interpolation is done by repeatedly applying linear interpolation in each coordinate direction, we can write the interpolated value of g at x as a convex combination of the surrounding vertex values, i.e.,

    I_{h_0}(g; {x_j}_{j=1}^M)(x) := ∑_{i=1}^{2^d} w_{i,x} g(t_{i,x}),  with w_{i,x} ≥ 0 and ∑_{i=1}^{2^d} w_{i,x} = 1.   (2.4)

The operator I_{h_0} : C(D) → C(D) is linear and satisfies I_{h_0}(g; {x_j}_{j=1}^M)(x_i) = g(x_i) for every point x_i of the uniform grid.
Let us further define an r-point quadrature rule on each element τ, which is exact for constant functions, has positive weights μ_{τ,k} ≥ 0, and uses only quadrature points x_{τ,k} ∈ τ, i.e.,

    Q_τ(g) := ∑_{k=1}^r μ_{τ,k} g(x_{τ,k}),  with ∑_{k=1}^r μ_{τ,k} = |τ|.   (2.5)

Here, |τ| denotes the volume of τ. The quadrature points x_{τ,k} in (2.5) are in general unrelated to the uniform grid points x_j. Examples of rules satisfying (2.5) are the centroid, nodal or face-centroid rules.
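The simplest instance of such a rule is the one-point centroid rule, sketched below on a triangle; the function name `centroid_rule` is ours, and the check against a linear integrand is illustrative.

```python
# One-point centroid rule on a triangle: weight |tau|, point at the centroid.
import numpy as np


def centroid_rule(vertices):
    """Return quadrature points and weights; exact for constants (and linears)."""
    v = np.asarray(vertices, dtype=float)                   # shape (3, 2)
    area = 0.5 * abs((v[1, 0] - v[0, 0]) * (v[2, 1] - v[0, 1])
                     - (v[1, 1] - v[0, 1]) * (v[2, 0] - v[0, 0]))
    return v.mean(axis=0)[None, :], np.array([area])        # sum of weights = |tau|

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
pts, wts = centroid_rule(tri)
# exact for linears: the integral of x + y over this triangle is 1/3
approx = sum(wk * (p[0] + p[1]) for p, wk in zip(pts, wts))
```

Since the weights are positive and sum to |τ|, this rule satisfies the structural conditions in (2.5) used throughout the analysis.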
Using the rule (2.5) to approximate all the integrals (2.3) would require evaluating a(·, ω) at the union of all the (in general irregularly distributed) quadrature points {x_{τ,k}}, which could be costly. We avoid that and compute the field only at the points of the uniform grid. We then interpolate these values using I_{h_0}(a(·, ω); {x_j}_{j=1}^M) and approximate a_τ(ω) using (2.5). In summary, we approximate the bilinear form in (2.2) by

    Ã(ω; v_h, w_h) := ∑_{τ ∈ T_h} â_τ(ω) (∇v_h|_τ) · (∇w_h|_τ),   (2.6)

where

    â_τ(ω) := Q_τ( I_{h_0}(a(·, ω); {x_j}_{j=1}^M) ).   (2.7)

Proposition 1 For all τ ∈ T_h, there is a sparse nonnegative vector (p_{τ,j})_{j=1}^M with ∑_{j=1}^M p_{τ,j} = |τ| such that

    â_τ(ω) = ∑_{j=1}^M p_{τ,j} a(x_j, ω)  and  â_τ(ω) ≥ |τ| a_{min,M}(ω),  for all τ ∈ T_h,   (2.8)

where a_{min,M}(ω) := min_{1≤j≤M} a(x_j, ω).
Proof. It follows from (2.7), together with (2.4) and (2.5), that

    â_τ(ω) = ∑_{k=1}^r μ_{τ,k} ∑_{i=1}^{2^d} w_{i,x_{τ,k}} a(t_{i,x_{τ,k}}, ω),

so the coefficient p_{τ,j} collects, for each grid point x_j, the products μ_{τ,k} w_{i,x_{τ,k}} with t_{i,x_{τ,k}} = x_j. The second result then follows from the definition of a_{min,M}(ω) and the fact that the coefficients μ_{τ,k} and w_{i,x_{τ,k}} are all positive and their sum is |τ|. □

Extending the notational convention (1.9), we may thus write our discrete finite element method for (2.1) as the problem of finding u_h(·, y) ∈ V_h which satisfies

    Ã(y; u_h(·, y), v_h) = ⟨f, v_h⟩,  for all v_h ∈ V_h,   (2.9)

where Ã(y; ·, ·) denotes the approximate bilinear form (2.6) evaluated at the realisation y.

Finite element error analysis
Let us first define some relevant function spaces. Let C¹(D) denote the space of continuously differentiable functions on D with seminorm |φ|_{C¹(D)} := sup_{x ∈ D} |∇φ(x)|. For β ∈ (0, 1), let C^β(D) denote the space of Hölder continuous functions on D with exponent β, and let

    |φ|_{C^β(D)} := sup_{x ≠ x' ∈ D} |φ(x) − φ(x')| / ‖x − x'‖₂^β < ∞

denote the Hölder coefficient, which is, in fact, a seminorm. Also let L^p(Ω, X) denote the space of all random fields in a Banach space X with bounded pth moments over Ω.
There are two factors that limit the convergence rate of the finite element error: (i) the regularity of the coefficient field a(·, ω), and (ii) the shape of the domain D. When β < 1, the loss of H² regularity is global, and the resulting reduction in the finite element convergence rate cannot be corrected by local mesh refinement. On the other hand, the influence of corner or edge singularities can typically be eliminated by suitable local mesh refinement near ∂D.
Using the notation in [21, Def. 2.1], let λ_Δ(D) be the order of the strongest singularity of the Dirichlet-Laplacian on D. Then u(·, ω) ∈ H^{1+t}(D) for all t ≤ λ_Δ(D) with t < β, and ‖u‖_{L^p(Ω, H^{1+t}(D))} is bounded for all p ∈ [1, ∞) (see [21, Lem. 5.2]). When λ_Δ(D) ≥ β, uniform mesh refinement leads to a best approximation error that satisfies

    inf_{v_h ∈ V_h} ‖u(·, ω) − v_h‖_V ≤ C h^t ‖u(·, ω)‖_{H^{1+t}(D)},  for all t < β,   (2.10)

with C independent of h and ω. When λ_Δ(D) < β, the rate in (2.10) cannot be achieved by uniform refinement. However, it can be recovered by a suitable local refinement. For example, consider the 2D case where D is smooth except for a single reentrant corner with interior angle θ > π, so that λ_Δ(D) = π/θ, and let W denote a neighbourhood of that corner. By considering the best approximation error over W and over D∖W separately, we see that it suffices to grade the meshes such that the mesh size is O(h^{βθ/π}) near the reentrant corner and O(h) away from it, because the corresponding local best approximation estimates then hold for all 0 < t < β. Such a mesh grading can often be achieved while retaining the desired complexity estimate dim(V_h) ≤ C h^{−2} (e.g., [20]). Thus, using similar techniques to those in the proof of [21, Lem. 5.2], it can be shown that (2.10) holds with t < β. The case of multiple reentrant corners can be treated in an identical fashion. Analogous but more complicated (anisotropic) refinement is needed in 3D, especially in the presence of edge singularities (e.g., [2], [3]). In practice, such local refinements can be constructed adaptively. The important observation here is that the locally refined mesh needs to be constructed only once, for exp(Z̄(·)) (or for one sample of a), since the boundary singularities will be the same for all samples.
We start our analysis by estimating the error in approximating a_τ(ω) by â_τ(ω).

Lemma 2 Suppose that Z(·, ω) ∈ C^β(D) for some β ∈ (0, 1). Then, for all τ ∈ T_h,

    |a_τ(ω) − â_τ(ω)| ≤ |τ| |a(·, ω)|_{C^β(D)} ( h^β + (√d h_0)^β ).
Proof. Using the fact that a(·, ω) is continuous, the integral mean value theorem asserts the existence of an x*_τ ∈ τ such that a_τ(ω) = |τ| a(x*_τ, ω), where we used (2.4) and (2.5). Then it follows from (2.8) that the difference a_τ(ω) − â_τ(ω) is a convex combination (scaled by |τ|) of differences of values of a(·, ω) at points a distance at most h + √d h_0 apart, and the bound follows from the Hölder continuity of a(·, ω). In the last step we used the fact that the distance between a point in a cell of the regular grid and a vertex of that cell is at most √d h_0. If the quadrature points all lie on the regular grid, then the second term can be omitted. This completes the proof. □

Theorem 3 Suppose that Z(·, ω) ∈ C^β(D) for some β ∈ (0, 1), and suppose that there exists a family {T_h}_{h>0} of conforming, simplicial meshes on D such that (2.10) holds with dim(V_h) ≤ C h^{−d}. Let I_{h_0} and Q_τ be defined by (2.4) and (2.5), respectively. Then, P-almost surely,

    ‖u(·, ω) − u_h(·, ω)‖_V ≤ C_IQ(ω) h^t.

Proof. The proof follows that of [7, Prop. 3.13]. First, using Lemma 2 and the fact that ∇v_{h,τ} := ∇v_h|_τ is constant for all piecewise linear finite element functions v_h ∈ V_h, as well as applying the Cauchy-Schwarz inequality in the last step, we obtain an estimate (2.11) for the consistency error between A(ω; ·, ·) and Ã(ω; ·, ·) on V_h. Now, using this bound in Strang's First Lemma (cf. [7, Lem. 3.12]), we can combine (2.11) with (2.10) to establish the result.
The fact that the constant C_IQ(ω) in the above bounds has bounded moments of any (finite) order is a consequence of our assumptions that Z(x, ω) is Gaussian and that Z(·, ω) ∈ C^β(D). As stated above, it can be proved as in [7] via Fernique's Theorem. □

An O(h^{2t}) bound on the L²-norm of the error follows via the well-known Aubin-Nitsche trick (cf. [7, Cor. 3.10]). We omit this and finish the section with an error bound for linear functionals G of u, which we have already stated in (1.11).
Corollary 4 Let G be a bounded linear functional on L²(D). Then, under the assumptions of Theorem 3, there exists a constant C > 0, independent of h and u, such that the estimate (1.11) holds.

Proof. The proof follows, as in [21, Lem. 3.1 and Prop. 3.4], from Hölder's inequality, using the fact that Theorem 3 applies verbatim also to the FE error for the dual solution z(·, ω) associated with the functional G. □

Using the techniques in [21, §3], this corollary can be extended in a straightforward way to higher-order moments of the error, or to functionals G of u that are random, nonlinear or bounded only on a subspace of L²(D). In summary, we have provided in this section a recipe for extending all results of [21, §3] to general meshes, with the random field being sampled on a regular grid and then interpolated onto the finite element grid.

QMC error analysis
The QMC theory for integrals of the form (1.8) is set in a special weighted Sobolev space. Provided the integrand lies in this space, we obtain an estimate for the root-mean-square error when a specially chosen, randomly shifted lattice rule (1.12) is used to approximate (1.8). The cost for explicitly constructing a good rule tailored to our analysis, with n points in s dimensions, grows log-linearly in n and quadratically in s (cf. Remark 9 below). However, applying the rule is essentially as cheap as obtaining samples from a random number generator; see, e.g., [16, §7]. Full details of the convergence theory are in other sources, e.g., [11, 16], so we will be brief here.
Later in this section we use this theory to estimate the error when the rule is applied to the particular F given in (1.10). We assume first that the random field Z is sampled by employing the quite general factorisation (1.7) of the covariance matrix R. Later, in §3.4, we will discuss the case when this is done by circulant embedding.

Abstract convergence result and proof strategy
The relevant weighted Sobolev norm is defined by

    ‖F‖²_{s,γ} := ∑_{u ⊆ {1:s}} (1/γ_u) ∫_{R^{|u|}} ( ∫_{R^{s−|u|}} (∂^{|u|} F / ∂y_u)(y) ∏_{j ∉ u} φ(y_j) dy_{{1:s}∖u} )² ∏_{j ∈ u} ψ_j²(y_j) dy_u.   (3.1)

Here, {1 : s} denotes the set {1, 2, ..., s}, ∂^{|u|}F/∂y_u denotes the mixed first-order derivative with respect to the "active" variables y_j with j ∈ u, y_{{1:s}∖u} denotes the "inactive" variables y_j with j ∉ u, and φ is the univariate normal probability density (see (1.8)). The remaining ingredients in (3.1) are the weight parameters γ_u and the weight functions ψ_j, which are used, respectively, to moderate the relative importance of the derivatives of F with respect to y_u and to control the behaviour of these derivatives asymptotically as ‖y‖_∞ → ∞. As in [11], we shall restrict ourselves to the choice

    ψ_j(y) := exp(−α_j |y|),  with shape parameters α_j > 0.   (3.2)

The following result is then essentially [11, Theorem 15] (see also [16]).
Theorem 5 Suppose that ‖F‖_{s,γ} < ∞ and n is a power of a prime. Then a generating vector z ∈ N^s for a randomly shifted lattice rule (1.12) can be constructed so that the root-mean-square error in applying (1.12) to (1.8) satisfies

    √( E_Δ | E[F] − Q(z; Δ) F |² ) ≤ C_s(γ, α, κ) n^{−1/(2κ)} ‖F‖_{s,γ},  for all κ ∈ (1/2, 1],   (3.3)

where the constant C_s(γ, α, κ), given in (3.4), depends on the weight parameters γ_u, the shape parameters α_j and the Riemann zeta function ζ.
A method for constructing the vector z one component at a time is described in [19] for weights γ u of a special form, see Remark 9 below.
In the remainder of this section we shall apply this theory to the function F given in (1.10). The main result is Theorem 8. It is obtained in the steps summarised as follows.
1. By differentiating the parametrised discrete weak form (2.9), we estimate the norms ‖∂^{|u|} u_h/∂y_u (·, y)‖_V for any u ⊆ {1 : s}. The estimate (which uses an induction argument over the set of all partial derivatives of u_h(·, y)) is given in Theorem 6 and involves the quantities

    b_j := max_{1 ≤ i ≤ M} |B_{ij}|,

where B_j is the jth column of the M × s matrix B introduced in (1.7). We let b ∈ R^s be the vector (b_1, ..., b_s)^T.

2. Using the result from Step 1 and the linearity of G, we estimate ‖F‖_{s,γ} in Theorem 7. The shape parameters α_j from (3.2) are constrained to lie in the range α_j > b_j. The precise values of the α_j are arbitrary at this point, as are the values of the weight parameters γ_u.
3. We substitute the result from Step 2 into the right-hand side of (3.3) to obtain an error bound for this particular F, and we choose the weight parameters γ_u and then the shape parameters α_j to minimise this bound. The end result, Theorem 8, is a convergence estimate of order O(n^{−1/(2κ)}), valid for κ ∈ (1/2, 1], with the implied constant depending on the sum ∑_{j=1}^s b_j^{2κ/(1+κ)}.
The theory is essentially independent of the choice of factorisation (1.7). However, we will work out the detailed theory, in §3.4 and §4, only for the circulant embedding approach.

Regularity of F
In this subsection it is helpful to introduce more general partial derivatives than the mixed first-order derivatives which appear in (3.1). Thus, for any multiindex ν ∈ N^s with order |ν| := ∑_{j=1}^s ν_j, we let ∂^ν denote the corresponding mixed derivative. For any multiindex ν ∈ N^s and any vector c ∈ R^s we also write c^ν := ∏_{j=1}^s c_j^{ν_j}.
Theorem 6 For any y ∈ R^s, any f ∈ V', and any multiindex ν ∈ N^s, the solution u_h(·, y) of (2.9) satisfies

    ‖∂^ν u_h(·, y)‖_V ≤ ( |ν|! / (log 2)^{|ν|} ) b^ν ‖f‖_{V'} / a_{min,M}(y),

where b = (b_1, ..., b_s) and a_{min,M}(y) := min_{1≤i≤M} a(x_i, y).
Proof. The proof is similar to that of [11, Theorem 14], but there are some differences due to the fact that we are working with the FE discretisation (2.9) with quadrature and interpolation, and because of the finiteness of the 'expansion' (1.6). (In [11] an infinite KL expansion was used in the context of the continuous problem.) To simplify the proof, we introduce the y-dependent discrete norm ‖·‖_y on V_h:

    ‖v_h‖_y² := ∑_{τ ∈ T_h} â_τ(y) |∇v_h|_τ|²,   (3.7)

with â_τ(y) = â_τ(ω) given by (2.7); thus ‖v_h‖_y² = Ã(y; v_h, v_h). Since we consider piecewise linear finite elements, for v_h ∈ V_h the gradient ∇v_h is constant on each element τ.
We first prove by induction on |ν| that the solution u_h(·, y) of (2.9) satisfies

    ‖∂^ν u_h(·, y)‖_y ≤ Λ_{|ν|} b^ν ‖u_h(·, y)‖_y,   (3.8)

where the sequence (Λ_n)_{n≥0} is defined recursively by Λ_0 := 1 and Λ_n := ∑_{k=0}^{n−1} (n choose k) Λ_k. Clearly (3.8) holds for |ν| = 0. For ν ≠ 0, we differentiate (2.9) using the multivariate Leibniz rule to obtain (since the right-hand side is independent of y) the identity (3.9). Now inserting v_h = ∂^ν u_h(·, y) into (3.9), keeping the term with m = ν in the outer sum on the left-hand side and moving the remaining terms to the right-hand side, and then applying the Cauchy-Schwarz inequality and a cancellation, we obtain the bound (3.10). To estimate (3.10), we have from Proposition 1 that â_τ(y) = ∑_{i=1}^M p_{τ,i} a(x_i, y) with all p_{τ,i} ≥ 0, and we recall from (1.2) and (1.6) that a(x_i, y) = exp( ∑_{j=1}^s B_{ij} y_j + Z̄_i ). Then, noting that a(x_i, y) ≥ 0, it is easy to see that, for any multiindex ν,

    |∂^ν â_τ(y)| ≤ b^ν â_τ(y).   (3.11)

Inserting this into (3.10), the estimate (3.8) then follows by induction in exactly the same way as in [11, Theorem 14]. Now, using the definition of the discrete norm (3.7) and Proposition 1, we have

    a_{min,M}(y) ‖v_h‖_V² ≤ ‖v_h‖_y²,  for all v_h ∈ V_h.   (3.12)

Hence, using the fact that u_h(·, y) is the solution of (2.9), we conclude that

    ‖u_h(·, y)‖_y² = ⟨f, u_h(·, y)⟩ ≤ ‖f‖_{V'} ‖u_h(·, y)‖_V ≤ ‖f‖_{V'} a_{min,M}(y)^{−1/2} ‖u_h(·, y)‖_y.   (3.13)

Using the same argument as for the lower bound in (3.12), together with Proposition 1 again, we obtain from (3.8) and (3.13) that

    ‖∂^ν u_h(·, y)‖_V ≤ Λ_{|ν|} b^ν ‖f‖_{V'} / a_{min,M}(y).
This, together with the estimate Λ_n ≤ n!/(log 2)^n (proved in [11, Theorem 14]), completes the proof. □

We can now use this theorem to show that F lies in the weighted Sobolev space characterised by the norm (3.1). We make use of the s-dependent quantities

    ‖b‖_{p,s} := ( ∑_{j=1}^s b_j^p )^{1/p}  for 0 < p < ∞,  and  ‖b‖_{∞,s} := max_{1≤j≤s} b_j.

Theorem 7 Suppose that ‖b‖_{1,s} is uniformly bounded with respect to s. Suppose also that α_j > b_j for all j. Then, for any f ∈ V' and any linear functional G ∈ V', the integrand F(y) := G(u_h(·, y)) satisfies ‖F‖_{s,γ} ≤ C, where C is independent of s and Φ denotes the univariate standard cumulative normal distribution function.
Proof. Using the linearity of G, together with Theorem 6, but replacing the multiindex ν by a set u ⊆ {1 : s} (i.e., restricting to the case where all ν_j ≤ 1), we obtain the estimate (3.14) for the first-order partial derivatives of F appearing in the norm (3.1), where we used an estimate for a_{min,M}(y)^{−1} of the form C exp( ∑_{j=1}^s b_j |y_j| ). Examining the right-hand side of (3.14), we see that the only factor which depends on y is exp(b · |y|) := exp( ∑_j b_j |y_j| ). An elementary calculation (see [11, Theorem 16]) shows that the resulting one-dimensional integrals are finite precisely when α_j > b_j. Since 2Φ(b_j) ≤ 1 + 2 b_j/√(2π) < exp(b_j) for all j, the product of the resulting factors over j is bounded in terms of exp(‖b‖_{1,s}). Thus, it follows from (3.14) and the definition of the norm (3.1) that ‖F‖_{s,γ} is finite. The final result, with the constant factor C being independent of s, is then a consequence of the assumption on b. □

Error estimate
In order to obtain a dimension-independent estimate for the QMC method, we need a stronger assumption on b than that used in Theorem 7. The following theorem shows that under that stronger assumption there is a choice of γ and α which ensures that the QMC error is bounded independently of s. The appropriate choice of γ is of "POD" type, which allows a good generating vector z for the QMC rule to be computed efficiently by the "component-by-component" procedure; see Remark 9 below.
Theorem 8 Under the assumptions of Theorem 7, let κ ∈ (1/2, 1), set p := 2κ/(1 + κ), and assume in addition that ‖b‖_{p,s} is uniformly bounded with respect to s. Then there exists a positive constant C(κ), depending on κ (as well as on Z, f and G) but independent of s, such that the root-mean-square error satisfies

    √( E_Δ | E[G(u_h)] − Q(z; Δ) F |² ) ≤ C(κ) n^{−1/(2κ)}.   (3.15)

Proof. Some parts of the proof are similar to that of [11, Theorem 20], and for these parts we will be brief. We remind the reader that each vector b = (b_1, ..., b_s) depends fundamentally on s; changing the value of s leads to completely different components b_j. This is different from the situation in [11], where there is just one infinite sequence b that is truncated to s terms. First, combining Theorem 5 and Theorem 7, we see that (3.15) holds with C(κ) proportional to C_s(γ, α, κ) defined in (3.4). Now, we choose the weight parameters γ to minimise C_s(γ, α, κ). This minimisation problem was solved in [11, Lemma 18], yielding the solution (3.16), which is of "product and order dependent" (POD) form. With this choice, one can show that the resulting bound can be expressed in terms of the quantity S_s(α, κ) defined in (3.17). It remains to estimate S_s(α, κ). Apart from the constraint α_j > b_j, the shape parameters α_j are still free at this stage, and so we choose them to minimise the right-hand side of (3.17). This minimisation problem is also solved in [11, Corollary 21]; the solution is given by (3.18). Now, to estimate S_s(α, κ), let b_max be an upper bound on ‖b‖_{∞,s} for all s (guaranteed by assumption). Then b_j ≤ b_max for all j = 1, ..., s and all s. For a given value of κ ∈ (1/2, 1], let α_max denote the value of (3.18) with b_j replaced by b_max. Then α_j ≤ α_max for all j = 1, ..., s and all s, and the quantity defined in (3.5), evaluated at (α_j, κ), is bounded by its value at (α_max, κ), for all j and all s. Moreover, since 2 exp(b_j²/2) Φ(b_j) ≥ 1, the modified quantities b̄_j appearing in (3.17) satisfy b_j ≤ b̄_j, and so from (3.17) we can conclude that, with p := 2κ/(1 + κ), S_s(α, κ) is bounded by a sum over subsets u of products ∏_{j∈u} b̄_j^p, up to factors depending only on κ.

Now, using the inequality

    ∑_{u ⊆ {1:s}, |u| = ℓ} ∏_{j ∈ u} a_j ≤ (1/ℓ!) ( ∑_{j=1}^s a_j )^ℓ,  for a_j ≥ 0,

(which holds since, for |u| = ℓ, each term ∏_{j∈u} a_j from the left-hand side appears in the expansion of the right-hand side exactly ℓ! times, while the right-hand side includes other nonnegative terms), and with K denoting the assumed uniform bound on ‖b‖_{p,s}^p, we obtain a bound on S_s(α, κ) by a series in K that is independent of s. The finiteness of the right-hand side follows by the ratio test, on noting that p < 1. □

Remark 9 A generating vector z ∈ N^s for a randomly shifted lattice rule with n points in s dimensions that achieves the desired error bound can be constructed using a component-by-component (CBC) algorithm: having fixed the first k − 1 components, the kth component z_k is chosen from {1, ..., n − 1} to minimise the shift-averaged worst-case error E²_{s,n,k}(z_1, ..., z_k). The error criterion involves a function θ_j(x), which is symmetric around 1/2 for x ∈ [0, 1] and can be computed for x ∈ [0, 1/2] by a formula containing a single integral. That integral only needs to be calculated once, while the general formulation [19, Equation (50)] also has an integral for the first part, which we evaluate explicitly here for our particular choice of ψ_j. Note that γ_u and α_j fundamentally depend on s through the b_j. When (and only when) the algorithm reaches k = s, the expression E²_{s,n,s}(z_1, ..., z_s) is the so-called squared shift-averaged worst-case error. See [19] for the analysis of an efficient implementation of the algorithm for POD weights (3.16), so that the cost is O(s n log n + s² n) operations using FFT. We refer to the accompanying software of [16] for an implementation.

QMC convergence in the case of circulant embedding
The circulant embedding technique is a method for computing the factorisation (1.7) efficiently, thus yielding a method for sampling the random vector Z via (1.6). We describe the process briefly here before verifying the assumptions of Theorem 8. This section is a summary of our results in [13].
The M = (m_0 + 1)^d points {x_i : i = 1, ..., M} are assumed to be uniformly spaced with spacing h_0 := 1/m_0 on a d-dimensional grid over the unit cube [0, 1]^d enclosing the domain D. Using vector notation, we may relabel the points x_1, ..., x_M to be indexed by k. Then it is easy to see that (with analogous vector notation for the rows and columns) the M × M covariance matrix R defined in (1.5) can be written as R_{k,k'} = ρ((k − k') h_0). If the vectors k are enumerated in lexicographic order, then we obtain a nested block Toeplitz matrix, where the number of nested levels is the spatial dimension d.
We extend R to a nested block circulant matrix R^ext. To do this, it is convenient to extend to the infinite grid {k h_0 : k ∈ Z^d}. Then, to define R^ext, we consider an enlarged cube [0, ℓ]^d of edge length ℓ := m h_0 ≥ 1 with integer m ≥ m_0. We assume that m_0 (and hence h_0) is fixed, and we enlarge m (or equivalently ℓ) as appropriate. We introduce a 2ℓ-periodic map on R by specifying its action on [0, 2ℓ], mapping x to min(x, 2ℓ − x). Now we apply this map elementwise and define an extended version ρ^ext of ρ accordingly. Note that ρ^ext is 2ℓ-periodic in each coordinate direction and ρ^ext(x) = ρ(x) when x ∈ [0, ℓ]^d. Then R^ext is defined to be the s × s symmetric nested block circulant matrix with s = (2m)^d, defined analogously to (3.19) by R^ext_{k,k'} = ρ^ext((k − k') h_0). It follows that R is the submatrix of R^ext in which the indices are constrained to lie in the range k, k' ∈ {0, ..., m_0}^d. Since R^ext is nested block circulant, it is diagonalisable by FFT. The following theorem is taken from [12]:

Theorem 10 R^ext has the spectral decomposition R^ext = F Λ^ext F^H, where Λ^ext is the diagonal matrix containing the eigenvalues of R^ext, which can be obtained as √s times the Fourier transform of the first column of R^ext, and F denotes the d-dimensional Fourier matrix. If the eigenvalues of R^ext are all non-negative, then the required matrix B in (1.7) can be obtained by selecting M appropriate rows of B^ext := (Re(F) + Im(F)) (Λ^ext)^{1/2}. The use of FFT allows fast computation of the matrix–vector product B^ext y for any vector y, which then yields the product B y needed for sampling the random field in (1.6). Our algorithm from [13] for obtaining a minimal positive definite R^ext is given in Algorithm 1, and our algorithm from [13] for sampling an instance of the lognormal random field is given in Algorithm 2.
Note that the normalisation used within the FFT routine differs among particular implementations; here we assume the Fourier transform to be unitary. In the case of QMC sampling, the random sample y in Step 1 of Algorithm 2 is replaced by a randomly shifted lattice point from [0, 1]^s, mapped to R^s elementwise by the inverse of the cumulative normal distribution function (see (1.12)). The relative sizes of the quantities b_j = ‖B_j‖_∞ (as defined in (3.6)) determine the ordering of the QMC variables, so as to benefit from the good properties of lattice rules in earlier coordinate directions in the construction of the generating vector in Remark 9.
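This QMC sampling step can be sketched in a few lines (the generating vector below is illustrative, not the output of a CBC construction): one randomly shifted lattice point is formed and mapped through the inverse normal CDF, producing the vector that plays the role of y in Step 1 of Algorithm 2.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
s, n = 4, 64
z = np.array([1, 19, 27, 11])         # illustrative generating vector (assumption)
shift = rng.random(s)                  # one uniform random shift in [0,1)^s

i = 5                                  # index of the lattice point
frac = np.modf(i * z / n + shift)[0]   # frac(i*z/n + shift), a point in [0,1)^s
y = norm.ppf(frac)                     # elementwise inverse normal CDF -> R^s
```

Averaging the resulting estimates over several independent shifts then gives the unbiased randomised QMC estimator used in the experiments below.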
We prove in [13] (under mild conditions) that Algorithm 1 always terminates. Moreover, in many cases the required m (equivalently ℓ) can be quite small. Theorem 12 below gives an explicit lower bound for the required value of ℓ.

Example 11
The Matérn family of covariances is defined by ρ(x) = σ² (2^{1−ν}/Γ(ν)) (√(2ν) |x|/λ)^ν K_ν(√(2ν) |x|/λ), where Γ is the Gamma function, K_ν is the modified Bessel function of the second kind, σ² is the variance, λ is the correlation length and ν ≥ 1/2 is a smoothness parameter. The limiting cases ν → 1/2 and ν → ∞ correspond to the exponential and Gaussian covariances, respectively; see, e.g., [11], which however uses a slightly different scaling.
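For concreteness, here is one common parametrisation of this family (since scalings differ between references, treat the √(2ν) factor as an assumption of this sketch), with the r = 0 case handled by continuity:

```python
import numpy as np
from scipy.special import gamma as Gamma, kv

def matern(r, nu, lam, sigma2):
    """Matern covariance rho(r) with variance sigma2, correlation length lam and
    smoothness nu, using the scaling with factor sqrt(2*nu) (assumption)."""
    r = np.asarray(r, dtype=float)
    scaled = np.sqrt(2.0 * nu) * r / lam
    # avoid the indeterminate 0 * inf at r = 0; the limit there is sigma2
    safe = np.where(scaled > 0, scaled, 1.0)
    val = sigma2 * (2.0 ** (1.0 - nu) / Gamma(nu)) * safe ** nu * kv(nu, safe)
    return np.where(scaled > 0, val, sigma2)
```

With this scaling, ν = 1/2 reproduces the exponential covariance σ² exp(−r/λ) exactly, consistent with the limiting case mentioned above.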
The following result, proved in [13, Thm. 2.9], shows that the growth of ℓ with respect to the mesh size h_0 and with respect to the parameters in the Matérn family is moderate. In particular, for fixed ν < ∞, it establishes a bound on ℓ that grows only logarithmically with λ/h_0 and gets smaller as λ decreases. Experiments illustrating the sharpness of this bound are given in [13].
In order to verify the QMC convergence estimate given in Theorem 8 in the case of circulant embedding, we need to bound b_{p,s}, where b is defined in (3.6). Since every entry of Re(F) + Im(F) is bounded in absolute value by √(2/s), we obtain the bound (3.22), where Λ^ext_{s,j}, j = 1, ..., s, are the eigenvalues of the nested block circulant matrix R^ext. Notice that we added the subscript s explicitly to the notation to stress the dependence of these eigenvalues on s. A sufficient condition to ensure the uniform boundedness of b_{p,s} required in Theorem 8 is that there exists a constant C > 0, independent of s, such that (3.23) holds. In [13, §3], we conjecture (with supporting mathematical arguments and empirical evidence) that the eigenvalues Λ^ext_{s,j}, when rearranged in non-increasing order, decay like j^{−(1+2ν/d)} in the case of the Matérn covariance. This is the same as the decay rate of both the eigenvalues of the original nested block Toeplitz matrix R and the KL eigenvalues of the underlying continuous field Z. Under this conjecture, the smallest value of p for which (3.23) holds is just bigger than 2/(1 + 2ν/d). In turn, this yields a theoretical convergence rate of nearly O(n^{−min(ν/d,1)}) in Theorem 8 above, for any ν > d/2, independently of s. To see this, recall that the convergence rate −1/(2κ) with respect to n in Theorem 8 is related to p via p = 2κ/(1 + κ) with κ ∈ (1/2, 1). These bounds on κ imply that, for the conjectured rate of decay of the eigenvalues Λ^ext_{s,j}, Theorem 8 is only applicable for ν > d/2. These conjectures will be investigated in detail in our numerical experiments in the next section. As we will see there, the theoretically predicted rates may be pessimistic.
In the experiments there, we see nearly optimal QMC convergence, i.e., O(n^{−1}), even when ν < d, and convergence at least as good as for standard MC, i.e., O(n^{−1/2}), even when ν < d/2. All these findings are in line with the results we obtained in the case of KL expansions in [11], and they guarantee dimension-independent, optimal QMC convergence for sufficiently large smoothness parameter ν.
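The conjectured eigenvalue decay can be checked cheaply in one spatial dimension, where the circulant eigenvalues are a single FFT of the periodised covariance. For the exponential covariance (Matérn with ν = 1/2), d = 1 gives the predicted exponent −(1 + 2ν/d) = −2; the grid and extension sizes below are illustrative assumptions.

```python
import numpy as np

m0, m = 64, 256                        # grid on [0,1], extended interval length 4 (assumption)
h0, lam, sigma2 = 1.0 / m0, 0.5, 1.0
t = np.arange(2 * m) * h0
dist = np.minimum(t, 2 * m * h0 - t)   # 2*ell-periodic distance map
r = sigma2 * np.exp(-dist / lam)       # exponential covariance: Matern with nu = 1/2
eigs = np.sort(np.fft.fft(r).real)[::-1]
# empirical decay exponent of the j-th largest eigenvalue between j = 32 and j = 256
slope = np.log(eigs[256] / eigs[32]) / np.log(256 / 32)
```

The fitted slope comes out close to the conjectured −2, with deviations caused by pre-asymptotic frequencies and aliasing from the discrete transform.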

Numerical Experiments
In this section we perform numerical experiments on problem (1.1) in 2D and 3D, illustrating the power of the proposed algorithm. Our quantity of interest is the average value of the solution u over some measurable subset T ⊆ D, with D being an L-shaped domain with a hole in 2D or the unit cube in 3D; all details are specified below. In both cases, the domain D is contained in the unit cube [0, 1]^d, as assumed.
Random field generation. In all experiments the random coefficient a is of the form (1.2), where Z is a Gaussian random field with the Matérn covariance (3.21). We take the mean Z̄ to be 0 and the variance to be σ² = 0.25, and we consider two different values of the correlation length, λ ∈ {0.2, 0.5}, combined with three different values of the smoothness parameter, ν ∈ {0.5, 2, 4} in 2D and ν ∈ {0.5, 3, 4} in 3D (thus illustrating the cases ν < d, ν = d and ν > d in each dimension). The forcing term is taken to be f ≡ 1.
For different values of m_0, we first obtain values of the random field on a uniform grid with (m_0 + 1)^d points on the unit cube [0, 1]^d by circulant embedding, as described in §3.4. We choose m_0 ∈ {12, 24, 48, 96} in 2D and m_0 ∈ {7, 14, 28} in 3D. The necessary edge length ℓ = m/m_0 of the extended cube [0, ℓ]^d to ensure positive definiteness, where m ≥ m_0, depends on the values of d, λ and ν, and affects the dimensionality s = (2m)^d of y. This dependence is investigated in detail in [13] (see Theorem 12). In Table 1 we summarise the values of s for the different combinations of parameters.

Table 1: The values of s needed to ensure positive definiteness for different combinations of parameters in 2D and 3D. The numbers in round brackets show the cost of random field generation as a fraction of the total computational cost per sample; these numbers increase with increasing s (from left to right) for a fixed FE mesh. For reference, we also provide the number of elements nbe for each of the FE meshes.
FE solution of the PDE. For each realisation of the random field (i.e., for each y), we solve the PDE using a finite element (FE) method with piecewise linear elements on an unstructured grid produced with the help of the Matlab PDE Toolbox. The quadrature rule for the matrix assembly is based on the midpoint rule, where the values of the random field at the centroids of the elements are obtained by multilinear interpolation of the values on the uniform grid computed with circulant embedding (see (2.6) and (2.7)). In order to balance the quadrature error and the FE discretisation error in light of Lemma 2 and Theorem 3, the maximum FE mesh diameter is chosen such that h ≈ √d h_0 = √d/m_0. In particular, we choose h ∈ {0.12, 0.06, 0.03, 0.015} in 2D and h ∈ {0.24, 0.12, 0.06} in 3D, for the respective values of m_0 above. In 2D, the Matlab function adaptmesh is used to build a family of adaptive meshes for the L-shaped domain with a hole (see Fig. 1). We use the same adaptive mesh, constructed with a ≡ 1, for all realisations. To find meshes with our desired maximum mesh diameters h, we gradually increase the maxt parameter of the Matlab adaptmesh command. Fig. 2 zooms in on the shaded region in the bottom left corner of each of the adaptive meshes to show the centroids of the triangles in relation to the uniform grids. The PDE is solved with the Matlab function assempde. For the 3D problem, we use the Matlab PDE Toolbox to mesh the domain and solve the PDE. The integral in (4.1) is approximated by applying the midpoint rule on each of the elements in T. In 2D, the resulting linear system is solved with the default sparse direct solver ("backslash") in Matlab. We believe that the same solver is used by the Matlab PDE Toolbox in our 3D experiments, but we could not verify this.
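The interpolation step can be sketched as follows in 2D (a hypothetical helper for illustration, not the paper's Matlab code): values known on the uniform circulant-embedding grid are interpolated bilinearly at arbitrary points such as element centroids.

```python
import numpy as np

def bilinear(grid, h0, pts):
    """Bilinear interpolation of values on a uniform grid over [0,1]^2.

    grid has shape (m0+1, m0+1) with grid[i, j] the value at (i*h0, j*h0);
    pts has shape (npts, 2) with coordinates in [0, 1]^2."""
    m0 = grid.shape[0] - 1
    ij = np.clip((pts / h0).astype(int), 0, m0 - 1)   # cell containing each point
    t = pts / h0 - ij                                  # local coordinates in [0,1]^2
    i, j = ij[:, 0], ij[:, 1]
    tx, ty = t[:, 0], t[:, 1]
    return ((1 - tx) * (1 - ty) * grid[i, j] + tx * (1 - ty) * grid[i + 1, j]
            + (1 - tx) * ty * grid[i, j + 1] + tx * ty * grid[i + 1, j + 1])
```

Since bilinear interpolation reproduces affine functions exactly, a quick sanity check is to interpolate a linear field and compare with its exact values.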
As we can see from the fraction of time needed to construct the random field, shown in brackets in Table 1 for each case, the majority of the time is spent on assembling and solving the FE systems. As expected, this is even more pronounced in 3D, since sparse direct solvers are known to be significantly more expensive for 3D FE matrices with respect to the number of unknowns (both in terms of the order and in terms of the asymptotic constant), while the cost of the FFT factorisation grows log-linearly with the number of unknowns in both 2D and 3D. In any case, the random field generation is insignificant in the majority of cases and takes less than 50% of the computational time in all cases.
Construction of lattice sequences. We approximate the expected value of (4.1) by randomly shifted lattice rules obtained using the fast CBC code from the QMC4PDE website https://people.cs.kuleuven.be/~dirk.nuyens/qmc4pde/ (see also [16]). A typical call to the QMC4PDE construction script is

./lat-cbc.py --s=2000 --b_file=[f] --m=16 --d1=1 --d2=2 --a3=1 --outputdir=[o]

where s=2000 specifies an initial maximum number of dimensions and [f] is a file containing the calculated values of b_j in (3.6), in nonincreasing order, for a particular case of d, λ and ν. See [16] for an explanation of the other parameters.
In specifying parameters for the CBC construction, we follow the theory of [16] as closely as possible, but we make a couple of modifications for practical reasons. Firstly, the implementation follows [8] to construct "embedded" lattice sequences in base 2, so that in practice we can increase n gradually without discarding existing function evaluations. At every power of 2, the points form a full lattice rule that complies with the theory from §3. Secondly, with the POD weights in (3.16), the CBC construction of the generating vector z has a cost of O(s²n + sn log n) operations, which becomes quite high for the large s we are considering (see Table 1). Thus, we only carry out the CBC construction up to a certain dimension s* and then generate the remaining components of z randomly. In particular, we stop the CBC algorithm at the first component s* at which the generating vector has a repeated entry: repeated components in z yield bad two-dimensional projections of the lattice points, and randomly generated components are intuitively better in that situation. The largest switch-over dimension over all cases in Table 1 is s* = 1811. The only two cases where we did not need to add random components were ν = 2 and ν = 4, for d = 2, m_0 = 12 and λ = 0.2.
Estimation of the (Q)MC error. For fixed h, we can compute the standard error of the QMC estimate of the expected value of (4.1) by using a number of random shifts. Specifically, for each case we took q = 64 independent random shifts of one n-point lattice rule, giving q independent approximations Q_1, ..., Q_q to the expected value. We take their average Q̄ as our final approximation, and we estimate the standard error of Q̄ by the sample standard deviation of the Q_k divided by √q. The total number of function evaluations in this case is N = qn. According to our theory (see also [16]), the convergence rate of our randomised QMC method is of the order q^{−1/2} n^{−r} = q^{r−1/2} N^{−r}, with r ≈ min(ν/d, 1). Hence, for r > 1/2, the constant in any of the convergence graphs with respect to N depends on q^{r−1/2}. To provide less erratic curves, the number of random shifts is chosen fairly large here; in practice, q = 16 shifts would be sufficient. This would effectively push all convergence graphs down, leading to bigger gains for QMC.
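The estimator and its standard error take only a few lines (a generic sketch; Q below stands for the q shift-wise approximations):

```python
import numpy as np

def shift_averaged_estimate(Q):
    """Combine q independent randomly shifted lattice estimates Q_1, ..., Q_q:
    return their average and the standard error sqrt(sum (Q_k - Qbar)^2 / (q*(q-1)))."""
    Q = np.asarray(Q, dtype=float)
    q = Q.size
    Qbar = Q.mean()
    stderr = np.sqrt(((Q - Qbar) ** 2).sum() / (q * (q - 1)))
    return Qbar, stderr
```

Because each Q_k is an unbiased estimate of the expected value, the standard error computed this way is a statistically valid error indicator for the randomised QMC rule, unlike for a single deterministic lattice rule.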
We compare QMC with the simple Monte Carlo (MC) method (1.14) based on N random samples. Denoting the function values for these samples by Y_1, ..., Y_N, the MC approximation of the integral is the sample average Ȳ = (1/N) Σ_{i=1}^N Y_i. In the figures below, we plot the relative standard error, obtained by dividing the estimated standard error by the estimated mean.
Computing environment. All our computations were run as serial jobs on reserved 8-core Intel Xeon E5-2650v2 2.60 GHz nodes of the computational cluster Katana at UNSW (or on almost identical hardware). Since they are embarrassingly parallel, both the MC and QMC simulations could easily be parallelised with roughly linear speedup. We chose to run different jobs in parallel instead of parallelising individual jobs, and to report the actual serial computation times for our test cases.

Results for an L-shaped domain in 2D
In this example the domain D is the complex 2D domain shown in Fig. 1: an L-shaped region with a hole. We consider five choices for the averaging domain T ⊆ D in (4.1):
T1 the full domain,
T2 the bottom left corner with a circular segment cut out,
T3 the lower right interior square,
T4 the upper left interior square, in a position symmetric to T3,
T5 the L-shape near the reentrant corner.
Fig. 1 shows the different averaging domains T, as well as some of the adaptive meshes that were used. Note that the circular sections of the boundary of D are approximated polygonally and that the averaging domains T2, ..., T5 are resolved on all meshes. The meshes are adapted to capture the loss of regularity near the reentrant corner, but nevertheless the number of elements grows roughly like O(h^{−2}), the same as for a uniform family of meshes with mesh size h. We specify the domain in Matlab by means of constructive solid geometry (CSG), i.e., the union and subtraction of the basic pieces. This ensures that all the averaging domains T are covered by complete elements.
Mesh errors. Before we compare the performance of our QMC method with the basic MC method, let us first estimate the discretisation errors for each of the adaptive meshes. In Table 2, we present results for T1 and T5 for the case λ = 0.2 and ν = 2. The estimates of E[G(u_h)], obtained using QMC, are stated together with the estimated standard error for each mesh. We use sufficiently many QMC cubature points, so that the significant figures of the estimates are not affected by QMC errors. The product of m_0 and h is kept fixed (approximately equal to √d), as discussed above. From the results we can clearly see that the convergence rate of the discretisation error is O(h²) on the given meshes. This shows that the mesh refinement near the reentrant corner is working optimally. Using Richardson extrapolation, we can thus compute a higher-order approximation of the limit of E[G(u_h)], stated in the last row of Table 2. The relative error with respect to this extrapolated value is stated in the last column for both T1 and T5.
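The extrapolation used here follows the standard pattern: if E[G(u_h)] = G + C h² + o(h²), then combining estimates at mesh sizes h and h/2 eliminates the leading error term. A minimal generic sketch, with the observed convergence rate as an input:

```python
def richardson(G_h, G_h2, rate=2.0):
    """Richardson extrapolation from values at mesh sizes h and h/2, assuming an
    error expansion G(h) = G + C*h**rate + higher-order terms: the combination
    (2**rate * G(h/2) - G(h)) / (2**rate - 1) cancels the C*h**rate term."""
    f = 2.0 ** rate
    return (f * G_h2 - G_h) / (f - 1.0)
```

For a quantity that behaves exactly like G + C h², the extrapolated value recovers G to machine precision.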
The behaviour is similar for the other quantities of interest and for the other values of λ when ν = 2. For ν = 0.5, on the other hand, the solution is globally only in H^{3/2}, so that the local mesh refinement near the reentrant corner plays no role and the convergence of E[G(u_h)] is only O(h). Thus, to achieve an accuracy comparable to the QMC errors we quote below, we would in practice need much finer meshes for ν = 0.5. However, since this would not affect the behaviour of the QMC cubature errors, we did not do so.

QMC convergence rates. For the remainder we demonstrate the dimension independence of our QMC method and its superiority over basic MC. In Fig. 3, we plot the relative standard error in (4.1) with T = T1 against the total number of PDE solves (top) and against the total calculation time (bottom). We consider six combinations of the values of ν and λ and plot the graphs for four levels of mesh refinement. (The MC estimates were not computed on the finest mesh.) The convergence of the MC method is proportional to N^{−0.5}, as expected. The convergence of the QMC method ranges from O(N^{−0.72}) up to O(N^{−0.89}). For example, for ν = 0.5 and λ = 0.2, to achieve a relative standard error of 10^{−4} we need about 10^6 PDE solves with the MC method, while the QMC method needs only about 3 · 10^4 PDE solves. Also in terms of computational time, all the results consistently show huge savings for the QMC method over the MC method, even with the relatively large number of q = 64 random shifts. We note that the convergence graphs are meant to illustrate the convergence behaviour; in practice one would not try to achieve such high precision, especially not on the coarser meshes. One would rather aim to balance the QMC errors with the discretisation errors in Table 2. From the theory, we expect the smoothness ν of the random field to have an effect on the convergence rate of QMC. Specifically, the bound in Theorem 8 suggests a rate of only O(N^{−min(ν/2,1)}) in 2D, as discussed in §3.4. This effect is not observed in the graphs; we even observe excellent convergence behaviour for the case ν = 0.5, to which our theory does not apply. On the other hand, we see clearly that a smaller correlation length λ corresponds to a better convergence rate, in full agreement with our findings in [12].
A more important observation is the overlay of the convergence lines for the different meshes in the plots of relative standard error versus number of PDE solves in Fig. 3. For example, for the case λ = 0.5 and ν = 4, the dimensionality s increases from about 9 thousand to 1 million as we increase m_0 from 12 to 96 (see Table 1), while the convergence rate and the asymptotic constant of the relative standard error are clearly independent of the increasing dimension.
In Fig. 4, we confirm that this superiority is independent of the quantity of interest by presenting graphs analogous to those for T1 also for T2 and T5. We do not include the results for the two symmetric squares T3 and T4, which look very similar.

Results for the unit cube in 3D
Our second example is the unit cube in 3D, and our quantity of interest is (4.1) with T = D = [0, 1]³. We use the mesh generator and the solver from the Matlab PDE Toolbox to obtain three meshes with the desired maximum mesh diameters h and to compute the solution. To evaluate the random field Z at the centroids of the elements, we used the function interpolateSolution from the toolbox.
In Fig. 5 we again plot the relative standard error for six combinations of parameters against the total number of PDE solves and against the calculation time. We observe convergence rates from N^{−0.73} up to N^{−0.84} for our QMC rules, and the expected N^{−0.5} for the plain MC method. The lattice sequences were again constructed with the actual values of b_j from (3.6), except for the combination of λ = 0.5 and m_0 = 28, where we get near-astronomical dimensionalities ranging from about 15 to 37 million as ν increases (see Table 1); for these cases we replaced b_j by the convenient upper bound given in (3.22).

Algorithm 1
Input: d, m_0, and covariance function ρ.
1. Set m = m_0.
2. Calculate r, the first column of R^ext in (3.20).
3. Calculate v, the vector of eigenvalues of R^ext, by d-dimensional FFT on r.
4. If the smallest eigenvalue is < 0, then increment m and go to Step 2.
Output: m, v.

Algorithm 2
Input: d, m_0, mean field Z̄, and m and v obtained by Algorithm 1.
1. With s = (2m)^d, sample an s-dimensional normal random vector y.
2. Update y by elementwise multiplication with √v.
3. Set w to be the d-dimensional FFT of y.
4. Update w by adding its real and imaginary parts.
5. Obtain z by extracting the appropriate M = (m_0 + 1)^d entries of w.
6. Update z by adding Z̄.
Output: exp(z).
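A compact numpy transcription of the two algorithms may clarify the steps (a sketch under the unitary-FFT convention noted earlier; the periodisation via min(x, 2ℓ − x) and the exponential test covariance used below are assumptions for illustration):

```python
import numpy as np

def circulant_eigs(d, m0, m, rho):
    """Eigenvalues of R_ext via d-dimensional FFT of its first column (Theorem 10)."""
    h0 = 1.0 / m0
    t = np.arange(2 * m) * h0
    per = np.minimum(t, 2 * m * h0 - t)           # 2*ell-periodic map (assumption)
    grids = np.meshgrid(*([per] * d), indexing="ij")
    r = rho(np.sqrt(sum(g ** 2 for g in grids)))   # first column of R_ext
    return np.fft.fftn(r).real                     # = sqrt(s) * unitary FFT of r

def algorithm1(d, m0, rho, max_m=512):
    """Find the smallest m >= m0 giving a nonnegative definite embedding."""
    m = m0
    while m <= max_m:
        v = circulant_eigs(d, m0, m, rho)
        if v.min() >= 0:
            return m, v
        m += 1
    raise RuntimeError("no nonnegative embedding found up to max_m")

def algorithm2(d, m0, m, v, zbar, y):
    """One lognormal field sample from a normal vector y of length s = (2m)**d."""
    s = (2 * m) ** d
    w = np.fft.fftn(np.sqrt(v) * y.reshape(v.shape)) / np.sqrt(s)  # unitary FFT
    w = w.real + w.imag                            # B_ext y = (Re + Im)(F Lambda^{1/2}) y
    z = w[(slice(0, m0 + 1),) * d].ravel() + zbar  # extract the M grid entries
    return np.exp(z)
```

As a correctness check, feeding the standard unit vectors through Algorithm 2 recovers the columns of B, and B Bᵀ should reproduce the covariance matrix R restricted to the sampling grid.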
It is important to investigate for what values of p this inequality holds: the smaller the value of p, the faster the convergence in Theorem 8.

Table 2: Mesh convergence for the case λ = 0.2 and ν = 2 for T1 and T5. The estimates of E[G(u_h)] (stated together with one standard deviation) are computed with our randomised lattice rule with n sufficiently large, such that the standard error is significantly smaller than the discretisation error. Richardson extrapolation is used to compute a more accurate estimate of the limit of E[G(u_h)] as h → 0 (final row). The columns denoted "rel. h-error" give the relative error with respect to these extrapolated estimates.