
Fast random field generation with H-matrices


A Correction to this article was published on 21 March 2019


Abstract

We use H-matrix technology to compute the approximate square root of a covariance matrix at linear cost. This allows us to generate normal and log-normal random fields on general point sets with optimal cost. We derive rigorous error estimates which show convergence of the method. Our approach requires only mild assumptions on the covariance function and on the point set. It is therefore an attractive alternative to the circulant embedding approach, which applies only to regular grids and stationary covariance functions.
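To fix ideas, the following Python sketch (ours, with a dense matrix square root and an exponential covariance as illustrative choices) shows the sampling strategy that the paper accelerates: the H-matrix method replaces the \(O(N^3)\) factorization below by an approximate square root of linear cost.

```python
# A dense-matrix sketch (ours) of the sampling idea: the exponential
# covariance below is an illustrative choice, and scipy's O(N^3) sqrtm
# stands in for the H-matrix square root of linear cost from the paper.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
N = 500
points = rng.random((N, 2))                 # a general point set in [0,1]^2

# covariance matrix C_ij = rho(x_i, x_j) with rho(x, y) = exp(-|x - y|)
dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
C = np.exp(-dist)

B = sqrtm(C).real                           # dense square root, cubic cost
z = B @ rng.standard_normal(N)              # one normal field on the points
w = np.exp(z)                               # the corresponding log-normal field

# sanity check: the empirical covariance of many samples approaches C
Z = B @ rng.standard_normal((N, 2000))
print(np.abs(Z @ Z.T / 2000 - C).max())     # decays like 1/sqrt(#samples)
```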




Change history

  • 21 March 2019

    The corrected version states the parameter range as \(p\in 2{{\mathbb {N}}}\) instead of \(p\in {{\mathbb {N}}}\). The effect is to disallow non-smooth norms such as the \(\ell ^1\)-norm for the distance measure.

References

  1. Babuška, I., Andersson, B., Smith, P.J., Levin, K.: Damage analysis of fiber composites. I. Statistical analysis on fiber scale. Comput. Methods Appl. Mech. Eng. 172(1–4), 27–77 (1999)


  2. Bernardo, J.M., Berger, J.O., Dawid, A.P., Smith, A.F.M. (eds.): Bayesian Statistics 6, Proceedings of the 6th Valencia International Meeting held in Alcoceber, 6–10 June 1998. The Clarendon Press, Oxford University Press, New York, pp. x+867 (1999)

  3. Börm, S.: Efficient Numerical Methods for Non-local Operators. Volume 14 of EMS Tracts in Mathematics. European Mathematical Society (EMS), Zürich (2010)

  4. Chan, G., Wood, A.T.A.: Algorithm AS 312: an algorithm for simulating stationary Gaussian random fields. J. R. Stat. Soc. Ser. C (Appl. Stat.) 46(1), 171–181 (1997)


  5. Dietrich, C.R., Newsam, G.N.: Fast and exact simulation of stationary Gaussian processes through circulant embedding of the covariance matrix. SIAM J. Sci. Comput. 18(4), 1088–1107 (1997)


  6. Dölz, J., Harbrecht, H., Schwab, Ch.: Covariance regularity and H-matrix approximation for rough random fields. Numer. Math., pp. 1–27 (2016)

  7. Elishakoff, I. (ed.): Whys and Hows in Uncertainty Modelling: Probability, Fuzziness and Anti-optimization. Volume 388 of CISM Courses and Lectures. Springer, Vienna (1999)

  8. Frommer, A.: Monotone convergence of the Lanczos approximations to matrix functions of Hermitian matrices. Electron. Trans. Numer. Anal. 35, 118–128 (2009)


  9. Graham, I.G., Kuo, F.Y., Nuyens, D., Scheichl, R., Sloan, I.H.: Quasi-Monte Carlo methods for elliptic PDEs with random coefficients and applications. J. Comput. Phys. 230(10), 3668–3694 (2011)


  10. Grasedyck, L., Hackbusch, W.: Construction and arithmetics of \(H\)-matrices. Computing 70(4), 295–334 (2003)


  11. Hackbusch, W.: Hierarchical Matrices: Algorithms and Analysis. Volume 49 of Springer Series in Computational Mathematics. Springer, Heidelberg (2015)


  12. Harbrecht, H., Peters, M., Siebenmorgen, M.: Efficient approximation of random fields for numerical applications. Numer. Linear Algebra Appl. 22(4), 596–617 (2015)


  13. Higham, N.J.: Computing real square roots of a real matrix. Linear Algebra Appl. 88(89), 405–430 (1987)


  14. Higham, N.J.: Stable iterations for the matrix square root. Numer. Algorithms 15(2), 227–242 (1997)


  15. Kenney, C., Laub, A.J.: Rational iterative methods for the matrix sign function. SIAM J. Matrix Anal. Appl. 12(2), 273–291 (1991)


  16. Khoromskij, B.N., Litvinenko, A., Matthies, H.G.: Application of hierarchical matrices for computing the Karhunen–Loève expansion. Computing 84(1–2), 49–67 (2009)


  17. Moret, I.: Rational Lanczos approximations to the matrix square root and related functions. Numer. Linear Algebra Appl. 16(6), 431–445 (2009)


  18. Schmitt, B.A.: Perturbation bounds for matrix square roots and Pythagorean sums. Linear Algebra Appl. 174, 215–227 (1992)



Author information


Correspondence to Michael Feischl.

Appendices

Appendix—Proof of Lemma 1

The following lemma is an elementary statement on holomorphic functions.

Lemma 9

Let \(f:O\rightarrow {{\mathbb {C}}}\) be a continuous function on the domain \(O\subset {{\mathbb {C}}}^n\) which is holomorphic in O in all variables \({\varvec{x}}_i\), \(i\in \{1,\ldots ,n\}\), i.e.,

$$\begin{aligned} {\varvec{x}}_i\mapsto f({\varvec{x}}_1,\ldots ,{\varvec{x}}_i,\ldots ,{\varvec{x}}_n) \end{aligned}$$

is holomorphic in \(\big \{{\varvec{x}}_i\in {{\mathbb {C}}}\,:\,({\varvec{x}}_1,\ldots ,{\varvec{x}}_i,\ldots ,{\varvec{x}}_n)\in O\big \}\) for all \({\varvec{x}}_1,\ldots ,{\varvec{x}}_{i-1},{\varvec{x}}_{i+1},\ldots ,{\varvec{x}}_n\in {{\mathbb {C}}}\). Then, for all multi-indices \(\alpha \in {{\mathbb {N}}}_0^n\), the function \(\partial _{\varvec{x}}^\alpha f\) is holomorphic in O in all variables \({\varvec{x}}_i\), \(i\in \{1,\ldots ,n\}\) as defined above.

Proof

The result is proved by induction on \(|\alpha |_1\). Obviously, for \(|\alpha |_1=0\), \(\partial _{\varvec{x}}^\alpha f=f\) and the statement is true. Assume the statement holds for all \(|\alpha |_1\le k\) and choose some \(\alpha \in {{\mathbb {N}}}_0^n\) with \(|\alpha |_1=k+1\). Then, we have for some \(i\in \{1,\ldots ,n\}\) and some \(\alpha _0\in {{\mathbb {N}}}_0^n\) with \(|\alpha _0|_1=k\) that

$$\begin{aligned} \partial _{\varvec{x}}^\alpha f =\partial _{{\varvec{x}}_i}\partial _{\varvec{x}}^{\alpha _0}f. \end{aligned}$$

Since \(\partial _{\varvec{x}}^{\alpha _0}f\) is holomorphic in O in all variables by the induction hypothesis, \(\partial _{\varvec{x}}^\alpha f\) is clearly holomorphic in O at least in \({\varvec{x}}_i\) (derivatives of holomorphic functions are holomorphic). To prove the statement for all other variables, we employ Cauchy’s integral formula to obtain

$$\begin{aligned} \partial _{\varvec{x}}^\alpha f({\varvec{x}})=\partial _{{\varvec{x}}_i}\partial _{\varvec{x}}^{\alpha _0}f = \frac{1}{2\pi i}\int _{\partial B_\varepsilon ({\varvec{x}}_i)} \frac{ \partial _{\varvec{x}}^{\alpha _0}f({\varvec{x}}_1,\ldots ,{\varvec{x}}_{i-1},{\varvec{z}},{\varvec{x}}_{i+1},\ldots ,{\varvec{x}}_n)}{({\varvec{z}}-{\varvec{x}}_i)^2}\,\mathrm{d}{\varvec{z}}, \end{aligned}$$

for some \(\varepsilon >0\) with \(B_\varepsilon ({\varvec{x}}_i)\subset {{\mathbb {C}}}\) being the ball with radius \(\varepsilon \). The integrand is holomorphic in all variables \({\varvec{x}}_j\), \(j\ne i\). Hence, we conclude that \(\partial _{\varvec{x}}^\alpha f({\varvec{x}})\) is holomorphic in all variables, which proves the assertion. \(\square \)

The following result is elementary but technical.

Lemma 10

For \(n,p\in {{\mathbb {N}}}\), define the set \(M:=\big \{{\varvec{x}}\in {{\mathbb {C}}}^{n}\,:\,\mathrm{real}(\sum _{i=1}^n {\varvec{x}}_i^p) \le 0\big \}\). Then, the set \(({{\mathbb {R}}}^n)_+:=\big \{{\varvec{x}}\in {{\mathbb {R}}}^n{\setminus }\{0\}\,:\,{\varvec{x}}_i\ge 0\big \}\) satisfies \(({{\mathbb {R}}}^n)_+\cap M=\emptyset \) and

$$\begin{aligned} \mathrm{dist}(M,{\varvec{x}})\ge |\sin \left( \frac{\pi }{2p}\right) ||{\varvec{x}}|\quad \text {for all }{\varvec{x}}\in ({{\mathbb {R}}}^n)_+. \end{aligned}$$
Fig. 6: The situation of the proof of Lemma 10. The distance between \(\partial C_p\) and x is \(x\sin (\pi /(2p))\)

Proof

Let \({\varvec{x}}\in ({{\mathbb {R}}}^n)_+\). Then, we have \(\sum _{i=1}^n {\varvec{x}}_i^p>0\) and hence \({\varvec{x}}\notin M\). It is easy to see that the cone \(C_p:=\big \{r\exp (i\phi )\,:\,r>0,\,\phi \in (-\frac{\pi }{2p},\frac{\pi }{2p})\big \}\subset {{\mathbb {C}}}\) satisfies \(\mathrm{real}(x^p)>0\) for all \(x\in C_p\). Thus, we have that

$$\begin{aligned} C_p^n:=\Big (\prod _{i=1}^n(\{0\}\cup C_p)\Big ){\setminus } \{0\}\subset {{\mathbb {C}}}^n \end{aligned}$$

satisfies \(C_p^n\cap M=\emptyset \).

Moreover, a simple geometric argument (see Figure 6) shows that all \(x>0\) satisfy

$$\begin{aligned} \mathrm{dist}(x,\partial C_p) = x \sin (\pi /(2p)). \end{aligned}$$

Since \(({{\mathbb {R}}}^n)_+\subseteq C_p^n\), this implies

$$\begin{aligned} \mathrm{dist}(M,{\varvec{x}})\ge \mathrm{dist}(\partial C_p^n,{\varvec{x}})=\Big (\sum _{i=1}^n {\varvec{x}}_i^2\sin (\pi /(2p))^2\Big )^{1/2}=|\sin (\pi /(2p))||{\varvec{x}}|. \end{aligned}$$

This concludes the proof. \(\square \)
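The distance formula used in the proof is easy to check numerically. The following sketch (ours, with illustrative sample values) compares the distance from \(x>0\) to the boundary ray of \(C_p\) with \(x\sin (\pi /(2p))\).

```python
# A numerical check (ours, illustrative values) of the geometric fact used
# above: for x > 0, the distance from x to the boundary ray of the cone C_p,
# {r*exp(i*pi/(2p)) : r >= 0}, equals x*sin(pi/(2p)).
import numpy as np

x = 1.7                                     # any positive real number
for p in (1, 2, 3, 5):
    theta = np.pi / (2 * p)
    r = np.linspace(0.0, 10.0, 2_000_001)   # fine grid along the boundary ray
    ray = r * np.exp(1j * theta)
    d_numeric = np.abs(x - ray).min()       # distance of x to the ray
    print(p, d_numeric, x * np.sin(theta))  # the two values agree
```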

Products of asymptotically smooth functions are again asymptotically smooth. This is shown in the next lemma.

Lemma 11

Let \(f,g:D\times D\rightarrow {{\mathbb {R}}}\) be two asymptotically smooth functions (1). Then, their product fg also satisfies (1).

Proof

To simplify the notation, we consider f and g as functions of one variable \({\varvec{z}}=({\varvec{x}},{\varvec{y}})\in D\times D\subset {{\mathbb {R}}}^{2d}\). For multi-indices \(\alpha ,\beta \in {{\mathbb {N}}}^{2d}\), define

$$\begin{aligned} \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) :=\prod _{i=1}^{2d}\left( {\begin{array}{c}\alpha _i\\ \beta _i\end{array}}\right) . \end{aligned}$$

Note that there holds \(\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \le \left( {\begin{array}{c}|\alpha |_1\\ |\beta |_1\end{array}}\right) \). This follows from the basic combinatorial fact that the number of possible choices of \(\beta _i\) elements out of a set of \(\alpha _i\) elements for all \(i=1,\ldots ,2d\) is smaller than the number of choices of \(|\beta |_1\) elements out of a set of \(|\alpha |_1\) elements.
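This inequality can also be verified exhaustively for small multi-indices; a minimal check (ours):

```python
# An exhaustive check (ours) of the combinatorial inequality
# prod_i binom(alpha_i, beta_i) <= binom(|alpha|_1, |beta|_1)
# over all multi-indices alpha in {0,...,4}^3 and beta <= alpha.
from itertools import product
from math import comb, prod

for alpha in product(range(5), repeat=3):
    for beta in product(*(range(a + 1) for a in alpha)):   # beta <= alpha
        lhs = prod(comb(a, b) for a, b in zip(alpha, beta))
        rhs = comb(sum(alpha), sum(beta))
        assert lhs <= rhs
print("inequality verified")
```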

The Leibniz formula together with the definition of asymptotically smooth function (1) show for \(\alpha \in {{\mathbb {N}}}^{2d}\)

$$\begin{aligned} |\partial _{\varvec{z}}^\alpha (fg)|({\varvec{z}})&\le \sum _{\begin{array}{c} \beta \in {{\mathbb {N}}}^{2d}_0\\ \beta \le \alpha \end{array}}\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) |\partial _{\varvec{z}}^{\beta }f|({\varvec{z}})|\partial _{\varvec{z}}^{\alpha -\beta } g|({\varvec{z}})\\&\le \sum _{\begin{array}{c} \beta \in {{\mathbb {N}}}^{2d}_0\\ \beta \le \alpha \end{array}}\left( {\begin{array}{c}|\alpha |_1\\ |\beta |_1\end{array}}\right) c_1(c_2|{\varvec{x}}-{\varvec{y}}|)^{-|\beta |_1}|\beta |_1!c_1(c_2|{\varvec{x}}-{\varvec{y}}|)^{{-|\alpha |_1+|\beta |_1}}(|\alpha |_1-|\beta |_1)!\\&\le \sum _{\begin{array}{c} \beta \in {{\mathbb {N}}}^{2d}_0\\ \beta \le \alpha \end{array}} c_1^2(c_2|{\varvec{x}}-{\varvec{y}}|)^{-|\alpha |_1}|\alpha |_1!\\&\le (|\alpha |_1+1)^{2d} c_1^2(c_2|{\varvec{x}}-{\varvec{y}}|)^{-|\alpha |_1}|\alpha |_1!\\&\lesssim c_1^2({\widetilde{c}}_2|{\varvec{x}}-{\varvec{y}}|)^{-|\alpha |_1}|\alpha |_1!, \end{aligned}$$

where we used \((|\alpha |_1+1)^{2d}\le (2d\exp (2d))^{|\alpha |_1}\) and \({\widetilde{c}}_2=c_2/(2d\exp (2d))\). This concludes the proof. \(\square \)

The final lemma of this section proves that compositions of certain asymptotically smooth functions are again asymptotically smooth.

Lemma 12

Let \(g:D\times D\rightarrow {{\mathbb {R}}}\) be asymptotically smooth (1) with constants \(c_1,c_2>0\).

  1. (i)

If \(c_g:=\sup _{{\varvec{x}}\in D\times D}g({\varvec{x}})<\infty \), then \(\exp \circ g\) satisfies (1) with constants \({\widetilde{c}}_1:= \exp (c_g)\) and \({\widetilde{c}}_2:= c_2/(2\max \{1,c_1\})\).

  2. (ii)

If g satisfies \(\partial _{\varvec{x}}^\alpha \partial _{\varvec{y}}^\beta g({\varvec{x}},{\varvec{y}})\le C_g\) for all \(\alpha ,\beta \in {{\mathbb {N}}}_0^d\) and some \(C_g<\infty \) as well as \(g({\varvec{x}},{\varvec{y}})\ge C_g^{-1}|{\varvec{x}}-{\varvec{y}}|\), then \(g^{1/q}\) satisfies (1) with \({\widetilde{c}}_1=1/2\) and \({\widetilde{c}}_2 =C_g^{-1}\) for all \(q\in {{\mathbb {N}}}\).

  3. (iii)

    If g satisfies the assumptions from (ii) and additionally \(g({\varvec{x}},{\varvec{y}})\ge c_0>0\) for all \({\varvec{x}},{\varvec{y}}\in D\), then \(g^{-1/q}\) satisfies (1) for all \(q\in {{\mathbb {N}}}\).

Proof

To simplify the notation, we consider g as a function of one variable \({\varvec{z}}=({\varvec{x}},{\varvec{y}})\in D\times D\subset {{\mathbb {R}}}^{2d}\). Define the set of all partitions of \(\{1,\ldots ,n\}\) as

$$\begin{aligned} \varPi (n):=\Bigg \{P\subseteq 2^{\{1,\ldots ,n\}}\,:\,S\cap S^\prime =\emptyset \text { or }S=S^\prime \text { for all }S,S^\prime \in P,\, \bigcup _{S\in P} S=\{1,\ldots ,n\}\Bigg \}. \end{aligned}$$

For a multi-index \(\alpha \in {{\mathbb {N}}}^{2d}\), we define \(\widetilde{\alpha }\in \{1,\ldots ,2d\}^n\) by \({{\widetilde{\alpha }}}_i=j\) for all \(1+\sum _{k=1}^{j-1}\alpha _k\le i\le \sum _{k=1}^j\alpha _k\) and all \(1\le j\le 2d\) (e.g., \(\alpha =(2,3,1,1)\) yields \(\widetilde{\alpha }=(1,1,2,2,2,3,4)\)). With \(n=|\alpha |_1\) and some \(S\in P\in \varPi (n)\), we define

$$\begin{aligned} \partial _{{\varvec{z}}}^S g({\varvec{z}})=\Bigg (\prod _{i\in S}\partial _{{\varvec{z}}_{{{\widetilde{\alpha }}}_i}}\Bigg )g({\varvec{z}}). \end{aligned}$$

(The definition implies \(\partial _{{\varvec{z}}}^{\{1,\ldots ,n\}}g({\varvec{z}})=\partial _{\varvec{z}}^\alpha g({\varvec{z}})\).) With these definitions and given a function \(f:{{\mathbb {R}}}\rightarrow {{\mathbb {R}}}\), Faà di Bruno’s formula reads, for a multi-index \(\alpha \in {{\mathbb {N}}}^{2d}\),

$$\begin{aligned} \partial _{{\varvec{z}}}^\alpha (f\circ g)({\varvec{z}}) = \sum _{P\in \varPi (|\alpha |_1)} (\partial _x^{|P|} f)\circ g({\varvec{z}})\prod _{S\in P}{\partial _{{\varvec{z}}}^S} g({\varvec{z}}). \end{aligned}$$
(37)

For (i), Faà di Bruno’s formula (37) and \(\partial _x^{|P|}\exp =\exp \) show for all multi indices \(\alpha \in {{\mathbb {N}}}^{2d}\) with \(n=|\alpha |_1\) that

$$\begin{aligned} \partial _{\varvec{z}}^\alpha (\exp \circ g)({\varvec{z}}) = \sum _{P\in \varPi (n)} \exp \circ g({\varvec{z}})\,\prod _{S\in P}\partial _{{\varvec{z}}}^Sg({\varvec{z}}). \end{aligned}$$

The definition of asymptotically smooth (1) and \(\Vert g\Vert _{L^\infty (D\times D)}=c_g\) imply

$$\begin{aligned} |\partial _{\varvec{z}}^\alpha (\exp \circ g({\varvec{z}}))|&\le \exp (c_g)\sum _{P\in \varPi (n)}\prod _{S\in P}c_1(c_2|{\varvec{x}}-{\varvec{y}}|)^{-|S|}|S|!\\&\le \exp (c_g)\sum _{P\in \varPi (n)}(c_2|{\varvec{x}}-{\varvec{y}}|)^{-\sum _{S\in P}|S|}\,c_1^{|P|}\prod _{S\in P}|S|!\\&\le \exp (c_g) \max \{1,c_1\}^n(c_2|{\varvec{x}}-{\varvec{y}}|)^{-n}\sum _{P\in \varPi (n)}\prod _{S\in P}|S|!. \end{aligned}$$

With \(f(x):=(1-x)^{-1}\), \(x\in {{\mathbb {R}}}{\setminus }\{1\}\), we have \(\partial _x^k f(x) = k!(1-x)^{-1-k}\) and hence \(\partial _x^{|S|}f(0)=|S|!\). Since \(\exp \circ f(0)=e\ge 1\), the last factor can be bounded, using Faà di Bruno’s formula again, by

$$\begin{aligned} \sum _{P\in \varPi (n)}\prod _{S\in P}|S|!\le \sum _{P\in \varPi (n)}\exp \circ f(0)\prod _{S\in P}\partial _{x}^{|S|} f(0) = \partial _x^n(\exp \circ f)(0). \end{aligned}$$

As the function \(h(x):=\exp ((1-x)^{-1})\), \(x\in {{\mathbb {C}}}\) is holomorphic at least for \(|x|<1\), Cauchy’s integral formula shows

$$\begin{aligned} |\partial _x^n h(0)|=\frac{n!}{2\pi }\Big |\int _{|z|=1/2} \frac{h(z)}{z^{n+1}}\,\mathrm{d}z\Big |\le n! 2^{n} \exp (2). \end{aligned}$$

Altogether, we conclude the proof of (i) by

$$\begin{aligned} |\partial _{\varvec{z}}^\alpha (\exp \circ g({\varvec{z}}))|&\le \exp (c_g) \Big (\frac{c_2}{2\max \{1,c_1\}}\,|{\varvec{x}}-{\varvec{y}}|\Big )^{-n}n!. \end{aligned}$$

For (ii), Faà di Bruno’s formula (37) shows again for \(q>1\)

$$\begin{aligned} |\partial _{\varvec{z}}^\alpha ( g^{1/q})({\varvec{z}})|&\le \sum _{P\in \varPi (n)} |P|!|g({\varvec{z}})|^{1/q-|P|}\,\prod _{S\in P}C_g\le C_g^n |{\varvec{x}}-{\varvec{y}}|^{-n}\sum _{P\in \varPi (n)} |P|!, \end{aligned}$$

where we used \(f(x):=x^{1/q}\) and \(|\partial _x^{|P|}f(x)|=|(1/q)(1/q-1)(1/q-2)\cdots (1/q-|P|+1)||x|^{1/q-|P|}\le |P|!|x|^{1/q-|P|}\) as well as the boundedness assumption on the derivatives of g from (ii). With \(r(x):=\exp (x)-1\) and \(f(x):=(1-x)^{-1}\), \(x\in {{\mathbb {R}}}\), the last factor satisfies

$$\begin{aligned} \sum _{P\in \varPi (n)} |P|! = \sum _{P\in \varPi (n)} (\partial _x^{|P|} f)\circ r(0)\prod _{S\in P}(\partial _x^{|S|}r)(0)=\partial _x^n(f\circ r)(0). \end{aligned}$$

The function \(h(x):=f\circ r(x)= (2-\exp (x))^{-1}\), \(x\in {{\mathbb {C}}}\) is holomorphic at least for \(|x|\le 1/2\). As above, this implies

$$\begin{aligned} \partial _x^n(f\circ r)(0)\le n!2^n \end{aligned}$$

and thus concludes the proof of (ii).

For (iii), we conclude the proof as for (ii) by use of the estimate \(g({\varvec{z}})^{{-}1/q-|P|}\le c_0^{-1-n}\). \(\square \)
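The two partition sums appearing in the proofs of (i) and (ii) can also be inspected numerically. The following sketch (ours) enumerates all set partitions for small n, checks that \(\sum _{P}|P|!\) equals the derivatives of \((2-\exp (x))^{-1}\) at 0 (the ordered Bell numbers), and confirms the \(n!2^n\)-type bounds derived above.

```python
# A numerical look (ours) at the two partition sums from parts (i) and (ii):
# sum_P prod_S |S|! and sum_P |P|! over all set partitions P of {1,...,n}.
# The latter equals the n-th derivative of (2 - exp(x))^{-1} at 0 (the
# ordered Bell numbers); both satisfy the n! 2^n-type bounds derived above.
from math import exp, factorial, prod

def partitions(elems):
    """Yield every set partition of the list elems as a list of blocks."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in partitions(rest):
        yield [[first]] + part                 # first forms its own block
        for i in range(len(part)):             # or joins an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

ordered_bell = [1, 1, 3, 13, 75, 541, 4683, 47293]
for n in range(1, 8):
    parts = list(partitions(list(range(n))))
    s_i = sum(prod(factorial(len(S)) for S in P) for P in parts)   # part (i)
    s_ii = sum(factorial(len(P)) for P in parts)                   # part (ii)
    assert s_ii == ordered_bell[n]
    assert s_i <= factorial(n) * 2**n * exp(2)
    assert s_ii <= factorial(n) * 2**n
    print(n, s_i, s_ii)
```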

Finally, we are ready to prove Lemma 1, which states that the covariance functions from (2) and (3) are asymptotically smooth (1).

Proof of Lemma 1

To see (1), consider \(\varrho (\cdot ,\cdot )\) from (2). We define for complex variables \({\varvec{x}}_i,{\varvec{y}}_i\in {{\mathbb {C}}}\)

$$\begin{aligned} d({\varvec{x}}-{\varvec{y}})= \Big (\sum _{i=1}^d ({\varvec{x}}_i-{\varvec{y}}_i)^p\Big )^{1/p}\in {{\mathbb {C}}}, \end{aligned}$$

whenever \((\cdot )^{1/p}\) is defined in \({{\mathbb {C}}}\), and consider \({\widetilde{\varrho }}({\varvec{x}},{\varvec{y}})\), which is \(\varrho ({\varvec{x}},{\varvec{y}})\) from (2) but with \(d({\varvec{x}}-{\varvec{y}})\) instead of \(|{\varvec{x}}-{\varvec{y}}|_p\). With the notation of Lemma 10, the above sum has positive real part in \(O:=\big \{({\varvec{x}},{\varvec{y}})\in {{\mathbb {C}}}^{2d}\,:\,{\varvec{x}}-{\varvec{y}}\notin M\big \}\). Thus, the function \(({\varvec{x}},{\varvec{y}})\mapsto d({\varvec{x}}-{\varvec{y}})\) is holomorphic in each variable in O. Since, for \(a>0\), \({\varvec{x}}\mapsto {\varvec{x}}^\mu K_\mu (a{\varvec{x}})\) is a holomorphic function on \({{\mathbb {C}}}{\setminus }({{\mathbb {R}}}_-\cup \{0\})\), and \(d({\varvec{x}}-{\varvec{y}})\) has positive real part, we deduce that \(({\varvec{x}},{\varvec{y}})\mapsto {\widetilde{\varrho }}({\varvec{x}},{\varvec{y}})\) is holomorphic in each variable in O. Thus, Lemma 9 proves that \(\partial _{\varvec{x}}^\alpha \partial _{\varvec{y}}^\beta {\widetilde{\varrho }}({\varvec{x}},{\varvec{y}})\) is holomorphic in O in all variables \({\varvec{x}}_i\) and \({\varvec{y}}_i\). Therefore, Cauchy’s integral formula applied in all variables shows

$$\begin{aligned} \partial _{\varvec{x}}^\alpha \partial _{\varvec{y}}^\beta {\widetilde{\varrho }}({\varvec{x}},{\varvec{y}}) =&\frac{\prod _{i=1}^d\alpha _i ! \beta _i!}{(2\pi i)^{2d}} \int _{\partial B_{{\varvec{x}},1}}\ldots \int _{\partial B_{{\varvec{x}},d}}\int _{\partial B_{{\varvec{y}},1}}\ldots \\&\quad \int _{\partial B_{{\varvec{y}},d}}\frac{\widetilde{\varrho }(s,t)}{\prod _{i=1}^d(s_i-{\varvec{x}}_i)^{\alpha _i+1}(t_i-{\varvec{y}}_i)^{\beta _i+1}}\,\mathrm{d}t\,\mathrm{d}s. \end{aligned}$$

The balls \(B_{{\varvec{x}},i}\) and \(B_{{\varvec{y}},i}\) have to be chosen such that \(\prod _{i=1}^d B_{{\varvec{x}},i}\times \prod _{i=1}^d B_{{\varvec{y}},i}\subset O\). With Lemma 10, and for \(({\varvec{x}},{\varvec{y}})\in {{\mathbb {R}}}^{2d}\) such that \({\varvec{x}}-{\varvec{y}}\in ({{\mathbb {R}}}^n)_+\) (note that Lemma 10 implies \(({\varvec{x}},{\varvec{y}})\in O\)), this can be achieved by setting \(B_{{\varvec{x}},i}:= B_\varepsilon ({\varvec{x}}_i)\) and \(B_{{\varvec{y}},i}:=B_\varepsilon ({\varvec{y}}_i)\) with \(\varepsilon :=\sin (\pi /(2p))|{\varvec{x}}-{\varvec{y}}|/(2d+1)\). From this, we obtain the estimate

$$\begin{aligned} |\partial _{\varvec{x}}^\alpha \partial _{\varvec{y}}^\beta \varrho ({\varvec{x}},{\varvec{y}})|= |\partial _{\varvec{x}}^\alpha \partial _{\varvec{y}}^\beta \widetilde{\varrho }({\varvec{x}},{\varvec{y}})|\lesssim \frac{\alpha !\beta ! (2d+1)^{|\alpha |_1+|\beta |_1}}{|{\varvec{x}}-{\varvec{y}}|^{|\alpha |_1+|\beta |_1}}\max _{(s,t)\in D\times D} |{\widetilde{\varrho }}(s,t)| \end{aligned}$$
(38)

for all \(({\varvec{x}},{\varvec{y}})\in {{\mathbb {R}}}^{2d}\) such that \({\varvec{x}}-{\varvec{y}}\in ({{\mathbb {R}}}^n)_+\), where the first equality follows from \(d({\varvec{x}}-{\varvec{y}})=|{\varvec{x}}-{\varvec{y}}|_p\) for all \({\varvec{x}}-{\varvec{y}}\in ({{\mathbb {R}}}^n)_+\). To remove the restriction \({\varvec{x}}-{\varvec{y}}\in ({{\mathbb {R}}}^n)_+\), consider \(b\in \{0,1\}^d\) and define the function

$$\begin{aligned} F_b({\varvec{x}},{\varvec{y}}):=((-1)^{b_1}{\varvec{x}}_1,\ldots ,(-1)^{b_d}{\varvec{x}}_d,(-1)^{b_1}{\varvec{y}}_1,\ldots ,(-1)^{b_d}{\varvec{y}}_d). \end{aligned}$$

Since we consider \(\varrho (\cdot ,\cdot )\) from (2), there holds \(\varrho \circ F_b=\varrho \). Since for all \({\varvec{x}},{\varvec{y}}\in {{\mathbb {R}}}^{d}\) with \({\varvec{x}}\ne {\varvec{y}}\), there exists some \(b\in \{0,1\}^d\) such that \(({\varvec{x}}_b,{\varvec{y}}_b):=F_b({\varvec{x}},{\varvec{y}})\) satisfies \({\varvec{x}}_b-{\varvec{y}}_b\in ({{\mathbb {R}}}^n)_+\), we obtain (38) for all \({\varvec{x}},{\varvec{y}}\in {{\mathbb {R}}}^d\) with \({\varvec{x}}\ne {\varvec{y}}\). Finally, the fact that \(\alpha !\beta ! \le |\alpha +\beta |_1!\) proves that \(\varrho (\cdot ,\cdot )\) from (2) is asymptotically smooth (1).

Next, consider the covariance function \(\varrho (\cdot ,\cdot )\) from (3). By definition, \(\varvec{\varSigma }_{\varvec{x}}\) is continuous on \(\overline{D}\). Hence, \(\mathrm{det}(\varvec{\varSigma }_{\varvec{x}})\ge c_0>0\) for all \({\varvec{x}}\in D\). The assumption (4) implies that also \(\mathrm{det}(\varvec{\varSigma }_{\varvec{x}})\) has bounded derivatives in the sense of (4) (since \(\mathrm{det}(\varvec{\varSigma }_{\varvec{x}})\) is a polynomial in the matrix entries of \(\varvec{\varSigma }_{\varvec{x}}\)). Thus, Lemma 12 shows that the functions \(({\varvec{x}},{\varvec{y}})\mapsto \mathrm{det}(\varvec{\varSigma }_{\varvec{x}})^{1/4}\), \(({\varvec{x}},{\varvec{y}})\mapsto \mathrm{det}(\varvec{\varSigma }_{\varvec{y}})^{1/4}\), and \(({\varvec{x}},{\varvec{y}})\mapsto \mathrm{det}(\varvec{\varSigma }_{\varvec{x}}+\varvec{\varSigma }_{\varvec{y}})^{-q}\), \(q\in \{1/2,1\}\), satisfy (1). Together with \(\varvec{\varSigma }_{\varvec{x}}\), all functions \(\widetilde{\varvec{\varSigma }}_{\varvec{x}}\) defined by considering only sub-matrices of \(\varvec{\varSigma }_{\varvec{x}}\) satisfy (4). Thus, Cramer’s rule and Lemma 11 show that the map \(({\varvec{x}},{\varvec{y}})\mapsto ((\varvec{\varSigma }_{\varvec{x}}+\varvec{\varSigma }_{\varvec{y}})^{-1})_{i,j}\) satisfies (1) for all \(i,j\in \{1,\ldots ,d\}\). From this, we conclude (again with Lemma 11) that \(({\varvec{x}},{\varvec{y}})\mapsto ({\varvec{x}}-{\varvec{y}})^T(\varvec{\varSigma }_{\varvec{x}}+\varvec{\varSigma }_{\varvec{y}})^{-1}({\varvec{x}}-{\varvec{y}})\), as a sum and product of asymptotically smooth functions, is asymptotically smooth (1). Finally, Lemma 12 shows that \(\varrho ({\varvec{x}},{\varvec{y}})\) satisfies (1). This concludes the proof. \(\square \)

Appendix—Proof of Proposition 1

The following lemmas state facts about the \(H^2\)-matrix block partitioning, which are well-known but cannot be found explicitly in the literature.

Lemma 13

Under Assumption 1, there exists a constant \(C_{B}>0\) which depends only on d, \(C_\mathrm{u}\), D, and \(B_{X_\mathrm{root}}\) such that all \(X\in {{\mathbb {T}}}_\mathrm{cl}\) satisfy

$$\begin{aligned} \mathrm{diam}(B_X)^d&\le C_B |B_X|, \end{aligned}$$
(39a)
$$\begin{aligned} C_B^{-1}N|B_X|-1&\le |X|\le 1+C_BN|B_X|, \end{aligned}$$
(39b)
$$\begin{aligned} |B_X|&= 2^{-\mathrm{level}(X)}|B_{X_\mathrm{root}}|. \end{aligned}$$
(39c)

Moreover, all \((X,Y)\in {{\mathbb {T}}}\) satisfy

$$\begin{aligned} C_{BB}^{-1} \mathrm{diam}(B_X)\le \mathrm{diam}(B_Y)\le C_{BB} \mathrm{diam}(B_X), \end{aligned}$$
(40)

where \(C_{BB}>0\) depends only on \(C_B\), \(C_\mathrm{leaf}\), and D.

Proof

The first estimate (39a) follows from the fact that it is always the longest edge of a bounding box that is halved. This means that the ratio \(L_\mathrm{max}/L_\mathrm{min}\) of the maximal and the minimal side length of a bounding box \(B_X\) stays bounded in terms of the corresponding ratio for \(B_{X_\mathrm{root}}\). Therefore, we have

$$\begin{aligned} \mathrm{diam}(B_X)^d\le (\sqrt{d} L_\mathrm{max})^d\lesssim d^{d/2} L_\mathrm{min}^d\le d^{d/2}|B_X|. \end{aligned}$$

To see the second estimate (39b), consider a given bounding box B with side lengths \(L_1,\ldots , L_d\). Due to Assumption 1, the balls \(Q_{\varvec{x}}\) with centre \({\varvec{x}}\) and radius \(C_\mathrm{u}^{-1}N^{-1/d}/2\) for all \({\varvec{x}}\in {{\mathcal {N}}}\) do not overlap. All balls \(Q_{\varvec{x}}\) with \({\varvec{x}}\in B\) are contained in a box with side length \(L_\mathrm{max}+C_\mathrm{u}^{-1}N^{-1/d}\). Thus, the number \(m_B\) of \({\varvec{x}}\in {{\mathcal {N}}}\) contained in B can be bounded by

$$\begin{aligned} m_B\lesssim \frac{(L_\mathrm{max}+C_\mathrm{u}^{-1}N^{-1/d})^d}{C_\mathrm{u}^{-d}/(N2^d)}\le \frac{dL_\mathrm{max}^d}{C_\mathrm{u}^{-d}/(N2^d)}+d2^d. \end{aligned}$$

Since \(m_B\le 1\) if \(L_\mathrm{max}< C_\mathrm{u}^{-1}N^{-1/d}/2\) and since \(L_\mathrm{max}^d\simeq |B|\), we may improve the estimate to

$$\begin{aligned} m_B\le 1+ C_B|B|N, \end{aligned}$$

where \(C_B\) depends only on d and \(C_\mathrm{u}\). On the other hand, Assumption 1 implies that any ball with radius \(C_\mathrm{u}N^{-1/d}\) contains at least one point \({\varvec{x}}\in {{\mathcal {N}}}\). Since each such ball fits inside a box with side length \(2C_\mathrm{u}N^{-1/d}\), we obtain

$$\begin{aligned} m_B\gtrsim \Big \lfloor \frac{L_\mathrm{min}^d}{2^dC_\mathrm{u}^{d}/N}\Big \rfloor \end{aligned}$$

points of \({{\mathcal {N}}}\). This allows us to estimate \(m_B\ge C_{B}^{-1}|B|N-1\) and conclude (39b). The estimate (39c) follows from the fact \(|B_X|=|B_{X^\prime }|/2\) for all \(X\in \mathrm{sons}(X^\prime )\). For (40), we observe with (39b) that

$$\begin{aligned} \frac{|X|-1}{N}\lesssim |B_X|\lesssim \frac{|X|+1}{N} \end{aligned}$$

for all \(X\in {{\mathbb {T}}}_\mathrm{cl}\) with hidden constants depending only on \(C_B\). Thus, with (39c), we have for all \(X\in {{\mathbb {T}}}_\mathrm{cl}\) with \(X\in \mathrm{sons}(X^\prime )\) that

$$\begin{aligned} 2^{-\mathrm{level}(X)}\ge 2^{-\mathrm{level}(X^\prime )}/2\simeq |B_{X^\prime }|\gtrsim C_\mathrm{leaf}/N. \end{aligned}$$

Moreover, if additionally \(\mathrm{sons}(X)=\emptyset \), we even have \(2^{-\mathrm{level}(X)}\simeq |B_X|\lesssim C_\mathrm{leaf}/N\). By definition of the block-tree \({{\mathbb {T}}}\), a level difference between X and Y for \((X,Y)\in {{\mathbb {T}}}\) can only happen if \(\mathrm{sons}(X)=\emptyset \) or \(\mathrm{sons}(Y)=\emptyset \). Assume \(\mathrm{sons}(X)=\emptyset \). In this case, we have \(\mathrm{level}(Y)\ge \mathrm{level}(X)\). Then, we have

$$\begin{aligned} 2^{-\mathrm{level}(X)}\simeq C_\mathrm{leaf}/N\lesssim |Y|/N\simeq 2^{-\mathrm{level}(Y)}, \end{aligned}$$

with hidden constants depending only on \(C_B\) and D. This implies \(\mathrm{level}(Y)\le \mathrm{level}(X) + C\) for some constant \(C>0\) which depends only on \(C_\mathrm{leaf}\), D, and \(C_B\) from (39). From this we derive (40) by use of (39). \(\square \)
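For concreteness, the following Python sketch (ours; the names and the value of C_leaf are illustrative) implements the geometric clustering analysed in Lemma 13: the longest edge of the bounding box is halved, so (39c) holds by construction, and leaves hold at most C_leaf points.

```python
# A sketch (ours; names and C_leaf value are illustrative) of the clustering
# analysed above: the longest edge of the bounding box is halved, so the box
# volumes satisfy (39c) by construction, and leaves hold at most C_leaf points.
import numpy as np

def build_cluster_tree(points, idx, box, C_leaf=32):
    lo, hi = box
    node = {"idx": idx, "box": box, "sons": []}
    if len(idx) <= C_leaf:
        return node                            # leaf cluster: sons(X) is empty
    j = int(np.argmax(hi - lo))                # longest edge of B_X ...
    mid = 0.5 * (lo[j] + hi[j])                # ... is halved
    left_hi, right_lo = hi.copy(), lo.copy()
    left_hi[j], right_lo[j] = mid, mid
    for mask, child_box in (
        (points[idx, j] <= mid, (lo, left_hi)),
        (points[idx, j] > mid, (right_lo, hi)),
    ):
        if mask.any():
            node["sons"].append(build_cluster_tree(points, idx[mask], child_box, C_leaf))
    return node

rng = np.random.default_rng(1)
pts = rng.random((1000, 2))
tree = build_cluster_tree(pts, np.arange(1000), (np.zeros(2), np.ones(2)))
```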

Lemma 14

Given the definition of \({{\mathbb {T}}}_\mathrm{far}\) in Sect. 3.1, there exists a constant \(C>0\) such that all \((X,Y)\in {{\mathbb {T}}}_\mathrm{far}\) satisfy

$$\begin{aligned} C^{-1}\mathrm{diam}(B_{X})\le \mathrm{dist}(B_{X},B_Y)\le C\,\mathrm{diam}(B_{X}). \end{aligned}$$
(41)

Proof

By Lemma 13, we have

$$\begin{aligned} \max \{\mathrm{diam}(B_X),\mathrm{diam}(B_Y)\}\simeq 2^{-\mathrm{level}(X)/d}. \end{aligned}$$

For \((X,Y)\in \mathrm{sons}(X^\prime ,Y^\prime )\), we obtain additionally

$$\begin{aligned} \mathrm{dist}(B_{X^\prime },B_{Y^\prime })+2^{-\mathrm{level}(X^\prime )/d}\gtrsim \mathrm{dist}(B_X,B_Y). \end{aligned}$$

By definition of the block-partitioning, for \((X,Y)\in {{\mathbb {T}}}_\mathrm{far}\) there holds that \(B_X,B_Y\) satisfy (5) and \(B_{X^\prime },B_{Y^\prime }\) do not satisfy (5). Altogether, this implies

$$\begin{aligned} \mathrm{dist}(B_X,B_Y)\lesssim&\, \mathrm{dist}(B_{X^\prime },B_{Y^\prime })+2^{-\mathrm{level}(X^\prime )/d}\lesssim \Big (\frac{1}{\eta }+1\Big ) 2^{-\mathrm{level}(X^\prime )/d}\\ \lesssim&\max \{\mathrm{diam}(B_X),\mathrm{diam}(B_Y)\}, \end{aligned}$$

where we used \(|\mathrm{level}(X^\prime )-\mathrm{level}(X)|\le 1\). This concludes the proof. \(\square \)

The following lemma gives some basic facts about tensorial Chebychev interpolation (see, e.g., [3, Section 4.4]).

Lemma 15

Let \(f:B\rightarrow {{\mathbb {R}}}\) for an axis-parallel box \(B\subseteq {{\mathbb {R}}}^{2d}\) such that \(\partial _j^k f\in L^\infty (B)\) for all \(j=1,\ldots ,2d\) and all \(0\le k\le p+1\). Then, the tensorial Chebychev interpolation operator of order p, \(I_p:C(B) \rightarrow {{\mathcal {P}}}^p(B)\), satisfies

$$\begin{aligned} \sup _{{\varvec{x}}\in B}|I_p f({\varvec{x}})-f({\varvec{x}})|\le 2d\varLambda _p^{2d-1}4 \frac{4^{-p}}{(p+1)!}\mathrm{diam}(B)^p\sum _{i=1}^{2d}\Vert \partial _{{\varvec{x}}_i}^{p+1} f\Vert _{L^\infty (B)}, \end{aligned}$$
(42)

where

$$\begin{aligned} \varLambda _p:=\sup _{f\in C([-1,1])}\frac{\Vert I_p^{\varvec{x}}f\Vert _{L^\infty ([-1,1])}}{\Vert f\Vert _{L^\infty ([-1,1])}}\le \frac{2}{\pi }\log (p+1)+1 \end{aligned}$$
(43)

is the operator norm of the one-dimensional Chebychev interpolation operator.

Proof

It is well-known that the one-dimensional Chebychev interpolation operator \(I_p^{{\varvec{x}}}\) satisfies, for any sufficiently smooth \(f\in C([-1,1])\), the error estimate

$$\begin{aligned} \Vert f-I_p^{\varvec{x}}f\Vert _{L^\infty ([-1,1])}\le 4 \frac{2^{-p}}{(p+1)!}\Vert \partial ^{(p+1)} f\Vert _{L^\infty ([-1,1])} \end{aligned}$$

with an operator norm given in (43). Consider \(B:=[-1,1]^{2d}\). Then, there holds with \(I_p^{{\varvec{x}}_i}\) denoting interpolation in the \({\varvec{x}}_i\)-variable \(i\in \{1,\ldots ,2d\}\)

$$\begin{aligned} |f-I_p f|&=\Big |\sum _{i=1}^{2d}\Big (\prod _{j=1}^{i-1}I_p^{{\varvec{x}}_j}\Big )\big (f-I_p^{{\varvec{x}}_i}f\big )\Big |\\&\le \sum _{i=1}^{2d}\varLambda _p^{i-1}\Vert f-I_p^{{\varvec{x}}_i}f\Vert _{L^\infty (B)}\\&\le \sum _{i=1}^{2d}\varLambda _p^{i-1}4 \frac{2^{-p}}{(p+1)!}\Vert \partial _{{\varvec{x}}_i}^{(p+1)} f\Vert _{L^\infty (B)}\\&\le \varLambda _p^{2d-1}4 \frac{2^{-p}}{(p+1)!}\sum _{i=1}^{2d}\Vert \partial _{{\varvec{x}}_i}^{(p+1)} f\Vert _{L^\infty (B)}. \end{aligned}$$

Since, for any affine transformation \(A:{{\mathbb {R}}}^{2d}\rightarrow {{\mathbb {R}}}^{2d}\) which maps axis-parallel boxes to axis-parallel boxes, we have \(I_p(f\circ A)= I_p(f)\circ A\), a standard scaling argument concludes the proof. \(\square \)
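The Lebesgue-constant bound (43) is easy to check numerically; a small sketch (ours), computing \(\varLambda _p\) on a fine grid from the Lagrange basis at the Chebychev nodes:

```python
# A numerical check (ours) of the Lebesgue-constant bound (43): Lambda_p is
# the maximum over [-1,1] of the sum of |l_k(x)| for the Lagrange basis l_k
# at the p+1 Chebychev nodes.
import numpy as np

def lebesgue_constant(p, n_grid=20001):
    k = np.arange(p + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * (p + 1)))    # Chebychev nodes
    x = np.linspace(-1.0, 1.0, n_grid)
    total = np.zeros_like(x)
    for i in range(p + 1):
        li = np.ones_like(x)
        for j in range(p + 1):
            if j != i:                                     # Lagrange basis l_i
                li *= (x - nodes[j]) / (nodes[i] - nodes[j])
        total += np.abs(li)
    return total.max()

for p in (1, 2, 4, 8, 16):
    print(p, lebesgue_constant(p), 2 / np.pi * np.log(p + 1) + 1)
```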

Proof of Proposition 1

We start by proving that \(\lambda _\mathrm{min}(\varvec{C}_p)>0\) if p satisfies (7). To that end, note

$$\begin{aligned} \lambda _\mathrm{min}(\varvec{C}_p)&=\min _{{\varvec{z}}\in {{\mathbb {R}}}^N{\setminus }\{0\}} \frac{(\varvec{C}_p{\varvec{z}})^T{\varvec{z}}}{|{\varvec{z}}|^2}\ge \min _{{\varvec{z}}\in {{\mathbb {R}}}^N{\setminus }\{0\}} \frac{(\varvec{C}{\varvec{z}})^T{\varvec{z}}}{|{\varvec{z}}|^2} -\sup _{{\varvec{z}}\in {{\mathbb {R}}}^N{\setminus }\{0\}} \frac{((\varvec{C}_p-\varvec{C}){\varvec{z}})^T{\varvec{z}}}{|{\varvec{z}}|^2}\\&\ge \lambda _\mathrm{min}(\varvec{C}) - \Vert \varvec{C}-\varvec{C}_p\Vert _{2}\ge \lambda _\mathrm{min}(\varvec{C}) - \Vert \varvec{C}-\varvec{C}_p\Vert _{F}, \end{aligned}$$

since the Frobenius norm is an upper bound for the spectral norm. By use of (6) (which is proved below) and (7), we conclude \( \lambda _\mathrm{min}(\varvec{C}_p)>0\).
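This eigenvalue bound is a standard perturbation argument; a quick numerical illustration (ours, with a random symmetric positive definite matrix standing in for \(\varvec{C}\)):

```python
# A quick numerical illustration (ours, with a random SPD matrix standing in
# for C) of the chain above: lambda_min(C_p) >= lambda_min(C) - ||C - C_p||_F.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 50))
C = A @ A.T + 0.5 * np.eye(50)                  # symmetric positive definite
E = 1e-3 * rng.standard_normal((50, 50))
E = 0.5 * (E + E.T)                             # symmetric perturbation C_p - C
Cp = C + E

lhs = np.linalg.eigvalsh(Cp).min()
rhs = np.linalg.eigvalsh(C).min() - np.linalg.norm(E, "fro")
print(lhs >= rhs)                               # True
```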

To see (6), we first estimate the maximal depth of the tree \({{\mathbb {T}}}_\mathrm{cl}\). With (39b)–(39c), we obtain \(C_\mathrm{leaf}\le |X| \lesssim N2^{-\mathrm{level}(X)}\) for all \(X\in {{\mathbb {T}}}_\mathrm{cl}\) with \(\mathrm{sons}(X)\ne \emptyset \). Thus, there holds

$$\begin{aligned} \max _{X\in {{\mathbb {T}}}_\mathrm{cl}}\mathrm{level}(X)\lesssim \log (| {{\mathcal {N}}}|). \end{aligned}$$

Second, we bound the so-called sparsity constant

$$\begin{aligned} C_\mathrm{sparse}&:= \max _{X\in {{\mathbb {T}}}_\mathrm{cl}}\Big (|\big \{Y\in {{\mathbb {T}}}_\mathrm{cl}\,:\,(X,Y)\in {{\mathbb {T}}}_\mathrm{near}\cup {{\mathbb {T}}}_\mathrm{far}\big \}|\\&\qquad +|\big \{Y\in {{\mathbb {T}}}_\mathrm{cl}\,:\,(Y,X)\in {{\mathbb {T}}}_\mathrm{near}\cup {{\mathbb {T}}}_\mathrm{far}\big \}|\Big ). \end{aligned}$$

The H-matrix case can be found in [10, Lemma 4.5]. For the \(H^2\)-matrix case, the combination of (40) and (41) (from Lemma 14) shows that \((X,Y)\in {{\mathbb {T}}}_\mathrm{far}\) only if \(B_Y\) touches the (hyper-)annulus centred at \(B_X\) with radii \(C^{-1}\mathrm{diam}(B_X)\) and \(C\,\mathrm{diam}(B_X)\). By comparing the volumes of this annulus and of \(B_Y\), and using the fact that all bounding boxes on a given level are disjoint, we see that the number of Y such that \((X,Y)\in {{\mathbb {T}}}_\mathrm{far}\) is bounded in terms of C and the constants in (39).

For \(Y\in {{\mathbb {T}}}_\mathrm{cl}\) such that \((X,Y)\in {{\mathbb {T}}}_\mathrm{near}\), we have with (39)–(40)

$$\begin{aligned} \mathrm{diam}(B_X)\simeq \max \{\mathrm{diam}(B_X),\mathrm{diam}(B_Y)\}> \eta \,\mathrm{dist}(B_X,B_Y). \end{aligned}$$

Again, comparing the volumes of the ball with radius \(\mathrm{diam}(B_X)\) and of \(B_Y\), we see that the number of Y such that \((X,Y)\in {{\mathbb {T}}}_\mathrm{near}\) is bounded in terms of the constants in (39). Altogether, we bound \(C_\mathrm{sparse}\) uniformly in terms of the constants of Lemma 13. Now, [3, Lemma 3.38] proves the estimate for storage requirements and [3, Theorem 3.42] proves the estimate for matrix–vector multiplication.

It remains to prove the error estimate (see also [3, Section 4.6] for the integral operator case). To that end, note that since the near field \({{\mathbb {T}}}_\mathrm{near}\) is stored exactly, there holds

$$\begin{aligned} \Vert \varvec{C}-\varvec{C}_p\Vert _{F}^2 = \sum _{(X,Y)\in {{\mathbb {T}}}_\mathrm{far}} \Vert \varvec{C}|_{I(X)\times I(Y)} - V^X M^{XY} (W^Y)^T\Vert _{F}^2. \end{aligned}$$

Given \((i,j)\in I(X)\times I(Y)\), we have, with the interpolation operator \(I_p\) from Lemma 15 and (1),

$$\begin{aligned} |\varvec{C}_{ij}-(\varvec{C}_p)_{ij}|&=\big |\varrho ({\varvec{x}}_i,{\varvec{x}}_j)-\sum _{n,m=1}^{p^d} \varrho (q_n^X,q_m^Y) L_n^X({\varvec{x}}_i)L_m^Y({\varvec{x}}_j)\big |\\&=|\varrho ({\varvec{x}}_i,{\varvec{x}}_j)-(I_p \varrho )({\varvec{x}}_i,{\varvec{x}}_j)|\\&\lesssim (\log (p)+1)^{2d-1} \frac{4^{-p}}{(p+1)!}\mathrm{diam}(B_X\times B_Y)^p\\&\quad \times \sum _{i=1}^d\big (\Vert \partial _{{\varvec{x}}_i}^{(p+1)} \varrho \Vert _{L^\infty (B_X\times B_Y)}+\Vert \partial _{{\varvec{y}}_i}^{(p+1)} \varrho \Vert _{L^\infty (B_X\times B_Y)} \big )\\&\lesssim (\log (p)+1)^{2d-1} \frac{4^{-p}}{(p+1)!}\mathrm{diam}(B_X\times B_Y)^p(c_2\,\mathrm{dist}(B_X,B_Y))^{-p}p!. \end{aligned}$$

With the admissibility condition (5), we get

$$\begin{aligned} \mathrm{diam}(B_X\times B_Y)\lesssim \max \{\mathrm{diam}(B_X),\mathrm{diam}(B_Y)\}\le \eta \mathrm{dist}(B_X,B_Y) \end{aligned}$$

and hence

$$\begin{aligned} |\varvec{C}_{ij}-(\varvec{C}_p)_{ij}|&\lesssim (\log (p)+1)^{2d-1}\big (\frac{\eta }{4c_2}\big )^{p}. \end{aligned}$$

The combination of the above estimates concludes the proof. \(\square \)
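To illustrate the far-field approximation underlying this proof, the following sketch (ours; d = 1 and the exponential covariance are illustrative choices) builds the factorization \(V^X M^{XY} (W^Y)^T\) by Chebychev interpolation on two admissible clusters and observes the exponential decay of the Frobenius error in p.

```python
# A sketch (ours; d = 1 and rho(x,y) = exp(-|x-y|) are illustrative choices)
# of the far-field factorisation C|_{I(X) x I(Y)} ~ V^X M^{XY} (W^Y)^T:
# Chebychev interpolation on two admissible clusters yields a low-rank block
# whose Frobenius error decays exponentially in p, as in the proof above.
import numpy as np

def cheb_nodes(a, b, p):
    k = np.arange(p + 1)
    return 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k + 1) * np.pi / (2 * (p + 1)))

def lagrange_matrix(x, nodes):
    """V[i, n] = value of the n-th Lagrange basis polynomial at x[i]."""
    V = np.ones((len(x), len(nodes)))
    for n in range(len(nodes)):
        for j in range(len(nodes)):
            if j != n:
                V[:, n] *= (x - nodes[j]) / (nodes[n] - nodes[j])
    return V

rho = lambda x, y: np.exp(-np.abs(x[:, None] - y[None, :]))
rng = np.random.default_rng(3)
x, y = rng.uniform(0, 1, 200), rng.uniform(2, 3, 200)      # admissible clusters
C_block = rho(x, y)

for p in (2, 4, 6, 8):
    qx, qy = cheb_nodes(0.0, 1.0, p), cheb_nodes(2.0, 3.0, p)
    V, W = lagrange_matrix(x, qx), lagrange_matrix(y, qy)
    M = rho(qx, qy)                                        # coupling matrix
    print(p, np.linalg.norm(C_block - V @ M @ W.T, "fro")) # exponential decay
```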


Cite this article

Feischl, M., Kuo, F.Y. & Sloan, I.H. Fast random field generation with H-matrices. Numer. Math. 140, 639–676 (2018). https://doi.org/10.1007/s00211-018-0974-2

