
Spatial sampling design and covariance-robust minimax prediction based on convex design ideas

  • Original Paper
  • Published in: Stochastic Environmental Research and Risk Assessment

Abstract

This paper presents new ideas on sampling design and minimax prediction in a geostatistical model setting. Both presented methodologies are based on regression design ideas. For this reason the appendix of this paper gives an introduction to optimum Bayesian experimental design theory for linear regression models with uncorrelated errors. The presented methodologies and algorithms are then applied to the spatial setting of correlated random fields. To be specific, in Sect. 1 we approximate an isotropic random field by means of a regression model with a large number of regression functions with random amplitudes, similarly to Fedorov and Flanagan (J Comb Inf Syst Sci 23, 1997). These authors make use of the Karhunen-Loève approximation of the isotropic random field. We use the so-called polar spectral approximation instead; i.e. we approximate the isotropic random field by means of a regression model with sine-cosine-Bessel surface harmonics with random amplitudes and then, in accordance with Fedorov and Flanagan (J Comb Inf Syst Sci 23, 1997), apply standard Bayesian experimental design algorithms to the resulting Bayesian regression model. Section 2 deals with minimax prediction when the covariance function is only known to vary in some set of a priori plausible covariance functions. Using a minimax theorem due to Sion (Pac J Math 8:171–176, 1958) we are able to formulate the minimax problem as equivalent to an optimum experimental design problem, too. This makes the whole experimental design apparatus available for finding minimax kriging predictors. Furthermore, some hints are given on how the approach to spatial sampling design with one a priori fixed covariance function may be extended by means of minimax kriging to a whole set of a priori plausible covariance functions, such that the resulting designs are robust. The theoretical developments are illustrated with two examples taken from radiological monitoring and soil science.


References

  • Aarts EH, Korst J (1989) Simulated annealing and Boltzmann machines. Wiley, New York

  • Angulo JM, Bueso MC (2001) Random perturbation methods applied to multivariate spatial sampling design. Environmetrics 12:631–646

  • Bandemer H et al. (1977) Theorie und Anwendung der optimalen Versuchsplanung I. Akademie-Verlag, Berlin

  • Brown PJ, Le ND, Zidek JV (1994) Multivariate spatial interpolation and exposure to air pollutants. Can J Stat 2:489–509

  • Brus DJ, de Gruijter JJ (1997) Random sampling or geostatistical modeling? Choosing between design-based and model-based sampling strategies for soil (with discussion). Geoderma 80:1–44

  • Brus DJ, Heuvelink GBM (2007) Optimization of sample patterns for universal kriging of environmental variables. Geoderma 138:86–95

  • Bueso MC, Angulo JM, Qian G, Alonso FJ (1999) Spatial sampling design based on stochastic complexity. J Multivar Anal 71:94–110

  • Chang H, Fu AQ, Le ND, Zidek JV (2005) Designing environmental monitoring networks for measuring extremes. http://www.samsi.info/TR/tr2005-04.pdf

  • Chen VCP, Tsui KL, Barton RR, Meckesheimer M (2006) A review on design, modeling and applications of computer experiments. IIE Trans 38:273–291

  • Christensen R (1991) Linear models for multivariate time series and spatial data. Springer, Berlin

  • Diggle P, Lophaven S (2006) Bayesian geostatistical design. Scand J Stat 33:53–64

  • Diggle P, Menezes R, Su TL (2009) Geostatistical inference under preferential sampling. Appl Stat (in press)

  • Dobbie MJ, Henderson BL, Stevens DL Jr (2008) Sparse sampling: spatial design for monitoring stream networks. Stat Surv 2:113–153

  • Dubois G (2008) Recent advances in automatic interpolation for real-time mapping. Stoch Environ Res Risk Assess, Special Volume 22, Number 5. Springer, Berlin/Heidelberg

  • Fedorov VV (1972) Theory of optimal experiments (transl.: Studden WJ, Klimko EM (eds)). Academic Press, New York (Russian original: Nauka, Moscow 1971)

  • Fedorov VV, Müller WG (1988) Two approaches in optimization of observing networks. In: Dodge Y, Fedorov VV, Wynn HP (eds) Optimal design and analysis of experiments. Elsevier, Amsterdam

  • Fedorov VV, Flanagan D (1997) Optimal monitoring network design based on Mercer’s expansion of the covariance kernel. J Comb Inf Syst Sci 23

  • Fedorov VV, Hackl P (1994) Optimal experimental design: spatial sampling. Calcutta Stat Assoc Bull 44:173–194

  • Fedorov VV, Müller WG (2007) Optimum design for correlated fields via covariance kernel expansions. In: Lopez-Fidalgo J (ed) Proceedings of mODa 8

  • Fuentes M, Chaudhuri A, Holland DM (2007) Bayesian entropy for spatial sampling design of environmental data. J Environ Ecol Stat 14:323–340

  • van Groenigen JW, Siderius W, Stein A (1999) Constrained optimisation of soil sampling for minimisation of the kriging variance. Geoderma 87:239–259

  • Kiefer J (1959) Optimum experimental design. J Royal Stat Soc Ser B 21:272–319

  • Kleijnen JPC, van Beers WCM (2004) Application driven sequential design for simulation experiments: Kriging metamodeling. J Oper Res Soc 55:876–893

  • Kleijnen JPC (2004) An overview of the design and analysis of simulation experiments for sensitivity analysis. Eur J Oper Res 164:287–300

  • Lim YB, Sacks J, Studden WJ (2002) Design and analysis of computer experiments when the output is highly correlated over the input space. Canad J Stat 30:109–126

  • Melas VB (2006) Functional approach to optimal experimental design. Springer, Berlin

  • Mitchell TJ, Morris MD (1992) Bayesian design and analysis of computer experiments: two examples. Stat Sinica 2:359–379

  • Morris MD, Mitchell TJ, Ylvisaker D (1993) Bayesian design and analysis of computer experiments: use of derivatives in surface prediction. Technometrics 35:243–255

  • Müller WG (2005) A comparison of spatial design methods for correlated observations. Environmetrics 16:495–505

  • Müller WG, Pazman A (1998) Design measures and approximate information matrices for experiments without replications. J Stat Plan Inf 71:349–362

  • Müller WG, Pazman A (1999) An algorithm for the computation of optimum designs under a given covariance structure. J Comput Stat 14:197–211

  • Müller WG, Pazman A (2003) Measures for designs in experiments with correlated errors. Biometrika 90:423–434

  • Müller P, Sanso B, De Iorio M (2004) Optimal Bayesian design by inhomogeneous Markov Chain simulation. J Am Stat Assoc 99:788–798

  • Omre H (1987) Bayesian kriging—merging observations and qualified guess in kriging. Math Geol 19:25–39

  • Omre H, Halvorsen K (1989) The Bayesian bridge between simple and universal kriging. Math Geol 21:767–786

  • Pazman A, Müller WG (2001) Optimal design of experiments subject to correlated errors. Stat Probab Lett 52:29–34

  • Pilz J (1991) Bayesian estimation and experimental design in linear regression models. Wiley, New York

  • Pilz J, Schimek MG, Spöck G (1997) Taking account of uncertainty in spatial covariance estimation. In: Baafi E, Schofield N (eds) Proceedings of the fifth international geostatistics congress. Kluwer, Dordrecht, pp. 302–313

  • Pilz J, Spöck G (2006) Spatial sampling design for prediction taking account of uncertain covariance structure. In: Caetano M, Painho M (eds) Proceedings of Accuracy 2006. Instituto Geografico Portugues, pp. 109–118

  • Pilz J, Spöck G (2008) Why do we need and how should we implement Bayesian kriging methods. Stoch Environ Res Risk Assess 22/5:621–632

  • Sion M (1958) On general minimax theorems. Pac J Math 8:171–176

  • Spöck G (1997) Die geostatistische Berücksichtigung von a-priori Kenntnissen über die Trendfunktion und die Kovarianzfunktion aus Bayesscher, Minimax und Spektraler Sicht. Master’s thesis, University of Klagenfurt

  • Spöck G (2008) Non-stationary spatial modeling using harmonic analysis. In: Ortiz JM, Emery X (eds) Proceedings of the eighth international geostatistics congress. Gecamin, Chile, pp. 389–398

  • Stein ML (1999) Interpolation of spatial data: some theory for kriging. Springer, New York

  • Trujillo-Ventura A, Ellis JH (1991) Multiobjective air pollution monitoring network design. Atmos Environ 25:469–479

  • Webster R, Atteia O, Dubois J-P (1994) Coregionalization of trace metals in the soil in the Swiss Jura. Eur J Soil Sci 45:205–218

  • Yaglom AM (1987) Correlation theory of stationary and related random functions. Springer, New York

  • Zhu Z, Stein ML (2006) Spatial sampling design for prediction with estimated parameters. J Agricult Biol Environ Stat 11:24–49

Author information

Correspondence to Gunter Spöck.

Appendix 1: Convex design theory

1.1 The experimental design problem for Bayesian linear regression

Here we give a survey of convex design theory since, as shown in Sects. 1 and 2 of the paper, this theory can easily be used to solve spatial sampling design problems with correlated errors and to calculate minimax predictors. In contrast to the spatial sampling design problem, the experimental design problem for linear regression models with uncorrelated errors may be considered solved since the pioneering works of Kiefer (1959) and Fedorov (1972).

1.1.1 The Bayesian linear regression model

To be specific, we consider here the following Bayesian linear regression model

$$ \begin{aligned} Y(x)&={{\mathbf{h}}}(x)^{T}{{\gamma}}+\epsilon_{0}(x)\\ \hbox{E}(Y(x))&={{\mathbf{h}}}(x)^{T}{{\gamma}}, \end{aligned} $$

where h(x) is a vector of regression functions in R r dependent on explanatory variables x ∈ R q, q ≥ 1. We assume the error process ε0(x) to be uncorrelated with fixed variance

$$ \hbox{var}(\epsilon_{0}(x))=\sigma_{0}^{2}. $$

Further, we assume prior knowledge about the trend parameter vector \({{\gamma}}\) in the form of a known prior mean \(\hbox{E}({{\gamma}})={{\gamma}}_{0}\) and a known a priori covariance matrix \(\hbox{cov}({{\gamma}})={\varvec{\Upgamma}}>0.\) Then the best affine linear predictor of the response surface at an unmeasured explanatory vector x 0 is given by

$$ \hat{Y}(x_{0})={{\mathbf{h}}}(x_{0})^{T}\hat{{{\gamma}}}^{B}, $$

where

$$ \hat{{{\gamma}}}^{B}=({{\mathbf{H}}}^{T}{{\mathbf{H}}}+ \sigma_{0}^{2}{\varvec{\Upgamma}}^{-1})^{-1} ({{\mathbf{H}}}^{T}{{\mathbf{Y}}}+ \sigma_{0}^{2}{\varvec{\Upgamma}}^{-1}{{\gamma}}_{0}) $$

is the a posteriori mean of the trend parameter vector \({{\gamma}}\) under the Gaussian assumption for the prior distribution and for the errors. Here H = (h(x 1),...,h(x n ))T is the design matrix and Y = (Y(x 1),...,Y(x n ))T is the vector of the observations.
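The estimator above can be sketched numerically. The following is a minimal illustration, assuming a quadratic regression h(x) = (1, x, x²)ᵀ, an identity prior covariance and a zero prior mean (all illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def h(x):
    # assumed example: quadratic regression functions, r = 3
    return np.array([1.0, x, x**2])

sigma0_sq = 1.0                 # error variance sigma_0^2
Gamma = np.eye(3)               # assumed prior covariance of gamma
gamma0 = np.zeros(3)            # assumed prior mean of gamma

xs = np.linspace(-1.0, 1.0, 10)            # design points x_1, ..., x_n
H = np.vstack([h(x) for x in xs])          # design matrix H
gamma_true = np.array([0.5, -1.0, 2.0])
Y = H @ gamma_true + rng.normal(0.0, np.sqrt(sigma0_sq), size=len(xs))

# a posteriori mean of gamma (the Bayes estimator given above)
Gamma_inv = np.linalg.inv(Gamma)
gamma_B = np.linalg.solve(H.T @ H + sigma0_sq * Gamma_inv,
                          H.T @ Y + sigma0_sq * Gamma_inv @ gamma0)

def predict(x0):
    # best affine linear predictor of the response surface at x0
    return h(x0) @ gamma_B
```

Note that `np.linalg.solve` is used instead of explicitly inverting the posterior precision matrix, which is numerically preferable.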

1.1.2 The Bayesian experimental design problem

The total mean squared error of our predictor reads

$$ \begin{aligned} &\hbox{E}\left((Y(x_{0})-\hat{Y}(x_{0}))^{2}\right)\\ &=\sigma_{0}^{2}[1+{{\mathbf{h}}}(x_{0})^{T} (\sigma_{0}^{2}{\varvec{\Upgamma}}^{-1}+ {{\mathbf{H}}}^{T}{{\mathbf{H}}})^{-1}{{\mathbf{h}}}(x_{0})]. \end{aligned} $$

We write v n  = (x 1,x 2,...,x n ) for the exact design and H(v n ) = H to express that the above expression depends on the selected design points x i . Bayesian experimental design then tries to select the points in v n in such a way that the expression

$$ \int\limits_{{{\mathbf{X}}}}{{\mathbf{h}}}(x)^{T}\left({{\mathbf{H}}}^{T}(v_{n}) {{\mathbf{H}}} (v_{n})+\sigma_{0}^{2}{\varvec{\Upgamma}}^{-1}\right)^{-1} {{\mathbf{h}}}(x)P(dx) $$

is minimized, where X is the design region and P(.) is a fixed prespecified probability measure on X, weighting the importance of the sampling points. Obviously, this is a discrete optimization problem and is therefore very complicated to solve numerically. The matrix

$$ {{\mathbf{M}}_{B}}(v_{n})={\frac{1}{n}}({{\mathbf{H}}}^{T}(v_{n}) {{\mathbf{H}}}(v_{n})+\sigma_{0}^{2}{\varvec{\Upgamma}}^{-1}) $$

is called the Bayesian information matrix. Its inverse is proportional to the expectation of the posterior covariance matrix of \({{\gamma}}.\) Assuming that k different design points x 1,x 2,...,x k with multiplicities n 1,n 2,...,n k , \(\sum_{i=1}^{k}{n_{i}}=n,\) are contained in v n we may write

$$ {\frac{1}{n}}{{\mathbf{H}}}^{T}(v_{n}){{\mathbf{H}}}(v_{n}) =\sum_{i=1}^{k}\frac{n_{i}} {n}{{\mathbf{h}}}(x_{i}){{\mathbf{h}}}(x_{i})^{T}. $$

The proportions \({\frac{n_{i}}{n}}\) may be interpreted as probabilities or intensities with which the different design points x i are selected. This interpretation is the key idea to define continuous design measures ξ(dx) on X. A continuous design measure is just a probability measure on X, and is a generalization of exact design measures for which

$$ \begin{aligned} \xi(x_{1})=&{\frac{n_{1}}{n}},\quad\xi(x_{2})=\frac{n_{2}}{n},\ldots, \xi(x_{k})={\frac{n_{k}}{n}} \\ \sum_{i=1}^{k}{n_{i}}=&n, \quad x_{1}\neq x_{2}\neq\cdots\neq x_{k}, \quad n_{i}\in {{\mathbb{N}}}. \end{aligned} $$

Then, we can define also a continuous Bayesian information matrix

$$ {{\mathbf{M}}}_{B}(\xi)=\int\limits_{{{\mathbf{X}}}}{{\mathbf{h}}}(x){{\mathbf{h}}} (x)^{T}\xi(dx)+{\frac{\sigma_{0}^{2}}{n}}{\varvec{\Upgamma}}^{-1} $$

and a continuous Bayesian design problem

$$ \int\limits_{{{\mathbf{X}}}}{{{\mathbf{h}}}(x)^{T}{{\mathbf{M}}}_{B}(\xi)^{-1} {{\mathbf{h}}}(x)P(dx)}\rightarrow \mathop{\hbox{min}}\limits_{\xi\in{\varvec{\Upxi}}}. $$

Here \({\varvec{\Upxi}}\) is the set of all probability measures on the set X, which is supposed to be compact. Defining

$$ {{\mathbf{U}}}=\int\limits_{{{\mathbf{X}}}} {{{\mathbf{h}}}(x) {{\mathbf{h}}}(x)^{T}P(dx)}, $$

the above minimization problem may be written as

$$ \Uppsi(\xi)=\hbox{tr}({{\mathbf{U}}}{{\mathbf{M}}}_{B}(\xi)^{-1})\rightarrow \mathop{\hbox{min}}\limits_{\xi\in{\varvec{\Upxi}}}, $$

where tr(.) is the trace functional. It may be shown that the set of all information matrices on \({\varvec{\Upxi}}\) is convex and compact. Furthermore it is possible to show that all functionals of the above form are convex and continuous in M B (ξ) and ξ. The above design functional Ψ(ξ) thus attains its minimum at a design \(\xi^{*}\in {\varvec{\Upxi}},\) see Pilz (1991).
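As a concrete sketch, the functional Ψ(ξ) = tr(U M_B(ξ)⁻¹) can be evaluated for design measures on a discretised design region. The regression functions, the grid standing in for X, and the uniform choice of P are illustrative assumptions:

```python
import numpy as np

def h(x):
    return np.array([1.0, x, x**2])              # assumed regression functions

X_grid = np.linspace(-1.0, 1.0, 41)              # compact design region X, discretised
P = np.full(len(X_grid), 1.0 / len(X_grid))      # uniform weighting measure P (assumed)

sigma0_sq, n = 1.0, 20
Gamma_inv = np.eye(3)                            # assumed prior precision Gamma^{-1}

# U = int_X h(x) h(x)^T P(dx)
U = sum(p * np.outer(h(x), h(x)) for x, p in zip(X_grid, P))

def M_B(xi):
    """Continuous Bayesian information matrix for weights xi on X_grid."""
    M = sum(w * np.outer(h(x), h(x)) for x, w in zip(X_grid, xi))
    return M + (sigma0_sq / n) * Gamma_inv

def Psi(xi):
    """Design functional Psi(xi) = tr(U M_B(xi)^{-1})."""
    return np.trace(U @ np.linalg.inv(M_B(xi)))

xi_uniform = np.full(len(X_grid), 1.0 / len(X_grid))
```

Since M_B(ξ) is linear in ξ and tr(U M⁻¹) is convex in M, the functional is convex in ξ, which can also be checked numerically on pairs of design measures.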

1.2 An algorithm for solving the continuous design problem

For the minimization of the design functional

$$ \Uppsi(\xi)=\hbox{tr}({{\mathbf{UM}}}_{B}(\xi)^{-1}) $$

algorithms known from the theory of convex optimization may be used. We concentrate here on an algorithm of Fedorov (1972); a more extensive exposition of design algorithms may be found in Pilz (1991). The proposed design algorithm is based on the directional derivative at \(\xi\in{\varvec{\Upxi}}\) in the direction of \(\bar{\xi}\in {\varvec{\Upxi}}:\)

$$ \Updelta_{\psi}(\xi,\bar{\xi})=\lim_{\alpha\downarrow 0}{\frac{d} {d\alpha}}\Uppsi((1-\alpha)\xi+\alpha\bar{\xi}) $$

It may be shown that this directional derivative attains its minimum in the direction of a one-point design measure \(\delta_{x}\in {\varvec{\Upxi}}\) with support x ∈ X.

The proposed design algorithm is based on a fixed sequence \(\{\alpha_{s}\}_{s\in{\mathbb{N}}},\) such that

$$ \lim_{s\to\infty}{\alpha_{s}}=0,\quad \sum_{s\in{\mathbb{N}}}{\alpha_{s}}=\infty,\, \alpha_{s}\in[0,1),\quad s=1,2,\ldots $$

At the s-th iteration of the design algorithm we determine

$$ \Updelta_{\Uppsi}(\xi_{s-1},\delta_{x_{s}})=\mathop{\hbox{inf}}\limits_{x\in {{\mathbf{X}}}}{\Updelta}_{\Uppsi}(\xi_{s-1},\delta_{x}) $$

and form the new design measure

$$ \xi_{s}=(1-\alpha_{s})\xi_{s-1}+\alpha_{s}\delta_{x_{s}}. $$

It may be shown that the sequence \(\{\Uppsi(\xi_{s})\}_{s\in{{\mathbb{N}}}}\) of functional values converges and that

$$ \lim_{s\to\infty}{\Uppsi(\xi_{s})}=\mathop{\hbox{inf}} \limits_{\xi\in\Upxi}{\Uppsi(\xi)}. $$

Since M B (ξ) is regular whatever the design \(\xi\in {\varvec{\Upxi}},\) the starting design may be taken as a one-point design \(\xi_{0}=\delta_{x_{0}},\) where x 0 should be such that

$$ \Uppsi(\xi_{0})=\mathop{\hbox{inf}}\limits_{x\in {{\mathbf{X}}}}{\Uppsi(\delta_{x})}. $$

Then the weights α s needed for the construction of the s-th iteration design may be taken as \(\alpha_{s}={\frac{1} {1+s}},\;s=1,2,\ldots\) . The remaining question is when to stop the iteration. The answer depends on the so-called efficiency of the design ξ:

$$ e_{\Uppsi}(\xi)={\frac{\mathop{\hbox{inf}}\limits_{\bar{\xi}\in {\varvec{\Upxi}}}{\Uppsi(\bar{\xi})}}{\Uppsi(\xi)}}. $$

Defining d Ψ(ξ) = infxX  ΔΨ(ξ,δ x ), it may be shown that the following inequality holds for the efficiency:

$$ 1+{\frac{d_{\Uppsi}(\xi)}{\Uppsi(\xi)}}\le e_{\Uppsi}(\xi)\le 1. $$

This way, the iteration should be stopped at stage s 0 if

$$ 1+{\frac{d_{\Uppsi}(\xi_{s_{0}})}{\Uppsi(\xi_{s_{0}})}}\ge e_{0}, $$

where e 0 ∈ (0, 1) is some predetermined efficiency that is to be guaranteed. We refer to Pilz (1991) for the explicit form of the necessary directional derivatives.
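The whole iteration, including the efficiency-based stopping rule, can be sketched as follows. The regression functions, the discretised design region, and the prior are illustrative assumptions; for this functional the directional derivative toward δ_x evaluates to Ψ(ξ) − h(x)ᵀM⁻¹UM⁻¹h(x) − tr(M⁻¹UM⁻¹R) with R = (σ₀²/n)Γ⁻¹:

```python
import numpy as np

def h(x):
    return np.array([1.0, x, x**2])            # assumed regression functions

X_grid = np.linspace(-1.0, 1.0, 41)            # discretised design region X
sigma0_sq, n = 1.0, 20
R = (sigma0_sq / n) * np.eye(3)                # (sigma_0^2 / n) Gamma^{-1}
HH = np.array([np.outer(h(x), h(x)) for x in X_grid])
U = HH.mean(axis=0)                            # uniform weighting measure P (assumed)

def Psi_and_Minv(xi):
    Minv = np.linalg.inv(np.tensordot(xi, HH, axes=1) + R)
    return np.trace(U @ Minv), Minv

# starting design: best one-point design xi_0 = delta_{x_0}
xi = np.zeros(len(X_grid))
x0 = min(range(len(X_grid)),
         key=lambda i: np.trace(U @ np.linalg.inv(HH[i] + R)))
xi[x0] = 1.0

e0 = 0.95                                      # efficiency to be guaranteed
Psi0, _ = Psi_and_Minv(xi)
for s in range(1, 20001):
    Psi_s, Minv = Psi_and_Minv(xi)
    Q = Minv @ U @ Minv
    sens = np.einsum('kij,ji->k', HH, Q)       # h(x)^T Minv U Minv h(x), per x
    d = Psi_s - sens.max() - np.trace(Q @ R)   # d_Psi(xi) = inf_x Delta(xi, delta_x)
    if 1.0 + d / Psi_s >= e0:                  # efficiency bound reached: stop
        break
    alpha = 1.0 / (1.0 + s)
    xi *= (1.0 - alpha)
    xi[int(np.argmax(sens))] += alpha          # xi_s = (1-a) xi_{s-1} + a delta_x
Psi_final = Psi_and_Minv(xi)[0]
```

Since d_Ψ(ξ) ≤ 0 always (the derivative in the direction of ξ itself is zero), the bound 1 + d/Ψ never exceeds 1.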

If the obtained discrete optimal design can be written as an exact design, i.e. if its weights are integer multiples of \({\frac{1} {n}},\) this design is also optimal within the class of exact designs. In general, however, this is not the case, and a discrete design cannot directly be realized by n observations. The obtained discrete design ξ* = {(x 1,p 1),...,(x m ,p m )} then has to be rounded by means of forming the set

$$\begin{aligned} {\varvec{\Upxi}}_{{\mathbf{n}}}=\left\{\xi_{v_{n}}=\left\{\left(x_{1},{\frac{n_{1}} {n}}\right),\ldots,\left(x_{m},{\frac{n_{m}}{n}}\right)\right\}:\right.\\ \quad \left. n_{i} \ge [np_{i}],\;n_{i}\in\{0,1,\ldots,n\},\;\sum{n_{i}}=n\right\}, \end{aligned}$$

where [np i ] denotes the integer part of np i . We then choose as approximation to the optimal exact design that design \(\xi^{*}_{v_{n}}\in{\varvec{\Upxi}}_{n}\) for which \(\Uppsi(\xi_{v_{n}})\) attains its minimum over \({\varvec{\Upxi}}_{{{\mathbf{n}}}}.\)
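A rounding step of this kind can be sketched, for instance, with a largest-remainder rule. This produces one member of Ξ_n; the text above selects the Ψ-minimizing member, so the rule below is a simplifying heuristic, not the paper's exact choice:

```python
import numpy as np

def round_design(p, n):
    """Round weights p of a discrete design to multiplicities n_i with
    n_i >= [n p_i] and sum n_i = n (one member of the set Xi_n)."""
    p = np.asarray(p, dtype=float)
    base = np.floor(n * p).astype(int)        # integer parts [n p_i]
    remainder = n - base.sum()                # observations still to assign
    order = np.argsort(-(n * p - base))       # largest fractional parts first
    base[order[:remainder]] += 1
    return base

counts = round_design([0.35, 0.45, 0.20], 10)
```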

1.3 Iteration procedures for determining exact designs

We are now going to formulate iteration procedures for the construction of approximately optimal exact designs. Contrary to the construction of optimal discrete designs, here we cannot prove convergence of the exact designs to the functional value Ψ(v*) of an optimal exact design v*; we can only guarantee stepwise improvement of a given exact starting design, i.e. the sequence of functional values Ψ(v n,s) decreases monotonically with increasing iteration index s. The algorithm is an exchange type algorithm improving n-point designs and starting from an initial design.

1.3.1 Exchange type algorithm

Step 1:

Use some initial design v n,1 = (x 1,1,...,x n,1) ∈ X n of size n.

Step 2:

Beginning with s = 1 form the design v n+1,s = v n,s + (x n+1,s) by adding the point

$$ x_{n+1,s}=\hbox{arg}\;\mathop{\hbox{min}}\limits_{x\in{{\mathbf{X}}}} {\Uppsi({{\mathbf{M}}}_{B}(v_{n,s}+(x)))} $$

to v n,s.

Then form \(v_{n,s}^{j}=v_{n+1,s}-(x_{j,s}),\) j = 1,2,...,n + 1 and delete that point \(x_{j^{*},s}\) from v n+1,s for which

$$ \Uppsi({{\mathbf{M}}}_{B}(v_{n,s}^{j^{*}}))= \mathop{\rm min}\limits_{j\in\{1,\ldots,n+1\}}{\Uppsi({{\mathbf{M}}}_{B}(v_{n,s}^{j}))}. $$
Step 3:

Repeat Step 2 until the point to be deleted is equivalent to the point to be added. For our design functional Step 2 is determined as follows:

$$ x_{n+1,s}=\hbox{arg}\;\mathop{\hbox{max}} \limits_{x\in{{\mathbf{X}}}}{\frac{{{\mathbf{h}}} (x)^{T}{{\mathbf{M}}}_{B}(v_{n,s})^{-1}{{\mathbf{U}}} {{\mathbf{M}}}_{B}(v_{n,s})^{-1}{{\mathbf{h}}}(x)}{n+{{\mathbf{h}}}(x)^{T} {{\mathbf{M}}}_{B}(v_{n,s})^{-1}{{\mathbf{h}}}(x)}} $$
$$ j^{*}=\hbox{arg}\;\mathop{\hbox{min}}\limits_{1\le j\le n+1}{\frac{{{\mathbf{h}}} (x_{j,s})^{T}{{\mathbf{Q}}}_{B}(v_{n+1,s}){{\mathbf{h}}}(x_{j,s})}{n+1- {{\mathbf{h}}}(x_{j,s})^{T}{{\mathbf{M}}}_{B} (v_{n+1,s})^{-1} {{\mathbf{h}}}(x_{j,s})}}, $$

where

$$ {{\mathbf{Q}}}_{B}(v_{n+1,s})={{\mathbf{M}}}_{B}(v_{n+1,s})^{-1} {{\mathbf{UM}}}_{B}(v_{n+1,s})^{-1}. $$
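The exchange step above can be sketched compactly as follows. For clarity the information matrices are re-inverted directly at every step (the update formulas of Sect. 1.3.6 would avoid this); the regression functions, candidate grid and prior are illustrative assumptions:

```python
import numpy as np

def h(x):
    return np.array([1.0, x, x**2])            # assumed regression functions

X_grid = np.linspace(-1.0, 1.0, 41)            # candidate set for X
sigma0_sq = 1.0
Gamma_inv = np.eye(3)                          # assumed prior precision
U = sum(np.outer(h(x), h(x)) for x in X_grid) / len(X_grid)

def M_B(points):
    H = np.vstack([h(x) for x in points])
    return (H.T @ H + sigma0_sq * Gamma_inv) / len(points)

def Psi(points):
    return np.trace(U @ np.linalg.inv(M_B(points)))

def exchange(points, max_iter=50):
    """One-point exchange (Algorithm A1.3.1) improving an n-point design."""
    points = list(points)
    for _ in range(max_iter):
        n = len(points)
        Minv = np.linalg.inv(M_B(points))
        Q = Minv @ U @ Minv
        # Step 2a: add the point with maximal gain ratio
        gains = [(h(x) @ Q @ h(x)) / (n + h(x) @ Minv @ h(x)) for x in X_grid]
        x_new = X_grid[int(np.argmax(gains))]
        cand = points + [x_new]
        Minv1 = np.linalg.inv(M_B(cand))
        Q1 = Minv1 @ U @ Minv1
        # Step 2b: delete the point with minimal loss ratio
        losses = [(h(x) @ Q1 @ h(x)) / (n + 1 - h(x) @ Minv1 @ h(x))
                  for x in cand]
        j = int(np.argmin(losses))
        if np.isclose(cand[j], x_new):         # deleted point == added point: stop
            return points
        del cand[j]
        points = cand
    return points

start = [0.0] * 5                              # crude 5-point starting design
improved = exchange(start)
```

Since deleting the just-added point is always among the candidate deletions, each iteration can only decrease Ψ, which is the stepwise-improvement guarantee stated above.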

1.3.2 Generation of an initial design

The initial design is a one-point design which minimizes the design functional among all designs of size n = 1. Note that such a design exists since the Bayesian information matrix is positive definite even for designs of size n = 1.

Step 1:

Choose x 1 ∈ X such that \(x_{1}=\hbox{arg}\;\hbox{min}_{x\in{{\mathbf{X}}}}{\Uppsi({{\mathbf{M}}}_{B}((x)))}\) , and set v 1 = (x 1).

Step 2:

Beginning with i = 1, find x i+1 such that \(x_{i+1}=\hbox{arg}\;\hbox{min}_{x\in{{\mathbf{X}}}}{\Uppsi({{\mathbf{M}}}_{B}(v_{i}+(x)))}\) and form v i+1 = v i  + (x i+1). Continue with i replaced by i + 1 until i + 1 = n.

Step 3:

If i + 1 = n then stop and take v n,1 = (x 1,...,x n ) as an initial design.
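The forward procedure of Algorithm A1.3.2 can be sketched as a greedy loop (model setup restated here, as an illustrative assumption, so the snippet is self-contained):

```python
import numpy as np

def h(x):
    return np.array([1.0, x, x**2])            # assumed regression functions

X_grid = np.linspace(-1.0, 1.0, 41)            # candidate set for X
sigma0_sq = 1.0
Gamma_inv = np.eye(3)                          # assumed prior precision
U = sum(np.outer(h(x), h(x)) for x in X_grid) / len(X_grid)

def Psi(points):
    H = np.vstack([h(x) for x in points])
    M_B = (H.T @ H + sigma0_sq * Gamma_inv) / len(points)
    return np.trace(U @ np.linalg.inv(M_B))

def initial_design(n):
    """Algorithm A1.3.2: grow the design one Psi-minimizing point at a time.
    Works from size 1 on because M_B is positive definite for any design."""
    points = []
    for _ in range(n):
        scores = [Psi(points + [x]) for x in X_grid]
        points.append(float(X_grid[int(np.argmin(scores))]))
    return points

v_init = initial_design(4)
```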

1.3.3 Combination of the algorithms A1.3.2 and A1.3.1

It is a good idea to combine the initial design Algorithm A1.3.2 and the exchange type Algorithm A1.3.1 in the following way:

Step 1:

Start with the initial design algorithm and find a design with one first design point.

Step 2:

Having found a design with m ≥ 1 design points apply the exchange type algorithm to this design to improve it.

Step 3:

Add to the design from Step 2 one further design point by means of the initial design algorithm to get m + 1 design points.

Step 4:

Go back to Step 2 and iterate Step 2 and Step 3 until you have found n desired design points.

1.3.4 Reduction of experimental designs

Often it is desired to reduce a given experimental design v = (x 1,x 2,...,x n ) to one including only m < n design points from v:

Step 1:

Delete that design point \(x_{j^{*}}\) from v for which \(x_{j^{*}}=\hbox{arg}\;\hbox{min}_{x_{j}\in v}{\Uppsi({{\mathbf{M}}}_{B}(v-(x_{j})))},\) and set \(v:=v-(x_{j^{*}}).\)

Step 2:

Iterate Step 1 until the design v contains only m design points.

This algorithm, too, may be combined with an improvement step similar to the exchange-type Algorithm A1.3.1. In Algorithm A1.3.1 merely the calculation of x n+1,s has to be replaced by

$$ x_{n+1,s}=\hbox{arg}\;\mathop{\hbox{min}}\limits_{x\in v-v_{n,s}}{\Uppsi( {{\mathbf{M}}}_{B}(v_{n,s}+(x)))}, $$

where v is the initial design that has to be reduced.

This improved algorithm has the advantage that design points once deleted can reenter the design in the exchange step.
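The reduction procedure in its plain (non-exchange) form amounts to backward deletion, which can be sketched as follows (the model setup is again an illustrative assumption):

```python
import numpy as np

def h(x):
    return np.array([1.0, x, x**2])            # assumed regression functions

X_grid = np.linspace(-1.0, 1.0, 41)
sigma0_sq = 1.0
Gamma_inv = np.eye(3)                          # assumed prior precision
U = sum(np.outer(h(x), h(x)) for x in X_grid) / len(X_grid)

def Psi(points):
    H = np.vstack([h(x) for x in points])
    M_B = (H.T @ H + sigma0_sq * Gamma_inv) / len(points)
    return np.trace(U @ np.linalg.inv(M_B))

def reduce_design(points, m):
    """Algorithm A1.3.4: repeatedly delete the point whose removal
    yields the smallest Psi, until only m points remain."""
    points = list(points)
    while len(points) > m:
        scores = [Psi(points[:j] + points[j + 1:]) for j in range(len(points))]
        del points[int(np.argmin(scores))]
    return points

v = [-1.0, -0.5, 0.0, 0.25, 0.5, 1.0]
v_reduced = reduce_design(v, 3)
```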

1.3.5 Exact design algorithms for the D-optimality criterion

Besides the Bayesian design criterion discussed up to now, another well-known convex design criterion is the so-called D-optimality criterion, which is equivalent to minimizing the determinant of the preposterior covariance matrix

$$ \begin{aligned}(\sigma_{0}^{2} {\varvec{\Upgamma}}^{-1}+{{\mathbf{H}}}({{\mathbf{v}}}_{{{\mathbf{n}}}})^{T} {{\mathbf{H}}}({{\mathbf{v}}}_{{{\mathbf{n}}}}))^{-1} \end{aligned} $$

of the regression parameter.

The Algorithms A1.3.1–A1.3.4 may be applied to the D-optimality design criterion, too; merely the formulae given for x n+1,s and j* change and become

$$ x_{n+1,s}=\hbox{arg}\;\mathop{\hbox{max}}\limits_{x\in{\varvec{X}}}{ {{\mathbf{h}}}(x)^{T}{{\mathbf{M}}}_{B}(v_{n,s})^{-1}{{\mathbf{h}}}(x)} $$
$$ j^{*}=\hbox{arg}\;\mathop{\hbox{min}}\limits_{1\le j\le n+1}{{{\mathbf{h}}}(x_{j,s})^{T}{{\mathbf{M}}}_{B}(v_{n+1,s})^{-1} {{\mathbf{h}}}(x_{j,s})}. $$
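For the D-criterion the add step thus reduces to maximizing the quadratic form h(x)ᵀM_B⁻¹h(x) over the candidate set, e.g. (illustrative model as before):

```python
import numpy as np

def h(x):
    return np.array([1.0, x, x**2])            # assumed regression functions

X_grid = np.linspace(-1.0, 1.0, 41)            # candidate set for X
sigma0_sq = 1.0
Gamma_inv = np.eye(3)                          # assumed prior precision

points = [-1.0, 0.0, 1.0]                      # current exact design
H = np.vstack([h(x) for x in points])
Minv = np.linalg.inv((H.T @ H + sigma0_sq * Gamma_inv) / len(points))

# D-optimal add step: pick the candidate with maximal h^T M_B^{-1} h
quad = np.array([h(x) @ Minv @ h(x) for x in X_grid])
x_next = float(X_grid[int(np.argmax(quad))])
```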

1.3.6 Inverse of the information matrix

Obviously, the calculation of exact designs requires in every step the inverses of the information matrices M B (v n,s) and M B (v n+1,s). In the section on spatial sampling design we see that these information matrices can have quite high dimension, about 2,500 × 2,500. So, how can one invert such large matrices in affordable time? In spatial sampling design the first inverse information matrix, corresponding to 0 selected design points, always has block-diagonal structure: one very small block is the a priori covariance matrix of the deterministic trend functions, and one further block is a diagonal matrix of very high dimension (about 2,500 diagonal elements, the variances of the stochastic amplitudes resulting from the harmonic decomposition of the random field into sine-cosine-Bessel surface harmonics). So no inversion is needed at the first step. The inversion of all subsequent information matrices then becomes easy, and there is computationally no need for explicit numerical matrix inversion algorithms when one uses equations (13.26) and (13.28) in Pilz (1991):

$$ \begin{aligned} {{\mathbf{M}}}_{B}(v_{n,s}+(x))^{-1}& ={\frac{n+1}{n}}\left\{ {{\mathbf{M}}}_{B}(v_{n,s})^{-1} -{\frac{{{\mathbf{M}}}_{B}(v_{n,s})^{-1}{{\mathbf{h}}}(x){{\mathbf{h}}}(x)^{T} {{\mathbf{M}}}_{B}(v_{n,s})^{-1}}{n+{{\mathbf{h}}}(x)^{T} {{\mathbf{M}}}_{B}(v_{n,s})^{-1}{{\mathbf{h}}}(x)}}\right\},\\ {{\mathbf{M}}}_{B}(v_{n,s}^{j})^{-1}& ={\frac{n}{n+1}}\left\{ {{\mathbf{M}}}_{B}(v_{n+1,s})^{-1} +{\frac{{{\mathbf{M}}}_{B}(v_{n+1,s})^{-1}{{\mathbf{h}}}(x_{j,s}) {{\mathbf{h}}}(x_{j,s})^{T}{{\mathbf{M}}}_{B}(v_{n+1,s})^{-1}}{n+1- {{\mathbf{h}}}(x_{j,s})^{T}{{\mathbf{M}}}_{B}(v_{n+1,s})^{-1} {{\mathbf{h}}}(x_{j,s})}}\right\} \end{aligned} $$

Obviously, only matrix and vector multiplications are needed in these update formulae.
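A quick numerical check of the first update formula (using an illustrative quadratic model; the update itself performs no fresh inversion, only products):

```python
import numpy as np

def h(x):
    return np.array([1.0, x, x**2])            # assumed regression functions

sigma0_sq = 1.0
Gamma_inv = np.eye(3)                          # assumed prior precision

def M_B(points):
    H = np.vstack([h(x) for x in points])
    return (H.T @ H + sigma0_sq * Gamma_inv) / len(points)

pts = [-0.8, -0.2, 0.3, 0.9]
n = len(pts)
Minv = np.linalg.inv(M_B(pts))                 # known inverse of M_B(v_n)
hx = h(0.5)                                    # candidate point x = 0.5

# rank-one update, eq. (13.26) in Pilz (1991): only products, no inversion
upd = (n + 1) / n * (Minv
                     - np.outer(Minv @ hx, hx @ Minv)
                     / (n + hx @ Minv @ hx))
direct = np.linalg.inv(M_B(pts + [0.5]))       # direct inversion, for checking
```

The agreement of `upd` and `direct` is an instance of the Sherman-Morrison identity applied to the rank-one modified, 1/n-scaled information matrix.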


Cite this article

Spöck, G., Pilz, J. Spatial sampling design and covariance-robust minimax prediction based on convex design ideas. Stoch Environ Res Risk Assess 24, 463–482 (2010). https://doi.org/10.1007/s00477-009-0334-y
