Summary
Regression models with correlated errors lead to nonadditivity of the information matrix. This makes the usual approach to design optimization (approximation by a design measure, application of an equivalence theorem, numerical calculation by a gradient algorithm) impossible. Extended information matrices depending upon design measures have therefore been proposed recently, and here we present a first-order iterative design optimization algorithm based upon them. A heuristic is formulated to circumvent the nonconvexity of the problem, and the method is applied to typical examples from the literature.
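For orientation, the classical first-order scheme whose breakdown under correlated errors motivates this work can be sketched for the standard uncorrelated case. The grid, the quadratic model, and the step-size rule below are illustrative choices, not the algorithm of this paper:

```python
import numpy as np

def wynn_fedorov(f, grid, iters=2000):
    """First-order (Wynn-Fedorov) scheme for D-optimal design,
    uncorrelated-error setting; illustrative sketch only."""
    F = np.array([f(x) for x in grid])        # model vectors on the grid
    p = F.shape[1]                            # number of parameters
    w = np.full(len(grid), 1.0 / len(grid))   # uniform starting measure
    for k in range(iters):
        M = F.T @ (w[:, None] * F)            # information matrix M(xi)
        # variance function d(x, xi) = f(x)' M^{-1} f(x) at every grid point
        d = np.einsum('ij,jk,ik->i', F, np.linalg.inv(M), F)
        j = int(np.argmax(d))                 # point of worst prediction
        alpha = 1.0 / (k + p + 1)             # diminishing step length
        w = (1.0 - alpha) * w                 # shift mass toward x_j
        w[j] += alpha
    return w, float(d.max())

# quadratic regression f(x) = (1, x, x^2) on a grid over [-1, 1]
grid = np.linspace(-1, 1, 101)
w, dmax = wynn_fedorov(lambda x: np.array([1.0, x, x * x]), grid)
```

By the equivalence theorem, max_x d(x, xi*) = p at the D-optimum; here the iterate approaches the design with mass 1/3 at each of -1, 0, 1 and dmax approaches p = 3. With correlated errors the information matrix is no longer additive in the design points, and this scheme has no analogue, which is the gap the extended information matrices address.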
Additional information
Research supported by the Slovak VEGA grant No. 1/4196/97.
Appendices
Appendix A: The first order derivative of the extended information matrix
For λ > 0 we have from (6)
where ξλ = (1 − λ)μ + λη, and μ is supported on the whole design space 𝒳.
So the derivative of the criterion function based on the information matrix \(J_\kappa ^{\left( \gamma \right)}\left( \xi \right)\) is
where
and
By taking limits we obtain
We have
Hence we obtain, as in the proof of the Proposition in Appendix B,
and
The limit derivatives of ln[γξλ] and of ln[γξλ(u)] are given in Lemma 2 in Appendix B.
Note that in the case that μ(x) < κ or μmax < κ, the logarithm in Lemma 2 is multiplied by the factor γ, which tends to zero in the limit. So in this case the limit derivative in (A.10) is infinitely larger than in the case μ(x) > κ, μmax > κ. Instead of (A.10) we then compute
This influences the algorithm which is described in Section 3.
Appendix B: Further Properties
Proposition: We have
and
where Vκ(ξ) is a diagonal matrix with entries
The proof follows directly from Lemma 1 in Appendix B. We note that [Vκ(ξ)]x,x can be continuously extended to the cases κ = ξ(x) and κ = ξmax, which are not covered by the Proposition.
Lemma 1:
If ξ(x) > κ, then
If ξ(x) < κ, then
Proof. We shall consider the terms in
i) If ξ(x) > κ, then (4) implies limγ→0[γξ(x)] = ξ(x) − κ, hence limγ→0 γ ln[γξ(x)] = 0.
ii) Similarly, ξmax > κ implies limγ→0 γ ln[γξmax] = 0.
iii) If ξ(x) < κ, then from (4) we obtain
$$\gamma \ln \left[ {{}_\gamma \xi \left( x \right)} \right] = \gamma \ln \kappa + \gamma \ln \left\{ {{{\left[ {1 + {t^{{1 \over \gamma }}}\left( x \right)} \right]}^\gamma } - 1} \right\}, \qquad {\rm{(A}}{\rm{.11)}}$$
with \(t\left( x \right) = {{\xi \left( x \right)} \over \kappa } < 1\). By the Taylor formula for the function z ↦ (1 + z)γ in a neighborhood of z = 0, we have
$${\left( {1 + z} \right)^\gamma } = 1 + \gamma z + {1 \over 2}\gamma \left( {\gamma - 1} \right){z^2} + o\left( {{z^2}} \right).$$
Hence from (A.11) we obtain
□
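The case ξ(x) < κ can also be checked numerically. The sketch below (the values of κ and ξ(x) are arbitrary illustrative choices) evaluates the right-hand side of (A.11), using expm1/log1p to keep the bracketed term accurate when t^{1/γ} underflows, and shows γ ln[γξ(x)] approaching the value ln(ξ(x)/κ) suggested by the Taylor expansion:

```python
import math

def gamma_log_xi(gamma, xi_x, kappa):
    # right-hand side of (A.11):
    # gamma*ln(kappa) + gamma*ln{ (1 + t^(1/gamma))^gamma - 1 }, t = xi(x)/kappa
    t = xi_x / kappa                            # t(x) < 1 in this case
    z = t ** (1.0 / gamma)                      # tiny as gamma -> 0
    inner = math.expm1(gamma * math.log1p(z))   # (1+z)^gamma - 1, underflow-safe
    return gamma * math.log(kappa) + gamma * math.log(inner)

kappa, xi_x = 0.2, 0.1                          # illustrative values, xi(x) < kappa
for g in (1e-1, 1e-2, 1e-3):
    print(g, gamma_log_xi(g, xi_x, kappa))      # approaches ln(0.1/0.2) = ln(0.5)
```

The γ ln γ and γ ln κ terms visible in the expansion vanish linearly in γ, which matches the slow convergence of the printed values.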
Lemma 2:
For ξλ(x) = (1 − λ)μ(x) + λη(x), we have
where
Proof. According to (4) we have
where \({h_\gamma }\left( \xi \right) = {\kappa ^{{1 \over \gamma }}} + \sum\nolimits_{x \in {\cal X}} {\xi {{\left( x \right)}^{{1 \over \gamma }}}} \).
By direct differentiation we obtain
where \(E_\mu ^{\left( {{1 \over \gamma }} \right)}\) denotes the weighted mean with weights equal to \({{{\mu ^{{1 \over \gamma }}}\left( x \right)} \over {\sum\nolimits_{u \in {\cal X}} {{\mu ^{{1 \over \gamma }}}\left( u \right)} }}\).
Similarly
where \({l_\gamma }\left( x \right) = {\kappa ^{{1 \over \gamma }}} + {\mu ^{{1 \over \gamma }}}\left( x \right)\).
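The role of the weights μ^{1/γ}(x)/Σu μ^{1/γ}(u) behind \(E_\mu ^{\left( {{1 \over \gamma }} \right)}\) can be seen numerically: as γ → 0 they concentrate on the maximizers of μ, which is why the limits below involve an average over the set Bμ of maximizers of μ. A small sketch with an arbitrary illustrative measure:

```python
import numpy as np

def weighted_mean(g, mu, gamma):
    # E_mu^{(1/gamma)}: mean of g with weights mu^{1/gamma}(x) / sum_u mu^{1/gamma}(u)
    w = mu ** (1.0 / gamma)
    w = w / w.sum()
    return float((w * g).sum())

mu = np.array([0.1, 0.3, 0.3, 0.2, 0.1])   # two maximizers (indices 1 and 2)
g = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # arbitrary function values on the grid
for gamma in (1.0, 0.1, 0.01):
    print(gamma, weighted_mean(g, mu, gamma))
# as gamma -> 0 the mean tends to (2 + 4)/2 = 3, the average over argmax mu
```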
i) Let μmax > κ. Then we obtain directly from (A.12)
$$\mathop {\lim }\limits_{\gamma \to 0} \mathop {\lim }\limits_{\lambda \to 0} {\partial \over {\partial \lambda }}\ln {{\rm{[}}_\gamma }{\xi _\lambda }] = {{{\mu _{\max }}} \over {{\mu _{\max }} - \kappa }}{E_{{B_\mu }}}\left[ {{{\eta (.)} \over {\mu (.)}} - 1} \right].$$
ii) Similarly, if μ(x) > κ, we obtain from (A.13)
$$\mathop {\lim }\limits_{\gamma \to 0} \mathop {\lim }\limits_{\lambda \to 0} {\partial \over {\partial \lambda }}\ln {{\rm{[}}_\gamma }{\xi _\lambda }(x){\rm{]}} = {{\eta (x) - \mu (x)} \over {\mu (x) - \kappa }}.$$
iii) Suppose that μmax < κ, and denote \(s\left( x \right) = {{\mu \left( x \right)} \over \kappa } < 1\). From (A.12) we obtain
$$\begin{array}{*{20}c}{\mathop {\lim }\limits_{\lambda \to 0} {\partial \over {\partial \lambda }}\gamma \ln {{\rm{[}}_\gamma }{\xi _\lambda }{\rm{]}} = {{{{\left[ {1 + \sum\nolimits_{x \in {\cal X}} {{s^{{1 \over \gamma }}}} (x)} \right]}^\gamma }} \over {{{\left[ {1 + \sum\nolimits_{x \in {\cal X}} {{s^{{1 \over \gamma }}}} (x)} \right]}^\gamma } - 1}}{{\gamma \sum\nolimits_{x \in {\cal X}} {{s^{{1 \over \gamma }}}} (x)} \over {\left[ {1 + \sum\nolimits_{x \in {\cal X}} {{s^{{1 \over \gamma }}}} (x)} \right]}}{\rm{E}}_\mu ^{\left( {{1 \over \gamma }} \right)}\left[ {{{\eta (.)} \over {\mu (.)}} - 1} \right]} \\ {{ \to _{\gamma \to 0}}{E_{{B_\mu }}}\left[ {{{\eta (.)} \over {\mu (.)}} - 1} \right]\mathop {\lim }\limits_{\gamma \to 0} {{\gamma \sum\nolimits_{x \in {\cal X}} {{s^{{1 \over \gamma }}}} (x)} \over {{{\left[ {1 + \sum\nolimits_{x \in {\cal X}} {{s^{{1 \over \gamma }}}} (x)} \right]}^\gamma } - 1}}{\rm{.}}} \\ \end{array} $$
Using the Taylor formula, as in the proof of Lemma 1, we obtain that the last limit is equal to 1.
iv) Suppose now that μ(x) < κ. Then from (A.13) we obtain
$$\begin{array}{*{20}c}{\mathop {\lim }\limits_{\lambda \to 0} {\partial \over {\partial \lambda }}\gamma \ln {{\rm{[}}_\gamma }{\xi _\lambda }(x){\rm{]}} = \gamma {{{{\left[ {1 + {s^{{1 \over \gamma }}}(x)} \right]}^\gamma }} \over {{{\left[ {1 + {s^{{1 \over \gamma }}}(x)} \right]}^\gamma } - 1}} \times {{{s^{{1 \over \gamma }}}(x)} \over {1 + {s^{{1 \over \gamma }}}(x)}}\left[ {{{\eta (x)} \over {\mu (x)}} - 1} \right]} \\ {{\rm{ }}{ \to _{\gamma \to 0}}\left[ {{{\eta (x)} \over {\mu (x)}} - 1} \right]\mathop {\lim }\limits_{\gamma \to 0} {{\gamma {s^{{1 \over \gamma }}}(x)} \over {{{\left[ {1 + {s^{{1 \over \gamma }}}(x)} \right]}^\gamma } - 1}},} \\ \end{array} $$
and the last limit is equal to 1, which can be proved in the same way as in iii). □
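The claim that the last limit equals 1 can be checked numerically. The sketch below (s = 0.5 is an arbitrary illustrative value with s < 1) evaluates γ s^{1/γ} / {[1 + s^{1/γ}]^γ − 1} for decreasing γ, again using expm1/log1p to avoid underflow in the denominator:

```python
import math

def last_limit_ratio(gamma, s):
    z = s ** (1.0 / gamma)                      # s^(1/gamma), tiny for s < 1
    denom = math.expm1(gamma * math.log1p(z))   # [1 + z]^gamma - 1, underflow-safe
    return gamma * z / denom

for g in (0.5, 0.1, 0.01):
    print(g, last_limit_ratio(g, 0.5))          # tends to 1 as gamma -> 0
```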
Cite this article
Müller, W.G., Pázman, A. An algorithm for the computation of optimum designs under a given covariance structure. Computational Statistics 14, 197–211 (1999). https://doi.org/10.1007/s001800050013