Optimal designs in multiple group random coefficient regression models

Abstract

The subject of this work is multiple group random coefficient regression models with several treatment groups and one control group. Such models are frequently used in cluster randomized trials. We investigate A-, D- and E-optimal designs for the estimation of the fixed treatment effects and for the prediction of the random treatment effects, and we illustrate the results by numerical examples.

References

  1. Bailey RA (2008) Design of comparative experiments. Cambridge University Press, Cambridge

  2. Bland JM (2004) Cluster randomised trials in the medical literature: two bibliometric surveys. BMC Med Res Methodol 4:21

  3. Bludowsky A, Kunert J, Stufken J (2015) Optimal designs for the carryover model with random interactions between subjects and treatments. Aust N Z J Stat 57:517–533

  4. Christensen R (2002) Plane answers to complex questions: the theory of linear models. Springer, New York

  5. Entholzner M, Benda N, Schmelter T, Schwabe R (2005) A note on designs for estimating population parameters. Biom Lett Listy Biometryczne 42:25–41

  6. Fedorov V, Jones B (2005) The design of multicentre trials. Stat Methods Med Res 14:205–248

  7. Gladitz J, Pilz J (1982) Construction of optimal designs in random coefficient regression models. Math Operationsforschung und Stat Ser Stat 13:371–385

  8. Harman R, Prus M (2018) Computing optimal experimental designs with respect to a compound Bayes risk criterion. Stat Probab Lett 137:135–141

  9. Henderson CR (1975) Best linear unbiased estimation and prediction under a selection model. Biometrics 31:423–477

  10. Henderson CR (1984) Applications of linear models in animal breeding. University of Guelph, Guelph

  11. Henderson CR, Kempthorne O, Searle SR, von Krosigk CM (1959) The estimation of environmental and genetic trends from records subject to culling. Biometrics 15:192–218

  12. Kunert J, Martin RJ, Eccleston J (2010) Optimal block designs comparing treatments with a control when the errors are correlated. J Stat Plan Inference 140:2719–2738

  13. Lemme F, van Breukelen GJP, Berger MPF (2015) Efficient treatment allocation in two-way nested designs. Stat Methods Med Res 24:494–512

  14. Majumdar D, Notz W (1983) Optimal incomplete block designs for comparing treatments with a control. Ann Stat 11:258–266

  15. Patton GC, Bond L, Carlin JB, Thomas L, Butler H, Glover S, Catalano R, Bowes G (2006) Promoting social inclusion in schools: a group-randomized trial of effects on student health risk behavior and well-being. Am J Public Health 96:1582–1587

  16. Piepho HP, Möhring J (2005) Best linear unbiased prediction of cultivar effects for subdivided target regions. Crop Sci 45:1151–1159

  17. Piepho HP, Möhring J (2010) Generation means analysis using mixed models. Crop Sci 50:1674–1680

  18. Prus M (2015) Optimal designs for the prediction in hierarchical random coefficient regression models. Ph.D. thesis, Otto-von-Guericke University, Magdeburg

  19. Prus M, Schwabe R (2016) Optimal designs for the prediction of individual parameters in hierarchical models. J R Stat Soc: Ser B 78:175–191

  20. Rasch D, Herrendörfer G (1986) Experimental design: sample size determination and block designs. Reidel, Dordrecht

  21. Schmelter T (2007) Experimental design for mixed models with application to population pharmacokinetic studies. Ph.D. thesis, Otto-von-Guericke University, Magdeburg

  22. Schwabe R (1996) Optimum designs for multi-factor models. Springer, New York

  23. Wierich W (1986) The D- and A-optimality of product design measures for linear models with discrete and continuous factors of influence. Habilitationsschrift, Freie Universität Berlin

Author information

Corresponding author

Correspondence to Maryna Prus.

Additional information

This research has been supported by grant SCHW 531/16-1 of the German Research Foundation (DFG). The author thanks Radoslav Harman, Norbert Gaffke and Rainer Schwabe for fruitful discussions. The author is also grateful to two referees and the Editor-in-Chief, who significantly contributed to improving the presentation of the results.

Appendix

Proofs of Theorems 1–4

To make use of the available results for estimation and prediction, we recognize model (7) as a special case of the linear mixed model (see, e.g., Christensen 2002)

$$\begin{aligned} {\mathbf {Y}}={\mathbf {X}} {\varvec{\beta }} + {\mathbf {Z}} {\varvec{\gamma }} + {\varvec{\varepsilon }}, \end{aligned}$$
(31)

where \({\mathbf {X}}\) and \({\mathbf {Z}}\) are the known design matrices of the fixed and random effects, and \({\varvec{\beta }}\) and \({\varvec{\gamma }}\) are the corresponding vectors of fixed and random effects. The random effects \({\varvec{\gamma }}\) and the observational errors \({\varvec{\varepsilon }}\) are assumed to be uncorrelated with zero means and non-singular covariance matrices \({\mathbf {G}}=\mathrm {Cov}\,({\varvec{\gamma }})\) and \({\mathbf {R}}=\mathrm {Cov}\,({\varvec{\varepsilon }})\).
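
For concreteness, the following is a minimal NumPy sketch of model (31); all dimensions and parameter values are illustrative and not taken from the paper.

```python
import numpy as np

# Simulate Y = X beta + Z gamma + eps as in (31); dimensions and
# parameter values are illustrative only.
rng = np.random.default_rng(0)
n_obs, p, q = 12, 2, 3                    # observations, fixed effects, random effects
X = rng.standard_normal((n_obs, p))       # fixed-effects design matrix
Z = rng.standard_normal((n_obs, q))       # random-effects design matrix
beta = np.array([1.0, -0.5])              # fixed effects
G = 0.5 * np.eye(q)                       # Cov(gamma), non-singular
R = 0.2 * np.eye(n_obs)                   # Cov(epsilon), non-singular
gamma = rng.multivariate_normal(np.zeros(q), G)    # random effects
eps = rng.multivariate_normal(np.zeros(n_obs), R)  # observational errors
Y = X @ beta + Z @ gamma + eps
```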

According to Henderson et al. (1959), for a design matrix \({\mathbf {X}}\) of full column rank, the BLUE \(\hat{{\varvec{\beta }}}\) for \({\varvec{\beta }}\) and the BLUP \(\hat{{\varvec{\gamma }}}\) for \({\varvec{\gamma }}\) are provided by the mixed model equations

$$\begin{aligned} \left( \begin{array}{c} \hat{{\varvec{\beta }}} \\ \hat{{\varvec{\gamma }}} \end{array} \right) = \left( \begin{array}{cc} {\mathbf {X}}^\top {\mathbf {R}}^{-1}{\mathbf {X}} & {\mathbf {X}}^\top {\mathbf {R}}^{-1}{\mathbf {Z}} \\ {\mathbf {Z}}^\top {\mathbf {R}}^{-1}{\mathbf {X}} & {\mathbf {Z}}^\top {\mathbf {R}}^{-1}{\mathbf {Z}}+{\mathbf {G}}^{-1} \end{array} \right) ^{-1} \left( \begin{array}{c} {\mathbf {X}}^\top {\mathbf {R}}^{-1}{\mathbf {Y}} \\ {\mathbf {Z}}^\top {\mathbf {R}}^{-1}{\mathbf {Y}} \end{array} \right), \end{aligned}$$
(32)

which can be rewritten in the alternative form

$$\begin{aligned}&\hat{{\varvec{\beta }}}=\left( {\mathbf {X}}^\top ({\mathbf {Z}} {\mathbf {G}}{\mathbf {Z}}^\top +{\mathbf {R}})^{-1}{\mathbf {X}}\right) ^{-1}{\mathbf {X}}^\top ({\mathbf {Z}} {\mathbf {G}}{\mathbf {Z}}^\top +{\mathbf {R}})^{-1}{\mathbf {Y}}, \end{aligned}$$
(33)
$$\begin{aligned}&\hat{{\varvec{\gamma }}}={\mathbf {G}}{\mathbf {Z}}^\top ({\mathbf {Z}}{\mathbf {G}}{\mathbf {Z}}^\top + {\mathbf {R}})^{-1}({\mathbf {Y}}-{\mathbf {X}}\hat{{\varvec{\beta }}})\!. \end{aligned}$$
(34)
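
Continuing the sketch above, the closed forms (33)–(34) and the mixed model equations (32) can be checked against each other numerically:

```python
# BLUE (33) and BLUP (34) via the marginal covariance V = Z G Z' + R.
V = Z @ G @ Z.T + R
Vinv = np.linalg.inv(V)
beta_hat = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ Y)   # BLUE of beta
gamma_hat = G @ Z.T @ Vinv @ (Y - X @ beta_hat)              # BLUP of gamma

# Cross-check against the mixed model equations (32).
Rinv = np.linalg.inv(R)
M = np.block([[X.T @ Rinv @ X, X.T @ Rinv @ Z],
              [Z.T @ Rinv @ X, Z.T @ Rinv @ Z + np.linalg.inv(G)]])
rhs = np.concatenate([X.T @ Rinv @ Y, Z.T @ Rinv @ Y])
sol = np.linalg.solve(M, rhs)
assert np.allclose(sol[:p], beta_hat) and np.allclose(sol[p:], gamma_hat)
```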

The mean squared error matrix of the estimator and predictor \(\left( \hat{{\varvec{\beta }}}^\top ,\, \hat{{\varvec{\gamma }}}^\top \right) ^\top \) is given by (see Henderson 1975)

$$\begin{aligned} \mathrm {Cov}\,\left( \begin{array}{c} \hat{{\varvec{\beta }}} \\ \hat{{\varvec{\gamma }}}-{\varvec{\gamma }} \end{array} \right) = \left( \begin{array}{cc} {\mathbf {X}}^\top {\mathbf {R}}^{-1}{\mathbf {X}} & {\mathbf {X}}^\top {\mathbf {R}}^{-1}{\mathbf {Z}} \\ {\mathbf {Z}}^\top {\mathbf {R}}^{-1}{\mathbf {X}} & {\mathbf {Z}}^\top {\mathbf {R}}^{-1}{\mathbf {Z}}+{\mathbf {G}}^{-1} \end{array} \right) ^{-1} \end{aligned}$$
(35)

and can be represented as the partitioned matrix

$$\begin{aligned} \mathrm {Cov}\,\left( \begin{array}{c} \hat{{\varvec{\beta }}} \\ \hat{{\varvec{\gamma }}}-{\varvec{\gamma }} \end{array} \right) = \left( \begin{array}{cc} {\mathbf {C}}_{11} & {\mathbf {C}}_{12} \\ {\mathbf {C}}_{12}^\top & {\mathbf {C}}_{22} \end{array} \right), \end{aligned}$$
(36)

where \({\mathbf {C}}_{11}=\mathrm {Cov}(\hat{{\varvec{\beta }}})\), \({\mathbf {C}}_{22}=\mathrm {Cov}\left( \hat{{\varvec{\gamma }}}-{\varvec{\gamma }}\right) \),

$$\begin{aligned} {\mathbf {C}}_{11}&= \left( {\mathbf {X}}^\top \left( {\mathbf {Z}} {\mathbf {G}}{\mathbf {Z}}^\top +{\mathbf {R}}\right) ^{-1}{\mathbf {X}}\right) ^{-1},\\ {\mathbf {C}}_{22}&= \left( {\mathbf {Z}}^\top {\mathbf {R}}^{-1}{\mathbf {Z}} +{\mathbf {G}}^{-1}-{\mathbf {Z}}^\top {\mathbf {R}}^{-1}{\mathbf {X}} ({\mathbf {X}}^\top {\mathbf {R}}^{-1}{\mathbf {X}})^{-1}{\mathbf {X}}^\top {\mathbf {R}}^{-1}{\mathbf {Z}}\right) ^{-1},\\ {\mathbf {C}}_{12}&= -{\mathbf {C}}_{11}\, {\mathbf {X}}^\top {\mathbf {R}}^{-1}{\mathbf {Z}}\left( {\mathbf {Z}}^\top {\mathbf {R}}^{-1}{\mathbf {Z}} +{\mathbf {G}}^{-1}\right) ^{-1}. \end{aligned}$$
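
These blocks follow from standard block-inversion identities and can be verified numerically, continuing the sketch above:

```python
# Blocks of the mean squared error matrix (35)/(36).
C = np.linalg.inv(M)                      # M from the mixed model equations
Ginv = np.linalg.inv(G)
C11 = np.linalg.inv(X.T @ Vinv @ X)
C22 = np.linalg.inv(Z.T @ Rinv @ Z + Ginv
                    - Z.T @ Rinv @ X @ np.linalg.inv(X.T @ Rinv @ X)
                    @ X.T @ Rinv @ Z)
C12 = -C11 @ X.T @ Rinv @ Z @ np.linalg.inv(Z.T @ Rinv @ Z + Ginv)
assert np.allclose(C[:p, :p], C11)
assert np.allclose(C[p:, p:], C22)
assert np.allclose(C[:p, p:], C12)
```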

For \({\varvec{\beta }}={\varvec{\theta }}_0\), \({\varvec{\gamma }}={\varvec{\zeta }}\), \({\mathbf {X}}=\mathrm {Vec}_{j=1}^J\left( {\mathbf {1}}_{r_j}\otimes \left( {\mathbf {1}}_K\,{\mathbf {f}}(j)^\top \right) \right) \), \({\mathbf {Z}}=\mathrm {Diag}_{j=1}^J\big ({\mathbf {I}}_{r_j}\otimes \big ({\mathbf {1}}_K\,{\mathbf {f}}(j)^\top \big )\big )\), \({\mathbf {G}}=\sigma ^2\,{\mathbf {I}}_N\otimes \text {block-diag}(u,\, v\,{\mathbf {I}}_{J-1})\) and \({\mathbf {R}}=\mathrm {Cov}({\varvec{\varepsilon }})=\sigma ^2\,{\mathbf {I}}_{NK}\), our model (7) is of the form (31).
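
The following sketch assembles these Kronecker-product matrices with NumPy/SciPy. The regression vectors \({\mathbf {f}}(j)\) are defined in the main text and not restated here; the choice \({\mathbf {f}}(j)=(1,\,{\mathbf {e}}_j^\top )^\top \) for the treatment groups \(j<J\) and \({\mathbf {f}}(J)=(1,\,{\mathbf {0}}_{J-1}^\top )^\top \) for the control group is an assumption consistent with the estimators displayed below.

```python
import numpy as np
from scipy.linalg import block_diag

def f(j, J):
    # Assumed regression vector: intercept plus the effect indicator e_j
    # for treatment groups j < J; the control group J carries no effect.
    vec = np.zeros(J)
    vec[0] = 1.0
    if j < J:
        vec[j] = 1.0
    return vec

def model_matrices(J, r, K, u, v_par, sigma2=1.0):
    """X, Z, G, R of (31) for group sizes r = (r_1, ..., r_J) and
    K observations per individual; u, v_par are the variance ratios."""
    N = sum(r)
    Xb, Zb = [], []
    for j, rj in enumerate(r, start=1):
        fj = np.outer(np.ones(K), f(j, J))         # 1_K f(j)'
        Xb.append(np.kron(np.ones((rj, 1)), fj))   # 1_{r_j} (x) (1_K f(j)')
        Zb.append(np.kron(np.eye(rj), fj))         # I_{r_j} (x) (1_K f(j)')
    D = np.diag([u] + [v_par] * (J - 1))           # block-diag(u, v I_{J-1})
    return (np.vstack(Xb), block_diag(*Zb),
            sigma2 * np.kron(np.eye(N), D),        # G
            sigma2 * np.eye(N * K))                # R
```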

Using formulas (33) and (34) and employing some linear algebra, we obtain the following BLUE and BLUP for the fixed and random effects \({\varvec{\theta }}_0\) and \({\varvec{\zeta }}\):

$$\begin{aligned} \hat{{\varvec{\theta }}}_0=\left( \begin{array}{c}\bar{{\mathbf {Y}}}_J \\ \mathrm {Vec}_{j=1}^{J-1}\left( \bar{{\mathbf {Y}}}_j-\bar{{\mathbf {Y}}}_J\right) \end{array}\right) \!, \end{aligned}$$

and

$$\begin{aligned} \hat{{\varvec{\zeta }}}=\left( \begin{array}{c}\frac{K}{K(v+u)+1}\,\mathrm {Vec}_{j=1}^{J-1}\mathrm {Vec}_{i=N_{j-1}+1}^{N_j}\left( \left( \begin{array}{c}u \\ v\,{\mathbf {e}}_j\end{array}\right) \left( \bar{{\mathbf {Y}}}_{j,i}-\bar{{\mathbf {Y}}}_j\right) \right) \\ \frac{K}{Ku+1}\,\mathrm {Vec}_{i=N_{J-1}+1}^{N_J}\left( \left( \begin{array}{c}u \\ {\mathbf {0}}_{J-1}\end{array}\right) \left( \bar{{\mathbf {Y}}}_{J,i}-\bar{{\mathbf {Y}}}_J\right) \right) \end{array}\right). \end{aligned}$$

Now the results (8)–(13) of Theorems 1 and 2 are straightforward to verify.

To prove Theorems 3 and 4, we first compute the blocks \({\mathbf {C}}_{11}\), \({\mathbf {C}}_{12}\) and \({\mathbf {C}}_{22}\) of the mean squared error matrix (36):

$$\begin{aligned} {\mathbf {C}}_{11} = \frac{\sigma ^2(Ku+1)}{Km}\left( \begin{array}{cc} 1 & -{\mathbf {1}}_{J-1}^\top \\ -{\mathbf {1}}_{J-1} & \frac{(K(u+v)+1)m}{(Ku+1)n}{\mathbf {I}}_{J-1}+{\mathbf {1}}_{J-1}{\mathbf {1}}_{J-1}^\top \end{array}\right), \end{aligned}$$
(37)
$$\begin{aligned} {\mathbf {C}}_{22} = \sigma ^2\left( \begin{array}{cc}{\mathbf {C}}_{221} & {\mathbf {0}} \\ {\mathbf {0}} & {\mathbf {C}}_{222} \end{array}\right), \end{aligned}$$
(38)

where

$$\begin{aligned} {\mathbf {C}}_{221}&= {\mathbf {I}}_{n(J-1)}\otimes \text {block-diag}(u,\, v\,{\mathbf {I}}_{J-1})\\ &\quad -\frac{K}{K(v+u)+1}\mathrm {Diag}_{j=1}^{J-1}\left( \left( {\mathbf {I}}_n-\frac{1}{n}{\mathbf {1}}_n{\mathbf {1}}_n^\top \right) \otimes \left( \left( \begin{array}{c}u \\ v\,{\mathbf {e}}_j\end{array}\right) \left( \begin{array}{c}u \\ v\,{\mathbf {e}}_j\end{array}\right) ^\top \right) \right),\\ {\mathbf {C}}_{222}&= {\mathbf {I}}_{m}\otimes \text {block-diag}(u,\, v\,{\mathbf {I}}_{J-1})-\frac{K}{Ku+1}\left( {\mathbf {I}}_m-\frac{1}{m}{\mathbf {1}}_m{\mathbf {1}}_m^\top \right) \otimes \left( \left( \begin{array}{c}u \\ {\mathbf {0}}_{J-1}\end{array}\right) \left( \begin{array}{c}u \\ {\mathbf {0}}_{J-1}\end{array}\right) ^\top \right), \end{aligned}$$

and

$$\begin{aligned} {\mathbf {C}}_{12}=-\sigma ^2\left( {\mathbf {C}}_{121}\,\, \vdots \,\, {\mathbf {C}}_{122}\right) \!, \end{aligned}$$
(39)

where

$$\begin{aligned} {\mathbf {C}}_{121}&= \mathrm {tVec}_{j=1}^{J-1}\left( \frac{1}{n}{\mathbf {1}}_n^\top \otimes \left( \left( \begin{array}{c} 0 \\ {\mathbf {e}}_j\end{array}\right) \left( \begin{array}{c}u \\ v\,{\mathbf {e}}_j\end{array}\right) ^\top \right) \right),\\ {\mathbf {C}}_{122}&= \frac{1}{m}{\mathbf {1}}_m^\top \otimes \left( \left( \begin{array}{c} 1 \\ -{\mathbf {1}}_{J-1}\end{array}\right) \left( \begin{array}{c}u \\ {\mathbf {0}}_{J-1}\end{array}\right) ^\top \right). \end{aligned}$$

We observe that \({\varvec{\varPsi }}_0=({\mathbf {0}}_{J-1}\, \vdots \, {\mathbf {I}}_{J-1}){\varvec{\theta }}_0\), so that \(\hat{{\varvec{\varPsi }}}_0=({\mathbf {0}}_{J-1}\, \vdots \, {\mathbf {I}}_{J-1})\hat{{\varvec{\theta }}}_0\) is the BLUE of \({\varvec{\varPsi }}_0\). Consequently, the covariance matrix of \(\hat{{\varvec{\varPsi }}}_0\) can be determined using the formula

$$\begin{aligned} \mathrm {Cov}\left( \hat{{\varvec{\varPsi }}}_{0}\right) =({\mathbf {0}}_{J-1}\, \vdots \, {\mathbf {I}}_{J-1}){\mathbf {C}}_{11}({\mathbf {0}}_{J-1}\, \vdots \, {\mathbf {I}}_{J-1})^\top , \end{aligned}$$

which implies result (14).

For the vector \({\varvec{\varPsi }}\) of all individual treatment effects, it can be verified that

$$\begin{aligned} {\varvec{\varPsi }}=\left( {\mathbf {1}}_N\otimes ({\mathbf {0}}_{J-1}\, \vdots \, {\mathbf {I}}_{J-1})\right) {\varvec{\theta }}_0+\left( {\mathbf {I}}_N\otimes ({\mathbf {0}}_{J-1}\, \vdots \, {\mathbf {I}}_{J-1})\right) {\varvec{\zeta }} \end{aligned}$$

and the BLUP of \({\varvec{\varPsi }}\) is given by

$$\begin{aligned} \hat{{\varvec{\varPsi }}}=\left( {\mathbf {1}}_N\otimes ({\mathbf {0}}_{J-1}\, \vdots \, {\mathbf {I}}_{J-1})\right) \hat{{\varvec{\theta }}}_0+\left( {\mathbf {I}}_N\otimes ({\mathbf {0}}_{J-1}\, \vdots \, {\mathbf {I}}_{J-1})\right) \hat{{\varvec{\zeta }}}. \end{aligned}$$

Then the mean squared error matrix of \(\hat{{\varvec{\varPsi }}}\) has the general form (15) with

$$\begin{aligned} {\mathbf {B}}_{1}= & {} \left( {\mathbf {1}}_N\otimes ({\mathbf {0}}_{J-1}\, \vdots \, {\mathbf {I}}_{J-1})\right) {\mathbf {C}}_{11}\left( {\mathbf {1}}_N\otimes ({\mathbf {0}}_{J-1}\, \vdots \, {\mathbf {I}}_{J-1})\right) ^\top ,\\ {\mathbf {B}}_{2}= & {} \left( {\mathbf {1}}_N\otimes ({\mathbf {0}}_{J-1}\, \vdots \, {\mathbf {I}}_{J-1})\right) {\mathbf {C}}_{12}\left( {\mathbf {I}}_N\otimes ({\mathbf {0}}_{J-1}\, \vdots \, {\mathbf {I}}_{J-1})\right) ^\top \end{aligned}$$

and

$$\begin{aligned} {\mathbf {B}}_{3}=\left( {\mathbf {I}}_N\otimes ({\mathbf {0}}_{J-1}\, \vdots \, {\mathbf {I}}_{J-1})\right) {\mathbf {C}}_{22}\left( {\mathbf {I}}_N\otimes ({\mathbf {0}}_{J-1}\, \vdots \, {\mathbf {I}}_{J-1})\right) ^\top . \end{aligned}$$

After applying (37)–(39), we obtain the result of Theorem 4.
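
Combining the sketches above, this selection-matrix representation can be evaluated numerically. That (15) combines the blocks as \({\mathbf {B}}_1+{\mathbf {B}}_2+{\mathbf {B}}_2^\top +{\mathbf {B}}_3\) is an assumption here, consistent with \(\hat{{\varvec{\varPsi }}}-{\varvec{\varPsi }}\) being the sum of the two linear terms displayed above.

```python
# Blocks of (36) for the model matrices of (7); sizes are illustrative.
J, K, u, v_par = 3, 4, 0.5, 0.25
r = [2] * (J - 1) + [3]        # n = 2 per treatment group, m = 3 controls
N = sum(r)
X, Z, G, R = model_matrices(J, r, K, u, v_par)
Rinv, Ginv = np.linalg.inv(R), np.linalg.inv(G)
C = np.linalg.inv(np.block([[X.T @ Rinv @ X, X.T @ Rinv @ Z],
                            [Z.T @ Rinv @ X, Z.T @ Rinv @ Z + Ginv]]))
C11, C12, C22 = C[:J, :J], C[:J, J:], C[J:, J:]

S = np.hstack([np.zeros((J - 1, 1)), np.eye(J - 1)])   # (0_{J-1} : I_{J-1})
S1 = np.kron(np.ones((N, 1)), S)                       # 1_N (x) (0 : I)
SN = np.kron(np.eye(N), S)                             # I_N (x) (0 : I)
B1, B2, B3 = S1 @ C11 @ S1.T, S1 @ C12 @ SN.T, SN @ C22 @ SN.T
mse_Psi = B1 + B2 + B2.T + B3           # assumed composition of (15)
assert np.allclose(mse_Psi, mse_Psi.T)
```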

Proof of Lemma 2

To determine the eigenvalues of \(\text {Cov}\left( \hat{{\varvec{\varPsi }}}-{\varvec{\varPsi }}\right) \), we solve the equation

$$\begin{aligned} \text {det}\left( \text {Cov}\left( \hat{{\varvec{\varPsi }}}-{\varvec{\varPsi }}\right) -\lambda \,{\mathbf {I}}_{N} \right) =0. \end{aligned}$$
(40)

From (26), it follows that

$$\begin{aligned} \text {Cov}\left( \hat{{\varvec{\varPsi }}}-{\varvec{\varPsi }}\right) -\lambda \,{\mathbf {I}}_{N} =:\left( \begin{array}{cc} \tilde{{\mathbf {H}}}_{11} & {\mathbf {H}}_{12}\\ {\mathbf {H}}_{12}^\top & \tilde{{\mathbf {H}}}_{22} \end{array}\right), \end{aligned}$$

where

$$\begin{aligned} \tilde{{\mathbf {H}}}_{11}=(a_1-\lambda )\,\frac{1}{n}{\mathbf {1}}_n{\mathbf {1}}_n^\top +(a_2-\lambda )\left( {\mathbf {I}}_{n}-\frac{1}{n}{\mathbf {1}}_n{\mathbf {1}}_n^\top \right) \end{aligned}$$

for \(a_1=\frac{\sigma ^2\,N(Ku+1)}{K\,m}\) and \(a_2=\frac{\sigma ^2\,v\,(Ku+1)}{K(v+u)+1}\),

$$\begin{aligned} \tilde{{\mathbf {H}}}_{22}=a_3\,\frac{1}{m}{\mathbf {1}}_m{\mathbf {1}}_m^\top +(\sigma ^2v-\lambda ){\mathbf {I}}_{m} \end{aligned}$$

for \(a_3=\frac{\sigma ^2(K(v+u)+1)m}{K\,n}+\frac{a_1m}{N}\), and \({\mathbf {H}}_{12}\) is the same as in (26).

Then we compute the determinant of \(\text {Cov}\left( \hat{{\varvec{\varPsi }}}-{\varvec{\varPsi }}\right) -\lambda \,{\mathbf {I}}_{N}\) as

$$\begin{aligned} \mathrm {det}\left( \text {Cov}\left( \hat{{\varvec{\varPsi }}}-{\varvec{\varPsi }}\right) -\lambda \,{\mathbf {I}}_{N} \right) =\mathrm {det}\left( \tilde{{\mathbf {H}}}_{11}\right) \mathrm {det}\left( \tilde{{\mathbf {H}}}_{22}-{\mathbf {H}}_{12}^\top \,\tilde{{\mathbf {H}}}_{11}^{-1}{\mathbf {H}}_{12}\right) \!, \end{aligned}$$

where

$$\begin{aligned}&\mathrm {det}\left( \tilde{{\mathbf {H}}}_{11}\right) =(a_1-\lambda )(a_2-\lambda )^{n-1},\\&\tilde{{\mathbf {H}}}_{11}^{-1}=\frac{1}{a_1-\lambda }\,\frac{1}{n}{\mathbf {1}}_n{\mathbf {1}}_n^\top +\frac{1}{a_2-\lambda }\left( {\mathbf {I}}_{n}-\frac{1}{n}{\mathbf {1}}_n{\mathbf {1}}_n^\top \right) \!,\\&\tilde{{\mathbf {H}}}_{22}-{\mathbf {H}}_{12}^\top \,\tilde{{\mathbf {H}}}_{11}^{-1}{\mathbf {H}}_{12}=\left( a_3-\frac{a_1^2\,m}{(a_1-\lambda ) n}\right) \frac{1}{m}{\mathbf {1}}_m{\mathbf {1}}_m^\top +\left( \sigma ^2v-\lambda \right) {\mathbf {I}}_{m},\\&\mathrm {det}\left( \tilde{{\mathbf {H}}}_{22}-{\mathbf {H}}_{12}^\top \,\tilde{{\mathbf {H}}}_{11}^{-1}{\mathbf {H}}_{12}\right) =\left( \sigma ^2v-\lambda \right) ^{m-1}\left( a_3-\frac{a_1^2\,m}{(a_1-\lambda ) n}+\sigma ^2v-\lambda \right) \!. \end{aligned}$$

This yields

$$\begin{aligned}&\mathrm {det}\left( \text {Cov}\left( \hat{{\varvec{\varPsi }}}-{\varvec{\varPsi }}\right) -\lambda \,{\mathbf {I}}_{N} \right) \\&\quad =(a_2-\lambda )^{n-1}\left( \sigma ^2v-\lambda \right) ^{m-1}\left( (a_3+\sigma ^2v-\lambda )(a_1-\lambda )-\frac{a_1^2\,m}{n}\right) \!, \end{aligned}$$

which results in the following solutions of Eq. (40):

$$\begin{aligned} \lambda _1&= \frac{\sigma ^2\,v\,(Ku+1)}{K(v+u)+1},\\ \lambda _2&= \sigma ^2v,\\ \lambda _3&= \frac{\sigma ^2N}{2\,Kn\,m}\left( Km\,v+N(Ku+1)+\sqrt{s_{n,m}}\right), \end{aligned}$$

where \(s_{n,m}=K^2m^2v^2+2Km(m-n)(Ku+1)v+N^2(Ku+1)^2\), and

$$\begin{aligned} \lambda _4=\frac{\sigma ^2N}{2\,Kn\,m}(Km\,v+N(Ku+1)-\sqrt{s_{n,m}})\!. \end{aligned}$$

Substituting \(n=N\,w\) and \(m=N(1-w)\) yields \(s_{n,m}=N^2s_w\), which gives the result of Lemma 2.
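
A numerical sanity check of Lemma 2 is possible. The off-diagonal block \({\mathbf {H}}_{12}\) of (26) is not restated above; the form \({\mathbf {H}}_{12}=(a_1/n)\,{\mathbf {1}}_n{\mathbf {1}}_m^\top \) used below is an assumption consistent with the Schur complement computed in the proof.

```python
import numpy as np

# Illustrative parameter values (not from the paper).
sigma2, u, v, K, n, m = 1.0, 0.4, 0.3, 5, 4, 3
N = n + m
a1 = sigma2 * N * (K * u + 1) / (K * m)
a2 = sigma2 * v * (K * u + 1) / (K * (v + u) + 1)
a3 = sigma2 * (K * (v + u) + 1) * m / (K * n) + a1 * m / N

# Assemble Cov(Psi_hat - Psi) from its block structure.
Pn = np.ones((n, n)) / n
A11 = a1 * Pn + a2 * (np.eye(n) - Pn)
A22 = a3 * np.ones((m, m)) / m + sigma2 * v * np.eye(m)
H12 = (a1 / n) * np.ones((n, m))        # assumed off-diagonal block of (26)
Cov = np.block([[A11, H12], [H12.T, A22]])

# Closed-form eigenvalues with multiplicities n-1, m-1, 1, 1.
s_nm = (K**2 * m**2 * v**2 + 2 * K * m * (m - n) * (K * u + 1) * v
        + N**2 * (K * u + 1)**2)
lam3, lam4 = [sigma2 * N / (2 * K * n * m)
              * (K * m * v + N * (K * u + 1) + sgn * np.sqrt(s_nm))
              for sgn in (1, -1)]
expected = np.sort(np.r_[[a2] * (n - 1), [sigma2 * v] * (m - 1), lam3, lam4])
assert np.allclose(np.sort(np.linalg.eigvalsh(Cov)), expected)
```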

About this article

Cite this article

Prus, M. Optimal designs in multiple group random coefficient regression models. TEST 29, 233–254 (2020). https://doi.org/10.1007/s11749-019-00654-6

Keywords

  • Optimal design
  • Treatment and control
  • Random effects
  • Cluster randomization
  • Mixed models
  • Estimation and prediction

Mathematics Subject Classification

  • 62K05