
On a mixed model analysis of multi-environment variety trials: a reconsideration of the one-stage and the two-stage models and analyses

  • Regular Article
  • Statistical Papers

Abstract

Of interest is the analysis of results of a series of experiments conducted in several environments with the same set of plant varieties (called multi-environment variety trials). The most common practice is first to analyze the individual trials and then to perform a kind of “synthesis” of the results obtained; this constitutes a two-stage approach to the analysis of the trial data. More recently, a combined analysis of the raw plot data from all trials taken simultaneously has been advocated as a one-stage approach to the analysis. The purpose of this article is to reconsider these two approaches with regard to the underlying models and the analyses based on them. The differences indicated between them are illustrated by a thorough analysis of a set of data from a series of trials with rye varieties. The required computations have been carried out with the use of R.


References

  • Caliński T, Kageyama S (2000) Block designs: a randomization approach, vol I: analysis. Lecture notes in statistics 150. Springer, New York

  • Caliński T, Kageyama S (2008) On the analysis of experiments in affine resolvable designs. J Stat Plan Inference 138:3350–3356

  • Caliński T, Czajka S, Kaczmarek Z (1987a) A model for the analysis of a series of experiments repeated at several places over a period of years. I. Theory. Biul Oceny Odmian Cultiv Test Bull 12(17–18):7–34

  • Caliński T, Czajka S, Kaczmarek Z (1987b) A model for the analysis of a series of experiments repeated at several places over a period of years. II. Example. Biul Oceny Odmian Cultiv Test Bull 12(17–18):35–71

  • Caliński T, Czajka S, Kaczmarek Z, Krajewski P, Pilarczyk W (2005) Analyzing multi-environment variety trials using randomization-derived mixed models. Biometrics 61:448–455

  • Caliński T, Czajka S, Kaczmarek Z, Krajewski P, Pilarczyk W (2009a) Analyzing the genotype-by-environment interactions under a randomization-derived mixed model. J Agric Biol Environ Stat 14:224–241

  • Caliński T, Czajka S, Kaczmarek Z, Krajewski P, Pilarczyk W (2009b) A mixed model analysis of variance for multi-environment variety trials. Stat Pap 50:735–759

  • Caliński T, Czajka S, Pilarczyk W (2009c) On the application of affine resolvable designs to variety trials. J Stat Appl 4:201–224

  • Cullis BR, Thomson FM, Fisher JA, Gilmour AR, Thompson R (1996a) The analysis of the NSW wheat variety database. I. Modelling trial error variance. Theor Appl Genet 92:21–27

  • Cullis BR, Thomson FM, Fisher JA, Gilmour AR, Thompson R (1996b) The analysis of the NSW wheat variety database. II. Variance component estimation. Theor Appl Genet 92:28–39

  • Cullis B, Gogel B, Verbyla A, Thompson R (1998) Spatial analysis of multi-environment early generation variety trials. Biometrics 54:1–18

  • Denis J-B, Piepho H-P, van Eeuwijk FA (1997) Modelling expectation and variance for genotype by environment data. Heredity 79:162–171

  • Gogel BJ, Cullis BR, Verbyla AP (1995) REML estimation of multiplicative effects in multi-environment variety trials. Biometrics 51:744–749

  • Houtman AM, Speed TP (1983) Balance in designed experiments with orthogonal block structure. Ann Stat 11:1069–1085

  • Kempthorne O (1975) Fixed and mixed models in the analysis of variance. Biometrics 31:473–486

  • Möhring J, Piepho H-P (2009) Comparison of weighting in two-stage analyses of series of experiments. Crop Sci 49:1977–1988

  • Nelder JA (1965) The analysis of randomized experiments with orthogonal block structure. Proc R Soc A 283:147–178

  • Nelder JA (1968) The combination of information in generally balanced designs. J R Stat Soc B 30:303–311

  • Patterson HD, Silvey V (1980) Statutory and recommended list trials of crop varieties in the United Kingdom (with discussion). J R Stat Soc A 143:219–252

  • Patterson HD, Silvey V, Talbot M, Weatherup STC (1977) Variability of yields of cereal varieties in U.K. trials. J Agric Sci Camb 89:239–245

  • Piepho H-P, Möhring J (2007) On weighting in two-stage analysis of series of experiments. Biul Oceny Odmian Cultiv Test Bull 32:109–121

  • Piepho H-P, Schulz-Streeck T, Ogutu JO (2011) A stage-wise approach for analysis of multi-environment trials. Biul Oceny Odmian Cultiv Test Bull 33:7–20

  • Piepho H-P, Möhring J, Schulz-Streeck T, Ogutu JO (2012) A stage-wise approach for the analysis of multi-environment trials. Biom J 54:844–860

  • Rao CR (1971) Unified theory of linear estimation. Sankhyā A 33:371–394

  • Rao CR (1972) Estimation of variance and covariance components in linear models. J Am Stat Assoc 67:112–115

  • Rao CR (1973) Representations of best linear unbiased estimators in the Gauss–Markoff model with a singular dispersion matrix. J Multivar Anal 3:276–292

  • Rao CR, Kleffe J (1988) Estimation of variance components and applications. North-Holland, Amsterdam

  • Rao CR, Mitra SK (1971) Generalized inverse of matrices and its applications. Wiley, New York

  • R Core Team (2015) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna. http://www.r-project.org/

  • Smith A, Cullis B, Gilmour A (2001a) The analysis of crop variety evaluation data in Australia. Aust NZ J Stat 43:129–145

  • Smith A, Cullis B, Thompson R (2001b) Analyzing variety by environment data using multiplicative mixed models and adjustments for spatial field trend. Biometrics 57:1138–1147

  • Smith AB, Cullis BR, Thompson R (2005) The analysis of crop cultivar breeding and evaluation trials: an overview of current mixed model approaches. J Agric Sci 143:449–462

  • Welham SJ, Gogel BJ, Smith AB, Thompson R, Cullis BR (2010) A comparison of analysis methods for late-stage variety evaluation trials. Aust NZ J Stat 52:125–149

  • Williams ER (1977) Iterative analysis of generalized lattice designs. Aust J Stat 19:39–42

  • Yates F, Cochran WG (1938) The analysis of groups of experiments. J Agric Sci 28:556–580

Acknowledgments

This research was stimulated by a co-operation with the Research Centre for Cultivar Testing (Słupia Wielka, Poland), which kindly provided the example data. It was also partially supported by the Project HORhn-801-8-15, MR47 from the Ministry of Agriculture and Rural Development (Poland). The authors thank the editors and two referees for their helpful comments.

Author information

Correspondence to T. Caliński.

Appendices

Appendix 1

When solving, for an individual trial (the jth experiment), Eq. (3.8), \(\alpha = 1,\,2,\) it may be helpful to utilize the general balance (GB) property of the design considered (see Nelder 1968; also Caliński and Kageyama 2000, Sects. 3.6, 5.4).

Owing to the GB property, the matrices \({{\varvec{C}}}_{1(j)} = {{\varvec{X}}}_{j}^{\prime }{{\varvec{\phi }}}_{1(j)}{{\varvec{X}}}_{j}\) and \({{\varvec{C}}}_{2(j)} = {{\varvec{X}}}_{j}^{\prime }{{\varvec{\phi }}}_{2(j)}{{\varvec{X}}}_{j}\) admit the following spectral decompositions (though not indicated formally, the realization of these and the following formulae depends on the design of the analyzed trial):

$$\begin{aligned} {{\varvec{C}}}_{1(j)} = a^{2}\sum \limits _{\beta =0}^{f-1}\varepsilon _{1 \beta }{{\varvec{S}}}_{\beta }{{\varvec{S}}}_{\beta }^{\prime } = a\sum \limits _{\beta =0}^{f-1}\varepsilon _{1 \beta }{{\varvec{L}}}_{\beta }, \quad {{\varvec{C}}}_{2(j)} = a^{2}\sum \limits _{\beta =1}^{f-1}\varepsilon _{2 \beta }{{\varvec{S}}}_{\beta }{{\varvec{S}}}_{\beta }^{\prime } = a\sum \limits _{\beta =0}^{f-1}\varepsilon _{2 \beta }{{\varvec{L}}}_{\beta }, \end{aligned}$$

where \({{\varvec{S}}}_{\beta }{{\varvec{S}}}_{\beta }^{\prime } = \sum \nolimits _{\ell =1}^{\rho _{\beta }}{{\varvec{s}}}_{\beta \ell }{{\varvec{s}}}_{\beta \ell }^{\prime }\) and \({{\varvec{L}}}_{\beta } = a{{\varvec{S}}}_{\beta }{{\varvec{S}}}_{\beta }^{\prime },\) chosen so that \({{\varvec{C}}}_{\alpha (j)}{{\varvec{S}}}_{\beta } = a\varepsilon _{\alpha \beta }{{\varvec{S}}}_{\beta },\) with \({{\varvec{S}}}_{\beta } = [{{\varvec{s}}}_{\beta 1}{:} {{\varvec{s}}}_{\beta 2}{:} \cdots {:} {{\varvec{s}}}_{\beta \rho _{\beta }}],\) for \(\alpha = 1,\,2\) and \(\beta = 0,\,1,\ldots , f - 1,\) i.e., \({{\varvec{C}}}_{\alpha (j)}{{\varvec{s}}}_{\beta \ell } = a\varepsilon _{\alpha \beta }{{\varvec{s}}}_{\beta \ell }\) for \(\ell = 1,\,2,\ldots , \rho _{\beta }\) and \(\alpha = 1,\,2,\) with \(\varepsilon _{10} = 1\) and, hence, \(\varepsilon _{20} = 0.\) Furthermore, the eigenvectors \({{\varvec{s}}}_{\beta 1},\,{{\varvec{s}}}_{\beta 2},\ldots , {{\varvec{s}}}_{\beta \rho _{\beta }}\) are orthonormalized in such a way that \(a{{\varvec{S}}}_{\beta }^{\prime }{{\varvec{S}}}_{\beta } = {{\varvec{I}}}_{\rho _{\beta }}\) for any \(\beta \) and \(a{{\varvec{S}}}_{\beta }^{\prime }{{\varvec{S}}}_{\beta ^{\prime }} = \mathbf{0}\) for \(\beta \ne \beta ^{\prime }.\) With this notation, the following equalities can be presented:

$$\begin{aligned} \left\| {{\varvec{\phi }}}_{1(j)}\left( {{\varvec{I}}}_{am} - {{\varvec{P}}}_{{{\varvec{X}}}_{j}(\tilde{{{\varvec{V}}}}_{(j)}^{-1})}\right) {{\varvec{y}}}_{j}\right\| ^{2}= & {} {{\varvec{y}}}_{j}^{\prime }{{\varvec{\psi }}}_{1(j)}{{\varvec{y}}}_{j} + \sum \limits _{\beta =1}^{f-1}\varepsilon _{1 \beta }w_{2 \beta }^{2}\left( \varepsilon _{1 \beta }^{-1}{{\varvec{Q}}}_{1(j)}^{\prime }\right. \\&\left. - \varepsilon _{2 \beta }^{-1}{{\varvec{Q}}}_{2(j)}^{\prime }\right) {{\varvec{S}}}_{\beta }{{\varvec{S}}}_{\beta }^{\prime }\left( \varepsilon _{1 \beta }^{-1}{{\varvec{Q}}}_{1(j)} - \varepsilon _{2 \beta }^{-1}{{\varvec{Q}}}_{2(j)}\right) , \\ \left\| {{\varvec{\phi }}}_{2(j)}\left( {{\varvec{I}}}_{am} - {{\varvec{P}}}_{{{\varvec{X}}}_{j}(\tilde{{{\varvec{V}}}}_{(j)}^{-1})}\right) {{\varvec{y}}}_{j}\right\| ^{2}= & {} {{\varvec{y}}}_{j}^{\prime }{{\varvec{\psi }}}_{2(j)}{{\varvec{y}}}_{j} + \sum \limits _{\beta =1}^{f-1}\varepsilon _{2 \beta }w_{1 \beta }^{2}\left( \varepsilon _{1 \beta }^{-1}{{\varvec{Q}}}_{1(j)}^{\prime }\right. \\&\left. - \varepsilon _{2 \beta }^{-1}{{\varvec{Q}}}_{2(j)}^{\prime }\right) {{\varvec{S}}}_{\beta }{{\varvec{S}}}_{\beta }^{\prime }\left( \varepsilon _{1 \beta }^{-1}{{\varvec{Q}}}_{1(j)} - \varepsilon _{2 \beta }^{-1}{{\varvec{Q}}}_{2(j)}\right) , \end{aligned}$$

where \({{\varvec{\psi }}}_{\alpha (j)} = {{\varvec{\phi }}}_{\alpha (j)} - {{\varvec{\phi }}}_{\alpha (j)}{{\varvec{X}}}_{j}{{\varvec{C}}}_{\alpha (j)}^{-}{{\varvec{X}}}_{j}^{\prime }{{\varvec{\phi }}}_{\alpha (j)} = {{\varvec{\phi }}}_{\alpha (j)}({{\varvec{I}}}_{am} - {{\varvec{X}}}_{j}{{\varvec{C}}}_{\alpha (j)}^{-}{{\varvec{X}}}_{j}^{\prime }){{\varvec{\phi }}}_{\alpha (j)}\) for \(\alpha = 1,\,2,\,{{\varvec{Q}}}_{1(j)} = {{\varvec{X}}}_{j}^{\prime }{{\varvec{\phi }}}_{1(j)}{{\varvec{y}}}_{j}\) and \({{\varvec{Q}}}_{2(j)} = {{\varvec{X}}}_{j}^{\prime }{{\varvec{\phi }}}_{2(j)}{{\varvec{y}}}_{j},\) and where the weights \(w_{1 \beta },\,w_{2 \beta }\) are defined as

$$\begin{aligned} w_{1 \beta } = \frac{\varepsilon _{1 \beta }\sigma _{2(j)}^{2}}{\varepsilon _{1 \beta }\sigma _{2(j)}^{2} + \varepsilon _{2 \beta }\sigma _{1(j)}^{2}}, \quad w_{2 \beta } = \frac{\varepsilon _{2 \beta }\sigma _{1(j)}^{2}}{\varepsilon _{1 \beta }\sigma _{2(j)}^{2} + \varepsilon _{2 \beta }\sigma _{1(j)}^{2}}. \end{aligned}$$
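As a quick numerical illustration of the weighting scheme just defined, the sketch below evaluates \(w_{1 \beta }\) and \(w_{2 \beta }\) and confirms that each pair of weights sums to one. It is written in Python/NumPy purely for compactness (the paper's own computations were carried out in R), and the efficiency factors and stratum variances are illustrative numbers, not values from the rye-trial example.

```python
import numpy as np

# Illustrative stratum variances sigma^2_{1(j)}, sigma^2_{2(j)} for one trial j
sigma1_sq, sigma2_sq = 1.2, 3.5

# Illustrative efficiency factors epsilon_{1,beta}, epsilon_{2,beta}, beta = 1..f-1
eps1 = np.array([0.80, 0.75, 1.00])
eps2 = np.array([0.20, 0.25, 0.00])

den = eps1 * sigma2_sq + eps2 * sigma1_sq
w1 = eps1 * sigma2_sq / den   # w_{1,beta}
w2 = eps2 * sigma1_sq / den   # w_{2,beta}

print(np.allclose(w1 + w2, 1.0))   # True: each pair of weights sums to one
```

The complementarity \(w_{1 \beta } + w_{2 \beta } = 1\) follows directly from the common denominator in the two definitions.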

On the other hand, it can be shown, for the considered GL designs, that

$$\begin{aligned} {tr}\left\{ {{\varvec{\phi }}}_{1(j)}\left( {{\varvec{I}}}_{am} - {{\varvec{P}}}_{{{\varvec{X}}}_{j}(\tilde{{{\varvec{V}}}}_{(j)}^{-1})}\right) \right\}= & {} am - b - m + 1 + \sum \limits _{\beta =0}^{f-1}\left( 1 - w_{1 \beta }\right) \rho _{\beta }, \\ {tr}\left\{ {{\varvec{\phi }}}_{2(j)}\left( {{\varvec{I}}}_{am} - {{\varvec{P}}}_{{{\varvec{X}}}_{j}(\tilde{{{\varvec{V}}}}_{(j)}^{-1})}\right) \right\}= & {} b - a - m + 1 + \rho _{0} + \sum \limits _{\beta =1}^{f-1}\left( 1 - w_{2 \beta }\right) \rho _{\beta }. \end{aligned}$$

Now, it is interesting to note that in the special case of an affine resolvable design the Eqs. (3.8), used for estimating the variances \(\sigma _{1(j)}^{2}\) and \(\sigma _{2(j)}^{2},\) reduce to the following forms:

$$\begin{aligned}&{{\varvec{y}}}_{j}^{\prime }{{\varvec{\psi }}}_{1(j)}{{\varvec{y}}}_{j} + a^{-1}\varepsilon _{11}w_{21}^{2}\left( \varepsilon _{11}^{-1}{{\varvec{Q}}}_{1(j)}^{\prime } - \varepsilon _{21}^{-1}{{\varvec{Q}}}_{2(j)}^{\prime }\right) {{\varvec{L}}}_{1}\left( \varepsilon _{11}^{-1}{{\varvec{Q}}}_{1(j)} - \varepsilon _{21}^{-1}{{\varvec{Q}}}_{2(j)}\right) \\&\quad =\sigma _{1(j)}^{2}\left( d_{1} + \ w_{21}\rho _{1}\right) , \\&a^{-1}\varepsilon _{21}w_{11}^{2}\left( \varepsilon _{11}^{-1}{{\varvec{Q}}}_{1(j)}^{\prime } - \varepsilon _{21}^{-1}{{\varvec{Q}}}_{2(j)}^{\prime }\right) {{\varvec{L}}}_{1}\left( \varepsilon _{11}^{-1}{{\varvec{Q}}}_{1(j)} - \varepsilon _{21}^{-1}{{\varvec{Q}}}_{2(j)}\right) \\&\quad = \sigma _{2(j)}^{2}w_{11}\rho _{1}, \end{aligned}$$

where \(\varepsilon _{11} = (a - 1)/a,\,\varepsilon _{21} = 1/a,\, {{\varvec{L}}}_{1} = a{{\varvec{S}}}_{1}{{\varvec{S}}}_{1}^{\prime } = {{\varvec{C}}}_{2(j)},\) of rank \(\rho _{1} = b - a,\) and \(d_{1} = am - b - m + 1.\) Note that in this case \({{\varvec{\psi }}}_{2(j)} = {{\varvec{0}}}.\) A desirable advantage of these equations is that they need not be solved by an iterative procedure, as is required in the general case. In fact, their solutions are

$$\begin{aligned} \hat{\sigma }_{1(j)}^{2}= & {} d_{1}^{-1}{{\varvec{y}}}_{j}^{\prime }{{\varvec{\psi }}}_{1(j)}{{\varvec{y}}}_{j}, \\ \hat{\sigma }_{2(j)}^{2}= & {} \frac{1}{b - a}\biggl [\frac{\varepsilon _{21}}{a}\left( \varepsilon _{11}^{-1}{{\varvec{Q}}}_{1(j)}^{\prime }- \varepsilon _{21}^{-1}{{\varvec{Q}}}_{2(j)}^{\prime }\right) {{\varvec{L}}}_{1}\left( \varepsilon _{11}^{-1}{{\varvec{Q}}}_{1(j)} - \varepsilon _{21}^{-1}{{\varvec{Q}}}_{2(j)}\right) \\&-\,\frac{\varepsilon _{21}}{\varepsilon _{11}} \ \frac{b - a}{d_{1}}{{\varvec{y}}}_{j}^{\prime }{{\varvec{\psi }}}_{1(j)}{{\varvec{y}}}_{j}\biggl ]. \end{aligned}$$

For details of this solution see Caliński and Kageyama (2008, Sect. 4).
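The eigenprojector machinery used throughout this appendix can be illustrated generically: for any symmetric matrix, the eigendecomposition yields orthonormal eigenvectors whose outer products are mutually orthogonal projectors that rebuild the matrix, exactly as in the decomposition of \({{\varvec{C}}}_{\alpha (j)}\) above. The sketch below (Python/NumPy, used here only as a compact stand-in for the paper's R computations) checks this on a random symmetric matrix, not the \({{\varvec{C}}}_{\alpha (j)}\) of any particular design.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
C = A + A.T                                  # random symmetric test matrix

lam, S = np.linalg.eigh(C)                   # columns of S: orthonormal eigenvectors
projectors = [np.outer(S[:, i], S[:, i]) for i in range(5)]

# C = sum_beta lambda_beta s_beta s_beta' (spectral decomposition)
C_rebuilt = sum(l * P for l, P in zip(lam, projectors))
print(np.allclose(C, C_rebuilt))             # True

# Eigenprojectors to distinct eigenvectors are mutually orthogonal
print(np.allclose(projectors[0] @ projectors[1], 0))  # True
```

This is the same structure exploited above, where the scaling by \(a\) merely renormalizes \({{\varvec{S}}}_{\beta }\) so that \(a{{\varvec{S}}}_{\beta }^{\prime }{{\varvec{S}}}_{\beta } = {{\varvec{I}}}_{\rho _{\beta }}.\)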

Appendix 2

To find \({{\varvec{\zeta }}}_{E}\) as a solution of Eq. (3.10), it may be helpful to represent the \(m \times m\) matrix \({{\varvec{\zeta }}}_{E}\) by an \(m^{2} \times 1\) vector, \(\mathrm{vec}({{\varvec{\zeta }}}_{E}),\) obtained by writing all the elements of \({{\varvec{\zeta }}}_{E}\) vertically, starting with the first element and proceeding row by row in lexicographical order. The equation can then be written in the more manageable form

$$\begin{aligned}&\left[ \sum \limits _{j=1}^{p}\sum \limits _{j^{\prime }=1}^{p}\left\{ \left( {{\varvec{U}}}_{j}^{\prime }{{\varvec{R}}}{{\varvec{U}}}_{j^{\prime }}\right) \otimes \left( {{\varvec{U}}}_{j}^{\prime }{{\varvec{R}}}{{\varvec{U}}}_{j^{\prime }}\right) \right\} \right] \mathrm{vec}\left( {{\varvec{\zeta }}}_{E}\right) \\&\quad = \mathrm{vec}\left( \sum \limits _{j=1}^{p}{{\varvec{U}}}_{j}^{\prime }{{\varvec{R}}}{{\varvec{y}}}{{\varvec{y}}}^{\prime }{{\varvec{R}}}{{\varvec{U}}}_{j}- \sum \limits _{j=1}^{p}{{\varvec{U}}}_{j}^{\prime }{{\varvec{R}}}\tilde{{{\varvec{V}}}}{{\varvec{R}}}{{\varvec{U}}}_{j}\right) \end{aligned}$$

(see Rao and Kleffe 1988, p. 10). To simplify it further, note that, when adopting the notation \({{\varvec{R}}} = [{{\varvec{R}}}_{jj^{\prime }}],\) one can use the equalities

$$\begin{aligned} {{\varvec{U}}}_{j}^{\prime }{{\varvec{R}}}{{\varvec{U}}}_{j^{\prime }}= & {} \left( \mathbf{1}_{a}^{\prime } \otimes {{\varvec{I}}}_{m}\right) {{\varvec{R}}}_{jj^{\prime }}\left( \mathbf{1}_{a} \otimes {{\varvec{I}}}_{m}\right) , \\ {{\varvec{U}}}_{j}^{\prime }{{\varvec{R}}}= & {} \left[ \left( \mathbf{1}_{a}^{\prime } \otimes {{\varvec{I}}}_{m}\right) {{\varvec{R}}}_{j1}{:} \left( \mathbf{1}_{a}^{\prime } \otimes {{\varvec{I}}}_{m}\right) {{\varvec{R}}}_{j2}{:} \cdots {:}\left( \mathbf{1}_{a}^{\prime } \otimes {{\varvec{I}}}_{m}\right) {{\varvec{R}}}_{jp}\right] . \end{aligned}$$
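The row-wise vec convention adopted above interacts with Kronecker products via \(({{\varvec{K}}} \otimes {{\varvec{K}}})\,\mathrm{vec}({{\varvec{Z}}}) = \mathrm{vec}({{\varvec{K}}}{{\varvec{Z}}}{{\varvec{K}}}^{\prime }),\) which is what makes the operator form of Eq. (3.10) above work. A quick numerical check (Python/NumPy as a stand-in for the paper's R; \({{\varvec{K}}}\) and \({{\varvec{Z}}}\) are random stand-ins for \({{\varvec{U}}}_{j}^{\prime }{{\varvec{R}}}{{\varvec{U}}}_{j^{\prime }}\) and \({{\varvec{\zeta }}}_{E}\)):

```python
import numpy as np

# NumPy's default C-order flatten is exactly the row-wise vec described above.
rng = np.random.default_rng(3)
m = 4
K = rng.standard_normal((m, m))   # stand-in for U_j' R U_{j'}
Z = rng.standard_normal((m, m))
Z = Z + Z.T                       # zeta_E is symmetric

lhs = np.kron(K, K) @ Z.flatten()   # operator side, as in the displayed equation
rhs = (K @ Z @ K.T).flatten()       # matrix side
print(np.allclose(lhs, rhs))        # True
```

With the column-wise vec the operator would instead be \({{\varvec{K}}} \otimes {{\varvec{K}}}\) applied to \(\mathrm{vec}({{\varvec{Z}}}^{\prime }),\) which coincides here because \({{\varvec{\zeta }}}_{E}\) is symmetric.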

Also note that, because \({{\varvec{\zeta }}}_{E},\) as a part of the covariance matrix \({{\varvec{V}}}\) defined in (2.10), appears also in the matrix \({{\varvec{R}}} = {{\varvec{V}}}^{-1} - {{\varvec{V}}}^{-1}{{\varvec{X}}}({{\varvec{X}}}^{\prime }{{\varvec{V}}}^{-1}{{\varvec{X}}})^{-1}{{\varvec{X}}}^{\prime }{{\varvec{V}}}^{-1},\) the above equation has to be solved by an iterative procedure. To start it, one may use, instead of the matrix \({{\varvec{\zeta }}}_{E}\) appearing in \({{\varvec{R}}},\) a matrix \(\zeta _{E}{{\varvec{J}}}_{m},\) with some initial value for the scalar \(\zeta _{E}.\) A suitable choice is \(\zeta _{E} = (am)^{-1}(\sigma _{4}^{2} - \sigma _{3}^{2}),\) where \(\sigma _{3}^{2}\) and \(\sigma _{4}^{2}\) are obtainable as solutions of the equations \({{\varvec{y}}}^{\prime }{{\varvec{\phi }}}_{3}{{\varvec{y}}} =\sigma _{3}^{2}p(a - 1)\) and \({{\varvec{y}}}^{\prime }{{\varvec{\phi }}}_{4}^{*}{{\varvec{y}}} =\sigma _{4}^{2}(p - 1),\) with \({{\varvec{\phi }}}_{3} = {{\varvec{I}}}_{p} \otimes ({{\varvec{I}}}_{a} - a^{-1}{{\varvec{J}}}_{a}) \otimes m^{-1}{{\varvec{J}}}_{m}\) and \({{\varvec{\phi }}}_{4}^{*} = ({{\varvec{I}}}_{p} - p^{-1}{{\varvec{J}}}_{p}) \otimes a^{-1}{{\varvec{J}}}_{a} \otimes m^{-1}{{\varvec{J}}}_{m}.\)
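The two matrices \({{\varvec{\phi }}}_{3}\) and \({{\varvec{\phi }}}_{4}^{*}\) behind this starting value can be sketched directly as Kronecker products; the check below (Python/NumPy, with purely illustrative dimensions \(p, a, m\)) confirms that they are mutually orthogonal projectors whose ranks give the divisors \(p(a-1)\) and \(p-1\) in the two starting equations.

```python
import numpy as np
from functools import reduce

p, a, m = 3, 4, 5                            # illustrative numbers of trials etc.

J = lambda n: np.ones((n, n)) / n            # averaging matrix n^{-1} J_n
K = lambda n: np.eye(n) - J(n)               # centring matrix I_n - n^{-1} J_n
kron3 = lambda X, Y, Z: reduce(np.kron, (X, Y, Z))

phi3 = kron3(np.eye(p), K(a), J(m))          # I_p (x) (I_a - a^{-1}J_a) (x) m^{-1}J_m
phi4 = kron3(K(p), J(a), J(m))               # (I_p - p^{-1}J_p) (x) a^{-1}J_a (x) m^{-1}J_m

print(np.allclose(phi3 @ phi3, phi3))        # True: phi_3 is an idempotent projector
print(np.allclose(phi3 @ phi4, 0))           # True: orthogonal to phi_4^*
print(np.isclose(np.trace(phi3), p * (a - 1)))  # True: rank p(a-1)
print(np.isclose(np.trace(phi4), p - 1))        # True: rank p-1
```

The orthogonality follows from \(({{\varvec{I}}}_{n} - n^{-1}{{\varvec{J}}}_{n})\,n^{-1}{{\varvec{J}}}_{n} = {{\varvec{0}}}\) in each Kronecker factor.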

Starting from this value means beginning the iterations under the assumption of complete additivity. Continuing the iterative procedure, one then has to ensure, at each iteration cycle, that the obtained provisional estimate of \({{\varvec{\zeta }}}_{E},\) denoted by \({{\varvec{\zeta }}}_{E,0},\) is n.n.d. (nonnegative definite). To that end, it is advisable to take its spectral decomposition, \({{\varvec{\zeta }}}_{E,0} = \sum \nolimits _{i=1}^{m}\lambda _{i}{{\varvec{p}}}_{i}{{\varvec{p}}}_{i}^{\prime },\) and use

$$\begin{aligned} {{\varvec{\zeta }}}_{E,0}^{(+)} = \sum \limits _{i=1}^{m}\lambda _{i}^{*}{{\varvec{p}}}_{i}{{\varvec{p}}}_{i}^{\prime }, \quad \mathrm{with} \quad \lambda _{i}^{*} = \lambda _{i}, \quad \mathrm{if} \quad \lambda _{i} > 0, \quad \mathrm{or} \quad \lambda _{i}^{*} = 0, \,\mathrm{otherwise}. \end{aligned}$$
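This eigenvalue-truncation step admits a very short implementation; the sketch below (Python/NumPy as an illustrative stand-in for the paper's R computations) projects a possibly indefinite symmetric matrix onto the nonnegative-definite cone exactly as in the formula above.

```python
import numpy as np

def nnd_part(zeta0):
    """Replace negative eigenvalues of a symmetric matrix by zero."""
    lam, P = np.linalg.eigh(zeta0)           # spectral decomposition
    lam_star = np.where(lam > 0, lam, 0.0)   # lambda_i* = lambda_i if > 0, else 0
    return (P * lam_star) @ P.T              # sum_i lambda_i* p_i p_i'

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
zeta0 = A + A.T                              # symmetric, typically indefinite
zeta_plus = nnd_part(zeta0)

print(np.linalg.eigvalsh(zeta_plus).min() >= -1e-12)  # True: n.n.d.
```

Note that `(P * lam_star) @ P.T` is just `P @ diag(lam_star) @ P.T` written without forming the diagonal matrix.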

Appendix 3

To show the equivalence of the two forms appearing in formula (4.10), first write

$$\begin{aligned} {{\varvec{u}}}_{V(E,j)} = \left( {{\varvec{I}}}_{m}- m^{-1}{{\varvec{J}}}_{m}\right) \left\{ \left( \mathbf{1}_{a}^{\prime }\otimes {{\varvec{I}}}_{m}\right) {{\varvec{V}}}_{*(j)}^{-1}\left( \mathbf{1}_{a} \otimes {{\varvec{I}}}_{m}\right) \right\} ^{-1}\left( \mathbf{1}_{a}^{\prime } \otimes {{\varvec{I}}}_{m}\right) {{\varvec{V}}}_{*(j)}^{-1}{{\varvec{y}}}_{*j}, \end{aligned}$$

and then use the following notation: \({{\varvec{A}}}_{(j)} = \tilde{{{\varvec{V}}}}_{*(j)} = \sigma _{1(j)}^{2}{{\varvec{\phi }}}_{1(j)} + \sigma _{2(j)}^{2}({{\varvec{I}}}_{am} - {{\varvec{\phi }}}_{1(j)}),\,{{\varvec{\Sigma }}}_{VE} = {{\varvec{R}}}^{VE}({{\varvec{R}}}^{VE})^{\prime },\,{{\varvec{B}}} = \mathbf{1}_{a} \otimes {{\varvec{R}}}^{VE},\,{{\varvec{D}}} = {{\varvec{I}}}_{t},\) where \(t = \mathrm{rank}({{\varvec{\Sigma }}}_{VE}) = \mathrm{rank}({{\varvec{R}}}^{VE}).\) With it, one can write for \({{\varvec{V}}}_{*(j)},\) defined in (4.6), \({{\varvec{V}}}_{*(j)} = {{\varvec{A}}}_{(j)} + {{\varvec{B}}}{{\varvec{D}}}{{\varvec{B}}}^{\prime }\) and \({{\varvec{V}}}_{*(j)}^{-1} = {{\varvec{A}}}_{(j)}^{-1} - {{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}({{\varvec{B}}}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}} + {{\varvec{D}}})^{-1}{{\varvec{B}}}^{\prime }{{\varvec{A}}}_{(j)}^{-1}.\) Furthermore, note that \({{\varvec{B}}} = (\mathbf{1}_{a} \otimes {{\varvec{I}}}_{m}){{\varvec{R}}}^{VE} = {{\varvec{B}}}_{1}{{\varvec{B}}}_{2},\) say. 
So, one can also write \({{\varvec{V}}}_{*(j)}^{-1} = {{\varvec{A}}}_{(j)}^{-1} - {{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1}{{\varvec{B}}}_{2}({{\varvec{B}}}_{2}^{\prime }{{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1}{{\varvec{B}}}_{2} + {{\varvec{D}}})^{-1}{{\varvec{B}}}_{2}^{\prime }{{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}.\) From this, \((\mathbf{1}_{a}^{\prime } \otimes {{\varvec{I}}}_{m}){{\varvec{V}}}_{*(j)}^{-1}(\mathbf{1}_{a} \otimes {{\varvec{I}}}_{m}) = {{\varvec{B}}}_{1}^{\prime }{{\varvec{V}}}_{*(j)}^{-1}{{\varvec{B}}}_{1} = {{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1} - {{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1}{{\varvec{B}}}_{2}({{\varvec{B}}}_{2}^{\prime }{{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1}{{\varvec{B}}}_{2} + {{\varvec{D}}})^{-1}{{\varvec{B}}}_{2}^{\prime }{{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1},\) where \(\mathrm{rank}({{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1}) =\mathrm{rank}({{\varvec{B}}}_{1}).\) This implies that \(\{(\mathbf{1}_{a}^{\prime } \otimes {{\varvec{I}}}_{m}){{\varvec{V}}}_{*(j)}^{-1}(\mathbf{1}_{a} \otimes {{\varvec{I}}}_{m})\}^{-1} = ({{\varvec{B}}}_{1}^{\prime }{{\varvec{V}}}_{*(j)}^{-1}{{\varvec{B}}}_{1})^{-1} = ({{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1})^{-1} + {{\varvec{B}}}_{2}{{\varvec{D}}}{{\varvec{B}}}_{2}^{\prime }.\) Hence,

$$\begin{aligned}&\left\{ \left( \mathbf{1}_{a}^{\prime } \otimes {{\varvec{I}}}_{m}\right) {{\varvec{V}}}_{*(j)}^{-1}\left( \mathbf{1}_{a} \otimes {{\varvec{I}}}_{m}\right) \right\} ^{-1}\left( \mathbf{1}_{a}^{\prime } \otimes {{\varvec{I}}}_{m}\right) {{\varvec{V}}}_{*(j)}^{-1} = \left\{ \left( {{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1}\right) ^{-1}\right. \\&\left. \quad +\,{{\varvec{B}}}_{2}{{\varvec{D}}}{{\varvec{B}}}_{2}^{\prime }\right\} \left\{ {{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1} - {{\varvec{B}}}_1^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1}{{\varvec{B}}}_{2}\left( {{\varvec{B}}}_2^{\prime }{{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1}{{\varvec{B}}}_{2} + {{\varvec{D}}}\right) ^{-1}{{\varvec{B}}}_{2}^{\prime }{{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}\right\} \\&= \left( {{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1}\right) ^{-1}{{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1} + {{\varvec{B}}}_{2}\left\{ {{\varvec{D}}} - \left( {{\varvec{B}}}_{2}^{\prime }{{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1}{{\varvec{B}}}_{2} + {{\varvec{D}}}\right) ^{-1}\right. \\&\left. \quad -\, {{\varvec{D}}}{{\varvec{B}}}_{2}^{\prime }{{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1}{{\varvec{B}}}_{2}\left( {{\varvec{B}}}_2^{\prime }{{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1}{{\varvec{B}}}_{2} + {{\varvec{D}}}\right) ^{-1}\right\} {{\varvec{B}}}_{2}^{\prime }{{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}. \end{aligned}$$

Now, after some algebraic derivations, one can show that \({{\varvec{D}}} - ({{\varvec{B}}}_{2}^{\prime }{{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1}{{\varvec{B}}}_{2} + {{\varvec{D}}})^{-1} - {{\varvec{D}}}{{\varvec{B}}}_{2}^{\prime }{{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1}{{\varvec{B}}}_{2}({{\varvec{B}}}_{2}^{\prime }{{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1}{{\varvec{B}}}_{2} + {{\varvec{D}}})^{-1} = \mathbf{0},\) which finally gives the equality \(\{(\mathbf{1}_{a}^{\prime } \otimes {{\varvec{I}}}_{m}){{\varvec{V}}}_{*(j)}^{-1}(\mathbf{1}_{a} \otimes {{\varvec{I}}}_{m})\}^{-1}(\mathbf{1}_{a}^{\prime } \otimes {{\varvec{I}}}_{m}){{\varvec{V}}}_{*(j)}^{-1} = ({{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}_{1})^{-1}{{\varvec{B}}}_{1}^{\prime }{{\varvec{A}}}_{(j)}^{-1} = \{(\mathbf{1}_{a}^{\prime } \otimes {{\varvec{I}}}_{m})\tilde{{{\varvec{V}}}}_{*(j)}^{-1}(\mathbf{1}_{a} \otimes {{\varvec{I}}}_{m})\}^{-1}(\mathbf{1}_{a}^{\prime }\otimes {{\varvec{I}}}_{m})\tilde{{{\varvec{V}}}}_{*(j)}^{-1}.\) This further implies that

$$\begin{aligned}&{{\varvec{u}}}_{V(E,j)} \\&\quad =\left( {{\varvec{I}}}_{m} - m^{-1}{{\varvec{J}}}_{m}\right) \left\{ \left( \mathbf{1}_{a}^{\prime } \otimes {{\varvec{I}}}_{m}\right) \tilde{{{\varvec{V}}}}_{*(j)}^{-1}\left( \mathbf{1}_{a} \otimes {{\varvec{I}}}_{m}\right) \right\} ^{-1}\left( \mathbf{1}_{a}^{\prime } \otimes {{\varvec{I}}}_{m}\right) \tilde{{{\varvec{V}}}}_{*(j)}^{-1}{{\varvec{y}}}_{*j} \quad \mathrm{and} \\&\mathrm{Cov}\left( {{\varvec{u}}}_{V(E,j)}\right) \\&\quad =\left( {{\varvec{I}}}_{m} - m^{-1}{{\varvec{J}}}_{m}\right) \left\{ \left( \mathbf{1}_{a}^{\prime } \otimes {{\varvec{I}}}_{m}\right) \tilde{{{\varvec{V}}}}_{*(j)}^{-1}\left( \mathbf{1}_{a} \otimes {{\varvec{I}}}_{m}\right) \right\} ^{-1}\left( {{\varvec{I}}}_{m} - m^{-1}{{\varvec{J}}}_{m}\right) + {{\varvec{\Sigma }}}_{VE}. \end{aligned}$$

The consequences of these results for presenting (4.12) and its covariance matrix are obvious. Note that the results presented here do not depend on the variances \(\sigma _{1(j)}^{2}\) and \(\sigma _{2(j)}^{2},\) whether or not they differ among the trials \(j\).
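The inversion identity underlying this appendix, \({{\varvec{V}}}_{*(j)}^{-1} = {{\varvec{A}}}_{(j)}^{-1} - {{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}}({{\varvec{B}}}^{\prime }{{\varvec{A}}}_{(j)}^{-1}{{\varvec{B}}} + {{\varvec{D}}})^{-1}{{\varvec{B}}}^{\prime }{{\varvec{A}}}_{(j)}^{-1}\) for \({{\varvec{V}}}_{*(j)} = {{\varvec{A}}}_{(j)} + {{\varvec{B}}}{{\varvec{D}}}{{\varvec{B}}}^{\prime }\) with \({{\varvec{D}}} = {{\varvec{I}}}_{t},\) can be verified numerically; in the sketch below (Python/NumPy, illustrative only) A and B are random stand-ins for \({{\varvec{A}}}_{(j)}\) and \(\mathbf{1}_{a} \otimes {{\varvec{R}}}^{VE}.\)

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 6, 2
A = np.diag(rng.uniform(0.5, 2.0, n))        # p.d. stand-in for A_(j)
B = rng.standard_normal((n, t))              # stand-in for 1_a (x) R^{VE}
D = np.eye(t)                                # D = I_t

V = A + B @ D @ B.T
Ainv = np.linalg.inv(A)
# Woodbury-type inverse, as used in Appendix 3
Vinv = Ainv - Ainv @ B @ np.linalg.inv(B.T @ Ainv @ B + D) @ B.T @ Ainv

print(np.allclose(Vinv @ V, np.eye(n)))      # True
```

Because \({{\varvec{A}}}_{(j)}\) is built from the orthogonal projectors \({{\varvec{\phi }}}_{1(j)}\) and \({{\varvec{I}}}_{am} - {{\varvec{\phi }}}_{1(j)},\) its inverse is available in closed form in the paper's setting, which is what makes this identity computationally attractive.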

Cite this article

Caliński, T., Czajka, S., Kaczmarek, Z. et al. On a mixed model analysis of multi-environment variety trials: a reconsideration of the one-stage and the two-stage models and analyses. Stat Papers 58, 433–465 (2017). https://doi.org/10.1007/s00362-015-0706-y
