
A correlation structure for the analysis of Gaussian and non-Gaussian responses in crossover experimental designs with repeated measures


Abstract

In this paper, we propose a family of correlation structures for crossover designs with repeated measures for both Gaussian and non-Gaussian responses, using generalized estimating equations (GEE). The structure considers two matrices: one that models the between-period correlation and another that models the within-period correlation. The overall correlation matrix, which is used to build the GEE, corresponds to the Kronecker product of these two matrices. A procedure to estimate the parameters of the correlation matrix is proposed, its statistical properties are studied, and a comparison with standard models using a single correlation matrix is carried out. A simulation study showed a superior performance of the proposed structure in terms of the quasi-likelihood criterion, efficiency, and the capacity to explain complex correlation patterns in longitudinal data from crossover designs.
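To fix ideas, the proposed structure can be sketched numerically in R. The following minimal example uses hypothetical values (an exchangeable between-period matrix and an AR(1) within-period matrix; in the paper both are estimated from the data) to build the overall Kronecker-structured correlation matrix:

# Minimal sketch of the proposed working correlation: the Kronecker
# product of a between-period matrix Psi and a within-period matrix R1.
# All parameter values below are hypothetical illustrations.
P <- 3  # number of periods
L <- 4  # repeated measures within each period
Psi <- matrix(0.4, P, P); diag(Psi) <- 1        # exchangeable between periods
R1  <- 0.6^abs(outer(1:L, 1:L, "-"))            # AR(1) within a period
R_overall <- kronecker(Psi, R1)                 # PL x PL overall correlation
min(eigen(R_overall, symmetric = TRUE)$values)  # > 0: a valid correlation matrix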


References

  • Basu S, Santra S (2010) A joint model for incomplete data in crossover trials. J Stat Plan Inference 140(10):2839–2845

  • Biabani M, Farrell M, Zoghi M, Egan G, Jaberzadeh S (2018) Crossover design in transcranial direct current stimulation studies on motor learning: potential pitfalls and difficulties in interpretation of findings. Rev Neurosci 29(4):463–473

  • Boik RJ (1991) Scheffé's mixed model for multivariate repeated measures: a relative efficiency evaluation. Commun Stat 20(4):1233–1255

  • Carey VJ (2019) gee: generalized estimation equation solver. R package version 4.13-20. https://CRAN.R-project.org/package=gee

  • Cordeiro GM (2004) On Pearson's residuals in generalized linear models. Stat Probab Lett 66(3):213–219

  • Cordeiro GM, McCullagh P (1991) Bias correction in generalized linear models. J R Stat Soc Ser B (Methodological) 53(3):629–643

  • Cox DR, Snell EJ (1968) A general definition of residuals. J R Stat Soc Ser B 30(2):248–265

  • Curtin F (2017) Meta-analysis combining parallel and crossover trials using generalised estimating equation method. Res Synth Methods 8(3):312–320

  • Davis CS (2002) Statistical methods for the analysis of repeated measurements. Springer, San Diego

  • Diaz FJ, Berg MJ, Krebill R, Welty T, Gidal BE, Alloway R, Privitera M (2013) Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs. Clin Pharmacokinet 52(12):1033–1043

  • Dubois A, Lavielle M, Gsteiger S, Pigeolet E, Mentré F (2011) Model-based analyses of bioequivalence crossover trials using the stochastic approximation expectation maximisation algorithm. Stat Med 30(21):2582–2600

  • Forbes AB, Akram M, Pilcher D, Cooper J, Bellomo R (2015) Cluster randomised crossover trials with binary data and unbalanced cluster sizes: application to studies of near-universal interventions in intensive care. Clin Trials 12(1):34–44

  • Grayling MJ, Mander AP, Wason JM (2018) Blinded and unblinded sample size reestimation in crossover trials balanced for period. Biom J 60(5):917–933

  • Hao C, von Rosen D, von Rosen T (2015) Explicit influence analysis in two-treatment balanced crossover models. Math Methods Stat 24(1):16–36

  • Hardin JW, Hilbe J (2003) Generalized estimating equations. Chapman & Hall, Boca Raton

  • Harville DA (1997) Matrix algebra from a statistician's perspective, vol 1. Springer, New York

  • Hin L-Y, Wang Y-G (2009) Working-correlation-structure identification in generalized estimating equations. Stat Med 28(4):642–658

  • Hinkelmann K, Kempthorne O (2005) Design and analysis of experiments, vol 2. Wiley series in probability and mathematical statistics. Wiley, New York

  • Jaime GO (2019) Uso de un residuo de papel como suplemento para vacas lecheras [Use of a paper residue as a supplement for dairy cows]. Master's thesis, Universidad Nacional de Colombia, Sede Bogotá

  • Jankar J, Mandal A (2021) Optimal crossover designs for generalized linear models: an application to work environment experiment. Stat Appl 19(1):319–336

  • Jankar J, Mandal A, Yang J (2020) Optimal crossover designs for generalized linear models. J Stat Theory Pract 14(2):1–27

  • Jones B, Kenward MG (2015) Design and analysis of cross-over trials, 3rd edn. Chapman & Hall/CRC, Boca Raton

  • Josephy H, Vansteelandt S, Vanderhasselt M-A, Loeys T (2015) Within-subject mediation analysis in AB/BA crossover designs. Int J Biostat 11(1):1–22

  • Kitchenham B, Madeyski L, Curtin F (2018) Corrections to effect size variances for continuous outcomes of cross-over clinical trials. Stat Med 37(2):320–323

  • Krzyśko M, Skorzybut M (2009) Discriminant analysis of multivariate repeated measures data with Kronecker product structured covariance matrices. Stat Papers 50(4):817–835

  • Lancaster H (1965) The Helmert matrices. Am Math Mon 72(1):4–12

  • Leorato S, Mezzetti M (2016) Spatial panel data model with error dependence: a Bayesian separable covariance approach. Bayesian Anal 11(4):1035–1069

  • Li F, Forbes AB, Turner EL, Preisser JS (2018) Power and sample size requirements for GEE analyses of cluster randomized crossover trials. Stat Med 1:1

  • Li F, Forbes AB, Turner EL, Preisser JS (2019) Power and sample size requirements for GEE analyses of cluster randomized crossover trials. Stat Med 38(4):636–649

  • Liang K-Y, Zeger SL (1986) Longitudinal data analysis using generalized linear models. Biometrika 73(1):13–22

  • Liu F, Li Q (2016) A Bayesian model for joint analysis of multivariate repeated measures and time to event data in crossover trials. Stat Methods Med Res 25(5):2180–2192

  • Lui K-J (2015) Test equality between three treatments under an incomplete block crossover design. J Biopharm Stat 25(4):795–811

  • Madeyski L, Kitchenham B (2018) Effect sizes and their variance for AB/BA crossover design studies. Empir Softw Eng 23(4):1982–2017

  • McDaniel LS, Henderson NC, Rathouz PJ (2013) Fast pure R implementation of GEE: application of the Matrix package. R J 5:181–187

  • Oh HS, Ko S-G, Oh M-S (2003) A Bayesian approach to assessing population bioequivalence in a 2×2×2 crossover design. J Appl Stat 30(8):881–891

  • Pan W (2001) Akaike's information criterion in generalized estimating equations. Biometrics 57:120–125

  • Patterson HD (1951) Change-over trials. J R Stat Soc Ser B (Methodological) 13:256–271

  • Pitchforth J, Nelson-White E, van den Helder M, Oosting W (2020) The work environment pilot: an experiment to determine the optimal office design for a technology company. PLoS ONE 15(5):e0232943

  • R Core Team (2022) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/

  • Ratkowsky D, Alldredge R, Evans MA (1992) Cross-over experiments. Statistics: A Series of Textbooks and Monographs, Washington

  • Rosenkranz GK (2015) Analysis of cross-over studies with missing data. Stat Methods Med Res 24(4):420–433

  • Roy A, Khattree R (2005) On implementation of a test for Kronecker product covariance structure for multivariate repeated measures data. Stat Methodol 2(4):297–306

  • Shkedy Z, Molenberghs G, Craenendonck HV, Steckler T, Bijnens L (2005) A hierarchical binomial-Poisson model for the analysis of a crossover design for correlated binary data when the number of trials is dose-dependent. J Biopharm Stat 15(2):225–239

  • Srivastava MS, von Rosen T, von Rosen D (2008) Models with a Kronecker product covariance structure: estimation and testing. Math Methods Stat 17(4):357–370

  • Vegas S, Apa C, Juristo N (2016) Crossover designs in software engineering experiments: benefits and perils. IEEE Trans Softw Eng 42(2):120–135

  • Wang X, Chinchilli VM (2021) Analysis of crossover designs with nonignorable dropout. Stat Med 40(1):64–84

  • Zhang H, Yu Q, Feng C, Gunzler D, Wu P, Tu X (2012) A new look at the difference between the GEE and the GLMM when modeling longitudinal count responses. J Appl Stat 39(9):2067–2079


Acknowledgements

The authors are grateful to M.Sc. George Oneiber Jaime Tenjo and Professor Alvaro Wills from the Department of Animal Sciences of Universidad Nacional de Colombia, Sede Bogotá, for providing the records from the dairy cattle experiment, and to the reviewers for their contributions, which improved this document.

Author information


Corresponding author

Correspondence to N. A. Cruz.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (pdf 264 KB)

Appendix A

Theorem 2

The estimator of \(\pmb {R}(\pmb {\alpha })\) given by \(\pmb {{\hat{\Psi }}}\otimes \pmb {R}_1(\pmb {{\hat{\alpha }}}_1)\) is asymptotically unbiased and consistent.

Proof

First, the asymptotic properties of the Pearson residuals, namely their expectation, variance, and limiting distribution, are explored. Define the theoretical Pearson residual of the kth observation of the ith experimental unit in the jth period as:

$$\begin{aligned} R_{ijk}=\frac{Y_{ijk}-\mu _{ijk}}{{\phi }\sqrt{V(\mu _{ijk})}} \end{aligned}$$
(A1)

and adapting the results of Cox and Snell (1968) it is true that:

$$\begin{aligned} E(R_{ijk})&=\sum _{l=1}^pB({\hat{\beta }}_l)E(H_l^{(ijk)})\nonumber \\&-\sum _{l,s=1}^p {\mathcal {K}}^{ls}E\left( H_l^{(ijk)}U_s^{(ijk)}+\frac{1}{2}H_{ls}^{(ijk)} \right) + O(n^{-1})\end{aligned}$$
(A2)
$$\begin{aligned} Var(R_{ijk})&=1+2\sum _{l=1}^pB({\hat{\beta }}_l)E(R_{ijk}H_l^{(ijk)})\nonumber \\&-\sum _{l,s=1}^p {\mathcal {K}}^{ls}E\left( 2R_{ijk}H_l^{(ijk)}U_s^{(ijk)}\right) -\nonumber \\&\sum _{l,s=1}^p {\mathcal {K}}^{ls}E\left( H_l^{(ijk)}H_s^{(ijk)} + R_{ijk}H_{ls}^{(ijk)} \right) + O(n^{-1}) \end{aligned}$$
(A3)

where

$$\begin{aligned} H_l^{(ijk)}=\frac{\partial R_{ijk}}{\partial \beta _l}, \; H_{ls}^{(ijk)}=\frac{\partial ^2 R_{ijk}}{\partial \beta _l\partial \beta _s} \end{aligned}$$

and \({\mathcal {K}}^{ls}\) is the element at position \((l,s)\) of the inverse of the Fisher information matrix, and according to Cordeiro and McCullagh (1991), the bias of \(\hat{\pmb {\beta }}\) (\(B(\hat{\pmb {\beta }})\)) is given by:

$$\begin{aligned} B(\hat{\pmb {\beta }})&=-\frac{1}{2\phi }(\pmb {X}^T\pmb {W}\pmb {X})^{-1}\pmb {X}^T\pmb {D}(z_{ijk})\pmb {F}\pmb {1}\end{aligned}$$
(A4)
$$\begin{aligned} \pmb {W}&=\pmb {D}(V(Y_{ijk})^{\frac{1}{2}}) \pmb {R}(\pmb {\alpha }) \pmb {D}(V(Y_{ijk})^{\frac{1}{2}}) \pmb {D}\left( \frac{\partial \pmb {\mu }_i}{\partial \eta }\right) \nonumber \\ \pmb {F}&=\pmb {D}\left( V(\mu _{ijk})^{-1}\left( \frac{\partial \mu _{ijk}}{\partial \eta _{ijk}}\right) \left( \frac{\partial ^2 \mu _{ijk}}{\partial \eta ^2_{ijk}}\right) \right) \end{aligned}$$
(A5)

where \(\pmb {1}\) is a vector of appropriate size whose entries are all equal to 1, \(\pmb {D}(z_{ijk})\) is a diagonal matrix with elements on the diagonal given by the variance of the estimated linear predictors, i.e., the diagonal of the matrix

$$\begin{aligned} \pmb {z}=Var({\hat{\eta }}_{111}, \ldots , {\hat{\eta }}_{nPL}) \end{aligned}$$
(A6)

i.e., \(z_{ijk}=Var({\hat{\eta }}_{ijk})\) and \(\pmb {X}\) is the design matrix of the parametric effects described in Equation (3). Now, computing the expected values by taking into account the properties of the exponential family, we obtain:

$$\begin{aligned} E\left( H_l^{(ijk)}\right)&=-\sqrt{\phi V(\mu _{ijk})} \left( \frac{\partial \mu _{ijk}}{\partial \eta _{ijk}}\right) x_{l(ijk)} \end{aligned}$$
(A7)
$$\begin{aligned} E\left( H_{ls}^{(ijk)}\right)&=\left[ 2V(\mu _{ijk})^{-\frac{3}{2}} \left( \frac{\partial V(\mu _{ijk})}{\partial \mu _{ijk}}\right) \left( \frac{\partial \mu _{ijk}}{\partial \eta _{ijk}}\right) ^2 \right. \nonumber \\&\left. -2V(\mu _{ijk})^{-\frac{1}{2}}\left( \frac{\partial ^2 \mu _{ijk}}{\partial \eta _{ijk}^2}\right) \right] \times \frac{1}{2}\sqrt{\phi }x_{l(ijk)}x_{s(ijk)}\end{aligned}$$
(A8)
$$\begin{aligned} E\left( H_l^{(ijk)}U_s^{(ijk)}\right)&=-\frac{1}{2}\phi ^{\frac{1}{2}} V(\mu _{ijk})^{-\frac{3}{2}} \left( \frac{\partial V(\mu _{ijk})}{\partial \mu _{ijk}}\right) \left( \frac{\partial \mu _{ijk}}{\partial \eta _{ijk}}\right) ^2\nonumber \\&\times x_{l(ijk)}x_{s(ijk)}\end{aligned}$$
(A9)
$$\begin{aligned} E\left( 2R_{ijk} H_l^{(ijk)}U_s^{(ijk)}\right)&=-V(\mu _{ijk})^{-2} \left( \frac{\partial V(\mu _{ijk})}{\partial \mu _{ijk}}\right) ^2 \left( \frac{\partial \mu _{ijk}}{\partial \eta _{ijk}}\right) ^2\nonumber \\&\times x_{l(ijk)}x_{s(ijk)} \nonumber \\&-2\phi \left( \frac{\partial V(\mu _{ijk})}{\partial \mu _{ijk}}\right) ^{-1} \left( \frac{\partial \mu _{ijk}}{\partial \eta _{ijk}}\right) ^2 x_{l(ijk)}x_{s(ijk)}\end{aligned}$$
(A10)
$$\begin{aligned} E\left( H_l^{(ijk)}H_s^{(ijk)}\right)&= \left[ \phi + \frac{\left( \frac{\partial V(\mu _{ijk})}{\partial \mu _{ijk}}\right) }{4V(\mu _{ijk})}\right] w_{ijk} x_{l(ijk)}x_{s(ijk)}\end{aligned}$$
(A11)
$$\begin{aligned} E\left( R_{ijk}H_{ls}^{(ijk)}\right)&= \frac{1}{2}\phi V(\mu _{ijk})^{-\frac{1}{2}} \left( \frac{\partial \mu _{ijk}}{\partial \eta _{ijk}}\right) \end{aligned}$$
(A12)

where \(w_{ijk}\) is the element ijk of the diagonal of the matrix \(\pmb {W}\) defined in Equation (A5). From Equation (A7) and the bias of \(\pmb {\beta }\) given in Equation (A4), it follows that:

$$\begin{aligned} \sum _{l=1}^p B(\hat{\beta _{l}})E\left( H_{l}^{(ijk)}\right) =-\phi ^{\frac{1}{2}} V(\mu _{ijk})^{-\frac{1}{2}} \left( \frac{\partial \mu _{ijk}}{\partial \eta _{ijk}}\right) \pmb {e}_{ijk}\pmb {X}B(\hat{\pmb {\beta }}) \end{aligned}$$
(A13)

where \(\pmb {e}_{ijk}\) is a vector of zeros with a 1 at the ijkth position. From equations (A8) and (A9), we get:

$$\begin{aligned} E\left( H_l^{(ijk)}U_s^{(ijk)}+\frac{1}{2}H_{ls}^{(ijk)}\right)&=-\frac{1}{2}\phi ^{\frac{1}{2}} V(\mu _{ijk})^{-\frac{1}{2}} \left( \frac{\partial ^2 \mu _{ijk}}{\partial \eta _{ijk}^2}\right) x_{l(ijk)}x_{s(ijk)} \nonumber \\ \sum _{l,s=1}^p{\mathcal {K}}^{ls}E\left( H_l^{(ijk)}U_s^{(ijk)}+\frac{1}{2}H_{ls}^{(ijk)}\right)&=-\frac{1}{2}\phi ^{\frac{1}{2}} V(\mu _{ijk})^{-\frac{1}{2}} \left( \frac{\partial ^2 \mu _{ijk}}{\partial \eta _{ijk}^2}\right) z_{ijk} \end{aligned}$$
(A14)

and therefore, since \(\sum _{l,s=1}^p{\mathcal {K}}^{ls}x_{l(ijk)}x_{s(ijk)}=Var({\hat{\eta }}_{ijk})=z_{ijk}\), from Eqs. (A13), (A14) and (A2):

$$\begin{aligned} E(R_{111}, R_{112}, \ldots , R_{nPL})=\frac{-1}{2\sqrt{\phi }}(\pmb {I}-\pmb {H})\pmb {J}\pmb {z} \end{aligned}$$
(A15)

where

$$\begin{aligned} \pmb {H}&=\pmb {W}^{\frac{1}{2}} \pmb {X}(\pmb {X}^T\pmb {W}\pmb {X})^{-1}\pmb {X}^T \pmb {W}^{\frac{1}{2}}\\ \pmb {J}&=\pmb {D}\left( V(Y_{ijk}) \right) \pmb {D}\left( \frac{\partial ^2\pmb {\mu }_i}{\partial \eta ^2}\right) \end{aligned}$$

from Eqs. (A10), (A11) and (A12):

$$\begin{aligned} -\sum _{l,s=1}^p {\mathcal {K}}^{ls}&E\left( 2R_{ijk}H_l^{(ijk)}U_s^{(ijk)}+H_l^{(ijk)}H_s^{(ijk)} + R_{ijk}H_{ls}^{(ijk)} \right) \nonumber \\&=\left[ -\phi w_{ijk}-\frac{\left( \frac{\partial V(\mu _{ijk})}{\partial \mu _{ijk}}\right) \left( \frac{\partial ^2 \mu _{ijk}}{\partial \eta _{ijk}^2}\right) }{2 V(\mu _{ijk})}-\frac{1}{2}w_{ijk}\left( \frac{\partial ^2 V(\mu _{ijk})}{\partial \mu ^2_{ijk}}\right) \right] \frac{z_{ijk}}{\phi } \end{aligned}$$
(A16)

and from equations (A4) and (A12), it follows that:

$$\begin{aligned}&2\sum _{l=1}^pB({\hat{\beta }}_l)E(R_{ijk}H_{l}^{(ijk)})\nonumber \\&=\frac{1}{2\phi }\frac{\left( \frac{\partial V(\mu _{ijk})}{\partial \mu _{ijk}}\right) \left( \frac{\partial ^2 \mu _{ijk}}{\partial \eta _{ijk}^2}\right) }{V(\mu _{ijk})}\pmb {e}_{ijk} \pmb {Z}\pmb {D}(z_{ijk})\pmb {D}\left( V(\mu _{ijk})^{-1} \left( \frac{\partial ^2 \mu _{ijk}}{\partial \eta _{ijk}^2}\right) \left( \frac{\partial \mu _{ijk}}{\partial \eta _{ijk}}\right) \right) \pmb {1} \end{aligned}$$
(A17)

Therefore, from the results (A16) and (A17), we have that:

$$\begin{aligned} \left[ Var(R_{111}), Var(R_{112}), \ldots , Var(R_{nPL})\right] =\pmb {1}+\frac{1}{2\phi }\left( \pmb {QHJ-M}\right) \pmb {z} \end{aligned}$$
(A18)

where \(\pmb {1}\) is a vector of ones and

$$\begin{aligned} \pmb {Q}&=\pmb {D}\left( V(Y_{ijk}) \right) ^{\frac{1}{2}}\pmb {D}\left( \frac{\partial V(Y_{ijk})}{\partial \mu }\right) \\ \pmb {M}&=\pmb {D}\left( V(Y_{ijk}) \right) ^{\frac{1}{2}}\pmb {D}\left( \frac{\partial ^2 V(Y_{ijk})}{\partial \mu ^2}+2\phi \pmb {D}(\pmb {W})+\frac{\frac{\partial V(Y_{ijk})}{\partial \mu }}{V(Y_{ijk})}\right) \end{aligned}$$

By theorem 2 of Liang and Zeger (1986), the GEE estimator of \(\eta _{ijk}\) is consistent and asymptotically unbiased; in particular, the variances of the estimated linear predictors vanish, i.e.,

$$\begin{aligned} \pmb {z}\xrightarrow [n\rightarrow \infty ]{p} \pmb {0} \end{aligned}$$
(A19)

Thus from (A15) and (A18), we find that:

$$\begin{aligned} E(R_{ijk})&=O(n^{-1})\end{aligned}$$
(A20)
$$\begin{aligned} Var(R_{ijk})&=1+O(n^{-1}) \end{aligned}$$
(A21)

and furthermore, by Sect. 3 of Cordeiro (2004) and equations (A20) and (A21), it follows that:

$$\begin{aligned} R_{ijk}\xrightarrow [n\rightarrow \infty ]{d} N(0,1) \end{aligned}$$
(A22)

Let

$$\begin{aligned} \pmb {\Gamma }=\{\gamma _{ij}\}_{n\times n} \end{aligned}$$
(A23)

be a matrix whose first column is \(\frac{\pmb {1}}{\sqrt{n}}\) and the following columns are:

$$\begin{aligned} g_{i-1}=\left( \frac{1}{\sqrt{(i-1)i}}, \ldots ,\frac{1}{\sqrt{(i-1)i}},-\frac{i-1}{\sqrt{(i-1)i}}, 0, \ldots ,0\right) , \qquad i=2, \ldots , n \end{aligned}$$
(A24)
$$\begin{aligned} \pmb {\Gamma }^T=\begin{pmatrix} \frac{\pmb {1}}{\sqrt{n}}&\vdots&\pmb {G} \end{pmatrix}^T=\begin{pmatrix} \frac{1}{\sqrt{n}} &{} \frac{1}{\sqrt{n}}&{} \frac{1}{\sqrt{n}}&{} \frac{1}{\sqrt{n}} &{} \cdots &{}\frac{1}{\sqrt{n}}\\ \frac{1}{\sqrt{2}} &{} -\frac{1}{\sqrt{2}}&{} 0&{} 0 &{}\cdots &{} 0\\ \frac{1}{\sqrt{6}} &{} \frac{1}{\sqrt{6}}&{} -\frac{2}{\sqrt{6}}&{} 0 &{} \cdots &{}0\\ \frac{1}{\sqrt{12}} &{} \frac{1}{\sqrt{12}}&{} \frac{1}{\sqrt{12}}&{} -\frac{3}{\sqrt{12}} &{} \cdots &{}0\\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} \ddots &{}\vdots \\ \frac{1}{\sqrt{(n(n-1))}} &{} \frac{1}{\sqrt{n(n-1)}}&{} \frac{1}{\sqrt{n(n-1)}}&{} \frac{1}{\sqrt{n(n-1)}} &{} \cdots &{}-\frac{n-1}{\sqrt{n(n-1)}}\\ \end{pmatrix} \end{aligned}$$

where

$$\begin{aligned} \pmb {G}^T=\begin{pmatrix} \frac{1}{\sqrt{2}} &{} -\frac{1}{\sqrt{2}}&{} 0&{} 0 &{}\cdots &{} 0\\ \frac{1}{\sqrt{6}} &{} \frac{1}{\sqrt{6}}&{} -\frac{2}{\sqrt{6}}&{} 0 &{} \cdots &{}0\\ \frac{1}{\sqrt{12}} &{} \frac{1}{\sqrt{12}}&{} \frac{1}{\sqrt{12}}&{} -\frac{3}{\sqrt{12}} &{} \cdots &{}0\\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} \ddots &{}\vdots \\ \frac{1}{\sqrt{(n(n-1))}} &{} \frac{1}{\sqrt{n(n-1)}}&{} \frac{1}{\sqrt{n(n-1)}}&{} \frac{1}{\sqrt{n(n-1)}} &{} \cdots &{}-\frac{n-1}{\sqrt{n(n-1)}}\\ \end{pmatrix} \end{aligned}$$

then the matrix \(\pmb {\Gamma }\) is a Helmert matrix (Lancaster 1965) and therefore:

$$\begin{aligned} \pmb {\Gamma }\pmb {\Gamma }^T=\pmb {I}_n, \qquad \pmb {1}_{n}^T \pmb {G}=\pmb {0}, \qquad \pmb {G}\pmb {G}^T=\pmb {I}_{n}-\frac{1}{n}\pmb {1}_{n}\pmb {1}_{n}^T \end{aligned}$$
(A25)
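The identities in (A25) are easy to verify numerically. The following R sketch builds \(\pmb {\Gamma }\) directly from (A23) and (A24) for a small n and checks the three identities:

# Numerical check of the Helmert-matrix identities in (A25), here with n = 5.
n <- 5
GammaT <- rbind(rep(1, n) / sqrt(n),
                t(sapply(2:n, function(i)
                  c(rep(1, i - 1), -(i - 1), rep(0, n - i)) / sqrt(i * (i - 1)))))
Gamma <- t(GammaT)                       # columns: 1/sqrt(n), then g_1, ..., g_{n-1}
G <- Gamma[, -1]
max(abs(Gamma %*% t(Gamma) - diag(n)))   # ~ 0: Gamma is orthogonal
max(abs(colSums(G)))                     # ~ 0: each contrast column sums to zero
max(abs(G %*% t(G) - (diag(n) - 1/n)))   # ~ 0: G G^T = I_n - (1/n) 1 1^T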

If \(r_{ijk}\) is defined as the estimated Pearson residual of the ith experimental unit in the jth period and the kth observation, i.e.,

$$\begin{aligned}r_{ijk}={\hat{R}}_{ijk}=\frac{Y_{ijk}-{\hat{\mu }}_{ijk}}{{\hat{\phi }}\sqrt{V({\hat{\mu }}_{ijk})}}\end{aligned}$$

and \(\pmb {r}_i\) is the matrix of residuals of the ith individual, whose first row contains the L Pearson residuals defined in Equation (8) for the first period, whose second row contains the L residuals for the second period, and so on, until completing a matrix with P rows and L columns, i.e.:

$$\begin{aligned} \pmb {r}_i=\begin{pmatrix} r_{i11}&{} r_{i12}&{} \cdots &{} r_{i1L}\\ r_{i21}&{} r_{i22}&{} \cdots &{}r_{i2L}\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ r_{iP1}&{} r_{iP2}&{} \cdots &{}r_{iPL}\\ \end{pmatrix} \end{aligned}$$
(A26)
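As a concrete illustration of (A1) and (A26), the following R sketch arranges the estimated Pearson residuals of a single experimental unit into the \(P\times L\) matrix \(\pmb {r}_i\); the fitted means, the dispersion estimate, and the Poisson variance function used here are hypothetical placeholders, not the paper's fitted model:

# Sketch: Pearson residuals of one unit arranged as the P x L matrix in (A26).
# Assumes responses are ordered period-major: the L observations of period 1,
# then those of period 2, and so on. V is the variance function (Poisson here).
pearson_matrix <- function(y, mu_hat, phi_hat, P, L, V = identity) {
  r <- (y - mu_hat) / (phi_hat * sqrt(V(mu_hat)))  # residuals as in (A1)
  matrix(r, nrow = P, ncol = L, byrow = TRUE)      # row j = period j
}
set.seed(1)
P <- 3; L <- 4
mu_hat <- rep(5, P * L)                       # hypothetical fitted means
y <- rpois(P * L, mu_hat)                     # hypothetical responses, one unit
r_i <- pearson_matrix(y, mu_hat, phi_hat = 1, P = P, L = L)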

By Equation (A20) and the correlation assumption given in Equation (6), it is true that:

$$\begin{aligned}&E( \pmb {r}_i)=\pmb {0}_{P\times L} \nonumber \\&Corr\left( Vec( \pmb {r}_i)\right) =\pmb {\Psi } \otimes \pmb {R}_1(\pmb {\alpha }_1)\nonumber \\&Corr\left( Vec( \pmb {r}_i), Vec( \pmb {r}_{i'})\right) =\pmb {0}_{PL\times PL}\, \qquad i\ne i' \end{aligned}$$
(A27)

And defining \(\pmb {R}\) as:

$$\begin{aligned} \pmb {R}=(\pmb {r}_1,\ldots , \pmb {r}_n)_{P\times nL} \qquad \end{aligned}$$

and since \(\pmb {\Gamma }\) is orthogonal, then \(\pmb {\Gamma }\otimes \pmb {I}_L\) is also orthogonal. Thus (Srivastava et al. 2008):

$$\begin{aligned} \pmb {R}(\pmb {\Gamma }\otimes \pmb {I}_L)=\begin{pmatrix} \sqrt{n}\bar{\pmb {r}}&\vdots&\pmb {R}(\pmb {G}\otimes \pmb {I}_L) \end{pmatrix} \end{aligned}$$
(A28)

and according to Equation (A27), we get:

$$\begin{aligned} \pmb {R}\left( \pmb {I}_n\otimes \pmb { \Psi }^{-1} \right) \pmb {R}^T&=(\pmb {r}_1,\ldots , \pmb {r}_n)\left( \pmb {I}_n\otimes \pmb { \Psi }^{-1} \right) (\pmb {r}_1,\ldots , \pmb {r}_n)^T \\&=n\bar{\pmb {r}}\, \pmb { \Psi }^{-1} \bar{\pmb {r}}^T + \pmb {R}(\pmb {G}\otimes \pmb {I}_L)\left( \pmb {I}_{n-1}\otimes \pmb { \Psi }^{-1} \right) (\pmb {G}^T\otimes \pmb {I}_L )\pmb {R}^T\\&=n\bar{\pmb {r}}\, \pmb { \Psi }^{-1} \bar{\pmb {r}}^T+\pmb {Z}\left( \pmb {I}_{n-1}\otimes \pmb { \Psi }^{-1} \right) \pmb {Z}^T \end{aligned}$$

where \(\pmb {Z}\) is:

$$\begin{aligned}&\pmb {Z}_{P\times (n-1)L}=(\pmb {Z}_1, \ldots , \pmb {Z}_{(n-1)})= \pmb {R}(\pmb {G}\otimes \pmb {I}_L)=(\pmb {r}_1,\ldots , \pmb {r}_n)(\pmb {G}\otimes \pmb {I}_L) \end{aligned}$$
(A29)

where \(\bar{\pmb {r}}\) is the matrix of the average residuals defined in Eq. (A26) for each period, that is,

$$\begin{aligned} \bar{\pmb {r}}=\frac{1}{n}\sum _{i=1}^n \pmb {r}_i =\frac{1}{n}\sum _{i=1}^n\begin{pmatrix} r_{i11}&{} r_{i12}&{} \cdots &{} r_{i1L}\\ r_{i21}&{} r_{i22}&{} \cdots &{}r_{i2L}\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ r_{iP1}&{} r_{iP2}&{} \cdots &{}r_{iPL}\\ \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned}&\pmb {Z}_1=\begin{pmatrix} \frac{1}{\sqrt{2}}r_{111}-\frac{1}{\sqrt{2}}r_{211} &{} \cdots &{} \frac{1}{\sqrt{2}}r_{11L}-\frac{1}{\sqrt{2}}r_{21L}\\ \frac{1}{\sqrt{2}}r_{121}-\frac{1}{\sqrt{2}}r_{221} &{} \cdots &{} \frac{1}{\sqrt{2}}r_{12L}-\frac{1}{\sqrt{2}}r_{22L}\\ \vdots &{}\ddots &{} \vdots \\ \frac{1}{\sqrt{2}}r_{1P1}-\frac{1}{\sqrt{2}}r_{2P1} &{} \cdots &{} \frac{1}{\sqrt{2}}r_{1PL}-\frac{1}{\sqrt{2}}r_{2PL} \end{pmatrix}_{P\times L}\\&\pmb {Z}_2=\begin{pmatrix} \frac{1}{\sqrt{6}}\sum _{i=1}^2 r_{i11}-\frac{2}{\sqrt{6}}r_{311} &{} \cdots &{} \frac{1}{\sqrt{6}}\sum _{i=1}^2 r_{i1L}-\frac{2}{\sqrt{6}}r_{31L} \\ \frac{1}{\sqrt{6}}\sum _{i=1}^2 r_{i21}-\frac{2}{\sqrt{6}}r_{321} &{} \cdots &{} \frac{1}{\sqrt{6}}\sum _{i=1}^2 r_{i2L}-\frac{2}{\sqrt{6}}r_{32L} \\ \vdots &{}\ddots &{} \vdots \\ \frac{1}{\sqrt{6}}\sum _{i=1}^2 r_{iP1}-\frac{2}{\sqrt{6}}r_{3P1} &{} \cdots &{} \frac{1}{\sqrt{6}}\sum _{i=1}^2 r_{iPL}-\frac{2}{\sqrt{6}}r_{3PL} \\ \end{pmatrix}_{P\times L}\\&\vdots \\&\pmb {Z}_{(n-1)}=\\&\begin{pmatrix} {\scriptscriptstyle \frac{1}{\sqrt{n(n-1)}}\sum _{i=1}^{n-1} r_{i11}-\frac{n-1}{\sqrt{n(n-1)}}r_{n11}} &{} \cdots &{} {\scriptscriptstyle \frac{1}{\sqrt{n(n-1)}}\sum _{i=1}^{n-1}r_{i1L}-\frac{n-1}{\sqrt{n(n-1)}}r_{n1L}} \\ {\scriptscriptstyle \frac{1}{\sqrt{n(n-1)}}\sum _{i=1}^{n-1} r_{i21}-\frac{n-1}{\sqrt{n(n-1)}}r_{n21}} &{} \cdots &{} {\scriptscriptstyle \frac{1}{\sqrt{n(n-1)}}\sum _{i=1}^{n-1} r_{i2L}-\frac{n-1}{\sqrt{n(n-1)}}r_{n2L}} \\ \vdots &{}\ddots &{} \vdots \\ {\scriptscriptstyle \frac{1}{\sqrt{n(n-1)}}\sum _{i=1}^{n-1} r_{iP1}-\frac{n-1}{\sqrt{n(n-1)}}r_{nP1}} &{} \cdots &{} {\scriptscriptstyle \frac{1}{\sqrt{n(n-1)}}\sum _{i=1}^{n-1} r_{iPL}-\frac{n-1}{\sqrt{n(n-1)}}r_{nPL}} \\ \end{pmatrix}_{P\times L} \end{aligned}$$

Now by the properties of the Pearson residuals, we have that:

$$\begin{aligned}E(\pmb {Z}_1)&=E(\pmb {Z}_2) = \cdots =E(\pmb {Z}_{n-1}) =\pmb {0}_{P\times L} \end{aligned}$$

and by the properties given in Equation (A25), and because we assume that the experimental units are independent, that is, \(Corr(r_{ijk}, r_{i'j'k'})=0\) for all \(i\ne i'\), and that Equation (3) holds, then:

$$\begin{aligned}&Corr(r_{ijk}, r_{i'j'k'})=0 \qquad \forall i\ne i'\\&Corr(r_{ijk}, r_{ij'k'})=Corr(r_{i'jk}, r_{i'j'k'}), \qquad \forall i\ne i'\\ \qquad \\&Corr\left( \frac{1}{\sqrt{2}}r_{111}-\frac{1}{\sqrt{2}}r_{211}, \frac{1}{\sqrt{2}}r_{121}-\frac{1}{\sqrt{2}}r_{221}\right) \\&=\frac{1}{2}Corr(r_{111}, r_{121})+\frac{1}{2}Corr(r_{211}, r_{221})\\&=Corr(r_{111}, r_{121})\\ \qquad \\&Corr\left( \frac{1}{\sqrt{n(n-1)}}\sum _{i=1}^{n-1} r_{i11}-\frac{n-1}{\sqrt{n(n-1)}}r_{n11}, \right. \\&\left. \frac{1}{\sqrt{n(n-1)}}\sum _{i=1}^{n-1} r_{i21}-\frac{n-1}{\sqrt{n(n-1)}}r_{n21}\right) \\&=Corr(r_{111}, r_{121})\\ \end{aligned}$$

furthermore,

$$\begin{aligned} Var(Vec(\pmb {Z}_1))&=Var\left\{ \begin{pmatrix} \frac{1}{\sqrt{2}}r_{111}-\frac{1}{\sqrt{2}}r_{211} \\ \frac{1}{\sqrt{2}}r_{121}-\frac{1}{\sqrt{2}}r_{221}\\ \vdots \\ \frac{1}{\sqrt{2}}r_{1P1}-\frac{1}{\sqrt{2}}r_{2P1}\\ \frac{1}{\sqrt{2}}r_{112}-\frac{1}{\sqrt{2}}r_{212}\\ \vdots \\ \frac{1}{\sqrt{2}}r_{1PL}-\frac{1}{\sqrt{2}}r_{2PL} \end{pmatrix}_{PL\times 1} \right\} \\&=\begin{pmatrix} 1 &{} Corr(r_{111}, r_{121}) &{} \cdots &{} Corr(r_{111}, r_{1PL})\\ Corr(r_{111}, r_{121}) &{} 1 &{} \cdots &{} Corr(r_{121}, r_{1PL})\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ Corr(r_{111}, r_{1PL}) &{} Corr(r_{121}, r_{1PL}) &{}\cdots &{}1 \end{pmatrix}_{PL\times PL}\\&=\pmb { \Psi } \otimes \pmb {R}_1(\pmb {\alpha }_1) \end{aligned}$$
$$\begin{aligned} Var(Vec(\pmb {Z}_1))=Var(Vec(\pmb {Z}_2))=\cdots = Var(Vec(\pmb {Z}_{(n-1)}))=\pmb { \Psi } \otimes \pmb {R}_1(\pmb {\alpha }_1) \end{aligned}$$
(A30)
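Property (A30) can also be checked by simulation: an orthonormal contrast of i.i.d. mean-zero residual matrices preserves the Kronecker covariance. A small Monte Carlo sketch, using MASS::mvrnorm and the same illustrative \(\pmb {\Psi }\) and \(\pmb {R}_1\) values as above (not estimates):

# Monte Carlo check of (A30): Z_1 = (r_1 - r_2)/sqrt(2) keeps covariance
# Psi x R1 when Vec(r_1), Vec(r_2) are i.i.d. with that covariance.
library(MASS)
set.seed(2)
P <- 3; L <- 4
Psi <- matrix(0.4, P, P); diag(Psi) <- 1
R1 <- 0.6^abs(outer(1:L, 1:L, "-"))
Sigma <- kronecker(Psi, R1)
B <- 5000
Z1 <- t(replicate(B, {
  v1 <- mvrnorm(1, rep(0, P * L), Sigma)  # Vec(r_1)
  v2 <- mvrnorm(1, rep(0, P * L), Sigma)  # Vec(r_2)
  (v1 - v2) / sqrt(2)                     # Vec(Z_1)
}))
max(abs(cov(Z1) - Sigma))                 # small for large B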

By the central limit theorem, we get that:

$$\begin{aligned} \sqrt{n}\,Vec(\bar{\pmb {r}})&\xrightarrow [n\rightarrow \infty ]{d} N_{LP}(\pmb {0}, \pmb { \Psi } \otimes \pmb {R}_1(\pmb {\alpha }_1)) \end{aligned}$$
(A31)

By equations (A22) and (A30) it follows that:

$$\begin{aligned} Vec(\pmb {Z}_j)&\xrightarrow [n\rightarrow \infty ]{d} N_{LP}(\pmb {0},\pmb { \Psi } \otimes \pmb {R}_1(\pmb {\alpha }_1)) \end{aligned}$$
(A32)

and by equations (A25), (A31) and (A32) that:

$$\begin{aligned} Cov&(Vec(\bar{\pmb {r}}),Vec(\pmb {Z}_j) )=\pmb {0}\\ Cov&(Vec(\pmb {Z}_i) ,Vec(\pmb {Z}_j) )=\pmb {0} \end{aligned}$$

and partitioning \(\pmb {Z}_1\) as follows:

$$\begin{aligned} \pmb {Z}_1&=\begin{pmatrix} \frac{1}{\sqrt{2}}r_{111}-\frac{1}{\sqrt{2}}r_{211} &{} \cdots &{} \frac{1}{\sqrt{2}}r_{11L}-\frac{1}{\sqrt{2}}r_{21L}\\ \frac{1}{\sqrt{2}}r_{121}-\frac{1}{\sqrt{2}}r_{221} &{} \cdots &{} \frac{1}{\sqrt{2}}r_{12L}-\frac{1}{\sqrt{2}}r_{22L}\\ \vdots &{}\ddots &{} \vdots \\ \frac{1}{\sqrt{2}}r_{1P1}-\frac{1}{\sqrt{2}}r_{2P1} &{} \cdots &{} \frac{1}{\sqrt{2}}r_{1PL}-\frac{1}{\sqrt{2}}r_{2PL} \end{pmatrix}_{P\times L}\\&=\begin{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}}r_{111}-\frac{1}{\sqrt{2}}r_{211}\\ \frac{1}{\sqrt{2}}r_{121}-\frac{1}{\sqrt{2}}r_{221} \\ \vdots \\ \frac{1}{\sqrt{2}}r_{1P1}-\frac{1}{\sqrt{2}}r_{2P1} \end{pmatrix}_{P\times 1}&{} \cdots &{}\begin{pmatrix} \frac{1}{\sqrt{2}}r_{11L}-\frac{1}{\sqrt{2}}r_{21L}\\ \frac{1}{\sqrt{2}}r_{12L}-\frac{1}{\sqrt{2}}r_{22L}\\ \vdots \\ \frac{1}{\sqrt{2}}r_{1PL}-\frac{1}{\sqrt{2}}r_{2PL} \end{pmatrix}_{P\times 1} \end{pmatrix}\\&=(\pmb {z}_{(1)1}, \ldots , \pmb {z}_{(L)1}) \end{aligned}$$

Then it follows that

$$\begin{aligned} E(\pmb {z}_{(j)1} \pmb {z}^T_{(j)1})=\Psi _{jj}\pmb {R}_1(\pmb {\alpha }_1) \qquad E(\pmb {Z}_1 \pmb {Z}^T_1)=(trace(\pmb {\Psi }))\pmb {R}_1(\pmb {\alpha }_1) \end{aligned}$$

Similarly, it is found that:

$$\begin{aligned}&E(\pmb {z}_{(j)1} \pmb {z}^T_{(j)1})=\Psi _{jj}\pmb {R}_1(\pmb {\alpha }_1)\nonumber \\&E(\pmb {z}_{(j)2} \pmb {z}^T_{(j)2})=\Psi _{jj}\pmb {R}_1(\pmb {\alpha }_1)\nonumber \\ \vdots \nonumber \\&E(\pmb {z}_{(j)(n-1)} \pmb {z}^T_{(j)(n-1)})=\Psi _{jj}\pmb {R}_1(\pmb {\alpha }_1) \end{aligned}$$
(A33)
$$\begin{aligned}&E(\pmb {Z}_1 \pmb {Z}^T_1)=(trace(\pmb {\Psi }))\pmb {R}_1(\pmb {\alpha }_1)\nonumber \\ \vdots \nonumber \\&E(\pmb {Z}_{(n-1)} \pmb {Z}^T_{(n-1)})=(trace(\pmb {\Psi }))\pmb {R}_1(\pmb {\alpha }_1) \end{aligned}$$
(A34)

Therefore, since \(\pmb {\Psi }_{jj}=1\) for all \(j=1,\ldots , P\), so that \(trace(\pmb {\Psi })=P\), from Equation (A33) we get:

$$\begin{aligned} E\left( \pmb {R}_1(\pmb {\alpha }_1)^{-\frac{1}{2}}\pmb {z}_{(j')k} \pmb {z}^T_{(j)k} \pmb {R}_1(\pmb {\alpha }_1)^{-\frac{1}{2}}\right) =\pmb {\Psi }_{jj'}\pmb {I}_P, \qquad \forall k=1, \ldots , n-1 \end{aligned}$$
(A35)
$$\begin{aligned} Cov[\pmb {R}_1(\pmb {\alpha }_1)^{\frac{1}{2}} \pmb {z}_{(k)j} \pmb {z}^T_{(i)j} \pmb {R}_1(\pmb {\alpha }_1)^{\frac{1}{2}}, \pmb {R}_1(\pmb {\alpha }_1)^{\frac{1}{2}} \pmb {z}_{(k)j'} \pmb {z}^T_{(i)j'} \pmb {R}_1(\pmb {\alpha }_1)^{\frac{1}{2}} ]=\pmb {0} \end{aligned}$$
(A36)

By Theorem 2 in Liang and Zeger (1986), it is known that \(\pmb {R}_1(\hat{\pmb {\alpha }}_1)\) is consistent and asymptotically unbiased for \(\pmb {R}_1(\pmb { \alpha }_1)\); hence, by equations (A35), (A31) and (A32), we have that

$$\begin{aligned} {\hat{\psi }}_{jj'}=\frac{1}{n} \sum _{i=1}^n tr\left( \pmb {R}_1(\pmb {{\hat{\alpha }}}_1)^{-1}(\pmb {r}_{(j)i}-\bar{\pmb {r}}_{(j)})(\pmb {r}_{(j')i}-\bar{\pmb {r}}_{(j')})^T\right) \end{aligned}$$
(A37)

is a consistent and asymptotically unbiased estimator for \(\Psi _{jj'}\), which proves the theorem. \(\square \)
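For completeness, the estimator in (A37) can be sketched in R, assuming the residual matrices \(\pmb {r}_i\) of (A26) are available as a list and that \(\pmb {R}_1(\hat{\pmb {\alpha }}_1)\) has already been estimated. The additional normalisation by L below is an assumption made here so that the diagonal of \(\hat{\pmb {\Psi }}\) is close to one; any scaling stated in the main text takes precedence:

# Sketch of (A37): estimate Psi from the P x L Pearson-residual matrices
# r_list (a list of length n) and an estimate R1_hat of R_1(alpha_1).
estimate_Psi <- function(r_list, R1_hat) {
  n <- length(r_list)
  P <- nrow(r_list[[1]]); L <- ncol(r_list[[1]])
  r_bar <- Reduce(`+`, r_list) / n               # mean residual matrix
  R1_inv <- solve(R1_hat)
  Psi_hat <- matrix(0, P, P)
  for (j in 1:P) for (jp in 1:P) {
    s <- 0
    for (i in 1:n) {
      dj  <- r_list[[i]][j, ]  - r_bar[j, ]      # r_(j)i  - r_bar_(j)
      djp <- r_list[[i]][jp, ] - r_bar[jp, ]     # r_(j')i - r_bar_(j')
      s <- s + sum(diag(R1_inv %*% (dj %o% djp)))
    }
    Psi_hat[j, jp] <- s / (n * L)                # division by L is an assumption
  }
  Psi_hat
}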

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Cruz, N.A., Melo, O.O. & Martinez, C.A. A correlation structure for the analysis of Gaussian and non-Gaussian responses in crossover experimental designs with repeated measures. Stat Papers 65, 263–290 (2024). https://doi.org/10.1007/s00362-022-01391-z

