Optimal crossover designs in a model with self and mixed carryover effects with correlated errors

Metrika

Abstract

We determine optimal crossover designs for the estimation of direct treatment effects in a model with mixed and self carryover effects. The model also assumes that the errors within each experimental unit are correlated, following a stationary first-order autoregressive process. The paper considers situations where the number of periods for each experimental unit is at least four and the number of treatments is greater than or equal to the number of periods.


References

  • Afsarinejad K, Hedayat AS (2002) Repeated measurements designs for a model with self and simple mixed carryover effects. J Stat Plan Inference 106:449–459


  • Hedayat AS, Yan Z (2008) Crossover designs based on type I orthogonal arrays for a self and simple mixed carryover effects model with correlated errors. J Stat Plan Inference 138:2201–2213


  • Kiefer J (1975) Construction and optimality of generalized Youden designs. In: Srivastava JN (ed) A survey of statistical design and linear models. North-Holland, Amsterdam, pp 333–353


  • Kunert J (1983) Optimal design and refinement of the linear model with applications to repeated measurements designs. Ann Stat 11:247–257


  • Kunert J (1985) Optimal repeated measurements designs for correlated observations and analysis by weighted least squares. Biometrika 72:375–389


  • Kunert J, Martin R (2000) On the determination of optimal designs for an interference model. Ann Stat 28:1728–1742


  • Kunert J, Stufken J (2002) Optimal crossover designs in a model with self and mixed carryover effects. J Am Stat Assoc 97:898–906


  • Kunert J, Stufken J (2008) Optimal crossover designs for two treatments in the presence of mixed and self carryover effects. J Am Stat Assoc 103:1641–1647


  • Kushner HB (1997) Optimal repeated measurements designs: the linear optimality equations. Ann Stat 25:2328–2344



Acknowledgments

Financial support by the Deutsche Forschungsgemeinschaft (SFB 823, Statistik nichtlinearer dynamischer Prozesse) is gratefully acknowledged.

Corresponding author

Correspondence to Adrian Wilk.

Appendix: Proofs

Lemma 1

Consider the matrix \(W_\lambda \), defined in (4). Assume \(p \ge 4\) and \(\lambda \in [\lambda ^*(p),1)\) with \(\lambda ^*(p)\) as in Proposition 1. We then get for the entries \(w_{ij}\) of \(W_\lambda \) that \(w_{ii} > 0\) for \(1 \le i \le p\) and \(w_{ij} \le 0\) for \(1 \le i \ne j \le p\).

Proof

It was shown by Kunert (1985) for \(\lambda \in [\lambda ^*(p),1)\) that \(w_{ij}\le 0\) for \(i \ne j\) and \(w_{ii} \ge 0\).

To see that the diagonal elements are in fact positive, define \((1-\lambda )^k =: L_k\). We observe \(\mathbf{1}_\mathbf{p}^\mathbf{T} {\varvec{\varLambda }^{-1}} = [L_1,L_2, \dots ,L_2,L_1]\) and \(\mathbf{1}_\mathbf{p}^\mathbf{T} {\varvec{\varLambda }^{-1}} \mathbf{1}_\mathbf{p} = L_1(p-\lambda (p-2)) = z_p,\) say. For \(2 \le i \le p-1\) the substitution \(p=v+4\) yields

$$\begin{aligned} w_{ii} = 1+\lambda ^2-\frac{L_4}{z_p}= \frac{(1-\lambda )(1+\lambda ^2)v-\lambda ^3+\lambda ^2+\lambda +3}{(1-\lambda )v-2\lambda +4} \end{aligned}$$

which is positive for \(v\ge 0\) and therefore also for \(p \ge 4\).

The entries \(w_{11}\) and \(w_{pp}\) are positive because, using the substitution \(p=v+4\) again, we have

$$\begin{aligned} w_{11} = w_{pp} = 1- \frac{L_2}{z_p}= \frac{(1-\lambda )v-\lambda +3}{(1-\lambda )v-2\lambda +4} \end{aligned}$$

which is positive. \(\square \)
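As a numerical sanity check of these closed-form entries, the sketch below constructs \(W_\lambda \) from an assumed form of \(\varLambda ^{-1}\): the standard scaled tridiagonal inverse of a stationary AR(1) correlation matrix, with diagonal \([1, 1+\lambda ^2, \dots , 1+\lambda ^2, 1]\) and off-diagonals \(-\lambda \). This construction reproduces the row sums \([L_1, L_2, \dots , L_2, L_1]\) and the value of \(z_p\) stated above, but since Eq. (4) is not reproduced in this excerpt, it is an assumption rather than the paper's definition.

```python
import numpy as np

def W_lam(p, lam):
    """W_lambda = Lam_inv - (Lam_inv 1)(1^T Lam_inv) / z_p.

    Lam_inv is assumed to be the scaled tridiagonal AR(1) inverse:
    diagonal [1, 1+lam^2, ..., 1+lam^2, 1], off-diagonals -lam.
    """
    A = np.diag([1.0] + [1 + lam**2] * (p - 2) + [1.0])
    A -= lam * (np.eye(p, k=1) + np.eye(p, k=-1))
    u = A @ np.ones(p)                    # row sums: [L_1, L_2, ..., L_2, L_1]
    return A - np.outer(u, u) / u.sum()   # u.sum() equals z_p

p, lam = 6, 0.6
W = W_lam(p, lam)
v = p - 4                                 # the substitution p = v + 4
den = (1 - lam) * v - 2 * lam + 4
w_inner = ((1 - lam) * (1 + lam**2) * v - lam**3 + lam**2 + lam + 3) / den
w_edge = ((1 - lam) * v - lam + 3) / den
print(np.isclose(W[1, 1], w_inner), np.isclose(W[0, 0], w_edge))
```

With this construction the column sums of \(W_\lambda \) are zero by design, which is the property used repeatedly in the proofs below.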

Lemma 2

For \(t \ge p \ge 4\) and \(\lambda \in [\lambda ^*(p),1)\) consider an arbitrary sequence class \(\ell \). Then \(c_{11}(1) \ge c_{11}(\ell )\) and \(c_{11}(1)+2c_{12}(1) \ge c_{11}(\ell )+2c_{12}(\ell ).\)

Proof

Assume \(u\) is a unit receiving a sequence \(s = [s_1,\dots ,s_p]\) from class \(\ell \). Then

$$\begin{aligned} c_{11}(\ell )={{\mathrm{tr}}}(\mathbf{B}_\mathbf{t}\mathbf{T}_\mathbf{du}^\mathbf{T}\mathbf{W}_{\lambda } \mathbf{T}_\mathbf{du}) \end{aligned}$$

and

$$\begin{aligned} c_{12}(\ell )={{\mathrm{tr}}}(\mathbf{B}_\mathbf{t}\mathbf{T}_\mathbf{du}^\mathbf{T}\mathbf{W}_{\lambda } \mathbf{M}_\mathbf{du}). \end{aligned}$$

Since \(\mathbf{W}_{\lambda }\) has column-sums zero and since \(\mathbf{T}_\mathbf{du}\mathbf{1}_\mathbf{t} = \mathbf{1}_\mathbf{p}\), it follows that

$$\begin{aligned} c_{11}(\ell )={{\mathrm{tr}}}(\mathbf{T}_\mathbf{du}^\mathbf{T}\mathbf{W}_{\lambda } \mathbf{T}_\mathbf{du}) \quad {\text {and}} \quad c_{12}(\ell )={{\mathrm{tr}}}(\mathbf{T}_\mathbf{du}^\mathbf{T}\mathbf{W}_{\lambda } \mathbf{M}_\mathbf{du}). \end{aligned}$$

For \(1 \le i \le t\) denote the \(i\)-th column of \(\mathbf{T}_\mathbf{du}\) by \(\mathbf{t}_\mathbf{du}^\mathbf{(i)}\) and the \(i\)-th column of \(\mathbf{M}_\mathbf{du}\) by \(\mathbf{m}_\mathbf{du}^\mathbf{(i)}\). Observe that

  • the \(j\)-th entry of \(\mathbf{t}_\mathbf{du}^\mathbf{(i)}\) is 1 if \(s_j=i\) and 0 otherwise,

  • the \(j\)-th entry of \(\mathbf{m}_\mathbf{du}^{(i)}\) is 1 if \(j\ge 2\), \(s_{j-1}=i\) and \(s_{j-1} \ne s_j\), and 0 otherwise.

For any given \(i\) it follows that

$$\begin{aligned} \mathbf{t}_\mathbf{du}^\mathbf{(i)T}\mathbf{W}_{\lambda } \mathbf{t}_\mathbf{du}^\mathbf{(i)} = \sum _{j=1}^p I(s_j=i)\sum _{r=1}^p w_{jr} I(s_r=i), \end{aligned}$$

where \(I(\cdot )\) denotes the indicator function, which equals 1 if its argument is true and 0 otherwise. Hence,

$$\begin{aligned} c_{11}(\ell )&= \sum _{i=1}^t \mathbf{t}_\mathbf{du}^\mathbf{(i)T}\mathbf{W}_{\lambda } \mathbf{t}_\mathbf{du}^\mathbf{(i)} \nonumber \\&= \sum _{i=1}^t\left( \sum _{j=1}^p\sum _{r=1}^p w_{jr} I(s_j=s_r=i)\right) =\sum _{j=1}^p\sum _{r=1}^p w_{jr} I(s_j=s_r) \nonumber \\&= \sum _{j=1}^p w_{jj} + 2\sum _{j=1}^{p-1}\sum _{r=j+1}^p w_{jr} I(s_j=s_r) \\&\le \sum _{j=1}^p w_{jj}, \nonumber \end{aligned}$$
(12)

because all \(w_{jr}\le 0\) if \(j\ne r\). It is easy to see that \(c_{11}(1)=\sum _{j=1}^p w_{jj}\) and, therefore, we have proved that

$$\begin{aligned} c_{11}(\ell ) \le c_{11}(1). \end{aligned}$$

We can also use Eq. (12) to derive the slightly sharper bound

$$\begin{aligned} c_{11}(\ell ) \le \sum _{j=1}^p w_{jj} +2 \sum _{j=1}^{p-1} w_{j,j+1} I(s_j=s_{j+1}). \end{aligned}$$
(13)

On the other hand,

$$\begin{aligned} \mathbf{t}_\mathbf{du}^\mathbf{(i)T}\mathbf{W}_{\lambda } \mathbf{m}_\mathbf{du}^\mathbf{(i)}=\sum _{j=1}^p I(s_j=i)\sum _{r=2}^p w_{jr} I(s_r \ne s_{r-1}=i). \end{aligned}$$

Therefore,

$$\begin{aligned} c_{12}(\ell )&= \sum _{i=1}^t\left( \sum _{j=1}^p\sum _{r=2}^p w_{jr} I(s_j=i,s_r \ne s_{r-1}=i)\right) \\&= \sum _{j=1}^p\sum _{r=2}^p w_{jr} I(s_j=s_{r-1} \ne s_r)\\&= \sum _{j=1}^p\sum _{r \ne j} w_{jr} I(s_j=s_{r-1} \ne s_r), \end{aligned}$$

because \(s_j = s_{r-1} \ne s_r\) can never hold for \(r = j\). On the other hand, if \(r = j+1\), then \(s_j = s_{r-1} \ne s_r\) becomes \(s_j \ne s_{j+1}\). Making use of the fact that \(w_{jr} \le 0\) for all \(r \ne j\), we conclude that

$$\begin{aligned} c_{12}(\ell ) \le \sum _{j=1}^{p-1} w_{j,j+1} I(s_j \ne s_{j+1}). \end{aligned}$$

Combining this with Eq. (13), we conclude that

$$\begin{aligned} c_{11}(\ell )+2c_{12}(\ell ) \le \sum _{j=1}^p w_{jj} \!+\!2 \sum _{j=1}^{p-1} w_{j,j+1} I(s_j=s_{j+1}) \!+\! 2 \sum _{j=1}^{p-1} w_{j,j+1} I(s_j \ne s_{j+1}). \end{aligned}$$

Since it is easy to see that

$$\begin{aligned} c_{12}(1) = \sum _{j=1}^{p-1} w_{j,j+1}, \end{aligned}$$
(14)

we have proved that \(c_{11}(\ell )+2c_{12}(\ell ) \le c_{11}(1)+2c_{12}(1)\). \(\square \)
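The counting argument above can be checked numerically. The sketch below is a hypothetical illustration: it reuses the assumed AR(1)-based construction of \(W_\lambda \) from the Lemma 1 check, builds the incidence matrices \(\mathbf{T}_\mathbf{du}\) and \(\mathbf{M}_\mathbf{du}\) of a sequence directly from the two bullet points above, and compares a sequence with repeats against a class-1 sequence (all treatments distinct).

```python
import numpy as np

def W_lam(p, lam):
    # Assumed scaled AR(1) construction (see the Lemma 1 sketch).
    A = np.diag([1.0] + [1 + lam**2] * (p - 2) + [1.0])
    A -= lam * (np.eye(p, k=1) + np.eye(p, k=-1))
    u = A @ np.ones(p)
    return A - np.outer(u, u) / u.sum()

def c11_c12(s, W, t):
    """c11 = tr(T^T W T) and c12 = tr(T^T W M); the projection B_t drops
    out because W has zero column sums. s uses 0-based treatment labels."""
    p = len(s)
    T = np.zeros((p, t))
    M = np.zeros((p, t))
    for j, trt in enumerate(s):
        T[j, trt] = 1.0
        if j >= 1 and s[j - 1] != trt:    # mixed carryover only on a change
            M[j, s[j - 1]] = 1.0
    return np.trace(T.T @ W @ T), np.trace(T.T @ W @ M)

p, t, lam = 5, 5, 0.6
W = W_lam(p, lam)
c11_1, c12_1 = c11_c12([0, 1, 2, 3, 4], W, t)   # class 1: all distinct
c11_l, c12_l = c11_c12([0, 1, 1, 2, 0], W, t)   # a sequence with repeats
print(c11_1 >= c11_l, c11_1 + 2 * c12_1 >= c11_l + 2 * c12_l)
```

For the class-1 sequence, \(c_{11}(1)\) agrees with \(\sum _{j=1}^p w_{jj}\) (the trace of \(W_\lambda \)) and \(c_{12}(1)\) with the superdiagonal sum in Eq. (14), as claimed in the proof.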

Lemma 3

Under the conditions of Lemma 2 we have \(c_{22}(1) \ge c_{22}(\ell )\).

Proof

If \(\mathbf{M}_\mathbf{du}\) is as in the proof of Lemma 2, then

$$\begin{aligned} c_{22}(\ell )={{\mathrm{tr}}}(\mathbf{B}_\mathbf{t} \mathbf{M}_\mathbf{du}^\mathbf{T} \mathbf{W}_{\lambda } \mathbf{M}_\mathbf{du})={{\mathrm{tr}}}(\mathbf{M}_\mathbf{du}^\mathbf{T} \mathbf{W}_{\lambda } \mathbf{M}_\mathbf{du}) - \frac{1}{t}\mathbf{1}_\mathbf{t}^\mathbf{T} \mathbf{M}_\mathbf{du}^\mathbf{T} \mathbf{W}_{\lambda } \mathbf{M}_\mathbf{du} \mathbf{1}_\mathbf{t}. \end{aligned}$$

Observe that \(\mathbf{M}_\mathbf{du} \mathbf{1}_\mathbf{t}\) is a \(p\)-dimensional vector with entries 1 or 0. Its first element is always 0; for \(j \ge 2\), the \(j\)-th element is 0 if \(s_{j-1}=s_j\) and 1 otherwise. Hence,

$$\begin{aligned} \mathbf{1}_\mathbf{t}^\mathbf{T} \mathbf{M}_\mathbf{du}^\mathbf{T} \mathbf{W}_{\lambda } \mathbf{M}_\mathbf{du} \mathbf{1}_\mathbf{t}&= \sum _{j=2}^p\sum _{r=2}^p w_{jr} I(s_{j-1}\ne s_j)I(s_{r-1}\ne s_r)\\&= \sum _{j=2}^p w_{jj} I(s_{j-1}\ne s_j) + 2\sum _{j=2}^{p-1}\sum _{r=j+1}^p w_{jr} I(s_{j-1}\ne s_j)I(s_{r-1}\ne s_r). \end{aligned}$$

Defining \(\mathbf{m}_\mathbf{du}^\mathbf{(i)}\) as in the proof of Lemma 2, we get

$$\begin{aligned} {{\mathrm{tr}}}(\mathbf{M}_\mathbf{du}^\mathbf{T} \mathbf{W}_{\lambda } \mathbf{M}_\mathbf{du})&= \sum _{i=1}^t \mathbf{m}_\mathbf{du}^\mathbf{(i)T}\mathbf{W}_{\lambda } \mathbf{m}_\mathbf{du}^\mathbf{(i)}\\&= \sum _{i=1}^t \left( \sum _{j=2}^p\sum _{r=2}^p w_{jr} I(s_j \ne s_{j-1}=i=s_{r-1}\ne s_r)\right) \\&= \sum _{j=2}^p\sum _{r=2}^p w_{jr} I(s_j\ne s_{j-1}= s_{r-1} \ne s_r)\\&= \sum _{j=2}^p w_{jj} I(s_j\ne s_{j-1}) + 2\sum _{j=2}^{p-1}\sum _{r=j+1}^p w_{jr} I(s_j\ne s_{j-1}= s_{r-1} \ne s_r)\\&\le \sum _{j=2}^p w_{jj} I(s_j\ne s_{j-1}). \end{aligned}$$

Combining the two parts, we get

$$\begin{aligned} c_{22}(\ell )&\le \sum _{j=2}^p w_{jj} I(s_j\ne s_{j-1}) - \frac{1}{t}\left( \sum _{j=2}^p w_{jj} I(s_j\ne s_{j-1}) \right. \\&\left. + \,2\sum _{j=2}^{p-1}\sum _{r=j+1}^p w_{jr} I(s_{j-1}\ne s_j)I(s_{r-1}\ne s_r) \right) \\&= \frac{t-1}{t} \sum _{j=2}^p w_{jj} I(s_j\ne s_{j-1}) - \frac{2}{t}\sum _{j=2}^{p-1}\sum _{r=j+1}^p w_{jr}I(s_{j-1}\ne s_j)I(s_{r-1}\ne s_r)\\&\le \frac{t-1}{t} \sum _{j=2}^p w_{jj} - \frac{2}{t}\sum _{j=2}^{p-1}\sum _{r=j+1}^p w_{jr}, \end{aligned}$$

where the last inequality uses the fact that \(w_{jj}>0\) and \(w_{jr} \le 0\) for \(j \ne r\), by Lemma 1. It is easy to see that this bound for \(c_{22}(\ell )\) equals \(c_{22}(1)\). \(\square \)
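A quick numerical check of this inequality, again using the assumed AR(1)-based construction of \(W_\lambda \) from the Lemma 1 sketch and arbitrarily chosen illustrative sequences:

```python
import numpy as np

def W_lam(p, lam):
    # Assumed scaled AR(1) construction (see the Lemma 1 sketch).
    A = np.diag([1.0] + [1 + lam**2] * (p - 2) + [1.0])
    A -= lam * (np.eye(p, k=1) + np.eye(p, k=-1))
    u = A @ np.ones(p)
    return A - np.outer(u, u) / u.sum()

def c22(s, W, t):
    """c22 = tr(B_t M^T W M) with B_t = I - J/t and M the mixed-carryover
    incidence matrix of the sequence s (0-based treatment labels)."""
    p = len(s)
    M = np.zeros((p, t))
    for j in range(1, p):
        if s[j - 1] != s[j]:
            M[j, s[j - 1]] = 1.0
    B = np.eye(t) - np.ones((t, t)) / t
    return np.trace(B @ M.T @ W @ M)

p, t, lam = 5, 5, 0.6
W = W_lam(p, lam)
print(c22(list(range(p)), W, t) >= c22([0, 1, 1, 2, 0], W, t))
```

For the class-1 sequence the direct computation agrees with the closed-form bound \(\frac{t-1}{t}\sum _{j=2}^p w_{jj} - \frac{2}{t}\sum _{j=2}^{p-1}\sum _{r=j+1}^p w_{jr}\) derived above.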

Lemma 4

Under the conditions of Lemma 2 consider an \(x \in [0,1]\). We then have that

$$\begin{aligned} c_{11}(1) + 2xc_{12}(1) \ge c_{11}(\ell ) + 2xc_{12}(\ell ). \end{aligned}$$

Proof

Define \(f_{(\ell )}(x) := c_{11}(\ell ) + 2xc_{12}(\ell )\). By linearity of \(f_{(\ell )}\), it suffices to show that \(f_{(1)}(0) \ge f_{(\ell )}(0)\) and \(f_{(1)}(1) \ge f_{(\ell )}(1)\) for \(1 \le \ell \le K\). Both inequalities hold by Lemma 2. \(\square \)

Lemma 5

Under the conditions of Lemma 2, we have that \(c_{22}(1) > 0\) and the function \(h_{1}{(x,0)}\) has a unique minimum at

$$\begin{aligned} x^* = -\frac{c_{12}(1)}{c_{22}(1)} \in [0,1]. \end{aligned}$$

Proof

It follows from the proof of Lemma 3 that

$$\begin{aligned} c_{22}(1) = \frac{t-1}{t} \sum _{j=2}^p w_{jj} - \frac{2}{t} \sum _{j=2}^{p-1} \sum _{r=j+1}^p w_{jr}. \end{aligned}$$

Making use of Lemma 1, we see that \(c_{22}(1) > 0.\) This implies that \(h_1(x,0)=c_{11}(1)+2xc_{12}(1)+x^2c_{22}(1)\) has a unique minimum at the point \(x^*\). It only remains to show that \( 0 \le x^* \le 1\).

For each \(j\), it holds that \(\sum _{r=1}^p w_{jr} = 0.\) This implies that

$$\begin{aligned} \frac{t-1}{t} \sum _{j=2}^p w_{jj}&= -\frac{t-1}{t} \sum _{j=2}^p \left( \sum _{r=1}^{j-1}w_{jr}+\sum _{r=j+1}^{p}w_{jr}\right) \\&= -\frac{t-1}{t} \sum _{j=2}^p \sum _{r=1}^{j-1}w_{jr} -\frac{t-1}{t} \sum _{j=2}^{p-1} \sum _{r=j+1}^{p}w_{jr}. \end{aligned}$$

We therefore can rewrite \(c_{22}(1)\) as

$$\begin{aligned} c_{22}(1) = -\frac{t-1}{t}\sum _{j=2}^p \sum _{r=1}^{j-1}w_{jr} - \frac{t+1}{t}\sum _{j=2}^{p-1} \sum _{r=j+1}^{p}w_{jr}. \end{aligned}$$

Since all \(w_{jr}\) in this sum are non-positive, we get

$$\begin{aligned} c_{22}(1)&\ge -\frac{t-1}{t}w_{21} - \frac{t+1}{t}\sum _{j=2}^{p-1} w_{j,j+1}\\&\ge -\frac{t-1}{t}w_{21} - \sum _{j=2}^{p-1} w_{j,j+1} - \frac{1}{t}w_{p-1,p}. \end{aligned}$$

Observing that \(w_{p-1,p}=w_{21}=w_{12}\), we conclude that

$$\begin{aligned} c_{22}(1) \ge - \sum _{j=1}^{p-1} w_{j,j+1}. \end{aligned}$$

Since Eq. (14) gives \(c_{12}(1) = \sum _{j=1}^{p-1} w_{j,j+1} \le 0\), it follows that \(0 \le -c_{12}(1) \le c_{22}(1)\). This completes the proof. \(\square \)
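Numerically, with the assumed AR(1)-based construction of \(W_\lambda \) from the Lemma 1 sketch, \(x^*\) indeed lands in \([0,1]\). The sketch takes \(c_{12}(1)\) from Eq. (14) and \(c_{22}(1)\) from the expression derived in the proof of Lemma 3:

```python
import numpy as np

def W_lam(p, lam):
    # Assumed scaled AR(1) construction (see the Lemma 1 sketch).
    A = np.diag([1.0] + [1 + lam**2] * (p - 2) + [1.0])
    A -= lam * (np.eye(p, k=1) + np.eye(p, k=-1))
    u = A @ np.ones(p)
    return A - np.outer(u, u) / u.sum()

p, t, lam = 5, 5, 0.6
W = W_lam(p, lam)
# Eq. (14): c12(1) is the sum of the first superdiagonal of W.
c12_1 = sum(W[j, j + 1] for j in range(p - 1))
# Proof of Lemma 3 (0-based indices: diagonal entries w_22,...,w_pp and
# the off-diagonal entries with 2 <= j < r <= p in the paper's notation):
c22_1 = (t - 1) / t * sum(W[j, j] for j in range(1, p)) \
    - 2 / t * sum(W[j, r] for j in range(1, p - 1) for r in range(j + 1, p))
x_star = -c12_1 / c22_1
print(round(x_star, 3), 0 <= x_star <= 1)
```

The chain of inequalities in the proof corresponds to the checks \(c_{22}(1) > 0\) and \(-c_{12}(1) \le c_{22}(1)\).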

Lemma 6

Under the conditions of Lemma 2 we get for any \(x \in [0,1]\) that

$$\begin{aligned} h_{1}{(x,0)} \ge h_{\ell }{(x,0)}. \end{aligned}$$

Proof

Making use of Lemmas 3 and 4 we observe that

$$\begin{aligned} h_{1}{(x,0)}&= c_{11}(1) + 2xc_{12}(1) + x^2c_{22}(1) \\&\ge c_{11}(\ell ) + 2xc_{12}(\ell ) + x^2c_{22}(\ell ) \\&= h_{\ell }{(x,0)}. \end{aligned}$$

\(\square \)


About this article


Cite this article

Wilk, A., Kunert, J. Optimal crossover designs in a model with self and mixed carryover effects with correlated errors. Metrika 78, 161–174 (2015). https://doi.org/10.1007/s00184-014-0494-8

