Abstract
We determine optimal crossover designs for the estimation of direct treatment effects in a model with mixed and self carryover effects. The model also assumes that the errors within each experimental unit are correlated following a stationary first-order autoregressive process. The paper considers situations where the number of periods for each experimental unit is at least four and the number of treatments is greater than or equal to the number of periods.
References
Afsarinejad K, Hedayat AS (2002) Repeated measurements designs for a model with self and simple mixed carryover effects. J Stat Plan Inference 106:449–459
Hedayat AS, Yan Z (2008) Crossover designs based on type I orthogonal arrays for a self and simple mixed carryover effects model with correlated errors. J Stat Plan Inference 138:2201–2213
Kiefer J (1975) Construction and optimality of generalized Youden designs. In: Srivastava JN (ed) A survey of statistical design and linear models. North-Holland, Amsterdam, pp 333–353
Kunert J (1983) Optimal design and refinement of the linear model with applications to repeated measurements designs. Ann Stat 11:247–257
Kunert J (1985) Optimal repeated measurements designs for correlated observations and analysis by weighted least squares. Biometrika 72:375–389
Kunert J, Martin R (2000) On the determination of optimal designs for an interference model. Ann Stat 28:1728–1742
Kunert J, Stufken J (2002) Optimal crossover designs in a model with self and mixed carryover effects. J Am Stat Assoc 97:898–906
Kunert J, Stufken J (2008) Optimal crossover designs for two treatments in the presence of mixed and self carryover effects. J Am Stat Assoc 103:1641–1647
Kushner HB (1997) Optimal repeated measurements designs: the linear optimality equations. Ann Stat 25:2328–2344
Acknowledgments
Financial support by the Deutsche Forschungsgemeinschaft (SFB 823, Statistik nichtlinearer dynamischer Prozesse) is gratefully acknowledged.
Appendix: Proofs
Lemma 1
Consider the matrix \(W_\lambda \), defined in (4). Assume \(p \ge 4\) and \(\lambda \in [\lambda ^*(p),1)\) with \(\lambda ^*(p)\) as in Proposition 1. Then the entries \(w_{ij}\) of \(W_\lambda \) satisfy \(w_{ii} > 0\) for \(1 \le i \le p\) and \(w_{ij} \le 0\) for \(1 \le i \ne j \le p\).
Proof
It was shown by Kunert (1985) that, for \(\lambda \in [\lambda ^*(p),1)\), we have \(w_{ij}\le 0\) for \(i \ne j\) and \(w_{ii} \ge 0\).
To see that the diagonal elements are in fact positive, define \(L_k := (1-\lambda )^k\). We observe \(\mathbf{1}_\mathbf{p}^\mathbf{T} {\varvec{\varLambda }^{-1}} = [L_1,L_2, \dots ,L_2,L_1]\) and \(\mathbf{1}_\mathbf{p}^\mathbf{T} {\varvec{\varLambda }^{-1}} \mathbf{1}_\mathbf{p} = L_1(p-\lambda (p-2)) = z_p,\) say. For \(2 \le i \le p-1\) the substitution \(p=v+4\) yields
which is positive for \(v\ge 0\) and therefore also for \(p \ge 4\).
The entries \(w_{11}\) and \(w_{pp}\) are positive because, using the substitution \(p=v+4\) again, we have
which is positive. \(\square \)
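The sign pattern of Lemma 1 can be spot-checked numerically. The sketch below is a minimal check under an assumption not contained in this excerpt: that \(W_\lambda\) has the generalized least squares projection form \(W_\lambda = \varLambda^{-1} - \varLambda^{-1}\mathbf{1}_p\mathbf{1}_p^T\varLambda^{-1}/z_p\). This candidate is consistent with the identities used in the proofs (\(\mathbf{1}_p^T\varLambda^{-1} = [L_1,L_2,\dots,L_2,L_1]\) with \(L_k=(1-\lambda)^k\), \(\mathbf{1}_p^T\varLambda^{-1}\mathbf{1}_p = z_p\), and zero column sums of \(W_\lambda\)), but Eq. (4) itself and the cutoff \(\lambda^*(p)\) of Proposition 1 are not reproduced here, so the code only tests values of \(\lambda\) close to 1.

```python
import numpy as np

def w_lambda(p, lam):
    """Candidate for W_lambda (an assumption, see lead-in): the GLS
    projection built from the tridiagonal AR(1) precision-type matrix A
    with diagonal [1, 1+lam^2, ..., 1+lam^2, 1] and off-diagonal -lam.
    Row sums of A are then [L1, L2, ..., L2, L1] with L_k = (1-lam)^k,
    and their total is L1 * (p - lam*(p-2)) = z_p, matching the proof."""
    A = np.diag([1.0] + [1 + lam**2] * (p - 2) + [1.0])
    A += np.diag([-lam] * (p - 1), 1) + np.diag([-lam] * (p - 1), -1)
    r = A.sum(axis=1)                    # plays the role of Lambda^{-1} 1_p
    return A - np.outer(r, r) / r.sum()  # columns of W sum to zero

# spot-check the sign pattern of Lemma 1 for lambda close to 1
for p in (4, 5, 6, 7):
    for lam in (0.8, 0.9, 0.95):
        W = w_lambda(p, lam)
        off = W - np.diag(np.diag(W))
        assert np.all(np.diag(W) > 0), "diagonal should be positive"
        assert np.all(off <= 1e-12), "off-diagonal should be nonpositive"
```

Under this construction the zero column sums follow immediately, since \(W_\lambda\mathbf{1}_p = \varLambda^{-1}\mathbf{1}_p - \varLambda^{-1}\mathbf{1}_p \cdot z_p/z_p = \mathbf{0}\), which is the property invoked at the start of the proof of Lemma 2.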
Lemma 2
For \(t \ge p \ge 4\) and \(\lambda \in [\lambda ^*(p),1)\) consider an arbitrary sequence class \(\ell \). Then \(c_{11}(1) \ge c_{11}(\ell )\) and \(c_{11}(1)+2c_{12}(1) \ge c_{11}(\ell )+2c_{12}(\ell ).\)
Proof
Assume \(u\) is a unit receiving a sequence \(s = [s_1,\dots ,s_p]\) from class \(\ell \). Then
and
Since \(\mathbf{W}_{\lambda }\) has column-sums zero and since \(\mathbf{T}_\mathbf{du}\mathbf{1}_\mathbf{t} = \mathbf{1}_\mathbf{p}\), it follows that
For \(1 \le i \le t\) denote the \(i\)-th column of \(\mathbf{T}_\mathbf{du}\) by \(\mathbf{t}_\mathbf{du}^\mathbf{(i)}\) and the \(i\)-th column of \(\mathbf{M}_\mathbf{du}\) by \(\mathbf{m}_\mathbf{du}^\mathbf{(i)}\). Observe that
- the \(j\)-th entry of \(\mathbf{t}_\mathbf{du}^\mathbf{(i)}\) is 1 if \(s_j=i\) and 0 otherwise,
- the \(j\)-th entry of \(\mathbf{m}_\mathbf{du}^{(i)}\) is 1 if \(j\ge 2\), \(s_{j-1}=i\) and \(s_{j-1} \ne s_j\), and 0 in all other cases.
For any given \(i\) it follows that
where \(I(\text{statement})\) is 1 if the statement is true and 0 otherwise. Hence,
because all \(w_{jr}\le 0\) if \(j\ne r\). It is easy to see that \(c_{11}(1)=\sum _{j=1}^p w_{jj}\) and, therefore, we have proved that
We can also use Eq. (12) to derive the slightly sharper bound
On the other hand,
Therefore,
because \(s_j = s_{r-1} \ne s_r\) can never hold for \(r = j\). If \(r = j+1\), however, then \(s_j = s_{r-1} \ne s_r\) becomes \(s_j \ne s_{j+1}\). Making use of the fact that \(w_{jr} \le 0\) for all \(r \ne j\), we conclude that
Combining this with Eq. (13), we conclude that
Since it is easy to see that
we have proved that \(c_{11}(\ell )+2c_{12}(\ell ) \le c_{11}(1)+2c_{12}(1)\). \(\square \)
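The two inequalities of Lemma 2 can also be verified numerically for random sequences. The sketch below assumes, beyond this excerpt, that \(c_{11}(\ell ) = \mathrm{tr}(\mathbf{T}_\mathbf{du}^T W_\lambda \mathbf{T}_\mathbf{du})\) and \(c_{12}(\ell ) = \mathrm{tr}(\mathbf{T}_\mathbf{du}^T W_\lambda \mathbf{M}_\mathbf{du})\) (any centering correction vanishes here because \(W_\lambda\) has zero column sums and \(\mathbf{T}_\mathbf{du}\mathbf{1}_\mathbf{t} = \mathbf{1}_\mathbf{p}\), as noted in the proof), that class 1 consists of sequences with \(p\) distinct treatments (consistent with \(c_{11}(1)=\sum_{j} w_{jj}\)), and that \(W_\lambda\) takes the candidate projection form described after Lemma 1.

```python
import numpy as np

def w_lambda(p, lam):
    # Candidate construction for W_lambda (an assumption; Eq. (4) is
    # not reproduced in this excerpt): GLS projection from a tridiagonal
    # AR(1)-type matrix, so that columns of W sum to zero.
    A = np.diag([1.0] + [1 + lam**2] * (p - 2) + [1.0])
    A += np.diag([-lam] * (p - 1), 1) + np.diag([-lam] * (p - 1), -1)
    r = A.sum(axis=1)
    return A - np.outer(r, r) / r.sum()

def c11_c12(s, W, t):
    # T[j, i] = 1 iff s_j = i; M[j, i] = 1 iff j >= 2, s_{j-1} = i and
    # s_{j-1} != s_j -- exactly the entrywise description in the proof.
    p = len(s)
    T = np.zeros((p, t))
    M = np.zeros((p, t))
    for j, sj in enumerate(s):
        T[j, sj] = 1.0
        if j >= 1 and s[j - 1] != sj:
            M[j, s[j - 1]] = 1.0
    return np.trace(T.T @ W @ T), np.trace(T.T @ W @ M)

p = t = 5
W = w_lambda(p, lam=0.9)
# class 1 representative: all p treatments distinct
c11_1, c12_1 = c11_c12(list(range(p)), W, t)
rng = np.random.default_rng(0)
for _ in range(500):
    s = rng.integers(0, t, size=p).tolist()
    c11, c12 = c11_c12(s, W, t)
    assert c11 <= c11_1 + 1e-9
    assert c11 + 2 * c12 <= c11_1 + 2 * c12_1 + 1e-9
```

With these definitions the first inequality is immediate: \(c_{11}(s) = \sum_{j,r:\, s_j=s_r} w_{jr}\) adds only nonpositive off-diagonal terms to \(\sum_j w_{jj} = c_{11}(1)\).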
Lemma 3
Under the conditions of Lemma 2 we have \(c_{22}(1) \ge c_{22}(\ell )\).
Proof
If \(\mathbf{M}_\mathbf{du}\) is as in the proof of Lemma 2, then
Observe that \(\mathbf{M}_\mathbf{du} \mathbf{1}_\mathbf{t}\) is a \(p\)-dimensional vector with entries 0 or 1. Its first element is always 0; for \(j \ge 2\), the \(j\)-th element is 0 if \(s_{j-1}=s_j\) and 1 otherwise. Hence,
Defining \(\mathbf{m}_\mathbf{du}^\mathbf{(i)}\) as in the proof of Lemma 2, we get
Combining the two parts, we get
where the last inequality makes use of the fact that \(w_{jj}>0\) for all \(j\) and \(w_{jr} \le 0\) for all \(j \ne r\). It is easy to see that this bound for \(c_{22}(\ell )\) equals \(c_{22}(1)\). \(\square \)
Lemma 4
Under the conditions of Lemma 2 consider an \(x \in [0,1]\). We then have that
Proof
Define \(f_{(\ell )}(x) := c_{11}(\ell ) + 2xc_{12}(\ell )\). Due to the linearity of the function \(f_{(\ell )}(x),\) it is sufficient to show that \(f_{(1)}(0) \ge f_{(\ell )}(0)\) and \(f_{(1)}(1) \ge f_{(\ell )}(1), 1 \le \ell \le K\). The inequalities are valid because of Lemma 2. \(\square \)
Lemma 5
Under the conditions of Lemma 2, we have that \(c_{22}(1) > 0\) and the function \(h_1(x,0)\) has a unique minimum at
Proof
It follows from the proof of Lemma 3 that
Making use of Lemma 1, we see that \(c_{22}(1) > 0.\) This implies that \(h_1(x,0)=c_{11}(1)+2xc_{12}(1)+x^2c_{22}(1)\) has a unique minimum at the point \(x^*\). It only remains to show that \( 0 \le x^* \le 1\).
For each \(j\), it holds that \(\sum _{r=1}^p w_{jr} = 0.\) This implies that
We can therefore rewrite \(c_{22}(1)\) as
Since all \(w_{jr}\) in this sum are non-positive, we get
Observing that \(w_{p-1,p}=w_{21}=w_{12}\), we conclude that
Equation (14) then shows that \(0 \le -c_{12} (1) \le c_{22} (1)\). This completes the proof. \(\square \)
Lemma 6
Under the conditions of Lemma 2 we get for any \(x \in [0,1]\) that
Proof
Making use of Lemmas 3 and 4 we observe that
\(\square \)
Wilk, A., Kunert, J. Optimal crossover designs in a model with self and mixed carryover effects with correlated errors. Metrika 78, 161–174 (2015). https://doi.org/10.1007/s00184-014-0494-8