Abstract
In this paper we study the problem of finding neighbor optimal designs for a general correlation structure. We give universal optimality conditions for nearest-neighbor (NN) balanced block designs when observations on the same block are modeled by an autoregressive AR(m) process with arbitrary order m. The cases \(m=1,2\) have been studied by Grondona and Cressie (Sankhyā Indian J Stat Ser A 55(2):267–284, 1993) for AR(2) and by Gill and Shukla (Biometrika 72(3):539–544, 1985a, Commun Stat Theory Methods 14(9):2181–2197, 1985b) and Kunert (Biometrika 74(4):717–724, 1987) for AR(1); we extend these results to the cases \(m \ge 3\).
Notes
We draw attention to an error in their paper concerning one term of their formula (4.9) giving \(\sigma ^{2}c_{l,m}\) for \(l\ne m\): in the term \(2(1-\phi _{1} -\phi _{2})\phi _{2}f^{*}_{l,m}\) of their paper, the factor 2 must be removed. The notations \(e^{*}_{l,m}\) and \(f_{l,m}^{*}\) of their paper correspond to our notations \(\phi _{d,l,m}^{1*}\) and \(\phi _{d,l,m}^{2*}\).
Deheuvels and Derzko coined the terms totally balanced for SDEN and SB, and universally balanced for TA.
References
Ahmed R, Akhtar M (2009) On construction of one dimensional all order neighbor balanced designs by cyclic shifts. Pak J Statist 25(2):121–126
Azzalini A, Giovagnoli A (1987) Some optimal designs for repeated measurements with autoregressive errors. Biometrika 74(4):725–734
Benchekroun K (1993) Association-balanced arrays with applications to experimental design. Ph.D. thesis, Dept. of Statistics, The University of North Carolina, Chapel Hill
Deheuvels P, Derzko G (1991) Block designs for early-stage clinical trials. Technical report, laboratory LSTA, Université Paris 6, France. HAL-CNRS open archives. https://hal.archives-ouvertes.fr/hal-02068964. Accessed 15 Mar 2019
Dey A (2010) Incomplete block designs. Indian Statistical Institute, New Delhi
Gill PS, Shukla GK (1985) Efficiency of nearest neighbour balanced block designs for correlated observations. Biometrika 72(3):539–544
Gill PS, Shukla GK (1985) Experimental designs and their efficiencies for spatially correlated observations in two dimensions. Commun Stat Theory Methods 14(9):2181–2197
Grondona MO, Cressie N (1993) Efficiency of block designs under stationary second-order autoregressive errors. Sankhyā Indian J Stat Ser A 55(2):267–284
Hedayat AS, Sloane NJA, Stufken J (1999) Orthogonal arrays: theory and applications. Springer, New York
Iqbal I, Aman Ullah M, Nasir JA (2006) The construction of second order neighbour designs. J Res (Sci) 17(3):191–199
Kiefer J (1975) Balanced block designs and generalized Youden designs. I. Construction (patchwork). Ann Stat 3:109–118
Kiefer J (1975) Construction and optimality of generalized Youden designs. In: A survey of statistical design and linear models (Proc. Internat. Sympos., Colorado State Univ., Ft. Collins, Colo., 1973), North-Holland, Amsterdam, pp 333–353
Kiefer J, Wynn HP (1981) Optimum balanced block and Latin square designs for correlated observations. Ann Stat 9(4):737–757
Koné M, Valibouze A (2011) Plans en blocs incomplets pour la structure de corrélation NN\(m\). Annales de l’ISUP 55(2–3):65–88
Kunert J (1985) Optimal repeated measurements designs for correlated observations and analysis by weighted least squares. Biometrika 72(2):375–389
Kunert J (1987) Neighbour balanced block designs for correlated errors. Biometrika 74(4):717–724
Martin RJ, Eccleston JA (1991) Optimal incomplete block designs for general dependence structures. J Stat Plan Inference 28(1):67–81
Morgan JP, Chakravarti IM (1988) Block designs for first and second order neighbor correlations. Ann Stat 16(3):1206–1224
Mukhopadhyay AC (1972) Construction of BIBD's from OA's and combinatorial arrangements analogous to OA's. Calcutta Stat Assoc Bull 21:45–50
Passi RM (1976) A weighting scheme for autoregressive time averages. J Appl Meteorol 15(2):117–119
Ramanujacharyulu C (1966) A new general series of balanced incomplete block designs. Proc Am Math Soc 17:1064–1068
Rao C (1946) Hypercubes of strength "d" leading to confounded designs in factorial experiments. Bull Calcutta Math Soc 38:67–78
Rao C (1947) Factorial experiments derivable from combinatorial arrangements of arrays. J R Stat Soc Suppl 9:128–139
Rao C (1961) Combinatorial arrangements analogous to orthogonal arrays. Sankhyā Indian J Stat Ser A 1:283–286
Rao CR (1973) Some combinatorial problems of arrays and applications to design of experiments. In: Survey of combinatorial theory (Proc. Internat. Sympos., Colorado State Univ., Ft. Collins, Colo., 1971), North-Holland, Amsterdam, pp 349–359
Satpati SK, Parsad R, Gupta VK (2007) Efficient block designs for dependent observations—a computer-aided search. Commun Stat Theory Methods 36(5–8):1187–1223
Siddiqui MM (1958) On the inversion of the sample covariance matrix in a stationary autoregressive process. Ann Math Stat 29:585–588
Stufken J (1991) Some families of optimal and efficient repeated measurements designs. J Stat Plan Inference 27:75–83
Wei WWS (1990) Time series analysis. Univariate and multivariate methods. Addison-Wesley Publishing Company Advanced Book Program, Redwood City, CA
Wise J (1955) The autocorrelation function and the spectral density function. Biometrika 42:151–159
Yates F (1964) Sir Ronald Fisher and the design of experiments. In Memoriam: Ronald Aylmer Fisher, 1890–1962. Biometrics 20(2):307–321
Acknowledgements
We warmly thank Paul Deheuvels and Pierre Druilhet for their constructive suggestions, which have improved the quality of this article. We would also like to thank the reviewer for their work.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Appendix: Proofs
This appendix begins with the proof of formula (23) on \(c=\mathbf{1 }_{k}'{\mathbb {M}}\mathbf{1 }_{k}\) in Proposition 3. To establish this formula we need the (essential) technical Lemma 1, also in Sect. 6.1, on the sums of the entries of a row of the matrix \({\mathbb {M}}\). It is probably the difficulty of establishing this lemma that has long prevented the generalization to arbitrary m of the optimality conditions for the AR(m) process. The next four sections are devoted to the respective proofs of Proposition 3, Theorems 1 and 2, and Proposition 5. We end in Sect. 6.6 with the proofs of Identities (3), (4), (19) and (20).
1.1 Sum c of entries of matrix \({\mathbb {M}}\), Identity (23)
We want to establish formula (23) of Proposition 3. This formula on c, the sum of the entries of \({\mathbb {M}}\), rests essentially on Lemma 1, which gives the sum \(p_\ell \) of the entries of row \(\ell \in {\llbracket 1,k\rrbracket }\); the lemma will also be used to establish Identity (44) in the proof of Proposition 3. We first prove Identity (23) using Lemma 1, whose proof comes after.
By definition, \(c =\mathbf{1 }_{k}'{\mathbb {M}}\mathbf{1 }_{k}= \sum _{\ell =1}^{k}\sum _{\ell '=1}^{k }\gamma _{\ell ,\ell '}=2\sum _{\ell =1}^{m}p_{\ell } + \sum _{\ell =m+1}^{k-m}p_{\ell }\). Then, from the formula \(p_{\ell }=a_0(a_0-a_\ell )\) of Lemma 1, we find:
because in the sum \(\sum _{\ell =1}^{m}(\theta _{0}+\cdots +\theta _{\ell -1})\) we count \(\theta _0\) m times, \(\theta _1\) \(m-1\) times, and so on, down to \(\theta _{m-1}\), which is counted only once. Thus Formula (23) is proved \(\square \)
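The counting step above can be checked numerically; the following sketch (not part of the proof, with arbitrary toy values for the \(\theta _u\)) verifies that \(\theta _b\) is counted exactly \(m-b\) times:

```python
# Counting check for sum_{l=1}^{m} (theta_0 + ... + theta_{l-1}):
# each theta_b is counted exactly m - b times. Toy theta values.
theta = [-1.0, 0.7, -0.4, 0.2]   # theta_0 = -1, m = 3 (arbitrary AR(3) coefficients)
m = len(theta) - 1
lhs = sum(sum(theta[u] for u in range(l)) for l in range(1, m + 1))
rhs = sum((m - b) * theta[b] for b in range(m))
assert abs(lhs - rhs) < 1e-12
```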
Lemma 1
Assume \(k >2m \ge 2\). Let \(p_{\ell }=\displaystyle \sum _{\ell '=1}^k\gamma _{\ell ,\ell '}\) be the sum of the entries of row \(\ell \in {\llbracket 1,k\rrbracket } \) of matrix \({\mathbb {M}}\), \(a_\ell =\displaystyle \sum _{u=\ell }^m\theta _u\;\) for \(\ell \le m\) and \(a_{\ell }=0\;\) for \(\ell >m\). Then:
for \(\ell \in {\llbracket 1,k-m\rrbracket }\) and, as \({\mathbb {M}}\) is symmetric with respect to its second diagonal, \(p_\ell =p_{k-\ell +1}\;\) for \(\ell \in {\llbracket k-m +1,k\rrbracket }\).
In particular, \(p_\ell =p_{m+1}=a_0^{2}\;\) for \(\ell \in {\llbracket m+1,k-m\rrbracket }\).
Proof
We consider the matrix \({\mathbb {M}}=(\gamma _{\ell ,\ell ^{\prime }})_{1\le \ell ,\ell ^{\prime }\le k}\) and we would like to express the sum \(p_\ell =\sum _{\ell '=1}^k\gamma _{\ell ,\ell '}\) of the entries of row \(\ell \) in the form given in Lemma 1. By symmetry of \({\mathbb {M}}\) we can suppose that \(\ell \in {\llbracket 1,k-m\rrbracket }\). We write \(p_\ell =\alpha _\ell + \beta _\ell \) where \(\alpha _\ell = \sum _{\ell '=\ell }^k\gamma _{\ell ,\ell '}\) and \( \beta _\ell =\sum _{\ell '=1}^{\ell -1}\gamma _{\ell ,\ell '}\).
First we compute \(\alpha _\ell = \sum _{\ell '=\ell }^k\gamma _{\ell ,\ell '}\). From Identity (8) of Proposition 1, we have the following expression of each \(\gamma _{\ell ,\ell '}\) for \(\ell ' \in {\llbracket \ell ,k\rrbracket }\):
Then, as \( \alpha _\ell =\sum _{s=0}^{k-\ell } \sum _{u=0}^{\ell -1} \theta _u\theta _{u+s}\), we obtain:
because for each \(b>m\) we have \(\theta _b=0\) and for each \(u \in {\llbracket 0,\ell -1\rrbracket }\) we have \(k+u-\ell \ge k - \ell \ge k - (k-m)=m\).
Now consider \(\beta _\ell =\sum _{\ell '=1}^{\ell -1}\gamma _{\ell ,\ell '} = \sum _{\ell '=1}^{\ell -1}\sum _{u=0}^{\ell ' -1}\theta _u\theta _{u+(\ell -\ell ')}\) and let us establish the following formula:
The expression \(\beta _\ell = \sum _{\ell '=1}^{\ell -1}\sum _{u=0}^{\ell ' -1}\theta _u\theta _{u+(\ell -\ell ')}\) is a double sum and \(\ell \) is fixed. Let us consider the square matrix \(B=(b_{u,\ell '})\) of size \(\ell -1\) indexed by \(\ell ' \in {\llbracket 1,\ell -1\rrbracket }\) for the columns and by \(u \in {\llbracket 0,\ell -2\rrbracket }\) for the rows. We define \(b_{u,\ell '}\) as follows: \(b_{u,\ell '}= \theta _u\theta _{u+(\ell -\ell ')}\) for \(u\le \ell '\), otherwise \(b_{u,\ell '} = 0\) (B is upper triangular). Note that \(\sum _{u=0}^{\ell ' -1}\theta _u\theta _{u+(\ell -\ell ')}\) is both the inner sum of the double sum \(\beta _{\ell }\) and the sum of the entries of column \(\ell '\); thus the sum of all the entries of B is \(\beta _{\ell }\).
To obtain the right-hand side of (31), we sum the entries of B diagonal by diagonal. As B is upper triangular, each of the sums over the diagonals below the main diagonal is zero; for the \(\ell -1\) upper diagonals, let a be in \({\llbracket 1,\ell -1\rrbracket }\); the sum of the entries of the diagonal at distance \(\ell -1 -a\) from the main diagonal is \(\theta _a \sum _{b=0}^{a-1}\theta _b\). For example, for the main diagonal (\(a=\ell -1\), distance 0), the sum of the entries equals \(\theta _{\ell -1}(\theta _0+\theta _1+\cdots +\theta _{\ell -2})\); for the diagonal just above it (\(a=\ell -2\), distance 1), the sum of the entries is \(\theta _{\ell -2}(\theta _0+\theta _1+\cdots +\theta _{\ell -3})\); the last diagonal reduces to the single element \(\theta _1\theta _0\) (\(a=1\), distance \(\ell -2\)). Then (31) is proved.
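The diagonal-summation argument is purely algebraic (no vanishing of \(\theta _b\) for \(b>m\) is needed), so it can be verified numerically with arbitrary values; a sketch:

```python
# Check of Identity (31): summing the matrix B diagonal by diagonal gives
# beta_l = sum_{a=1}^{l-1} theta_a * (theta_0 + ... + theta_{a-1}).
theta = [-1.0, 0.6, -0.3, 0.1, 0.2]            # arbitrary theta_0, ..., theta_4
for l in range(2, len(theta) + 1):
    beta = sum(theta[u] * theta[u + l - lp]    # double sum defining beta_l
               for lp in range(1, l) for u in range(lp))
    diag = sum(theta[a] * sum(theta[b] for b in range(a))
               for a in range(1, l))           # sums over the upper diagonals
    assert abs(beta - diag) < 1e-12
```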
From (30) and (31), we deduce Formula (28) of Lemma 1:
with \(a_\ell =\sum _{b=\ell }^m\theta _b\) for \(\ell \in {\llbracket 1,m\rrbracket }\) and \(a_\ell =0\) for \(\ell >m\). In particular, for \(\ell \in {\llbracket m+1,k-m\rrbracket }\), the formula becomes \(p_{\ell }=p_{m+1}=a_0^2=(1-\theta _1-\cdots -\theta _m)^2 \). Consequently, Lemma 1 is proved \(\square \)
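Lemma 1 can also be verified numerically. The sketch below builds \({\mathbb {M}}\) from our reading of Identity (8) of Proposition 1, namely \(\gamma _{\ell ,\ell '}=\sum _{u=0}^{\ell -1}\theta _u\theta _{u+\ell '-\ell }\) for \(\ell \le \ell '\) and \(\ell \le k-m\), extended by symmetry and persymmetry, and checks the row sums \(p_\ell \) together with the decomposition of c used in Sect. 6.1 (the AR(2) coefficients are arbitrary toy values):

```python
# Numerical verification of Lemma 1 for an AR(2) example with k = 9.
def make_M(theta, k):
    m = len(theta) - 1                 # theta = (theta_0,...,theta_m), theta_0 = -1
    th = lambda x: theta[x] if x <= m else 0.0
    def gamma(l, lp):
        i, j = min(l, lp), max(l, lp)
        if i > k - m:                  # persymmetry: gamma_{l,l'} = gamma_{k-l'+1,k-l+1}
            i, j = k - j + 1, k - i + 1
        return sum(th(u) * th(u + j - i) for u in range(i))
    return [[gamma(l, lp) for lp in range(1, k + 1)] for l in range(1, k + 1)]

theta, k = [-1.0, 0.5, -0.3], 9        # toy values: theta_1 = 0.5, theta_2 = -0.3
m = len(theta) - 1
M = make_M(theta, k)
a = [sum(theta[l:]) for l in range(m + 1)] + [0.0] * k   # a_l = theta_l + ... + theta_m
p = [sum(row) for row in M]            # row sums p_l (0-indexed: p[l-1])
for l in range(1, k - m + 1):          # Lemma 1: p_l = a_0 (a_0 - a_l)
    assert abs(p[l - 1] - a[0] * (a[0] - a[l])) < 1e-12
for l in range(1, k + 1):              # persymmetry of the row sums
    assert abs(p[l - 1] - p[k - l]) < 1e-12
c = sum(p)                             # c = 2 sum_{l<=m} p_l + (k - 2m) a_0^2
assert abs(c - (2 * sum(p[:m]) + (k - 2 * m) * a[0] ** 2)) < 1e-12
```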
1.2 Proof of Proposition 3 on entries of the information matrix
As the design d is fixed in \({\varOmega }_{v, b, k}\), it will be omitted in the indices. In Sect. 6.1, we have already established Identity (23) on c. We still have to establish Identities (21) and (22) about the entries \(\sigma ^{2}\mathbf{C }_{j,j}\) and \(\sigma ^{2}\mathbf{C }_{j,j'}\) (\(j\ne j'\)) of the matrix \(\sigma ^{2}\mathbf{C }_{d}\). The information matrix is given by Identity (15) rewritten below:
where \(T_{i}=(\mathbf{t }_1(i), \ldots , \mathbf{t }_v(i))\), \({{\mathcal {A}}}=\sum _{i=1}^{b} T_{i}'\, {\mathbb {M}} \, T_{i} \) and \({{\mathcal {B}}}=\sum _{i=1}^{b} T_{i}'\,{\mathbb {M}} \, \mathbf{1 }_k \mathbf{1 }_k'\, {\mathbb {M}} \, T_{i}\). The entries \(\gamma _{\ell ,\ell '}\) of the matrix \({\mathbb {M}}=\sigma ^{2}V^{-1}\) are described in Proposition 1. We will find:
where \(\tau \) and \( \mu \) come from \({{\mathcal {A}}}\) and \(\omega _{j,j}\) and \(\omega _{j,j'}\) come from \( c^{-1}{{\mathcal {B}}}\). In the following, we will look for formulas on \(\tau \), \(\mu \), \(\omega _{j,j}\) and \(\omega _{j,j'}\) by first considering the matrix \({{\mathcal {A}}}\) and then the matrix \(c^{-1}{{\mathcal {B}}}\). Before that, we introduce some necessary tools.
Preliminary notations and remarks
For \(r\in {\llbracket 1,k\rrbracket }\), \({\varvec{e}}_r=({\varvec{e}}_{r,s})_{1\le s \le k}\) denotes the r-th canonical vector of \({\mathbb {R}}^k\), i.e. \({\varvec{e}}_{r,s}=\delta _{r,s}\) (where \(\delta \) is the Kronecker symbol). Note that each entry of a \(k\times k\)-matrix A is expressed in the form \(A_{r,s}={\varvec{e}}_r' \, A \, {\varvec{e}}_{s}\).
For each treatment \(j\in {\llbracket 1,v\rrbracket }\), the jth column vector \(\mathbf{t }_j(i)\) of the matrix \(T_i\) defined in (11) can be expressed as follows: for each \(i \in {\llbracket 1,b\rrbracket }\), we set
Hence, following notations of Sect. 3.3, for each period \(\ell \in {\llbracket 1,k-m\rrbracket }\), we find
Remark 4
For each treatment j, exactly r vectors \(\mathbf{t }_j(i)\) are non-zero because exactly r patients receive the treatment j.
Remark 5
As the designs we consider in this paper are binary, each patient i receives the same treatment j at most once; consequently, for each \(\ell \in {\llbracket 1,m\rrbracket }\) and because \(\ell \ne k-\ell +1\) since \(k>2m\), we have:
Now we fix two distinct treatments \(j,j'\) in \({\llbracket 1,v\rrbracket }\). To establish Identities (21) and (22) about \(\sigma ^{2}\mathbf{C }_{j,j}\) and \(\sigma ^{2}\mathbf{C }_{j,j'}\) of the matrix \(\sigma ^{2}\mathbf{C }_{d}\), we will examine separately the contributions of each of the two sums of the right member of (32), namely \({{\mathcal {A}}}\) (for \(\tau \) and \(\mu \)) then \({{\mathcal {B}}}\) (for \(\omega _{j,j}\) and \(\omega _{j,j'}\)). Then we will achieve the proof.
Diagonal entry \(\sigma ^{2}\mathbf{C }_{j,j}\) : determination of \(\tau \)
The contribution of \({{\mathcal {A}}}\) to the diagonal entry \(\sigma ^{2}\mathbf{C }_{j,j}\) is the value \(\tau =\sum _{i=1}^{b} \tau _{i}\) where \(\tau _{i}=\mathbf{t }_j'(i) \, {\mathbb {M}} \, \mathbf{t }_j(i)\). From Definition (33) of vectors \(\mathbf{t }_j(i)\), we have for each patient i:
Combining the above identity (35) and Lemma 1 applied to the diagonal entries \(\gamma _{\ell ,\ell }={\varvec{e}}_\ell ' \, {\mathbb {M}} \, {\varvec{e}}_\ell \) of matrix \({\mathbb {M}}\), we obtain (recall that \(\theta _0=-1\)):
As \(\sum _{\ell =1}^{k-m}\phi _{j}^\ell =r\) (see Identity (17)), we get:
Finally, the contribution \(\tau \) of the term \({{\mathcal {A}}}\) to the diagonal entry \(\sigma ^{2}\mathbf{C }_{j,j}\) is:
with \(b_\ell =\displaystyle \sum _{u=\ell }^m\theta _u^{2}\) for \(\ell \in {\llbracket 1,m\rrbracket }\), as defined in Proposition 3.
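The derivation of \(\tau \) can be illustrated numerically. The sketch below computes \(\tau =\sum _i \mathbf{t }_j'(i)\,{\mathbb {M}}\,\mathbf{t }_j(i)\) directly on a toy design and compares it with the closed form \(r\sum _{u=0}^m\theta _u^{2}-\sum _{\ell =1}^m b_\ell \,\phi _j^{\ell }\), which is how we read Identity (37); the design, the AR(2) coefficients and the entry formula for \({\mathbb {M}}\) are assumptions of this sketch:

```python
# Toy check of the diagonal contribution tau = sum_i t_j'(i) M t_j(i).
def gamma(l, lp, theta, k):            # our reading of Proposition 1
    m = len(theta) - 1
    th = lambda x: theta[x] if x <= m else 0.0
    i, j = min(l, lp), max(l, lp)
    if i > k - m:                      # persymmetric extension
        i, j = k - j + 1, k - i + 1
    return sum(th(u) * th(u + j - i) for u in range(i))

theta, k = [-1.0, 0.4, -0.2], 6        # assumed AR(2) coefficients, 6 periods
m = len(theta) - 1
design = [[1, 2, 3, 4, 5, 6],          # 4 toy blocks, each a permutation, so the
          [2, 4, 6, 1, 3, 5],          # design is binary and r = 4 for every j
          [3, 6, 2, 5, 1, 4],
          [5, 3, 1, 6, 4, 2]]
j, r = 1, len(design)
# direct computation: tau adds gamma_{l,l} at the period l where j appears
tau = sum(gamma(blk.index(j) + 1, blk.index(j) + 1, theta, k) for blk in design)
# closed form: phi_j^l counts blocks with j at period l or k - l + 1
phi = [sum(1 for blk in design if blk[l - 1] == j or blk[k - l] == j)
       for l in range(1, m + 1)]
b = [sum(t * t for t in theta[l:]) for l in range(m + 1)]  # b_l = theta_l^2 + ... + theta_m^2
q = sum(t * t for t in theta)                              # theta_0^2 + ... + theta_m^2
assert abs(tau - (r * q - sum(b[l] * phi[l - 1] for l in range(1, m + 1)))) < 1e-12
```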
Extra-diagonal entry \(\sigma ^{2}\mathbf{C }_{j,j'}\) : determination of \(\mu \)
Similarly, let us now focus on the contribution \(\mu \) of the sum \({{\mathcal {A}}}=\sum _{i=1}^{b} T_{i}'\, {\mathbb {M}}\, T_{i}\) to the extra-diagonal entry \(\sigma ^{2}\mathbf{C }_{j,j'}\), where \(\mu =\sum _{i=1}^{b} \mu _{i}\) with \(\mu _{i}=\mathbf{t }_j'(i){\mathbb {M}}\mathbf{t }_{j'}(i)\). For this purpose, we need to introduce the following notation: for \(\ell ,\ell ' \in {\llbracket 1,k\rrbracket }\), we denote by \(\phi _{j,j'}^{\ell ,\ell '}\) the number of patients who receive the distinct treatments j and \(j'\) at periods \(\ell ,\ell '\):
Note that if \(\ell '=\ell \) then \(\mathbf{t }_j(i)+\mathbf{t }_{j'}(i)\ne {\varvec{e}}_\ell + {\varvec{e}}_{\ell '}\) for each patient i because the distinct treatments j and \(j'\) cannot be applied simultaneously to the same patient i at the same period \(\ell \). Hence, for any \(s \in {\llbracket 1,k-1\rrbracket }\), we can write:
where the sum involves all the distinct periods \(\ell ,\ell '\) in \({\llbracket 1,k\rrbracket }\) and \(s=|\ell -\ell '| \ne 0\).
From Definition (33) of the vectors \(\mathbf{t }_j(i)\), as \(j\ne j'\), the only non-zero \(\mu _{i}=\mathbf{t }_j'(i){\mathbb {M}}\mathbf{t }_{j'}(i)\) are such that \(\mathbf{t }_j(i)+\mathbf{t }_{j'}(i)={\varvec{e}}_\ell + {\varvec{e}}_{\ell '}\) for some periods \(\ell \) and \(\ell '\) which are necessarily distinct. Moreover, as the matrix \({\mathbb {M}}\) is symmetric, when the identity \(\mathbf{t }_j(i)+\mathbf{t }_{j'}(i)={\varvec{e}}_\ell + {\varvec{e}}_{\ell '}\) holds, we can suppose that \(\mathbf{t }_j(i)= {\varvec{e}}_\ell \) and \(\mathbf{t }_{j'}(i)={\varvec{e}}_{\ell '}\) with \(\ell <\ell '\).
Hence, by putting \(u_{i,j}=\mathbf{t }_j(i)+\mathbf{t }_{j'}(i)\), \(v_{\ell ,\ell '}={\varvec{e}}_\ell + {\varvec{e}}_{\ell '}\) and considering the element \(\gamma _{\ell ,\ell '}= {\varvec{e}}_\ell ' \, {\mathbb {M}} \, {\varvec{e}}_{\ell '}\) of the matrix \({\mathbb {M}}\), we obtain:
For the sake of clarity, we write \(\phi ^{\ell ,\ell '}=\phi _{j,j'}^{\ell ,\ell '}\) for the rest of this proof. We introduce into the expression of \(\mu \) the values of the entries \(\gamma _{\ell ,\ell '}\) of the matrix \({\mathbb {M}}\) given in Proposition 1. Collecting the factors of each \(\theta _\ell \) and \(\theta _{\ell }\theta _{\ell '}\), we obtain:
Recall that Identity (39) says that \(N_{j,j'}^s=\phi ^{1,1+s}+\phi ^{2,2+s} +\cdots +\phi ^{k-s-1,k-1}+ \phi ^{k-s,k}\). Putting
for \(s\in {\llbracket 1,m-1\rrbracket }\) and \(t\in {\llbracket 1,m-s\rrbracket }\), the expression (40) of \(\mu \) becomes:
Collecting the factors of each \(N_{j,j'}^s\) and each \(U_{t,s}\), we obtain:
Indeed, for each \(s\in {\llbracket 1,m-1\rrbracket }\) and \(t\in {\llbracket 1,m-s\rrbracket }\), the component \(\beta _{t,s}\) of \(\mu \) which collects the terms \(U_{t,s}\theta _{a,b}\) is the following:
In addition, the double sum \(\displaystyle \sum _{s=1}^{m-1}\sum _{t=1}^{m-s}\beta _{t,s}\) collects all the terms of the form \(U_{t,s}\theta _{a,b}\) of the right-hand side of Identity (41). In order to complete the determination of \(\mu \), note that:
Remark 6
We have \(\phi _{j,i}^\ell = \delta _{j, d(i,\ell )} + \delta _{j, d(i,k-\ell +1)}\) and \(N_{j,j',i}^s \in \{0,1\}\) because d is binary (see Sect. 3.3); then, from Identity (38) about \(\phi _{j,j'}^{\ell ,\ell '}\), we find:
Finally, the contribution \(\mu \) of the term \({{\mathcal {A}}}=\sum _{i=1}^{b} T_{i}'\, {\mathbb {M}}\, T_{i}\) to the entry \(\sigma ^{2}\mathbf{C }_{j,j'}\) is:
where \({\varTheta }_{t,s}=\theta _t\theta _{t+s}+\theta _{t+1}\theta _{t+1+s}+\cdots +\theta _{m-s}\theta _m\).
We introduce the following notation \(\kappa _{j_{1},i}\) for a treatment \(j_{1}\) and a patient i:
As \( \kappa _{j_{1},i}\) is a scalar and \({\mathbb {M}}={\mathbb {M}}'\) (i.e. \({\mathbb {M}}\) is symmetric), we also have:
Then the contribution of \(c^{-1}{{\mathcal {B}}}\) to the entry \(\sigma ^{2}\mathbf{C }_{j_{1},j_{2}}\) for two treatments \(j_{1},j_{2}\), not necessarily distinct, is
In the following, we determine the quantities \( \kappa _{j,i}=\mathbf{t }_j'(i)\, {\mathbb {M}} \, \mathbf{1 }_k\) to find \(\omega _{j_{1},j_{2}}\).
When the treatment j is not applied to the ith patient, \(\kappa _{j,i}=0\) because \(\mathbf{t }_j(i)=\mathbf{0 }_{k}\). Otherwise, it is applied only once, at some period \(\ell \) and we have
Recall that the sum of the entries of row \(\ell \) of the matrix \({\mathbb {M}}\) is given in Lemma 1: for each \(\ell \in {\llbracket 1,k-m\rrbracket }\), the value \(p_\ell =\displaystyle \sum \nolimits _{\ell '=1}^k\gamma _{\ell ,\ell '}=a_0(a_0-a_\ell )\) (with \(a_\ell =\displaystyle \sum \nolimits _{u=\ell }^m\theta _u\) for \(\ell \in {\llbracket 1,m\rrbracket }\) and \(a_{\ell }=0\) for \(\ell >m\)) and \(p_\ell =p_{k-\ell +1}\) for \(\ell \in {\llbracket k-m+1,k\rrbracket }\). Remark that \(p_{\ell }=p_{m+1}=a_0^{2}\) for all \(\ell \in {\llbracket m+1,k-m\rrbracket }\). Thus, for all \(\ell \in {\llbracket 1,m\rrbracket }\cup {\llbracket k-m+1,k\rrbracket }\):
Now let us determine the values of \(n_{j,i}\), defined in Sect. 2.1, and of \(\phi _{j,i}^\ell \) for all \(\ell \in {\llbracket 1,m\rrbracket }\). Recall that \(\mathbf{t }_j(i)={\varvec{e}}_\ell \) if the treatment j is applied to the ith patient at period \(\ell \) and \(\mathbf{t }_j(i)=\mathbf{0 }_{k}\) otherwise.
-
Case \( \mathbf{t }_j(i)=\mathbf{0 }_{k}\): \(n_{j,i}=\phi _{j,i}^1=\cdots =\phi _{j,i}^m=0\) because the ith patient does not receive the treatment j.
-
Case \(\mathbf{t }_j(i)={\varvec{e}}_\ell \) where \(~\ell \in {\llbracket 1,m\rrbracket }\cup {\llbracket k-m+1,k\rrbracket }\): \(n_{j,i}=\phi _{j,i}^\ell =1\) and \(\phi _{j,i}^1=\cdots =\phi _{j,i}^{\ell -1}=\phi _{j,i}^{\ell +1}=\cdots =\phi _{j,i}^m=0\).
-
Case \(\mathbf{t }_j(i)={\varvec{e}}_\ell \) where \(~\ell \in {\llbracket m+1,k-m\rrbracket }\): \( n_{j,i}=1\) and \(\phi _{j,i}^1=\cdots =\phi _{j,i}^m=0.\)
If the treatment j is applied to the ith patient at some period \(\ell \) for \(\ell \in {\llbracket 1,k\rrbracket }\) then \(\kappa _{j,i}=p_\ell \). Otherwise, if the treatment j is not applied to the ith patient then \(\kappa _{j,i}=0\). Consequently, we can express the quantity \(\kappa _ {j, i} \) in the following form
From Formulas (28) and (44), we deduce that:
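This case analysis can be illustrated numerically. The sketch below compares \(\kappa _{j,i}=\mathbf{t }_j'(i)\,{\mathbb {M}}\,\mathbf{1 }_k\), computed directly from the row sums of \({\mathbb {M}}\), with the closed form \(a_0^{2}\,n_{j,i}-a_0\sum _{\ell =1}^m a_\ell \,\phi _{j,i}^{\ell }\), which is how we read Identity (46); the design, the coefficients and the entry formula for \({\mathbb {M}}\) are toy assumptions:

```python
# Toy check of kappa_{j,i} = t_j'(i) M 1_k against a closed form.
def gamma(l, lp, theta, k):            # our reading of Proposition 1
    m = len(theta) - 1
    th = lambda x: theta[x] if x <= m else 0.0
    i, j = min(l, lp), max(l, lp)
    if i > k - m:                      # persymmetric extension
        i, j = k - j + 1, k - i + 1
    return sum(th(u) * th(u + j - i) for u in range(i))

theta, k, v = [-1.0, 0.5, -0.3], 7, 8
m = len(theta) - 1
a = [sum(theta[l:]) for l in range(m + 1)]       # a_l = theta_l + ... + theta_m
p = [sum(gamma(l, lp, theta, k) for lp in range(1, k + 1)) for l in range(1, k + 1)]
blocks = [[1, 2, 3, 4, 5, 6, 7],                 # 3 toy blocks on v = 8 treatments:
          [8, 7, 6, 5, 4, 3, 2],                 # each treatment appears at most
          [2, 4, 6, 8, 1, 3, 5]]                 # once per block (binary design)
kappa = {}
for i, blk in enumerate(blocks):
    for j in range(1, v + 1):
        direct = p[blk.index(j)] if j in blk else 0.0   # kappa_{j,i} = p_l or 0
        n = 1 if j in blk else 0                        # n_{j,i}
        phi = [1 if n and (blk[l - 1] == j or blk[k - l] == j) else 0
               for l in range(1, m + 1)]                # phi_{j,i}^l
        closed = a[0] ** 2 * n - a[0] * sum(a[l] * phi[l - 1]
                                            for l in range(1, m + 1))
        assert abs(direct - closed) < 1e-12
        kappa[(j, i)] = direct
```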
Diagonal entry \(\sigma ^{2}\mathbf{C }_{j,j}\): determination of \(\omega _{j,j}\)
Recall that the contribution \(c^{-1}{{\mathcal {B}}} \) to the entry \(\sigma ^2\mathbf{C }_{j,j}\) is the quantity \(\omega _{j,j}=c^{-1}\sum _{i} \kappa _{j,i}^{2}\) (see Identity (43)). From Identity (46), we have:
because \({(\phi _{j,i}^{\ell })}^2=\phi _{j,i}^\ell \) for all \(\ell \in {\llbracket 1,m\rrbracket }\), and \(\phi _{j,i}^{\ell }\phi _{j,i}^{\ell '}=0\) when \(\ell \ne \ell '\). From
we finally obtain:
Extra-diagonal entry \(\sigma ^{2}\mathbf{C }_{j,j'}\): determination of \(\omega _{j,j'}\)
Let us determine the contribution \(\omega _{j,j'}=c^{-1}\sum _{i=1}^{b}\kappa _{j,i}\kappa _{j',i}\), \(j\ne j'\), to the entry \(\sigma ^{2}\mathbf{C }_{j,j'}\). From Identity (45), we have:
From \(\lambda _{j,j'}=\sum _{i=1}^bn_{j,i}n_{j',i}\) (see Identity (2)) and
we finally obtain for \(c\,\omega _{j,j'}=\sum _{i=1}^b\kappa _{j,i}\kappa _{j',i}\)
End of the proof of Proposition 3. From \(\mathbf{C }= {{\mathcal {A}}} - c^{-1}{{\mathcal {B}}}\) (see Identity (32)), we find:
where \(\tau \) and \( \mu \) come from the matrix \({{\mathcal {A}}}\) and \(\omega _{j,j}\) and \(\omega _{j,j'}\) come from the matrix \(c^{-1}{{\mathcal {B}}}\). Using the formulas on \(\tau \), \(\mu \), \(\omega _{j,j}\) and \(\omega _{j,j'}\) respectively established in Identities (37), (42), (47) and (49), we complete the proof of Proposition 3.
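To illustrate the structure of the proof, the following sketch assembles the information matrix \(\mathbf{C }= {{\mathcal {A}}} - c^{-1}{{\mathcal {B}}}\) of Identity (32) for a toy complete-block design and checks that the rows of \(\mathbf{C }\) sum to zero, the property used later in the proof of Theorem 1 (the design, the coefficients and the entry formula for \({\mathbb {M}}\) are assumptions of this sketch):

```python
# Toy assembly of the information matrix C = A - c^{-1} B of Identity (32).
def gamma(l, lp, theta, k):            # our reading of Proposition 1
    m = len(theta) - 1
    th = lambda x: theta[x] if x <= m else 0.0
    i, j = min(l, lp), max(l, lp)
    if i > k - m:                      # persymmetric extension
        i, j = k - j + 1, k - i + 1
    return sum(th(u) * th(u + j - i) for u in range(i))

theta, k, v = [-1.0, 0.4, -0.2], 6, 6
m = len(theta) - 1
M = [[gamma(l, lp, theta, k) for lp in range(1, k + 1)] for l in range(1, k + 1)]
c = sum(sum(row) for row in M)                    # c = 1' M 1
blocks = [[1, 2, 3, 4, 5, 6], [2, 4, 6, 1, 3, 5],
          [3, 6, 2, 5, 1, 4], [5, 3, 1, 6, 4, 2]]
C = [[0.0] * v for _ in range(v)]
for blk in blocks:
    pos = {j: blk.index(j) for j in blk}          # 0-based period of treatment j
    kap = {j: sum(M[pos[j]]) for j in blk}        # kappa_{j,i} = t_j'(i) M 1_k
    for j1 in blk:
        for j2 in blk:                            # A and c^{-1} B contributions
            C[j1 - 1][j2 - 1] += M[pos[j1]][pos[j2]] - kap[j1] * kap[j2] / c
for row in C:                                     # each row of C sums to zero
    assert abs(sum(row)) < 1e-9
```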
1.3 Proof of Theorem 1
Consider \(d \in {\varOmega }_{v,b,k}\) a NNm-balanced BIBD\((v,b,r,k,\lambda )\) for the AR(m) model with \(k \ge 3\), \(m\ge 1\) and \(2m< k < v\) (this proof also holds for CBD when \(k=v\)).
In Remark 1, we deduced from Proposition 3 that all the competing designs have the same trace. Hence, from Proposition 4, the universal optimality of the design d holds when the information matrix \(\mathbf{C }_d\) of \({\widehat{\gamma }}\) is completely symmetric, that is, when its extra-diagonal entries \(\mathbf{C }_{d,j,j'}\) are all independent of \(j,j'\) \((j\ne j')\), since the sum of each row (and each column) of \(\mathbf{C }_d\) is null (see Identities (25)). Under the hypotheses of Theorem 1, we will prove that none of the five summation blocks of \(\mathbf{C }_{d,j,j'}\) appearing in Identity (22) of Proposition 3 depends on \(j,j'\).
As the design d is a NNm-balanced BIBD\((v,b,r,k,\lambda )\), Identities (3) and (4) imply that two of the summation blocks of \(\mathbf{C }_{d,j,j'}\) are independent of \(j,j'\): those depending on \(\lambda =\lambda _{j,j'}\) and \(N^s=N_{j,j'}^s\). Therefore, if Identities (i), (ii) and (iii) of Theorem 1 hold then the three other summation blocks of \(\mathbf{C }_{d,j,j'}\) are independent of \(j,j'\) (see Remark 7 for the case of (iii)).
Remark 7
On the right side of Identity (22), let’s consider the summation block \(\sum _{s=1}^{m-1}\sum _{t=1}^{m-s} {\varTheta }_{t,s} {{\overline{\alpha }}}_{s,t} \) of \(\mathbf{C }_{d,j,j'}\) where \({{\overline{\alpha }}}_{s,t} = N_{j,j',i}^s(\phi _{j,i}^t\phi _{j',i}^{t+s}+ \phi _{j',i}^t\phi _{j,i}^{t+s})\). Let \(\ell \ne \ell '\) in \({\llbracket 1,m\rrbracket }\) and \(\alpha _{\ell ,\ell '}=N_{j,j',i}^{|\ell -\ell '|}(\phi _{j,i}^\ell \phi _{j',i}^{\ell '}+ \phi _{j',i}^\ell \phi _{j,i}^{\ell '})\) be the left-hand side of Identity (iii) in Theorem 1. We claim that:
For the inclusion \(\subseteq \), by the symmetry between \(\ell ,\ell '\) in \(\alpha _{\ell ,\ell '}\), we can suppose that \(\ell < \ell '\) and express \( \alpha _{\ell ,\ell '}\) as follows: \( \alpha _{\ell ,\ell '}= N_{j,j',i}^{|\ell -\ell '|}(\phi _{j,i}^\ell \phi _{j',i}^{\ell +|\ell -\ell '|}+ \phi _{j',i}^\ell \phi _{j,i}^{\ell +|\ell -\ell '|}) \). Then \(\alpha _{\ell ,\ell '} = {{\overline{\alpha }}}_{s,t} \) with \(s=|\ell -\ell '| \in {\llbracket 1,m-1\rrbracket }\) and \(\ell =t \in {\llbracket 1,m-s\rrbracket }\) (as expected in the summation in the expression of \(\mathbf{C }_{d,j,j'}\)). Conversely, let \(s \in {\llbracket 1,m-1\rrbracket }\) and \(t \in {\llbracket 1,m-s\rrbracket }\). Then we have \({{\overline{\alpha }}}_{s,t} = \alpha _{\ell ,\ell '} \) for the two distinct periods \(\ell =t\) and \(\ell '=t+s\) in \({\llbracket 1,m\rrbracket }\).
In the following, we prove Identities (i), (ii) and (iii) of Theorem 1. More precisely, for each identity, we suppose that the term on the left-hand side is constant and prove that it equals the right-hand side. Recall that \(\omega = \frac{2b}{v(v-1)}\).
Proof of Identity (i). For each treatment j, we first need to establish the following identity:
Proof
We expand \(\sum _{j'\ne j}\phi _{j,j'}^{\ell *}\):
The first term of the right-hand side of the previous identity is
by definition of \(\phi _{j}^\ell \) and since each patient i receives k treatments. The second term is
because d is equireplicated (i.e. j appears r times in d) and only 2 treatments \(j'\) can be applied to the same patient i at periods \(\ell \) and \(k-\ell +1\) (i.e. \(\phi _{j',i}^{\ell } =1\) for these two treatments and 0 for the others). Summing \(\alpha \) and \(\beta \), we obtain Identity (51) \(\square \)
From Formulas (51) and (16), we obtain finally:
because \(rv=bk\). Suppose that each \(\phi _{j,j'}^{\ell *}\) does not depend on \(j,j'\). Then we have the equality \(\sum _{j=1}^{v} \displaystyle \sum \nolimits _{j'\ne j}\phi _{j,j'}^{\ell *} = v(v-1)\phi _{j,j'}^{\ell *}\). Thus, from (52), we obtain Identity (i).
Proof of Identity (ii). Consider two distinct periods \(\ell \) and \(\ell '\) and fix a patient i. Four distinct treatments \(j_{1},\ldots ,j_{4}\) are applied to this patient at the respective periods \(\ell ,k-\ell +1,\ell ',k-\ell '+1\). Then \(\phi _{j_{1},i}^\ell =\phi _{j_{2},i}^\ell =\phi _{j_{3},i}^{\ell '}=\phi _{j_{4},i}^{\ell '}=1\) and the other values \(\phi _{j,i}^{\ell }\) and \(\phi _{j',i}^{\ell '}\) are zero; consequently:
and
If the quantity \(\displaystyle \sum _{i=1}^b\phi _{j,i}^\ell \phi _{j',i}^{\ell '}\) does not depend on \(j,j'\) then, by the same reasoning as for (i), we find (\(\delta _{\ell ,\ell '}\) is the Kronecker symbol):
Proof of Identity (iii). With reference to Remark 7, proving Identity (iii), namely \(\alpha _{\ell ,\ell '}= 2 \, \omega \) for \(\ell \ne \ell '\) in \({\llbracket 1,m\rrbracket }\), is equivalent to proving \({{\overline{\alpha }}}_{s,t} =2 \, \omega \) for \(s \in {\llbracket 1,m-1\rrbracket }\) and \(t \in {\llbracket 1,m-s\rrbracket }\). Let us fix \(s \in {\llbracket 1,m-1\rrbracket }\) and \(t \in {\llbracket 1,m-s\rrbracket }\) and prove that \({{\overline{\alpha }}}_{s,t} =2 \, \omega \). By the same reasoning as above, for a patient i, four distinct treatments \(j_{1},\ldots ,j_{4}\) are applied at the respective distinct periods \(t,k-t+1,t+s,k-(t+s)+1\). Then
In this sum, there are 4 cases in which two treatments among \(j_{1},\ldots ,j_{4}\) are applied at distance s, and 4 cases in which two treatments among \(j_{1},\ldots ,j_{4}\) are applied at a distance \(\delta \ge m> s\) (because \(k>2m\)). In the first 4 cases we have \(N_{j,j',i}^s=1\), and in the other 4 cases \(N_{j,j',i}^s=0\). Then
Hence, if each quantity \( {{\overline{\alpha }}}_{s,t} = \sum _{i=1}^{b} N_{j,j',i}^s(\phi _{j,i}^t\phi _{j',i}^{t+s}+ \phi _{j',i}^t\phi _{j,i}^{t+s}) \) does not depend on \(j,j'\) (\(j\ne j'\)), the following identity holds for \(\ell \ne \ell '\) in \({\llbracket 1,m\rrbracket }\):
Then Theorem 1 is proved.
1.4 Proof of Theorem 2
Theorem 2 is a straightforward consequence of the proof of Theorem 1 which also holds for \(k=v\) and of Identities (19) and (20) for the NNm-balanced square designs.
1.5 Proof of Proposition 5
Recall that when the strength is \(t=2\), the index \(\omega _2\) is \(\omega =\frac{2b}{v(v-1)}\) (see Remark 3). Since an SB(b, k, v, 2) can be interpreted as a BIBD(\(v,b, r, k, \lambda )\), Identity (27) comes from the identities \(v (v-1) \, \omega = 2b\) and \(rv = bk\) (see Identity (1)).
Now consider an unordered pair \((j,j')\) of two distinct treatments. For all \(m \in {\llbracket 1,k-1\rrbracket }\), the design d is NNm-balanced because \(N_{d,j,j'}^s\), the number of times that \((j,j')\) are applied to the same patient at distance \(s \in {\llbracket 1,m\rrbracket }\), is a constant \(N_{d}^{s}\). More precisely, consider the \(k-s\) possible pairs of periods \(\ell \) and \(\ell +s\) where \(\ell \) runs in \({\llbracket 1,k-s\rrbracket }\). Since the strength of d is two, we obtain Identity (4): \(N_{d}^{s}=N_{d,j,j'}^s=\omega (k-s)\). To prove the rest of Proposition 5, we use item (a) of Theorem 2 in Martin and Eccleston (1991), which implies that d is universally optimal.
1.6 Proofs of Identities (3), (4), (19) and (20)
Proof of Identity ( 3 )
Let \(\beta = \sum _{j=1}^{v}\sum _{j'\ne j} \lambda _{d,j,j'}\). As d is a BIBD, we have \(\beta = \sum _{j=1}^{v}\sum _{j'\ne j}\lambda = v(v-1)\lambda \). But we can express \(\beta \) differently: \(\beta = bk(k-1)\) because there are b patients and exactly \(k(k-1)\) ordered pairs of distinct treatments for each of them (recall that \(k\le v\)). The identification of the two expressions of \(\beta \) proves the desired identity for \(\lambda \):
Proof of Identity (4)
Assume that the design d is NNm-balanced. Let us fix \(s \in {\llbracket 1,m\rrbracket }\) and compute in two ways the sum
Firstly, as the design is NNm-balanced, each \(N_{d,j,j'}^s\) equals a constant \(N_{d}^{s}\) which does not depend on the choice \(j,j'\). So we have:
Secondly, suppose that some patient i receives a given treatment j. Recall that j is administered at most once to the same patient. For the ith patient, if j is applied neither in the first s nor in the last s periods (i.e. when \(\sum _{\ell =1}^{s}\phi _{d,j,i}^{\ell } =0\)) then there are exactly \(2=\sum _{j'\ne j}N_{d,j,j',i}^s\) treatments at distance s from j. Otherwise, if j is applied in the first s or in the last s periods, then \(\phi _{d,j,i}^{\ell } =1\) for (only) one period \(\ell \in {\llbracket 1,s\rrbracket }\) (i.e. when \(\sum _{\ell =1}^{s}\phi _{d,j,i}^{\ell } =1\)) and there is exactly \(1=\sum _{j'\ne j}N_{d,j,j',i}^s\) treatment at distance s from j. Therefore, in both cases, we obtain
Moreover, as j appears exactly r times in the design d, by considering all patients i,
Summing the above equality over all j and using Identity (16), we obtain this second expression of \(\alpha \):
As \(rv=kb\) (see Identity (1)), the identification of the two expressions of \(\alpha \) yields the desired Identity (4) for \(N_{d}^{s}\):
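Identities (3) and (4) can be checked on a concrete design: the set of all injective k-sequences over v treatments is, by symmetry, a strength-2 design in the sense of Proposition 5, hence a NNm-balanced BIBD. The sketch below (a toy construction, not from the paper) verifies \(\lambda =bk(k-1)/v(v-1)\) and \(N_{d}^{s}=2b(k-s)/v(v-1)\):

```python
# Check of Identities (3) and (4) on the design made of all injective
# k-sequences over v treatments (a toy strength-2 construction).
from itertools import permutations
from collections import Counter

v, k, m = 5, 3, 2
blocks = [list(s) for s in permutations(range(1, v + 1), k)]
b = len(blocks)                                   # here b = 5 * 4 * 3 = 60
lam = Counter()                                   # lambda_{j,j'}
N = {s: Counter() for s in range(1, m + 1)}       # N_{j,j'}^s
for blk in blocks:
    for x in range(k):
        for y in range(x + 1, k):
            pair = frozenset((blk[x], blk[y]))
            lam[pair] += 1
            if y - x <= m:
                N[y - x][pair] += 1
# Identity (3): lambda = b k (k-1) / (v (v-1)), the same for every pair
assert all(lam[q] == b * k * (k - 1) // (v * (v - 1)) for q in lam)
assert len(lam) == v * (v - 1) // 2               # every pair occurs
# Identity (4): N^s = 2 b (k-s) / (v (v-1)) for s = 1, ..., m
for s in range(1, m + 1):
    assert all(N[s][q] == 2 * b * (k - s) // (v * (v - 1)) for q in N[s])
```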
Proof of Identities (19) and (20)
Let d be a NNm-balanced design with \(k=v\) (i.e. the number of periods equals the number of treatments). We also have \(r=b\) because \(rv=kb\). We will prove that for each \(\ell \in {\llbracket 1,m\rrbracket }\) the quantities \(\phi _{d,j}^\ell \) and \(\phi _{d,j,j'}^{\ell *}\) do not depend on the treatments \(j,j'\) (\(j\ne j'\)); we will express these quantities without j and \(j'\).
Let \(s\in \) \({\llbracket 1,m\rrbracket }\). Applying Identity (4), as d is a NNm-balanced design, \(N_{d,j,j'}^s=N_{d}^{s}=2b(k-s)/v(v-1)=2b(v-s)/v(v-1)\) since \(k=v\). Then from (54), we have:
As \(r=b\), the previous equality becomes \(\sum _{\ell =1}^{s} \phi ^\ell _{d,j} = \frac{2bs}{v}\). Then, for each \(s\in {\llbracket 1,m\rrbracket }\), we find:
which is Identity (19). We now prove the second identity. Each treatment is administered at most once to each patient; since moreover \(k=v\), every patient receives each of the v distinct treatments exactly once. That means \(n_{d,j,i}=1\) for all \(j\in {\llbracket 1,v\rrbracket }\). Therefore Identity (48) becomes Identity (20):
and the two identities on NNm-balanced square designs are proved.
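Identity (19) can likewise be checked on a toy square design: for \(k=v\), the set of all v! orderings of the treatments is NNm-balanced by symmetry, and each \(\phi _{d,j}^{\ell }\) should equal 2b/v (the parameters below are toy choices satisfying \(k>2m\)):

```python
# Check of Identity (19) on the square design of all v! orderings (k = v).
from itertools import permutations

v = 5; k = v; m = 2                               # k = v > 2m
blocks = [list(s) for s in permutations(range(1, v + 1))]
b = len(blocks)                                   # b = 5! = 120 and r = b
for j in range(1, v + 1):
    for l in range(1, m + 1):
        # phi_{d,j}^l counts blocks with j at period l or period k - l + 1
        phi = sum(1 for blk in blocks if blk[l - 1] == j or blk[k - l] == j)
        assert phi == 2 * b // v                  # Identity (19): phi = 2b / v
```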
Cite this article
Koné, M., Valibouze, A. Nearest neighbor balanced block designs for autoregressive errors. Metrika 84, 281–312 (2021). https://doi.org/10.1007/s00184-020-00770-6