Nearest neighbor balanced block designs for autoregressive errors

Abstract

In this paper we study the problem of finding neighbor optimal designs for a general correlation structure. We give universal optimality conditions for nearest-neighbor (NN) balanced block designs when observations on the same block are modeled by an autoregressive AR(m) process with arbitrary order m. The cases \(m=1,2\) have been studied by Grondona and Cressie (Sankhyā Indian J Stat Ser A 55(2):267–284, 1993) for AR(2) and by Gill and Shukla (Biometrika 72(3):539–544, 1985a, Commun Stat Theory Methods 14(9):2181–2197, 1985b) and Kunert (Biometrika 74(4):717–724, 1987) for AR(1); we extend these results to the cases \(m \ge 3\).

Notes

  1. We draw attention to an error in their paper concerning one term of their formula (4.9) giving \(\sigma ^{2}c_{l,m}\) for \(l\ne m\): in the term \(2(1-\phi _{1} -\phi _{2})\phi _{2}f^{*}_{l,m}\) of their paper, the factor 2 must be removed. The notations \(e^{*}_{l,m}\) and \(f_{l,m}^{*}\) of their paper correspond to the notations \(\phi _{d,l,m}^{1*}\) and \(\phi _{d,l,m}^{2*}\) of our paper.

  2. Deheuvels and Derzko coined the terms totally balanced for SDEN and SB, and universally balanced for TA.

References

  • Ahmed R, Akhtar M (2009) On construction of one dimensional all order neighbor balanced designs by cyclic shifts. Pak J Statist 25(2):121–126

  • Azzalini A, Giovagnoli A (1987) Some optimal designs for repeated measurements with autoregressive errors. Biometrika 74(4):725–734

  • Benchekroun K (1993) Association-balanced arrays with applications to experimental design. Ph.D. thesis, Dept. of Statistics, The University of North Carolina, Chapel Hill

  • Deheuvels P, Derzko G (1991) Block designs for early-stage clinical trials. Technical report of the laboratory LSTA, Université Paris 6, France. HAL-CNRS Open archives. https://hal.archives-ouvertes.fr/hal-02068964. Accessed 15 Mar 2019

  • Dey A (2010) Incomplete block designs. Indian Statistical Institute, New Delhi

  • Gill PS, Shukla GK (1985) Efficiency of nearest neighbour balanced block designs for correlated observations. Biometrika 72(3):539–544

  • Gill PS, Shukla GK (1985) Experimental designs and their efficiencies for spatially correlated observations in two dimensions. Commun Stat Theory Methods 14(9):2181–2197

  • Grondona MO, Cressie N (1993) Efficiency of block designs under stationary second-order autoregressive errors. Sankhyā Indian J Stat Ser A 55(2):267–284

  • Hedayat AS, Sloane NJA, Stufken J (1999) Orthogonal arrays: theory and applications. Springer, New York

  • Iqbal I, Aman Ullah M, Nasir JA (2006) The construction of second order neighbour designs. J Res (Sci) 17(3):191–199

  • Kiefer J (1975) Balanced block designs and generalized Youden designs. I. Construction (patchwork). Ann Stat 3:109–118

  • Kiefer J (1975) Construction and optimality of generalized Youden designs. In: A survey of statistical design and linear models (Proc. Internat. Sympos., Colorado State Univ., Ft. Collins, Colo., 1973), North-Holland, Amsterdam, pp 333–353

  • Kiefer J, Wynn HP (1981) Optimum balanced block and Latin square designs for correlated observations. Ann Stat 9(4):737–757

  • Koné M, Valibouze A (2011) Plans en blocs incomplets pour la structure de corrélation NN\(m\). Annales de l’ISUP 55(2–3):65–88

  • Kunert J (1985) Optimal repeated measurements designs for correlated observations and analysis by weighted least squares. Biometrika 72(2):375–389

  • Kunert J (1987) Neighbour balanced block designs for correlated errors. Biometrika 74(4):717–724

  • Martin RJ, Eccleston JA (1991) Optimal incomplete block designs for general dependence structures. J Stat Plan Inference 28(1):67–81

  • Morgan JP, Chakravarti IM (1988) Block designs for first and second order neighbor correlations. Ann Stat 16(3):1206–1224

  • Mukhopadhyay AC (1972) Construction of BIBD’s from OA’s combinatorial arrangements analogous to OA’s. Calcutta Stat Assoc Bull 21:45–50

  • Passi RM (1976) A weighting scheme for autoregressive time averages. J Appl Meteorol 15(2):117–119

  • Ramanujacharyulu C (1966) A new general series of balanced incomplete block designs. Proc Am Math Soc 17:1064–1068

  • Rao C (1946) Hypercubes of strength "d" leading to confounded designs in factorial experiments. Bull Calcutta Math Soc 38:67–78

  • Rao C (1947) Factorial experiments derivable from combinatorial arrangements of arrays. J R Stat Soc Suppl 9:128–139

  • Rao C (1961) Combinatorial arrangements analogous to orthogonal arrays. Sankhyā Indian J Stat Ser A 1:283–286

  • Rao CR (1973) Some combinatorial problems of arrays and applications to design of experiments. In: Survey of combinatorial theory (Proc. Internat. Sympos., Colorado State Univ., Ft. Collins, Colo., 1971), North-Holland, Amsterdam, pp 349–359

  • Satpati SK, Parsad R, Gupta VK (2007) Efficient block designs for dependent observations—a computer-aided search. Commun Stat Theory Methods 36(5–8):1187–1223

  • Siddiqui MM (1958) On the inversion of the sample covariance matrix in a stationary autoregressive process. Ann Math Stat 29:585–588

  • Stufken J (1991) Some families of optimal and efficient repeated measurements designs. J Stat Plan Inference 27:75–83

  • Wei WWS (1990) Time series analysis. Univariate and multivariate methods. Addison-Wesley Publishing Company Advanced Book Program, Redwood City, CA

  • Wise J (1955) The autocorrelation function and the spectral density function. Biometrika 42:151–159

  • Yates F (1964) Sir Ronald Fisher and the design of experiments. In Memoriam: Ronald Aylmer Fisher, 1890–1962. Biometrics 20(2):307–321

Acknowledgements

We warmly thank Paul Deheuvels and Pierre Druilhet for their constructive suggestions which have improved the quality of this article. We would also like to thank the reviewer for their work.

Author information

Corresponding author

Correspondence to Annick Valibouze.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.


Appendix: Proofs

This appendix begins with the proof of formula (23) on \(c=\mathbf{1 }_{k}'{\mathbb {M}}\mathbf{1 }_{k}\) in Proposition 3. To establish this formula we need the (essential) technical Lemma 1, also given in Sect. 6.1, on the sum of the entries of a row of the matrix \({\mathbb {M}}\). It is probably the difficulty in establishing this lemma that has long prevented the generalization to arbitrary m of the optimality conditions for the AR(m) process. The next four sections are devoted to the respective proofs of Proposition 3, Theorems 1 and 2 and Proposition 5. We end in Sect. 6.6 with the proofs of Identities (3), (4), (19) and (20).

1.1 Sum c of entries of matrix \({\mathbb {M}}\), Identity (23)

We want to establish formula (23) of Proposition 3. This formula on c, the sum of the entries of \({\mathbb {M}}\), is essentially based on Lemma 1, which gives the sum \(p_\ell \) of the entries of row \(\ell \in {\llbracket 1,k\rrbracket }\); this lemma will also be used to establish Identity (44) in the proof of Proposition 3. We first prove Identity (23) using Lemma 1, which is stated and proved afterwards.

By definition, and since \(p_\ell =p_{k-\ell +1}\) (Lemma 1), \(c =\mathbf{1 }_{k}'{\mathbb {M}}\mathbf{1 }_{k}= \sum _{\ell =1}^{k}\sum _{\ell '=1}^{k}\gamma _{\ell ,\ell '}=2\sum _{\ell =1}^{m}p_{\ell } + \sum _{\ell =m+1}^{k-m}p_{\ell }\). Then, from the formula \(p_{\ell }=a_0(a_0-a_\ell )\) of Lemma 1, we find:

$$\begin{aligned} c= & {} 2 a_{0}\sum _{\ell =1}^{m}(a_{0}-a_{\ell }) + \sum _{\ell =m+1}^{k-m}a_0^2 = 2 a_{0}\sum _{\ell =1}^{m}(\theta _{0}+\cdots +\theta _{\ell -1}) + (k-2m)a_0^2 \quad \\= & {} 2a_0 \sum _{\ell =0}^{m-1}(m-\ell )\theta _\ell \; + \; (k-2m)a_0^2 \quad \end{aligned}$$

because in the sum \(\sum _{\ell =1}^{m}(\theta _{0}+\cdots +\theta _{\ell -1})\) we count m times \(\theta _0\), \(m-1\) times \(\theta _1\), and so on, down to \(\theta _{m-1}\) which is counted only once. Thus Formula (23) is proved. \(\square \)
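
For instance, specializing Formula (23) to the AR(1) case (\(m=1\), so \(a_0=\theta _1-1\)) gives

$$\begin{aligned} c=2(1-\theta _1)+(k-2)(1-\theta _1)^{2}=(1-\theta _1)\big (k-(k-2)\theta _1\big ) \quad \end{aligned}$$

and, for \(m=2\), \(c=2(1-\theta _1-\theta _2)(2-\theta _1)+(k-4)(1-\theta _1-\theta _2)^{2}\).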

Lemma 1

Assume \(k >2m \ge 2\). Let \(p_{\ell }=\displaystyle \sum _{\ell '=1}^k\gamma _{\ell ,\ell '}\) be the sum of the entries of row \(\ell \in {\llbracket 1,k\rrbracket } \) of matrix \({\mathbb {M}}\), \(a_\ell =\displaystyle \sum _{u=\ell }^m\theta _u\;\) for \(\ell \le m\) and \(a_{\ell }=0\;\) for \(\ell >m\). Then:

$$\begin{aligned} p_\ell =a_0(a_0-a_\ell )=(1-\theta _1-\cdots -\theta _m)(1-\theta _1-\cdots -\theta _{\ell -1}) \quad \end{aligned}$$
(28)

for \(\ell \in {\llbracket 1,k-m\rrbracket }\) and, as \({\mathbb {M}}\) is symmetric with respect to its second diagonal, \(p_\ell =p_{k-\ell +1}\;\) for \(\ell \in {\llbracket k-m +1,k\rrbracket }\).

In particular, \(p_\ell =p_{m+1}=a_0^{2}\;\) for \(\ell \in {\llbracket m+1,k-m\rrbracket }\).

Proof

We consider the matrix \({\mathbb {M}}=(\gamma _{\ell ,\ell ^{\prime }})_{1\le \ell ,\ell ^{\prime }\le k}\) and we would like to express the sum \(p_\ell =\sum _{\ell '=1}^k\gamma _{\ell ,\ell '}\) of the entries of row \(\ell \) in the form given in Lemma 1. By the symmetry of \({\mathbb {M}}\) with respect to its second diagonal, we can suppose that \(\ell \in {\llbracket 1,k-m\rrbracket }\). We write \(p_\ell =\alpha _\ell + \beta _\ell \) where \(\alpha _\ell = \sum _{\ell '=\ell }^k\gamma _{\ell ,\ell '}\) and \( \beta _\ell =\sum _{\ell '=1}^{\ell -1}\gamma _{\ell ,\ell '}\).

First we compute \(\alpha _\ell = \sum _{\ell '=\ell }^k\gamma _{\ell ,\ell '}\). From Identity (8) of Proposition 1, we have the following expression of each \(\gamma _{\ell ,\ell '}\) for \(\ell ' \in {\llbracket \ell ,k\rrbracket }\):

$$\begin{aligned} \gamma _{\ell ,\ell + s} = \sum _{u=0}^{\ell -1} \theta _u\theta _{u+s} \quad \text{ for } s \in {\llbracket 0,k-\ell \rrbracket } \quad . \end{aligned}$$
(29)

Then, as \( \alpha _\ell =\sum _{s=0}^{k-\ell } \sum _{u=0}^{\ell -1} \theta _u\theta _{u+s}\), we obtain:

$$\begin{aligned} \alpha _\ell = \sum _{u=0}^{\ell -1} \left( \theta _u \sum _{s=0}^{k-\ell } \theta _{u+s} \right) = \sum _{u=0}^{\ell -1} \left( \theta _u \sum _{b=u}^{k + u-\ell } \theta _{b} \right) = \sum _{u=0}^{\ell -1} \left( \theta _u \sum _{b=u}^{m} \theta _{b} \right) \quad \end{aligned}$$
(30)

because for each \(b>m\) we have \(\theta _b=0\) and for each \(u \in {\llbracket 0,\ell -1\rrbracket }\) we have \(k+u-\ell \ge k - \ell \ge k - (k-m)=m\).

Now consider \(\beta _\ell =\sum _{\ell '=1}^{\ell -1}\gamma _{\ell ,\ell '} = \sum _{\ell '=1}^{\ell -1}\sum _{u=0}^{\ell ' -1}\theta _u\theta _{u+(\ell -\ell ')}\) and search to establish this formula:

$$\begin{aligned} \beta _\ell =\sum _{a=1}^{\ell -1}\theta _a \sum _{b=0}^{a-1}\theta _b \quad . \end{aligned}$$
(31)

The expression \(\beta _\ell = \sum _{\ell '=1}^{\ell -1}\sum _{u=0}^{\ell ' -1}\theta _u\theta _{u+(\ell -\ell ')}\) is a double sum and \(\ell \) is fixed. Let us consider the square matrix \(B=(b_{u,\ell '})\) of size \(\ell -1\) indexed by \(\ell ' \in {\llbracket 1,\ell -1\rrbracket }\) for the columns and by \(u \in {\llbracket 0,\ell -2\rrbracket }\) for the rows. We define \(b_{u,\ell '}\) as follows: \(b_{u,\ell '}= \theta _u\theta _{u+(\ell -\ell ')}\) for \(u\le \ell '-1\), otherwise \(b_{u,\ell '} = 0\) (B is upper triangular). Note that \(\sum _{u=0}^{\ell ' -1}\theta _u\theta _{u+(\ell -\ell ')}\) is both the inner sum of the double sum \(\beta _{\ell }\) and the sum of the entries of column \(\ell '\); thus the sum of all the entries of B is \(\beta _{\ell }\).

To obtain the right member of (31), we will sum the entries for each diagonal of B. As B is upper triangular, each of the sums of the diagonals below the main diagonal is zero; for the \(\ell -1\) upper diagonals, let a be in \({\llbracket 1,\ell -1\rrbracket }\); the sum of the entries of the diagonal at distance \(\ell -1 -a\) from the main diagonal is \(\theta _a \sum _{b=0}^{a-1}\theta _b\). For example, for the main diagonal (\(a=\ell -1\) and the distance is 0), the sum of the entries equals \(\theta _{\ell -1}(\theta _0+\theta _1+\cdots +\theta _{\ell -2})\); for the diagonal just above the main diagonal (\(a=\ell -2\) and the distance is 1), the sum of the entries is \(\theta _{\ell -2}(\theta _0+\theta _1+\cdots +\theta _{\ell -3})\); the last diagonal reduces to the single element \(\theta _1\theta _0\) (\(a=1\) and the distance is \(\ell -2\)). Then (31) is proved.

From (30) and (31), we deduce Formula (28) of Lemma 1:

$$\begin{aligned} p_\ell = \alpha _\ell + \beta _\ell = \sum _{u=0}^{\ell -1} \theta _u \sum _{b=0}^{m} \theta _{b} = a_0(a_0 - a_\ell )\quad \end{aligned}$$

with \(a_\ell =\sum _{b=\ell }^m\theta _b\) for \(\ell \in {\llbracket 1,m\rrbracket }\) and \(a_\ell =0\) for \(\ell >m\). In particular, for \(\ell \in {\llbracket m+1,k-m\rrbracket }\), the formula becomes \(p_{\ell }=p_{m+1}=a_0^2=(1-\theta _1-\cdots -\theta _m)^2 \). Consequently, Lemma 1 is proved \(\square \)
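
Lemma 1 and Formula (23) can also be checked numerically. The following small Python sketch builds \({\mathbb {M}}\) from Identity (29) for the rows \(1,\ldots ,k-m\) and completes it using the symmetry of \({\mathbb {M}}\) and its symmetry with respect to the second diagonal; the coefficients \(\theta =(0.4,-0.2,0.1)\) and the function name build_M are arbitrary illustrative choices.

```python
import numpy as np

def build_M(theta, k):
    """M = sigma^2 V^{-1} for a block of length k, theta = (theta_1, ..., theta_m)."""
    m = len(theta)
    th = np.zeros(k)                     # th[u] = theta_u, with theta_u = 0 for u > m
    th[0] = -1.0                         # theta_0 = -1
    th[1:m + 1] = theta
    M = np.zeros((k, k))
    for l in range(1, k - m + 1):        # rows 1, ..., k-m (1-indexed)
        for s in range(0, k - l + 1):
            g = sum(th[u] * th[u + s] for u in range(l))   # Identity (29)
            M[l - 1, l - 1 + s] = g
            M[l - 1 + s, l - 1] = g                        # symmetry of M
    for l in range(k - m + 1, k + 1):    # bottom-right corner: symmetry of M
        for lp in range(k - m + 1, k + 1):                 # w.r.t. its second diagonal
            M[l - 1, lp - 1] = M[k - lp, k - l]
    return M

theta = np.array([0.4, -0.2, 0.1])       # an AR(3) example, m = 3
m, k = len(theta), 9                     # k > 2m as required
M = build_M(theta, k)

a0 = -1.0 + theta.sum()                                        # a_0
a = np.array([theta[l - 1:].sum() for l in range(1, m + 1)])   # a_1, ..., a_m
p = M.sum(axis=1)                                              # row sums p_l

assert np.allclose(p[:m], a0 * (a0 - a))         # Lemma 1 for l = 1, ..., m
assert np.allclose(p[m:k - m], a0 ** 2)          # Lemma 1 for l = m+1, ..., k-m
assert np.allclose(p, p[::-1])                   # p_l = p_{k-l+1}

# Formula (23): c = 2 a_0 sum_{l=0}^{m-1} (m-l) theta_l + (k-2m) a_0^2, theta_0 = -1
c = 2 * a0 * (-m + sum((m - l) * theta[l - 1] for l in range(1, m))) \
    + (k - 2 * m) * a0 ** 2
assert np.isclose(M.sum(), c)
```

With the values above, the assertions on the row sums hold and one finds \(c=4.83\).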

1.2 Proof of Proposition 3 on entries of the information matrix

As the design d is fixed in \({\varOmega }_{v, b, k}\), it will be omitted in the indices. In Sect. 6.1, we have already established Identity (23) on c. We still have to establish Identities (21) and (22) about the entries \(\sigma ^{2}\mathbf{C }_{j,j}\) and \(\sigma ^{2}\mathbf{C }_{j,j'}\) (\(j\ne j'\)) of the matrix \(\sigma ^{2}\mathbf{C }_{d}\). The information matrix is given by Identity (15) rewritten below:

$$\begin{aligned} \sigma ^{2}\mathbf{C }_{d}= \displaystyle \sum _{i=1}^{b} T_{i}'\, {\mathbb {M}} \, T_{i} - c^{-1} \sum _{i=1}^{b} T_{i}'\,{\mathbb {M}} \,\mathbf{1 }_k \mathbf{1 }_k'\, {\mathbb {M}} \, T_{i} = {{\mathcal {A}}} - c^{-1}{{\mathcal {B}}}\quad \end{aligned}$$
(32)

where \(T_{i}=(\mathbf{t }_1(i), \ldots , \mathbf{t }_v(i))\), \({{\mathcal {A}}}=\sum _{i=1}^{b} T_{i}'\, {\mathbb {M}} \, T_{i} \) and \({{\mathcal {B}}}=\sum _{i=1}^{b} T_{i}'\,{\mathbb {M}} \, \mathbf{1 }_k \mathbf{1 }_k'\, {\mathbb {M}} \, T_{i}\). The entries \(\gamma _{\ell ,\ell '}\) of the matrix \({\mathbb {M}}=\sigma ^{2}V^{-1}\) are described in Proposition 1. We will find:

$$\begin{aligned} \mathbf{C }_{j,j} = \tau - \omega _{j,j} \quad \text { and } \quad \mathbf{C }_{j,j'} = \mu - \omega _{j,j'} \end{aligned}$$

where \(\tau \) and \( \mu \) come from \({{\mathcal {A}}}\) and \(\omega _{j,j}\) and \(\omega _{j,j'}\) come from \( c^{-1}{{\mathcal {B}}}\). In the following, we will look for formulas on \(\tau \), \(\mu \), \(\omega _{j,j}\) and \(\omega _{j,j'}\) by first considering the matrix \({{\mathcal {A}}}\) and then the matrix \(c^{-1}{{\mathcal {B}}}\). Before that, we introduce some necessary tools.

Preliminary notations and remarks

For \(r\in {\llbracket 1,k\rrbracket }\), \({\varvec{e}}_r=({\varvec{e}}_{r,s})_{1\le s \le k}\) denotes the r-th canonical vector of \({\mathbb {R}}^k\), i.e. \({\varvec{e}}_{r,s}=\delta _{r,s}\) (where \(\delta \) is the Kronecker symbol). Note that each entry of a \(k\times k\)-matrix A is expressed in the form \(A_{r,s}={\varvec{e}}_r' \, A \, {\varvec{e}}_{s}\).

For each treatment \(j\in {\llbracket 1,v\rrbracket }\), the jth column vector \(\mathbf{t }_j(i)\) of the matrix \(T_i\) defined in (11) can be expressed as follows: for each \(i \in {\llbracket 1,b\rrbracket }\), we set

$$\begin{aligned} \mathbf{t }_j(i) =\left\{ \begin{array}{l} {\varvec{e}}_\ell \quad \text { if } j \text { is applied to } i\text { -th patient at period } \ell \in {\llbracket 1,k\rrbracket } \\ \mathbf{0 }_{k} \quad \text{ otherwise. } \end{array} \right. \end{aligned}$$
(33)

Hence, following the notations of Sect. 3.3, for each period \(\ell \in {\llbracket 1,k-m\rrbracket }\), we find

$$\begin{aligned} \phi _{j}^\ell =\left\{ \begin{array}{ll} \# \{i : \mathbf{t }_j(i)\in \{{\varvec{e}}_\ell ,{\varvec{e}}_{k-\ell +1} \} \} &{} \quad \text{ if } \ell \in {\llbracket 1,m\rrbracket } \\ \\ \# \{i : \mathbf{t }_j(i)={\varvec{e}}_\ell \} &{} \quad \text{ if } \ell \in {\llbracket m+1,k-m\rrbracket } \quad . \end{array} \right. \end{aligned}$$
(34)

Remark 4

For each treatment j, exactly r vectors \(\mathbf{t }_j(i)\) are non-zero because exactly r patients receive the treatment j.

Remark 5

As the designs we consider in this paper are binary, each patient i receives the same treatment j at most once; consequently, for each \(\ell \in {\llbracket 1,m\rrbracket }\) and because \(\ell \ne k-\ell +1\) since \(k>2m\), we have:

$$\begin{aligned} \{i : \mathbf{t }_j(i)={\varvec{e}}_\ell \}\cap \{i : \mathbf{t }_j(i)= {\varvec{e}}_{k-\ell +1} \}=\emptyset . \end{aligned}$$

Now we fix two distinct treatments \(j,j'\) in \({\llbracket 1,v\rrbracket }\). To establish Identities (21) and (22) about \(\sigma ^{2}\mathbf{C }_{j,j}\) and \(\sigma ^{2}\mathbf{C }_{j,j'}\) of the matrix \(\sigma ^{2}\mathbf{C }_{d}\), we will examine separately the contributions of each of the two sums of the right member of (32), namely \({{\mathcal {A}}}\) (for \(\tau \) and \(\mu \)) and \({{\mathcal {B}}}\) (for \(\omega _{j,j}\) and \(\omega _{j,j'}\)). Then we will complete the proof.

Diagonal entry \(\sigma ^{2}\mathbf{C }_{j,j}\) : determination of \(\tau \)

The contribution of \({{\mathcal {A}}}\) to the diagonal entry \(\sigma ^{2}\mathbf{C }_{j,j}\) is the value \(\tau =\sum _{i=1}^{b} \tau _{i}\) where \(\tau _{i}=\mathbf{t }_j'(i) \, {\mathbb {M}} \, \mathbf{t }_j(i)\). From Definition (33) of the vectors \(\mathbf{t }_j(i)\), summing over the patients, we have:

$$\begin{aligned} \tau = \displaystyle \sum _{\ell =1}^k\sum _{\{i ~:~ \mathbf{t }_j(i)={\varvec{e}}_\ell \}}{\varvec{e}}_\ell ' \, {\mathbb {M}} \, {\varvec{e}}_\ell = \sum _{\ell =1}^m\phi _j^\ell \, {\varvec{e}}_\ell ' \, {\mathbb {M}} \, {\varvec{e}}_\ell + \sum _{\ell =m+1}^{k-m}\phi _j^\ell \, {\varvec{e}}_\ell ' \, {\mathbb {M}} \, {\varvec{e}}_\ell . \end{aligned}$$
(35)

Combining the above identity (35) and Identity (29) applied to the diagonal entries \(\gamma _{\ell ,\ell }={\varvec{e}}_\ell ' \, {\mathbb {M}} \, {\varvec{e}}_\ell \) of matrix \({\mathbb {M}}\), we obtain (recall that \(\theta _0=-1\)):

$$\begin{aligned} \tau= & {} \sum _{\ell =1}^m \phi _j^\ell \, (\theta _0^2+\theta _1^2+\cdots +\theta _{\ell -1}^2)+ \sum _{\ell =m+1}^{k-m}\phi _j^\ell \, (\theta _0^2+\theta _1^2+\cdots +\theta _{m}^2)\nonumber \\= & {} \phi _j^1 \, \theta _0^{2}+\phi _j^2 \, (\theta _0^{2}+\theta _1^2)+\cdots + \phi _j^{m} \, (\theta _0^{2}+\theta _1^2+\cdots + \theta _{m-1}^2) \nonumber \\&+ \sum _{\ell =m+1}^{k-m}\phi _j^\ell (\theta _0^{2}+\theta _1^2+\cdots +\theta _{m}^2) \nonumber \\= & {} \theta _0^{2}\sum _{\ell =1}^{k-m}\phi _j^\ell +\theta _1^2\sum _{\ell =2}^{k-m}\phi _j^\ell +\theta _2^2\sum _{\ell =3}^{k-m}\phi _j^\ell + \cdots + \theta _{m}^2\sum _{\ell =m+1}^{k-m}\phi _j^\ell \quad . \end{aligned}$$
(36)

As \(\sum _{\ell =1}^{k-m}\phi _{j}^\ell =r\) (see Identity (17)), we get:

$$\begin{aligned} \tau= & {} \theta _0^{2} r + \theta _1^2(r-\phi _j^1) + \theta _2^2(r-(\phi _j^1 + \phi _j^2)) +\cdots + \theta _{m}^2(r-(\phi _j^1+\cdots +\phi _j^{m})) \\= & {} r\sum _{u=0}^m\theta _u^2- \phi _j^1\sum _{u=1}^m\theta _u^2 - \phi _j^2\sum _{u=2}^m\theta _u^2 -\cdots -\phi _j^{\ell }\sum _{u=\ell }^m\theta _u^2- \cdots - \phi _j^{m}\theta _{m}^2 . \end{aligned}$$

Finally, the contribution \(\tau \) of the term \({{\mathcal {A}}}\) to the diagonal entry \(\sigma ^{2}\mathbf{C }_{j,j}\) is:

$$\begin{aligned} \tau= & {} r b_{0} - \phi _j^1 b_{1} - \phi _j^2 b_{2} - \cdots - \phi _j^{m}b_{m} \end{aligned}$$
(37)

with \(b_\ell =\displaystyle \sum _{u=\ell }^m\theta _u^{2}\) for \(\ell \in {\llbracket 1,m\rrbracket }\), as defined in Proposition 3.
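
For instance, for \(m=1\) and \(m=2\), Formula (37) reads respectively (with \(b_{0}=\theta _0^{2}+\cdots +\theta _m^{2}=1+\theta _1^{2}+\cdots +\theta _m^{2}\)):

$$\begin{aligned} \tau = r(1+\theta _1^{2})-\phi _j^1\,\theta _1^{2} \quad \text{ and } \quad \tau = r(1+\theta _1^{2}+\theta _2^{2})-\phi _j^1(\theta _1^{2}+\theta _2^{2})-\phi _j^2\,\theta _2^{2} \quad . \end{aligned}$$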

Extra-diagonal entry \(\sigma ^{2}\mathbf{C }_{j,j'}\) : determination of \(\mu \)

Similarly, let us now focus on the contribution \(\mu \) of the sum \({{\mathcal {A}}}=\sum _{i=1}^{b} T_{i}'\, {\mathbb {M}}\, T_{i}\) to the extra-diagonal entry \(\sigma ^{2}\mathbf{C }_{j,j'}\), where \(\mu =\sum _{i=1}^{b} \mu _{i}\) with \(\mu _{i}=\mathbf{t }_j'(i){\mathbb {M}}\mathbf{t }_{j'}(i)\). For this purpose, we need to introduce the following notation: for \(\ell ,\ell ' \in {\llbracket 1,k\rrbracket }\), we denote by \(\phi _{j,j'}^{\ell ,\ell '}\) the number of patients who receive the distinct treatments j and \(j'\) at periods \(\ell ,\ell '\):

$$\begin{aligned} \phi _{j,j'}^{\ell ,\ell '}=\# \{i \in {\llbracket 1,b\rrbracket }~:~ \mathbf{t }_j(i)+\mathbf{t }_{j'}(i)={\varvec{e}}_\ell + {\varvec{e}}_{\ell '} \}. \end{aligned}$$
(38)

Note that if \(\ell '=\ell \) then \(\mathbf{t }_j(i)+\mathbf{t }_{j'}(i)\ne {\varvec{e}}_\ell + {\varvec{e}}_{\ell '}\) for each patient i because the distinct treatments j and \(j'\) cannot be applied simultaneously to the same patient i at the same period \(\ell \). Hence, for any \(s \in {\llbracket 1,k-1\rrbracket }\), we can write:

$$\begin{aligned} N_{j,j'}^s=\sum \phi _{j,j'}^{\ell ,\ell '} \end{aligned}$$
(39)

where the sum runs over the pairs of distinct periods \(\ell <\ell '\) in \({\llbracket 1,k\rrbracket }\) such that \(\ell '-\ell =s\).

From the definition (33) of the vectors \(\mathbf{t }_j(i)\), as \(j\ne j'\), the only non-zero \(\mu _{i}=\mathbf{t }_j'(i){\mathbb {M}}\mathbf{t }_{j'}(i)\) are such that \(\mathbf{t }_j(i)+\mathbf{t }_{j'}(i)={\varvec{e}}_\ell + {\varvec{e}}_{\ell '}\) for some periods \(\ell \) and \(\ell '\), which are necessarily distinct. Moreover, as the matrix \({\mathbb {M}}\) is symmetric, when the identity \(\mathbf{t }_j(i)+\mathbf{t }_{j'}(i)={\varvec{e}}_\ell + {\varvec{e}}_{\ell '}\) holds, we can suppose that \(\mathbf{t }_j(i)= {\varvec{e}}_\ell \) and \(\mathbf{t }_{j'}(i)={\varvec{e}}_{\ell '}\) with \(\ell <\ell '\).

Hence, by putting \(u_{i,j}=\mathbf{t }_j(i)+\mathbf{t }_{j'}(i)\), \(v_{\ell ,\ell '}={\varvec{e}}_\ell + {\varvec{e}}_{\ell '}\) and considering the element \(\gamma _{\ell ,\ell '}= {\varvec{e}}_\ell ' \, {\mathbb {M}} \, {\varvec{e}}_{\ell '}\) of the matrix \({\mathbb {M}}\), we obtain:

$$\begin{aligned} \mu = \sum _{1\le \ell<\ell '\le k} \sum _{\{i \, : \, u_{i,j}=v_{\ell ,\ell '}\}}{\varvec{e}}_\ell ' \, {\mathbb {M}} \, {\varvec{e}}_{\ell '} = \sum _{1\le \ell < \ell ' \le k}\gamma _{\ell ,\ell '}\phi _{j,j'}^{\ell ,\ell '}=\sum _{\ell '=2}^{k}\sum _{\ell =1}^{\ell '-1}\gamma _{\ell ,\ell '}\phi _{j,j'}^{\ell ,\ell '} \quad . \end{aligned}$$

For the sake of clarity, we write \(\phi ^{\ell ,\ell '}=\phi _{j,j'}^{\ell ,\ell '}\) for the rest of this proof. We introduce in the expression of \(\mu \) the values of the entries \(\gamma _{\ell ,\ell '}\) of the matrix \({\mathbb {M}}\) given in Proposition 1. Collecting the factors of each \(\theta _\ell \) and \(\theta _{\ell }\theta _{\ell '}\), we obtain:

$$\begin{aligned} \mu= & {} -~~ \theta _1\Big (\phi ^{1,2}+\cdots +\phi ^{k-1,k}\Big )-\cdots -\theta _s\Big (\phi ^{\ell ,\ell +s}+\phi ^{\ell +1,\ell +s+1}+\cdots + \phi ^{k-s,k}\Big ) \nonumber \\&-\cdots - \theta _m\Big (\phi ^{1,m+1}+\cdots +\phi ^{k-m,k}\Big ) \nonumber \\&+ \sum _{s=1}^{m-1}\theta _1\theta _{1+s}\Big (\phi ^{2,2+s}+\cdots +\phi ^{k-s-1,k-1}\Big ) \nonumber \\&+~~ \sum _{s=1}^{m-2}\theta _2\theta _{2+s}\Big ( \phi ^{3,3+s}+\cdots +\phi ^{k-s-2,k-2}\Big ) + \cdots \nonumber \\&+~~ \sum _{s=1}^{m-u}\theta _u\theta _{u+s}\Big ( \phi ^{u+1,u+1+s}+ \phi ^{u+2,u+2+s} +\cdots +\phi ^{k-s-u,k-u}\Big ) \nonumber \\&+ \cdots +~~ \sum _{s=1}^{2}\theta _{m-2}\theta _{m-2+s}\Big ( \phi ^{m-1,m-1+s} +\cdots +\phi ^{k-s-(m-2),k-(m-2)}\Big )\nonumber \\&+~~ \theta _{m-1}\theta _{m}\Big ( \phi ^{m,m+1} +\cdots +\phi ^{k-m,k-m+1}\Big ) . \end{aligned}$$
(40)

Recall that Identity (39) says that \(N_{j,j'}^s=\phi ^{1,1+s}+\phi ^{2,2+s} +\cdots +\phi ^{k-s-1,k-1}+ \phi ^{k-s,k}\). Putting

$$\begin{aligned} U_{t,s}=\phi ^{t,t+s}+\phi ^{k-t-s+1,k-t+1} \; , \end{aligned}$$

for \(s\in {\llbracket 1,m-1\rrbracket }\) and \(t\in {\llbracket 1,m-s\rrbracket }\), the expression (40) of \(\mu \) becomes:

$$\begin{aligned} \mu= & {} \sum _{s=1}^m\theta _0\theta _sN_{j,j'}^s + \sum _{s=1}^{m-1}\theta _1\theta _{1+s}(N_{j,j'}^s-U_{1,s})\nonumber \\&+ \sum _{s=1}^{m-2}\theta _2\theta _{2+s}(N_{j,j'}^s-(U_{1,s}+U_{2,s}))+ \cdots \nonumber \\&+ \sum _{s=1}^{m-u}\theta _u\theta _{u+s}(N_{j,j'}^s-( U_{1,s}+U_{2,s}+\cdots + U_{u,s})) + \ldots \nonumber \\&+ \sum _{s=1}^{2}\theta _{m-2}\theta _{m-2+s}(N_{j,j'}^s- (U_{1,s}+U_{2,s}+\cdots + U_{m-2,s})) \nonumber \\&+~~\theta _{m-1}\theta _{m}(N_{j,j'}^1-(U_{1,1}+U_{2,1}+\cdots + U_{m-1,1})). \end{aligned}$$
(41)

Collecting the factors of each \(N_{j,j'}^s\) and each \(U_{t,s}\), we obtain:

$$\begin{aligned} \mu = \sum _{s=1}^{m}N_{j,j'}^s\sum _{u=0}^{m-s}\theta _u\theta _{u+s} -\sum _{s=1}^{m-1}\sum _{t=1}^{m-s}U_{t,s}\sum _{u=t}^{m-s}\theta _u\theta _{u+s} \quad . \end{aligned}$$

Indeed, for each \(s\in {\llbracket 1,m-1\rrbracket }\) and \(t\in {\llbracket 1,m-s\rrbracket }\), the component \(\beta _{t,s}\) of \(\mu \) which collects the terms of the form \(U_{t,s}\theta _a\theta _b\) is the following:

$$\begin{aligned} \beta _{t,s}=-U_{t,s}(\theta _t\theta _{t+s}+\theta _{t+1}\theta _{t+1+s}+\cdots +\theta _{m-s}\theta _m) \; . \end{aligned}$$

In addition, the double sum \(\displaystyle \sum _{s=1}^{m-1}\sum _{t=1}^{m-s}\beta _{t,s}\) collects all the terms of the form \(U_{t,s}\theta _a\theta _b\) of the right-hand side of Identity (41). In order to complete the determination of \(\mu \), note that:

Remark 6

We have \(\phi _{j,i}^\ell = \delta _{j, d(i,\ell )} + \delta _{j, d(i,k-\ell +1)}\) and \(N_{j,j',i}^s \in \{0,1\}\) because d is binary (see Sect. 3.3); then, from Identity (38) about \(\phi _{j,j'}^{\ell ,\ell '}\), we find:

$$\begin{aligned} U_{t,s}= & {} \phi _{j,j'}^{t,t+s}+\phi _{j,j'}^{k-t-s+1,k-t+1}\\= & {} \#\Big \{i: \mathbf{t }_j(i)+\mathbf{t }_{j'}(i)\in \{{\varvec{e}}_t+{\varvec{e}}_{t+s}, {\varvec{e}}_{k-t+1}+{\varvec{e}}_{k-(t+s)+1} \} \Big \}\\= & {} \sum _{i=1}^{b}N_{j,j',i}^s(\phi _{j,i}^t\phi _{j',i}^{t+s}+ \phi _{j',i}^t\phi _{j,i}^{t+s}). \end{aligned}$$

Finally, the contribution \(\mu \) of the term \({{\mathcal {A}}}=\sum _{i=1}^{b} T_{i}'\, {\mathbb {M}}\, T_{i}\) to the entry \(\sigma ^{2}\mathbf{C }_{j,j'}\) is:

$$\begin{aligned} \mu= & {} \sum _{s=1}^{m}N_{j,j'}^s {\varTheta }_{0,s} - \sum _{s=1}^{m-1}\sum _{t=1}^{m-s} {\varTheta }_{t,s} \sum _{i=1}^{b}N_{j,j',i}^s(\phi _{j,i}^t\phi _{j',i}^{t+s}+ \phi _{j',i}^t\phi _{j,i}^{t+s}) \quad \end{aligned}$$
(42)

where \({\varTheta }_{t,s}=\theta _t\theta _{t+s}+\theta _{t+1}\theta _{t+1+s}+\cdots +\theta _{m-s}\theta _m\).
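
For instance, in the AR(2) case (\(m=2\)), we have \({\varTheta }_{0,1}=\theta _0\theta _1+\theta _1\theta _2=\theta _1(\theta _2-1)\), \({\varTheta }_{0,2}=\theta _0\theta _2=-\theta _2\) and \({\varTheta }_{1,1}=\theta _1\theta _2\), so that Formula (42) specializes to:

$$\begin{aligned} \mu = \theta _1(\theta _2-1)N_{j,j'}^1-\theta _2\,N_{j,j'}^2 -\theta _1\theta _2\sum _{i=1}^{b}N_{j,j',i}^1(\phi _{j,i}^1\phi _{j',i}^{2}+ \phi _{j',i}^1\phi _{j,i}^{2}) \quad . \end{aligned}$$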

Introduce the following notation \(\kappa _{j_{1},i}\) for some treatment \(j_{1}\) and some patient i:

$$\begin{aligned} \kappa _{j_{1},i} = \mathbf{t }_{j_{1}}'(i) \,{\mathbb {M}} \, \mathbf{1 }_k. \end{aligned}$$

As \( \kappa _{j_{1},i}\) is a scalar and \({\mathbb {M}}={\mathbb {M}}'\) (i.e. \({\mathbb {M}}\) is symmetric), we also have:

$$\begin{aligned} \kappa _{j_{1},i} = \mathbf{1 }_k'\,{\mathbb {M}} \, \mathbf{t }_{j_{1}}(i)\quad . \end{aligned}$$

Then the contribution of \(c^{-1}{{\mathcal {B}}}\) to the entry \(\sigma ^{2}\mathbf{C }_{j_{1},j_{2}}\) for two treatments \(j_{1},j_{2}\), not necessarily distinct, is

$$\begin{aligned} \omega _{j_{1},j_{2}}=c^{-1}\sum _{i=1}^{b} \kappa _{j_{1},i} \kappa _{j_{2},i}. \end{aligned}$$
(43)

In the following, we determine the quantities \( \kappa _{j,i}=\mathbf{t }_j'(i)\, {\mathbb {M}} \, \mathbf{1 }_k\) to find \(\omega _{j_{1},j_{2}}\).

When the treatment j is not applied to the ith patient, \(\kappa _{j,i}=0\) because \(\mathbf{t }_j(i)=\mathbf{0 }_{k}\). Otherwise, it is applied exactly once, at some period \(\ell \), and we have

$$\begin{aligned} \kappa _{j,i}=\mathbf{t }_j'(i){\mathbb {M}}\mathbf{1 }_k=\displaystyle \sum _{\ell '=1}^k\gamma _{\ell ,\ell '} \quad . \end{aligned}$$

Recall that the sum of the entries of row \(\ell \) in matrix \({\mathbb {M}}\) is given in Lemma 1: for each \(\ell \in {\llbracket 1,k-m\rrbracket }\), the value \(p_\ell =\displaystyle \sum \nolimits _{\ell '=1}^k\gamma _{\ell ,\ell '}=a_0(a_0-a_\ell )\) (with \(a_\ell =\displaystyle \sum \nolimits _{u=\ell }^m\theta _u\) for \(\ell \in {\llbracket 1,m\rrbracket }\) and \(a_{\ell }=0\) for \(\ell >m\)) and \(p_\ell =p_{k-\ell +1}\) for \(\ell \in {\llbracket k-m+1,k\rrbracket }\). Remark that \(p_{\ell }=p_{m+1}=a_0^{2}\) for \(\ell \in {\llbracket m+1,k-m\rrbracket }\). Thus, for all \(\ell \in {\llbracket 1,m\rrbracket }\):

$$\begin{aligned} p_{\ell }-p_{m+1}=-a_0a_\ell . \end{aligned}$$
(44)

Now, let’s determine the values of \(n_{j,i}\), defined in Sect. 2.1, and \(\phi _{j,i}^\ell \) for all \(\ell \in {\llbracket 1,m\rrbracket }\). Recall that \(\mathbf{t }_j(i)={\varvec{e}}_\ell \) if the treatment j is applied to the ith patient at period \(\ell \) and \(\mathbf{t }_j(i)=\mathbf{0 }_{k}\) otherwise.

  • Case \( \mathbf{t }_j(i)=\mathbf{0 }_{k}\): \(n_{j,i}=\phi _{j,i}^1=\cdots =\phi _{j,i}^m=0\) because the ith patient does not receive the treatment j.

  • Case \(\mathbf{t }_j(i)={\varvec{e}}_\ell \) where \(~\ell \in {\llbracket 1,m\rrbracket }\cup {\llbracket k-m+1,k\rrbracket }\): \(n_{j,i}=1\), \(\phi _{j,i}^{\ell '}=1\) for \(\ell '=\min (\ell ,k-\ell +1)\in {\llbracket 1,m\rrbracket }\), and \(\phi _{j,i}^{\ell ''}=0\) for every other \(\ell ''\in {\llbracket 1,m\rrbracket }\).

  • Case \(\mathbf{t }_j(i)={\varvec{e}}_\ell \) where \(~\ell \in {\llbracket m+1,k-m\rrbracket }\): \( n_{j,i}=1\) and \(\phi _{j,i}^1=\cdots =\phi _{j,i}^m=0.\)

If the treatment j is applied to the ith patient at some period \(\ell \) for \(\ell \in {\llbracket 1,k\rrbracket }\) then \(\kappa _{j,i}=p_\ell \). Otherwise, if the treatment j is not applied to the ith patient then \(\kappa _{j,i}=0\). Consequently, we can express the quantity \(\kappa _ {j, i} \) in the following form

$$\begin{aligned} \kappa _{j,i}= & {} p_{m+1}n_{j,i}+\phi _{j,i}^1(p_1-p_{m+1})+\phi _{j,i}^2(p_2-p_{m+1})+\cdots \nonumber \\&\cdots + \; \phi _{j,i}^m(p_m-p_{m+1}). \end{aligned}$$
(45)

From Formulas (28) and (44), we deduce that:

$$\begin{aligned} \kappa _{j,i}= & {} a_0 (a_0n_{j,i}-a_1\phi _{j,i}^1-a_2\phi _{j,i}^2-\cdots -a_m\phi _{j,i}^m) \nonumber \\= & {} a_0\left( a_0n_{j,i}-\sum _{\ell =1}^ma_\ell \phi _{j,i}^\ell \right) . \end{aligned}$$
(46)

Diagonal entry \(\sigma ^{2}\mathbf{C }_{j,j}\): determination of \(\omega _{j,j}\)

Recall that the contribution of \(c^{-1}{{\mathcal {B}}} \) to the entry \(\sigma ^2\mathbf{C }_{j,j}\) is the quantity \(\omega _{j,j}=c^{-1}\sum _{i} \kappa _{j,i}^{2}\) (see Identity (43)). From Identity (46), we have:

$$\begin{aligned} \kappa _{j,i}^2=a_0^2\left\{ a_0^2n_{j,i}^2+ \sum _{\ell =1}^m a_\ell ^2\phi _{j,i}^\ell -2a_0 \sum _{\ell =1}^m a_\ell n_{j,i}\phi _{j,i}^\ell \right\} \end{aligned}$$

because \({(\phi _{j,i}^{\ell })}^2=\phi _{j,i}^\ell \) \(\forall \) \(\ell \in {\llbracket 1,m\rrbracket }\), and when \(\ell \ne \ell '\), \(\phi _{j,i}^{\ell }\phi _{j,i}^{\ell '}=0\). From

$$\begin{aligned} \phi _j^\ell =\displaystyle \sum _{i=1}^b\phi _{j,i}^\ell =\sum _{i=1}^b n_{j,i}\phi _{j,i}^\ell \quad \text { and } \quad r=\displaystyle \sum _{i=1}^bn_{j,i}^2, \end{aligned}$$

we finally obtain:

$$\begin{aligned} c \, \omega _{j,j}= \displaystyle \sum _{i=1}^b\kappa _{j,i}^2= & {} a_0^2\left\{ a_0^2r+ \displaystyle \sum _{\ell =1}^m a_\ell ^2\phi _j^\ell -2a_0 \sum _{\ell =1}^m a_\ell \phi _j^\ell \right\} \nonumber \\= & {} - a_0^2\left\{ a_0(a_{0} -2a_{0})r - \sum _{\ell =1}^m\phi _j^\ell a_\ell (a_\ell -2a_0)\right\} . \end{aligned}$$
(47)

Extra-diagonal entry \(\sigma ^{2}\mathbf{C }_{j,j'}\): determination of \(\omega _{j,j'}\)

Let us determine the contribution \(\omega _{j,j'}=c^{-1}\sum _{i=1}^{b}\kappa _{j,i}\kappa _{j',i}\), \(j\ne j'\), to the entry \(\sigma ^{2}\mathbf{C }_{j,j'}\). From Identity (46), we have:

$$\begin{aligned} \kappa _{j,i}\kappa _{j',i}= & {} a_0^{2}\left( a_0n_{j,i}- \sum _{\ell =1}^m a_\ell \phi _{j,i}^\ell \right) \left( a_0n_{j',i}- \sum _{\ell '=1}^m a_{\ell '}\phi _{j',i}^{\ell '} \right) \\= & {} a_0^2 \left\{ a_0^2n_{j,i}n_{j',i} - a_0 \left( \sum _{\ell =1}^m a_{\ell }n_{j,i}\phi _{j',i}^{\ell } +\sum _{\ell =1}^m a_\ell n_{j',i}\phi _{j,i}^\ell \right) \right. \\&+ \left. \sum _{\ell =1}^m\sum _{{\ell '}=1}^{m} a_{\ell }a_{\ell '} \phi _{j,i}^\ell \phi _{j',i}^{\ell '}\right\} . \end{aligned}$$

From \(\lambda _{j,j'}=\sum _{i=1}^bn_{j,i}n_{j',i}\) (see Identity (2)) and

$$\begin{aligned} \phi _{j,j'}^{\ell *}=\displaystyle \sum _{i=1}^b (n_{j',i}\phi _{j,i}^\ell \,+ \,n_{j,i}\phi _{j',i}^{\ell }), \quad \text {for all } \ell \in {\llbracket 1,m\rrbracket }, \end{aligned}$$
(48)

we finally obtain for \(c\,\omega _{j,j'}=\sum _{i=1}^b\kappa _{j,i}\kappa _{j',i}\)

$$\begin{aligned} c\,\omega _{j,j'} = a_0^4\lambda _{j,j'} -a_0^3\sum _{\ell =1}^m a_\ell \phi _{j,j'}^{\ell *} +a_0^2\sum _{\ell =1}^m\sum _{{\ell '}=1}^{m} \left( a_{\ell }a_{\ell '} \sum _{i=1}^b \phi _{j,i}^\ell \phi _{j',i}^{\ell '} \right) \; . \end{aligned}$$
(49)
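
For instance, in the AR(1) case (\(m=1\), so \(a_0=\theta _1-1\) and \(a_1=\theta _1\)), Identities (47) and (49) reduce to:

$$\begin{aligned} c\,\omega _{j,j}= & {} (1-\theta _1)^{2}\left\{ (1-\theta _1)^{2}r+\theta _1(2-\theta _1)\phi _j^1\right\} ,\\ c\,\omega _{j,j'}= & {} (1-\theta _1)^{4}\lambda _{j,j'}+(1-\theta _1)^{3}\theta _1\,\phi _{j,j'}^{1*}+(1-\theta _1)^{2}\theta _1^{2}\sum _{i=1}^b \phi _{j,i}^1\phi _{j',i}^1 \quad . \end{aligned}$$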

End of the proof of Proposition 3. From \(\mathbf{C }= {{\mathcal {A}}} - c^{-1}{{\mathcal {B}}}\) (see Identity (32)), we find:

$$\begin{aligned} \mathbf{C }_{j,j} = \tau - \omega _{j,j} \quad \text { and } \quad \mathbf{C }_{j,j'} = \mu - \omega _{j,j'} \end{aligned}$$

where \(\tau \) and \( \mu \) come from the matrix \({{\mathcal {A}}}\) and \(\omega _{j,j}\) and \(\omega _{j,j'}\) come from the matrix \(c^{-1}{{\mathcal {B}}}\). Using the formulas for \(\tau \), \(\mu \), \(\omega _{j,j}\) and \(\omega _{j,j'}\) respectively established in Identities (37), (42), (47) and (49), we complete the proof of Proposition 3.

1.3 Proof of Theorem 1

Consider \(d \in {\varOmega }_{v,b,k}\) a NNm-balanced BIBD\((v,b,r,k,\lambda )\) for the AR(m) model with \(k \ge 3\), \(m\ge 1\) and \(2m< k < v\) (this proof also holds for CBD when \(k=v\)).

In Remark 1, we deduced from Proposition 3 that all the competitor designs have the same trace. Hence, from Proposition 4, the universal optimality of the design d holds when the information matrix \(\mathbf{C }_d\) of \({\widehat{\gamma }}\) is completely symmetric; since the sum of each row (and of each column) of \(\mathbf{C }_d\) is null (see Identities (25)), this amounts to its extra-diagonal entries \(\mathbf{C }_{d,j,j'}\) being independent of \(j,j'\) \((j\ne j')\). Under the hypotheses of Theorem 1, we will prove that none of the five summation blocks of \(\mathbf{C }_{d,j,j'}\) appearing in Identity (22) of Proposition 3 depends on \(j,j'\).

As the design d is a NNm-balanced BIBD\((v,b,r,k,\lambda )\), Identities (3) and (4) imply that two of the summation blocks of \(\mathbf{C }_{d,j,j'}\) do not depend on \(j,j'\): those depending on \(\lambda =\lambda _{j,j'}\) and \(N^s=N_{j,j'}^s\). Therefore, if Identities (i), (ii) and (iii) of Theorem 1 hold, then the three other summation blocks of \(\mathbf{C }_{d,j,j'}\) do not depend on \(j,j'\) either (see Remark 7 for the case of (iii)).

Remark 7

On the right side of Identity (22), let us consider the summation block \(\sum _{s=1}^{m-1}\sum _{t=1}^{m-s} {\varTheta }_{t,s} {{\overline{\alpha }}}_{s,t} \) of \(\mathbf{C }_{d,j,j'}\) where \({{\overline{\alpha }}}_{s,t} = \sum _{i=1}^{b}N_{j,j',i}^s(\phi _{j,i}^t\phi _{j',i}^{t+s}+ \phi _{j',i}^t\phi _{j,i}^{t+s})\). Let \(\ell \ne \ell '\) in \({\llbracket 1,m\rrbracket }\) and let \(\alpha _{\ell ,\ell '}=\sum _{i=1}^{b}N_{j,j',i}^{|\ell -\ell '|}(\phi _{j,i}^\ell \phi _{j',i}^{\ell '}+ \phi _{j',i}^\ell \phi _{j,i}^{\ell '})\) be the left-hand side of Identity (iii) in Theorem 1. We claim that:

$$\begin{aligned} \{\alpha _{\ell ,\ell '} \mid \ell \ne \ell ' \text { in } {\llbracket 1,m\rrbracket }\} = \{ {{\overline{\alpha }}}_{s,t} \mid s \in {\llbracket 1,m-1\rrbracket } \text { and } t \in {\llbracket 1,m-s\rrbracket } \}. \end{aligned}$$
(50)

For the inclusion \(\subset \), by symmetry between \(\ell ,\ell '\) in \(\alpha _{\ell ,\ell '}\), we can suppose that \(\ell < \ell '\) and express \( \alpha _{\ell ,\ell '}\) as follows: \( \alpha _{\ell ,\ell '}= \sum _{i=1}^{b}N_{j,j',i}^{|\ell -\ell '|}(\phi _{j,i}^\ell \phi _{j',i}^{\ell +|\ell -\ell '|}+ \phi _{j',i}^\ell \phi _{j,i}^{\ell +|\ell -\ell '|}) \). Then \(\alpha _{\ell ,\ell '} = {{\overline{\alpha }}}_{s,t} \) with \(s=|\ell -\ell '| \in {\llbracket 1,m-1\rrbracket }\) and \(\ell =t \in {\llbracket 1,m-s\rrbracket }\) (as expected in the summation in the expression of \(\mathbf{C }_{d,j,j'}\)). Conversely, let \(s \in {\llbracket 1,m-1\rrbracket }\) and \(t \in {\llbracket 1,m-s\rrbracket }\). Then we have \({{\overline{\alpha }}}_{s,t} = \alpha _{\ell ,\ell '} \) for the two distinct periods \(\ell =t\) and \(\ell '=t+s\) in \({\llbracket 1,m\rrbracket }\).

In the following, we will prove Identities (i), (ii) and (iii) of Theorem 1. More precisely, for each identity, we will suppose that the term on the left-hand side is constant and prove that it equals the right-hand side. Recall that \(\omega = \frac{2b}{v(v-1)}\).

Proof of Identity (i). For each treatment j, we first need to establish the following identity:

$$\begin{aligned} \displaystyle \sum _{j'\ne j}\phi _{j,j'}^{\ell *} = (k-2)\phi _{j}^\ell +2r \quad . \end{aligned}$$
(51)

Proof

Develop \(\sum _{j'\ne j}\phi _{j,j'}^{\ell *}\):

$$\begin{aligned} \sum _{j'\ne j}\phi _{j,j'}^{\ell *}= \sum _{j'\ne j} \displaystyle \sum _{i=1}^b (n_{j',i}\phi _{j,i}^\ell + \,n_{j,i}\phi _{j',i}^{\ell }) = \sum _{i=1}^b \phi _{j,i}^\ell \sum _{j'\ne j} n_{j',i} + \sum _{i=1}^b n_{j,i} \sum _{j'\ne j} \phi _{j',i}^{\ell }. \end{aligned}$$

The first term of the right-hand side of the previous identity is

$$\begin{aligned} \alpha =\sum _{i=1}^b \phi _{j,i}^\ell \sum _{j'\ne j} n_{j',i} = \sum _{i=1}^b \phi _{j,i}^\ell \sum _{j'=1}^{v} n_{j',i} - \sum _{i=1}^b \phi _{j,i}^\ell n_{j,i} = k \, \phi _{j}^\ell - \phi _{j}^\ell \end{aligned}$$

by definition of \(\phi _{j}^\ell \) and since each patient i receives k treatments. The second term is

$$\begin{aligned} \beta = \sum _{i=1}^b n_{j,i} \sum _{j'\ne j} \phi _{j',i}^{\ell } = \sum _{i=1}^b n_{j,i} \sum _{j'=1}^{v}\phi _{j',i}^{\ell } - \sum _{i=1}^b n_{j,i} \phi _{j,i}^\ell = 2 \, r - \phi _{j}^\ell \end{aligned}$$

because d is equireplicated (i.e. j appears r times in d) and only 2 treatments \(j'\) can be applied to the same patient i at periods \(\ell \) and \((k-\ell +1)\) (i.e. \(\phi _{j',i}^{\ell } =1\) for these two treatments and 0 for the others). Summing \(\alpha \) and \(\beta \), we obtain Identity (51). \(\square \)

From Formulas (51) and (16), we obtain finally:

$$\begin{aligned} \sum _{j=1}^{v} \displaystyle \sum _{j'\ne j}\phi _{j,j'}^{\ell *} = 2b(k-2)+2rv= 2b(k-2) +2bk=4b(k-1) \end{aligned}$$
(52)

because \(rv=bk\). Suppose that each \(\phi _{j,j'}^{\ell *}\) does not depend on \(j,j'\). Then we have the equality \(\sum _{j=1}^{v} \displaystyle \sum \nolimits _{j'\ne j}\phi _{j,j'}^{\ell *} = v(v-1)\phi _{j,j'}^{\ell *}\). Thus from (52), we obtain Identity

$$\begin{aligned} \begin{array}{cc} (i)&\phi _{j,j'}^{\ell *} = \frac{4b(k-1)}{v(v-1)}=2 \omega (k-1) \quad . \end{array} \end{aligned}$$

Proof of Identity (ii). Consider two distinct periods \(\ell \) and \(\ell '\) and fix a patient i. Four distinct treatments \(j_{1},\ldots ,j_{4}\) are applied to this patient at the respective periods \(\ell ,k-\ell +1,\ell ',k-\ell '+1\). Then \(\phi _{j_{1},i}^\ell =\phi _{j_{2},i}^\ell =\phi _{j_{3},i}^{\ell '}=\phi _{j_{4},i}^{\ell '}=1\) and the other values \(\phi _{j,i}^{\ell }\) and \(\phi _{j',i}^{\ell '}\) are zero; consequently:

$$\begin{aligned} \displaystyle \sum _{j=1}^{v}\sum _{j'\ne j}\phi _{j,i}^\ell \phi _{j',i}^{\ell '}= & {} \phi _{j_{1},i}^\ell (\phi _{j_{3},i}^{\ell '} + \phi _{j_{4},i}^{\ell '}) + \phi _{j_{2},i}^\ell (\phi _{j_{3},i}^{\ell '} + \phi _{j_{4},i}^{\ell '})=4 \quad \end{aligned}$$

and

$$\begin{aligned} \displaystyle \sum _{j=1}^{v}\sum _{j'\ne j}\phi _{j,i}^\ell \phi _{j',i}^\ell = \phi _{j_{1},i}^\ell \phi _{j_{2},i}^\ell +\phi _{j_{2},i}^\ell \phi _{j_{1},i}^\ell = 2. \end{aligned}$$

If the quantity \(\displaystyle \sum _{i=1}^b\phi _{j,i}^\ell \phi _{j',i}^{\ell '}\) does not depend on \(j,j'\) then, by the same reasoning as for (i), we find (\(\delta _{\ell ,\ell '}\) is the Kronecker symbol):

$$\begin{aligned} \begin{array}{cc} (ii)&\displaystyle \sum _{i=1}^b\phi _{j,i}^\ell \phi _{j',i}^{\ell '}= \frac{b(2 + 2(1-\delta _{\ell ,\ell '}))}{v(v-1)} = \omega (2-\delta _{\ell ,\ell '}) ~~ \text{ for } \text{ all }~~ \ell ,\ell ' \in {\llbracket 1,m\rrbracket }. \end{array} \end{aligned}$$

Proof of Identity (iii). By Remark 7, proving Identity (iii), namely \(\alpha _{\ell ,\ell '}= 2 \, \omega \) for \(\ell \ne \ell '\) in \({\llbracket 1,m\rrbracket }\), is equivalent to proving \({{\overline{\alpha }}}_{s,t} =2 \, \omega \) for \(s \in {\llbracket 1,m-1\rrbracket }\) and \(t \in {\llbracket 1,m-s\rrbracket }\). Let us fix \(s \in {\llbracket 1,m-1\rrbracket }\) and \(t \in {\llbracket 1,m-s\rrbracket }\) and prove that \({{\overline{\alpha }}}_{s,t} =2 \, \omega \). By the same reasoning as above, for a patient i, four distinct treatments \(j_{1},\ldots ,j_{4}\) are applied at the respective distinct periods \(t,k-t+1,t+s,k-(t+s)+1\). Then

$$\begin{aligned} \beta _{t,s}= & {} \displaystyle \sum _{j=1}^{v}\sum _{j'\ne j}(\phi _{j,i}^t\phi _{j',i}^{t+s}+ \phi _{j',i}^t\phi _{j,i}^{t+s}) \\= & {} \sum _{j=1}^{v}\phi _{j,i}^t\sum _{j'\ne j}\phi _{j',i}^{t+s}+ \sum _{j=1}^{v} \phi _{j,i}^{t+s} \sum _{j'\ne j}\phi _{j',i}^t\\= & {} 2(\phi _{j_{1},i}^t + \phi _{j_{2},i}^t)( \phi _{j_{3},i}^{t+s} +\phi _{j_{4},i}^{t+s}) = 8 \quad . \end{aligned}$$

But, in this sum, there are 4 cases in which two treatments among \(j_{1},\ldots ,j_{4}\) are applied at distance s and 4 cases in which two treatments among \(j_{1},\ldots ,j_{4}\) are applied at distance \(\delta =k-2t-s+1\), with \(\delta > s\) because \(k>2m\ge 2(t+s)\). For the first 4 cases we have \(N_{j,j',i}^s=1\) and for the 4 other cases we have \(N_{j,j',i}^s=0\). Then

$$\begin{aligned} \sum _{j=1}^{v} \sum _{j'\ne j} N_{j,j',i}^s(\phi _{j,i}^t\phi _{j',i}^{t+s}+ \phi _{j',i}^t\phi _{j,i}^{t+s})= & {} \quad \displaystyle \frac{1}{2} \beta _{t,s} = 4. \end{aligned}$$

Hence, if each quantity \( {{\overline{\alpha }}}_{s,t} = \sum _{i=1}^{b} N_{j,j',i}^s(\phi _{j,i}^t\phi _{j',i}^{t+s}+ \phi _{j',i}^t\phi _{j,i}^{t+s}) \) does not depend on \(j,j'\) (\(j\ne j'\)), the following identity holds for \(\ell \ne \ell '\) in \({\llbracket 1,m\rrbracket }\):

$$\begin{aligned} \begin{array}{cc} (iii)&\alpha _{\ell ,\ell '}={{\overline{\alpha }}}_{s,t} = \frac{4b}{v(v-1)} =2 \, \omega . \end{array} \end{aligned}$$

Then Theorem 1 is proved.
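
Conditions (i), (ii) and (iii) can also be checked numerically for a candidate binary design. The following Python sketch assumes the design is encoded as a \(b\times k\) integer array whose entry \((i,\ell )\) is the treatment applied to the ith patient at period \(\ell \); this encoding and the function name theorem1_conditions are illustrative choices and not notation from the paper.

```python
import itertools
import numpy as np

def theorem1_conditions(design, v, m):
    """Check conditions (i)-(iii) of Theorem 1 for a candidate binary design,
    given as a b x k integer array: design[i, l] is the treatment (0, ..., v-1)
    applied to patient i at period l+1."""
    b, k = design.shape
    omega = 2 * b / (v * (v - 1))

    def phi(j, i, l):
        # phi_{j,i}^l = delta_{j, d(i,l)} + delta_{j, d(i,k-l+1)}  (Remark 6)
        return int(design[i, l - 1] == j) + int(design[i, k - l] == j)

    def n(j, i):
        # n_{j,i} = 1 if patient i receives treatment j (binary design)
        return int(j in design[i])

    def N(j, jp, i, s):
        # N_{j,j',i}^s = 1 if j and j' are applied to patient i at distance s
        pos_j = np.where(design[i] == j)[0]
        pos_jp = np.where(design[i] == jp)[0]
        return int(any(abs(p - q) == s for p in pos_j for q in pos_jp))

    ok = True
    for j, jp in itertools.combinations(range(v), 2):
        for l in range(1, m + 1):
            # (i): phi_{j,j'}^{l*} = 2 omega (k-1)
            phi_star = sum(n(jp, i) * phi(j, i, l) + n(j, i) * phi(jp, i, l)
                           for i in range(b))
            ok &= np.isclose(phi_star, 2 * omega * (k - 1))
            for lp in range(1, m + 1):
                # (ii): sum_i phi_{j,i}^l phi_{j',i}^{l'} = omega (2 - delta_{l,l'})
                ok &= np.isclose(sum(phi(j, i, l) * phi(jp, i, lp) for i in range(b)),
                                 omega * (2 - (l == lp)))
        for s in range(1, m):
            for t in range(1, m - s + 1):
                # (iii): sum_i N_{j,j',i}^s (phi^t_j phi^{t+s}_{j'} + phi^t_{j'} phi^{t+s}_j) = 2 omega
                bar_alpha = sum(N(j, jp, i, s) * (phi(j, i, t) * phi(jp, i, t + s)
                                                  + phi(jp, i, t) * phi(j, i, t + s))
                                for i in range(b))
                ok &= np.isclose(bar_alpha, 2 * omega)
    return bool(ok)
```

For \(m=1\) the last double loop is empty, so only conditions (i) and (ii) are checked, in accordance with Theorem 1.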

1.4 Proof of Theorem 2

Theorem 2 is a straightforward consequence of the proof of Theorem 1, which also holds for \(k=v\), and of Identities (19) and (20) for NNm-balanced square designs.

1.5 Proof of Proposition 5

Recall that when the strength is \(t=2\), the index \(\omega _2\) is \(\omega =\frac{2b}{v(v-1)}\) (see Remark 3). Since an SB(bkv, 2) can be interpreted as a BIBD(\(v,b, r, k, \lambda )\), Identity (27) comes from the identities \(v (v-1) \, \omega = 2b\) and \(rv = bk\) (see Identity (1)).

Now consider an unordered pair \((j,j')\) of two distinct treatments. For all \(m \in {\llbracket 1,k-1\rrbracket }\), the design d is NNm-balanced because \(N_{d,j,j'}^s\), the number of times that \((j,j')\) are applied to the same patient at distance \(s \in {\llbracket 1,m\rrbracket }\), is a constant \(N_{d}^{s}\). More precisely, consider the \(k-s\) possible pairs of periods \(\ell \) and \(\ell +s\) where \(\ell \) runs over \({\llbracket 1,k-s\rrbracket }\). Since the strength of d is two, we obtain Identity (4): \(N_{d}^{s}=N_{d,j,j'}^s=\omega (k-s)\). To prove the rest of Proposition 5, we use item (a) of Theorem 2 in Martin and Eccleston (1991), which implies that d is universally optimal.

1.6 Proofs of Identities (3), (4), (19) and (20)

Proof of Identity ( 3 )

Let \(\beta = \sum _{j=1}^{v}\sum _{j'\ne j} \lambda _{d,j,j'}\). As d is a BIBD, we have \(\beta = \sum _{j=1}^{v}\sum _{j'\ne j}\lambda = v(v-1)\lambda \). But we can express \(\beta \) differently: \(\beta = bk(k-1)\) because there are b patients and exactly \(k(k-1)\) ordered pairs of distinct treatments for each of them (recall that \(k\le v\)). Identifying the two expressions of \(\beta \) proves the desired identity satisfied by \(\lambda \):

$$\begin{aligned} \lambda =\lambda _{d,j,j'} =\frac{bk(k-1)}{v(v-1)} = \omega \, \frac{k(k-1)}{2} \quad \quad \forall j,j' \in {\llbracket 1,v\rrbracket }, j\ne j'. \end{aligned}$$
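
For instance, for a BIBD\((7,7,3,3,1)\) (e.g. the Fano plane), \(\omega =\frac{2\cdot 7}{7\cdot 6}=\frac{1}{3}\) and indeed \(\lambda =\omega \,\frac{k(k-1)}{2}=\frac{1}{3}\cdot 3=1\).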

Proof of Identity (4)

Assume that the design d is NNm-balanced. Let us fix \(s \in {\llbracket 1,m\rrbracket }\) and compute in two ways the sum

$$\begin{aligned} \alpha =\sum _{j=1}^{v}\sum _{j'\ne j} N_{d,j,j'}^s. \end{aligned}$$

Firstly, as the design is NNm-balanced, each \(N_{d,j,j'}^s\) equals a constant \(N_{d}^{s}\) which does not depend on the choice \(j,j'\). So we have:

$$\begin{aligned} \alpha = \sum _{j=1}^{v}\sum _{j'\ne j} N_{d}^s = v(v-1) \, N_{d}^s \quad . \end{aligned}$$

Secondly, suppose that some patient i receives a given treatment j. Recall that j is administered at most once to the same patient. For the ith patient, if j is applied neither in the first s nor in the last s periods (i.e. when \(\sum _{\ell =1}^{s}\phi _{d,j,i}^{\ell } =0\)), then there are exactly \(2=\sum _{j'\ne j}N_{d,j,j',i}^s\) treatments at distance s from j. Otherwise, if j is applied in the first s or in the last s periods, then \(\phi _{d,j,i}^{\ell } =1\) for (only) one period \(\ell \in {\llbracket 1,s\rrbracket }\) (i.e. \(\sum _{\ell =1}^{s}\phi _{d,j,i}^{\ell } =1\)) and there is only \(1=\sum _{j'\ne j}N_{d,j,j',i}^s\) treatment at distance s from j. Therefore, in both cases, we obtain

$$\begin{aligned} \sum _{j'\ne j}N_{d,j,j',i}^s= 2 -\sum _{\ell =1}^{s}\phi ^\ell _{d,j,i} \quad . \end{aligned}$$
(53)

Moreover, as j appears exactly r times in the design d, by considering all patients i,

$$\begin{aligned} \sum _{j'\ne j} N_{d,j,j'}^s = 2r - \sum _{\ell =1}^{s} \phi _{d,j}^{\ell }. \end{aligned}$$
(54)

Summing the above equality over all j and using Identity (16), we obtain this second expression of \(\alpha \):

$$\begin{aligned} \alpha =\sum _{j=1}^{v} \sum _{j'\ne j} N_{d,j,j'}^s = \sum _{j=1}^{v}(2r - \sum _{\ell =1}^{s} \phi _{d,j}^{\ell }) = 2rv -2bs \quad . \end{aligned}$$

As \(rv=kb\) (see Identity (1)), identifying the two expressions of \(\alpha \) yields the desired identity (4) satisfied by \(N_{d}^{s}\):

$$\begin{aligned} N_{d}^{s}=N_{d,j,j'}^s =\frac{2b(k-s)}{v(v-1)} = \omega \, (k-s) \quad \forall j,j' \in {\llbracket 1,v\rrbracket }, j\ne j'. \end{aligned}$$

Proof of Identities (19) and (20)

Let d be a NNm-balanced design with \(k=v\) (i.e. the number of periods equals the number of treatments). We also have \(r=b\) because \(rv=kb\). We will prove that for each \(\ell \in {\llbracket 1,m\rrbracket }\) the quantities \(\phi _{d,j}^\ell \) and \(\phi _{d,j,j'}^{\ell *}\) do not depend on the treatments \(j,j'\) (\(j\ne j'\)); we will express these quantities without j and \(j'\).

Let \(s\in \) \({\llbracket 1,m\rrbracket }\). Applying Identity (4), as d is a NNm-balanced design, \(N_{d,j,j'}^s=N_{d}^{s}=2b(k-s)/v(v-1)=2b(v-s)/v(v-1)\) since \(k=v\). Then from (54), we have:

$$\begin{aligned} \sum _{\ell =1}^{s} \phi ^\ell _{d,j} = 2r -(v-1)N_{d}^{s}. \end{aligned}$$

As \(r=b\), the previous equality becomes \(\sum _{\ell =1}^{s} \phi ^\ell _{d,j} = \frac{2bs}{v}\). Then, for each \(s\in {\llbracket 1,m\rrbracket }\), we find:

$$\begin{aligned} \phi ^s_{d,j} = \sum _{\ell =1}^{s} \phi ^\ell _{d,j} - \sum _{\ell =1}^{s-1} \phi ^\ell _{d,j} = \frac{2b}{v} \quad \end{aligned}$$

which is Identity (19). We now prove the second identity. Each treatment is administered at most once to each patient; since moreover \(k=v\), every patient receives each of the v distinct treatments exactly once. That means \(n_{d,j,i}=1\) for all \(j\in {\llbracket 1,v\rrbracket }\) and all \(i\in {\llbracket 1,b\rrbracket }\). Therefore Identity (48) becomes Identity (20):

$$\begin{aligned} \phi _{d,j,j'}^{\ell *}=\phi _{d,j}^\ell + \phi _{d,j'}^\ell = \frac{4b}{v} \quad ~~ \forall ~~ \ell \in {\llbracket 1,m\rrbracket } \quad \end{aligned}$$

and the two identities on NNm-balanced square designs are proved.

Cite this article

Koné, M., Valibouze, A. Nearest neighbor balanced block designs for autoregressive errors. Metrika 84, 281–312 (2021). https://doi.org/10.1007/s00184-020-00770-6
