
Optimality of Some Row–Column Designs

Abstract

Latin squares, Youden squares, and Generalized Youden designs are optimal row–column designs sharing a common characteristic: in each case the two component block designs determined by rows and columns are restricted to the special types of balanced block design known as BIBDs, RCBDs, or more generally BBDs. This article takes up the optimality problem when it is possible to have a BIBD column component, but only a less balanced competitor, known as GGDD, as row component design. A-optimality is established in most cases considered, and E-optimality in all.


References

  1. Agrawal H (1966) Some generalizations of distinct representatives with applications to statistical designs. Ann Math Stat 37:525–528


  2. Bagchi B, Bagchi S (2001) Optimality of partial geometric designs. Ann Stat 29(2):577–594


  3. Chai F-S (1998) A note on generalization of distinct representatives. Stat Probab Lett 39(2):173–177


  4. Chai F-S, Cheng C-S (2011) Some optimal row-column designs. J Stat Theory Pract 5(1):59–67


  5. Cheng CS (1978) Optimality of certain asymmetrical experimental designs. Ann Statist 6(6):1239–1261


  6. Dean A, Voss D, Draguljić D (2017) Design and Analysis of Experiments, 2nd edn. Springer Texts in Statistics. Springer, Cham


  7. Jacroux M (1982) Some E-optimal designs for the one-way and two-way elimination of heterogeneity. J Royal Stat Soc, Series B 44(2):253–261


  8. Jacroux M (1985) Some sufficient conditions for the type \(1\) optimality of block designs. J Stat Plann Inference 11(3):385–398


  9. Kiefer J (1975) Construction and optimality of generalized Youden designs. In: A Survey of Statistical Design and Linear Models. North-Holland, Amsterdam, pp 333–353


  10. Marshall AW, Olkin I, Arnold BC (2011) Inequalities: Theory of Majorization and its Applications, 2nd edn. Springer Series in Statistics, Springer, New York


  11. Morgan JP (2015) Blocking with independent responses. Handbook of Design and Analysis of Experiments. CRC Press, Boca Raton, pp 99–157


  12. Morgan JP, Jermjitpornchai S (2021) Optimal row-column designs with three rows. Stat Appl 19(1):257–275


  13. Morgan JP, Reck B (2007) E-optimal design in irregular BIBD settings. J Stat Plan Inference 137(5):1658–1668


  14. Morgan JP, Srivastav SK (2000) On the type-1 optimality of nearly balanced incomplete block designs with small concurrence range. Stat Sinica 10(4):1091–1116


  15. Morgan JP, Stallings JW (2014) On the \(A\) criterion of experimental design. J Stat Theory Pract 8(3):418–422


  16. Oehlert GW (2000) A First Course in Design and Analysis of Experiments. W. H. Freeman, New York


  17. Russell KG (1980) Further results on the connectedness and optimality of designs of type \(O:XB\). Comm Stat A-Theory Methods 9(4):439–447


  18. Shah KR, Sinha BK (1989) Theory of Optimal Designs. Lecture Notes in Statistics, vol 54. Springer-Verlag, New York


  19. Tomić M (1949) Théorème de Gauss relatif au centre de gravité et son application. Bull Soc Math Phys Serbie 1:31–40



Acknowledgements

The authors wish to express their thanks to the referees for many valuable comments and suggestions.

Author information

Correspondence to J. P. Morgan.

Ethics declarations

Competing Interests

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Special Issue: AISC-2021 Special Collection” Guest edited by Tahani Coolen-Maturi, Javid Shabbir, and Arman Sabbaghi.

A Appendix


Let f be a non-increasing, real-valued Schur convex function on \(I\!\! R^+ = (0, \infty )\), and let \(\mathcal{F}\) be the collection of all such f. For any \(x\in (I\!\! R^+)^n\), let \(\Psi _f (x) = \sum _{i=1}^n f( x_i)\). Following Bagchi and Bagchi [2], for \(x,y \in (I\!\! R^+)^n\), x is said to be M-better than y if \(\Psi _f(x)\le \Psi _f(y)\) for every \(f \in \mathcal {F}\). Tomić's theorem provides necessary and sufficient conditions for a vector x to be M-better than another vector y.

Theorem A.1

(Tomić, [19]) For \(x,y \in (I\!\! R^+)^{n}\), x is M-better than y if and only if

$$\begin{aligned} \sum _{i=1}^{k} {x^\uparrow }_{i} \ge \sum _{i=1}^{k} {y^\uparrow }_{i}, ~~ k = 1,2, \cdots , n.\end{aligned}$$
(A.1)

Here \(x^\uparrow \) denotes the vector obtained by rearranging the co-ordinates of x in non-decreasing order. The relationship between x and y expressed in (A.1) is termed “x is weakly majorized from above by y,” which will often be shortened to “x is majorized by y.” In the applications here, x and y are vectors of \(v-1\) nonzero eigenvalues arising from competing designs, and (A.1) says the design producing x is M-better than that producing y. If x is majorized by every competing y, then the design producing x is said to be M-optimal among all competing designs. Further details on majorization can be found in Marshall et al. [10]. Since \(f(x)=1/x\) is in \(\mathcal{F}\), an M-optimal design is A-optimal.
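The check of condition (A.1) is easy to automate. The following minimal numerical sketch (Python with NumPy; it is not part of the original article, and the two eigenvalue vectors below are hypothetical) carries out the check and illustrates the A-criterion consequence for \(f(x)=1/x\):

```python
import numpy as np

def is_weakly_majorized_from_above(x, y):
    """Condition (A.1): partial sums of x, sorted in non-decreasing order,
    dominate those of y; equivalently, x is M-better than y."""
    xs, ys = np.sort(x), np.sort(y)            # x^up and y^up
    return bool(np.all(np.cumsum(xs) >= np.cumsum(ys) - 1e-12))

def A_value(e):
    """Psi_f for f(x) = 1/x, i.e. the A-criterion value of an eigenvalue vector."""
    return float(np.sum(1.0 / np.asarray(e)))

# Hypothetical nonzero eigenvalue vectors of two competing designs (v - 1 = 4)
x = np.array([2.0, 2.5, 2.5, 3.0])
y = np.array([1.5, 2.5, 3.0, 3.0])

if is_weakly_majorized_from_above(x, y):
    print(A_value(x) <= A_value(y))            # True: M-better implies A-better
```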

An immediate consequence of the inequalities (A.1) for the current setting is

Lemma A.1

For fixed integer \(s<v-1\), if there are numbers \(e_{01}\le e_{02}\le \ldots \le e_{0s}\) for which \(e_{di}\le e_{0i}\) for \(i=1,\ldots ,s\), then provided \(e_1=(\hbox {tr}(C_d)-\sum _{i=1}^se_{0i})/(v-1-s)\ge e_{0s}\),

$$\begin{aligned} (e_{01},e_{02},\ldots ,e_{0s},e_1,\ldots ,e_1) \ \ \ \hbox { is majorized by }\ \ \ (e_{d1},e_{d2},\ldots ,e_{dv-1}). \end{aligned}$$

Similarly, if \(\sum _{i=1}^se_{di}\le se_0\) and \(e_1=(\hbox {tr}(C_d)-se_0)/(v-1-s)\ge e_0\), then taking \(e_0\) with multiplicity s, and \(e_1\) with multiplicity \(v-1-s\),

$$\begin{aligned} (e_0,e_0,\ldots ,e_0,e_1,\ldots ,e_1) \ \ \ \hbox { is majorized by }\ \ \ (e_{d1},e_{d2},\ldots ,e_{dv-1}). \end{aligned}$$

A bounding result

Lemma A.2

(Cheng, [5]) If \(\hbox {tr}(C_d)=a_1\) and \(\hbox {tr}(C_d^2)\ge a_2\), then \(\hbox {A}_d \ge \frac{v-2}{e_1}+\frac{1}{e_2}\), where

$$\begin{aligned} e_1=\frac{a_1-\sqrt{(v-1)/(v-2)}\,a_3}{v-1}, \quad e_2=\frac{a_1+\sqrt{(v-1)(v-2)}\,a_3}{v-1}, \quad \hbox {and} \quad a_3=\sqrt{a_2-a_1^2/(v-1)}. \end{aligned}$$

Lemma A.2 is a special case of Theorem 2.2 in Cheng [5], which includes additional technical conditions (all satisfied here).
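For illustration, the bound of Lemma A.2 can be evaluated directly. The sketch below (Python; the function name and the inputs \(a_1=10\), \(a_2=25.4\), \(v=5\) are hypothetical choices, not values from the article) computes the right-hand side of the bound:

```python
import math

def cheng_A_lower_bound(a1, a2, v):
    """Lemma A.2: lower bound on the A-value sum(1/e_i) for a design on v
    treatments with tr(C_d) = a1 and tr(C_d^2) >= a2.  Requires
    a2 >= a1^2/(v-1), which holds whenever a2 is attainable as tr(C_d^2)."""
    a3 = math.sqrt(a2 - a1 ** 2 / (v - 1))
    e1 = (a1 - math.sqrt((v - 1) / (v - 2)) * a3) / (v - 1)
    e2 = (a1 + math.sqrt((v - 1) * (v - 2)) * a3) / (v - 1)
    return (v - 2) / e1 + 1 / e2

print(cheng_A_lower_bound(10.0, 25.4, 5))   # about 1.623
```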

Proof of Lemma 4.1

First, suppose \(e_{dk}\ge e_2^*\). If \(\sum _{i=1}^{k-1}e_{di}> (k-1)e_1^*\) then \(\hbox {tr}(C_d)>\hbox {tr}(C_{d^*})\), a contradiction. Hence \(\sum _{i=1}^{k-1}e_{di}\le (k-1)e_1^*\), which, together with the trace condition, implies by Lemma A.1 that d is M-inferior to \(d^*\).

So suppose \(e_{dk}< e_2^*\). Then \(e_{di}< e_2^*\) for \(i=2,\ldots ,k\), implying

$$\begin{aligned} \sum _{i=1}^{k-1}e_{di}< e_2^*-\frac{k-1}{r} + (k-2)e_2^* = (k-1)e_2^*-\frac{k-1}{r}=(k-1)e_1^* \end{aligned}$$

which again with the trace condition implies that d is M-inferior to \(d^*\). \(\square \)

Proof of Corollary 4.1

Unequal replication says that \(r_{di}\le r-1\) for some i so that \(c_{d_Nii}=r_{di}-\frac{\lambda _{dii}}{k}\) is no greater than \((r-1)(k-1)/k\). Hence invoking (2.8),

$$\begin{aligned} e_{d1} \le \frac{v(r-1)(k-1)}{(v-1)k}=\frac{\lambda v}{k}-\frac{v(k-1)}{(v-1)k}<\frac{\lambda v}{k}-\frac{(k-1)}{r} \end{aligned}$$
(A.2)

and the result follows from Lemma 4.1. \(\square \)

Proof of Lemma 5.1

Employing (2.8) and calculating from (2.4) that \(c_{d^*ii}=(r-\frac{1}{b})(k-1)/k\),

$$\begin{aligned} e_{d1}\ \le \ \frac{v}{v-1}\left( c_{d^*ii}-\frac{2}{k}\right) &= \frac{vr(k-1)}{(v-1)k}-\frac{v(k-1)}{(v-1)bk}-\frac{2v}{k(v-1)} \\ &= \frac{\lambda v}{k}-\frac{(k-1)}{(v-1)r}-\frac{2v}{k(v-1)} \ < \ \frac{\lambda v-2}{k}, \end{aligned}$$

which has used the fact, see [14, Lemma 2.5], that every equireplicate design has \(c_{dii}\le c_{d^*ii}\) for all i, and that column nonbinarity implies \(c_{dii}\le c_{d^*ii}-\frac{2}{k}\) for some i. \(\square \)

Proof of Lemma 6.2

Let d be equireplicate and binary in columns, so that \(D_d\) has zero row and column sums and zero diagonal. Then \(C_d\le C_{d_N} = C_{d^*_N}+\frac{1}{k}D_d\), and eigenvectors of \(C_{d_N}\) are those of \(D_d\). Hence the \(e_{di}\) must satisfy

$$e_{di}\le \frac{\lambda v}{k}+\frac{\mu _i}{k}$$

where \(\mu _1\le \cdots \le \mu _{v-1}\) are the eigenvalues of \(D_d\) excluding the zero eigenvalue corresponding to eigenvector \(1_v\). The lemma is proven if it can be shown (for \(D_d\) not having a principal submatrix \(D_1\)) that the \(\mu _i\) satisfy one of: (i) \(\mu _1 \le -3/2\) and \(\mu _1 + \mu _2 \le -5/2\), or (ii) \(\mu _1 + \mu _2 +\mu _3\le -3\); and if the bound based on \(D_1\) can be separately established.

Let I be a subset of \(\{1,2,\ldots ,v\}\) and let D(I) be the principal submatrix of \(D_d\) identified by rows/columns in I. Let \(P_w\) be the projector orthogonal to the all-ones vector of length w. Then if the index set I contains w elements, the three smallest eigenvalues of \(P_wD(I)P_w\) provide upper bounds for \(\mu _1,\mu _2,\mu _3\). The same approach can be used with disjoint submatrices. That is, if \(I_1\) and \(I_2\) are disjoint, then using the projections of \(D(I_1)\) and \(D(I_2)\), the two smallest eigenvalues from one of these, and the smallest from the other, provide upper bounds for \(\mu _1,\mu _2,\mu _3\). Also, if there is an index set \(I=\{i,i'\}\) of two elements with off-diagonal element p for some \(p<0\), then there is an eigenvalue of D(I) no larger than p, and so long as this I is disjoint from other index sets used with the projection approach, this eigenvalue can be used with those found by that approach to provide upper bounds for \(\mu _1,\mu _2,\mu _3\). All of these bounding methods follow from Theorem A.2 in [10, Chapter 20].

Using the techniques described in the preceding paragraph, it is a straightforward exercise to sequentially enumerate possible discrepancy submatrices, building up one row/column at a time, and check each for whether or not the desired conditions (i) and (ii) on the smallest eigenvalues are met. Part (iii) of the lemma is disposed of upon observing that \(D_1\) is itself a discrepancy matrix and has an eigenvalue of \(-2\).
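The two computations used in these checks can be sketched as follows (Python with NumPy; this is not the authors' code, and the enumeration driver itself is omitted). Here `projected_eigs` returns the eigenvalues of \(P_wD(I)P_w\), and `meets_i_or_ii` tests whether three upper bounds for \(\mu _1,\mu _2,\mu _3\) already force condition (i) or (ii):

```python
import numpy as np

def projected_eigs(D_I):
    """Eigenvalues of P_w D(I) P_w in non-decreasing order, where P_w projects
    orthogonally to the all-ones vector of length w = |I|."""
    w = D_I.shape[0]
    P = np.eye(w) - np.ones((w, w)) / w
    return np.sort(np.linalg.eigvalsh(P @ D_I @ P))

def meets_i_or_ii(bounds):
    """Given upper bounds for (mu_1, mu_2, mu_3), check condition (i) or (ii)."""
    b = np.sort(bounds)
    cond_i = b[0] <= -3 / 2 and b[0] + b[1] <= -5 / 2
    cond_ii = b[0] + b[1] + b[2] <= -3
    return cond_i or cond_ii

# Example: a two-element index set with off-diagonal entry 1 contributes a
# projected eigenvalue bound of -1.
print(projected_eigs(np.array([[0.0, 1.0], [1.0, 0.0]]))[0])   # -1.0
```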

In what follows \(\delta _{ii'}\) will denote the \((i,i')\) element of D(I), and “the projection of D(I)” means \(P_wD(I)P_w\). Here a proof of Lemma 6.2 will be sketched for the case where no element in \(D_d\) exceeds 1 in absolute value. The case where larger magnitudes are allowed can be resolved more easily with the same enumerative approach. The problem is thus to show that \(\delta _{12}=1\) implies either one of the conditions (i)-(ii) above, or that \(D_d\) contains a principal submatrix \(D_1\). Now \(\delta _{12}=1\) implies either \(\delta _{13}=\delta _{23}=-1\) or \(\delta _{13}=\delta _{24}=-1\). Only the former of these two situations will be examined, as the considerations are similar for the latter. Throughout what follows, the eigenvalues of the projections can be found analytically, or are easily generated computationally.

Given \(\delta _{12}=1\) and \(\delta _{13}=\delta _{23}=-1\), then wlog, \(\delta _{34}=\delta _{35}=1\). Hence D(1,2,3,4,5) is of this form:

$$\begin{aligned} \left( \begin{array}{rrrrr} 0 & 1 & -1 & t_1 & t_3\\ 1 & 0 & -1 & t_2 & t_4\\ -1 & -1 & 0 & 1 & 1\\ t_1 & t_2 & 1 & 0 & t_5\\ t_3 & t_4 & 1 & t_5 & 0 \end{array}\right) \end{aligned}$$

If \(t_5=1\) then the projection of D(3,4,5) has two eigenvalues of \(-1\), and the projection of D(1,2) has an eigenvalue of \(-1\), proving that (ii) holds. If \(t_5=-1\) then the projection of D(3,4,5) has an eigenvalue of \(-5/3\) and again the projection of D(1,2) has an eigenvalue of \(-1\), proving that (i) holds. So assume that \(t_5=0.\)
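These two eigenvalue claims are quickly confirmed numerically (a sketch, not code from the article):

```python
import numpy as np

def projected_eigs(M):
    """Eigenvalues of PMP, with P the projector orthogonal to the all-ones vector."""
    w = M.shape[0]
    P = np.eye(w) - np.ones((w, w)) / w
    return np.sort(np.linalg.eigvalsh(P @ M @ P))

D12 = np.array([[0.0, 1.0], [1.0, 0.0]])                           # delta_12 = 1
D345_plus = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)     # t_5 = 1
D345_minus = np.array([[0, 1, 1], [1, 0, -1], [1, -1, 0]], float)  # t_5 = -1

print(projected_eigs(D345_plus)[:2])   # [-1. -1.]: with D(1,2), (ii) holds
print(projected_eigs(D345_minus)[0])   # -1.666... = -5/3: with D(1,2), (i) holds
print(projected_eigs(D12)[0])          # -1.0
```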

For D(1,2,3,4) there are six cases for this matrix respecting the symmetry condition \(t_1\le t_2\). When \(t_1=t_2=1\), the two smallest eigenvalues of the projection of this \(4\times 4\) matrix are \(-1.73\) and \(-1\), satisfying (i) and so eliminating this case. Hence assume D(1,2,3,4,5) is of this form:

$$\begin{aligned} \left( \begin{array}{rrrrr} 0 & 1 & -1 & t_1 & t_3\\ 1 & 0 & -1 & t_2 & t_4\\ -1 & -1 & 0 & 1 & 1\\ t_1 & t_2 & 1 & 0 & 0\\ t_3 & t_4 & 1 & 0 & 0 \end{array}\right) \end{aligned}$$

where, taking symmetries into account,

$$\begin{aligned} t_1,\ t_2, \ t_3, \ t_4\hbox { are all in }\{-1,0,1\}, \ t_1=\max _it_i, \ \hbox {and neither }(t_1,t_2)\hbox { nor }(t_3,t_4)\hbox { is }(1,1). \end{aligned}$$

Of all these cases, only two fail to produce a projected submatrix whose eigenvalues meet one of (i) or (ii). These two remaining cases are

$$\begin{aligned} \left( \begin{array}{rrrrr} 0 & 1 & -1 & 0 & -1\\ 1 & 0 & -1 & 0 & -1\\ -1 & -1 & 0 & 1 & 1\\ 0 & 0 & 1 & 0 & 0\\ -1 & -1 & 1 & 0 & 0 \end{array}\right) \qquad \left( \begin{array}{rrrrr} 0 & 1 & -1 & -1 & -1\\ 1 & 0 & -1 & -1 & -1\\ -1 & -1 & 0 & 1 & 1\\ -1 & -1 & 1 & 0 & 0\\ -1 & -1 & 1 & 0 & 0 \end{array}\right) \end{aligned}$$

For both cases, wlog \(\delta _{56}=1\), and the partition \(I_1=\{1,2\},\ I_2=\{3,4\},\ I_3=\{5,6\}\) eliminates both. \(\square \)
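A one-line check of this last step (again a sketch, not code from the article): each of the three disjoint pairs carries the discrepancy block with off-diagonal entry 1, whose projection has smallest eigenvalue \(-1\), so the three bounds sum to \(-3\) and (ii) holds.

```python
import numpy as np

pair = np.array([[0.0, 1.0], [1.0, 0.0]])     # block for {1,2}, {3,4}, and {5,6}
P = np.eye(2) - np.ones((2, 2)) / 2
print(3 * np.sort(np.linalg.eigvalsh(P @ pair @ P))[0])   # -3.0, so (ii) holds
```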

Proof of Lemma 7.1

Let \(\mathcal{P}_W\) be the collection of \([(v/k)!]^k\) permutation matrices that permute the elements of the treatment vector \((1,2,\ldots ,v)'\) within each group. Let \(\mathcal{P}_G\) be the collection of k! permutation matrices that permute the groups of elements of the treatment vector defined by \(G_1,\ldots ,G_k\) in Sect. 7. Now consider the averaged information matrix

$$\begin{aligned} \bar{C}_d=\frac{\sum _{P\in \mathcal{P}_G}\sum _{Q\in \mathcal{P}_W}(PQ)'C_d(PQ)}{\vert \mathcal{P}_G\vert \vert \mathcal{P}_W\vert }=\frac{\lambda v}{k}(I-\frac{1}{v}J)+\frac{1}{vr}J+\frac{1}{k}\bar{D}_d-\frac{1}{b}(I_k\otimes J_{v/k}) \end{aligned}$$

which differs from \(C_d\), see (7.1), only in the averaging of the discrepancy matrix. Partitioning \(D_d\) as \(D_d=((D_{ij}))\) where each \(D_{ij}\) is \(\frac{v}{k}\times \frac{v}{k}\), each \(\bar{D}_{ii}\) is the average of the within-group averages of all k \(D_{ii}\), and each \(\bar{D}_{ij}\) for \(i\ne j\) is the average of the “cross-group” averages of all \(k(k-1)\) \(D_{ij}\). Noting that \(D_d\) has zero row and column sums, it is now an easy step to show that

$$\begin{aligned} \bar{D}_{ii} = \gamma _d(J-I) = \bar{D}_1 \text{ (say)}, \ \ \ \hbox { and }\ \ \ \bar{D}_{ij}=\frac{-(v-k)\gamma _d}{v(k-1)}J =\bar{D}_2 \text{ (say)} \end{aligned}$$

where \(\gamma _d\) is specified in (7.2). Then \(\bar{D}_d=I_k\otimes \bar{D}_1+(J_k-I_k)\otimes \bar{D}_2\) has the same grouped structure as the row component GGDD, and thus \(\bar{C}_d\) and \(C_{d^*}\) have the same eigenvectors. The nonzero eigenvalues of \(\bar{C}_d\) are

$$\begin{aligned} \bar{e}_1=\cdots =\bar{e}_{k-1}=\frac{\lambda v}{k}+\frac{\gamma _d(v-k)}{k(k-1)}-\frac{1}{r} \ \ \hbox { and } \ \ \bar{e}_k=\cdots =\bar{e}_{v-1}=\frac{\lambda v}{k}-\frac{\gamma _d}{k}. \end{aligned}$$

Comparing to the eigenvalues (2.5) of \(C_{d^*}\), application of Theorem A.1 completes the proof. \(\square \)
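The eigenvalue computation above is easily verified numerically. The sketch below (Python with NumPy; the parameter values \(v=6\), \(k=3\), \(b=10\), \(r=5\), \(\lambda =2\), \(\gamma _d=1\) are hypothetical choices satisfying \(bk=vr\), not values taken from the article) builds \(\bar{D}_d\) and \(\bar{C}_d\) exactly as displayed and compares the computed spectrum with \(\bar{e}_1,\ldots ,\bar{e}_{v-1}\):

```python
import numpy as np

# Hypothetical parameters with bk = vr: v treatments in k groups of size v/k.
v, k, b, r, lam, gamma = 6, 3, 10, 5, 2, 1.0
m = v // k
J, I = (lambda n: np.ones((n, n))), np.eye

# Averaged discrepancy matrix: D_bar_1 on the diagonal blocks, D_bar_2 off it.
D1 = gamma * (J(m) - I(m))
D2 = -(v - k) * gamma / (v * (k - 1)) * J(m)
D_bar = np.kron(I(k), D1) + np.kron(J(k) - I(k), D2)
assert np.allclose(D_bar.sum(axis=1), 0)              # zero row sums

# Averaged information matrix C_bar_d as displayed above.
C_bar = (lam * v / k) * (I(v) - J(v) / v) + J(v) / (v * r) \
        + D_bar / k - np.kron(I(k), J(m)) / b

eigs = np.sort(np.linalg.eigvalsh(C_bar))[1:]         # drop the zero eigenvalue
e_between = lam * v / k + gamma * (v - k) / (k * (k - 1)) - 1 / r   # multiplicity k-1
e_within = lam * v / k - gamma / k                                   # multiplicity v-k
print(np.allclose(eigs, sorted([e_between] * (k - 1) + [e_within] * (v - k))))  # True
```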

Proof of Lemma 7.3

For any \(d\in \mathcal{D}_B\), \(\hbox {tr}(C_d)=\frac{\lambda v(v-1)}{k}-\frac{k-1}{r}\) is fixed, and hence so too is

$$\bar{c} \ = \ \underset{i\ne i'}{\sum \sum }c_{dii'}/[v(v-1)] \ = \ -\frac{\lambda }{k}+\frac{1}{bk}-\frac{(v-k)}{bk(v-1)}.$$

Now the values of \(c_{d^*ii'}\) for \(i\ne i'\) and their deviations from the average value \(\bar{c}\) are

(A.3)

Let \(d\in \mathcal{{D}_B}\) have discrepancy matrix \(D_d=((\delta _{ii'}))\ne 0\). Then a nonzero value \(\delta _{ii'}\) corresponds to \(c_{dii'}=c_{d^*ii'}\pm \frac{s}{k}\) for some positive integer s. Because \(\frac{1}{k}\) is greater than twice the magnitude of either deviation \(c_{d^*ii'}-\bar{c}\) in (A.3), we have \(\vert c_{dii'}-\bar{c}\vert > \vert c_{d^*ii'}-\bar{c}\vert \) whenever \(\delta _{ii'}\ne 0\), that is, whenever \(c_{dii'}\ne c_{d^*ii'}\). Noting that \(\hbox {tr}(C_d^2)=(\hbox {tr}(C_d))^2/v+\underset{i\ne i'}{\sum \sum }\vert c_{dii'}-\bar{c}\vert ^2+v(v-1)\bar{c}^2\), we have \(\hbox {tr}(C_d^2)-\hbox {tr}(C_{d^*}^2)>0\), and the magnitude of this difference depends solely on the magnitudes and the multiplicities of the \(\vert c_{dii'}-\bar{c}\vert \) which exceed \(\vert c_{d^*ii'}-\bar{c}\vert \). Setting aside the discrepancy matrix eliminated by Lemma 7.2, this difference is minimized and a lower bound for \(\hbox {tr}(C_d^2)\) obtained when there are six values of \(\delta _{ii'}=-1\) and six values of \(\delta _{ii'}=1\) (see the list of small discrepancy matrices produced by Morgan and Reck, [13]), and these twelve values are arranged so that \(c_{dii'}=c_{d^*ii'}-\frac{1}{k}\) or \(c_{d^*ii'}+\frac{1}{k}\) as \(c_{d^*ii'}\) is in the first or second row of (A.3), resulting in (7.4). \(\square \)

Proof of Lemma 8.1

We already know that \(v=mk\) and \(r=sk+1\) for some integers \(m\ge 2\) and \(s\ge 1\). Then \(\lambda (v-1)=r(k-1)\) is equivalent to

$$\begin{aligned} \lambda (mk-1)=(sk+1)(k-1) \ \Leftrightarrow \ m=\frac{s(k-1)+1}{\lambda }+\frac{\lambda -1}{k\lambda }. \end{aligned}$$

Let \(\beta =(s(k-1)+1) \bmod \lambda \). Then since m is an integer, \((\lambda -1)/k\lambda \) must be of the form \(x-(\beta /\lambda )\) where x is an integer. Hence \(\lambda -1=k\lambda x-k\beta = k(\lambda x-\beta )=k\theta \) for integer \(\theta \ge 0\), i.e., \(\lambda =\theta k+1\). So now \(\lambda (v-1)=r(k-1)\) can be written as

$$\begin{aligned} (\theta k+1)(mk-1) = (sk+1)(k-1) \ \Leftrightarrow \ s = m\theta +\frac{(m-1)(\theta +1)}{k-1}. \end{aligned}$$

Since s, m, and \(\theta \) are all integers, the last equality says that \((m-1)(\theta +1)\) must be an integer multiple of \(k-1\), i.e., \((m-1)(\theta +1)=\alpha (k-1)\) for some integer \(\alpha \ge 1\). Consequently,

$$\begin{aligned} m=\frac{\alpha (k-1)}{\theta +1}+1\ \ \ \hbox { and }\ \ \ s=\theta m+\alpha \end{aligned}$$

yielding the expressions for v and r in the statement of the lemma. \(\square \)
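The parameterization obtained in this proof is easy to enumerate. The following sketch (Python; the helper name and the search ranges are arbitrary illustrative choices, not from the article) generates admissible \((v,r,\lambda )\) from integers \(k,\theta ,\alpha \) and confirms the relation \(\lambda (v-1)=r(k-1)\):

```python
def lemma_8_1_parameters(k, theta, alpha):
    """Return (v, r, lambda) built as in the proof, or None if m is not integral."""
    if alpha * (k - 1) % (theta + 1) != 0:
        return None
    m = alpha * (k - 1) // (theta + 1) + 1
    s = theta * m + alpha
    v, r, lam = m * k, s * k + 1, theta * k + 1
    assert lam * (v - 1) == r * (k - 1)        # the defining relation
    return v, r, lam

for k in range(2, 6):
    for theta in range(3):
        for alpha in range(1, 4):
            p = lemma_8_1_parameters(k, theta, alpha)
            if p:
                print(f"k={k}, theta={theta}, alpha={alpha} -> (v, r, lambda) = {p}")
```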

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Morgan, J.P., Bagchi, S. Optimality of Some Row–Column Designs. J Stat Theory Pract 17, 18 (2023). https://doi.org/10.1007/s42519-022-00315-2
