
Aspects of Optimality of Plans Orthogonal Through Other Factors and Related Multiway Designs


Abstract

In a blocked main effect plan (MEP), a pair of factors is said to be orthogonal through the block factor if their totals adjusted for the blocks are uncorrelated, as defined in Bagchi (Technometrics 52:243–249, 2010). This concept is extended here to orthogonality through a set of other factors. We discuss the impact of such orthogonality on the precision of the estimates as well as on the data analysis. We construct a series of plans in which every pair of factors is orthogonal through a given pair of factors. Next we construct plans orthogonal through the block factors (POTB). We construct the following POTBs for symmetrical experiments: an infinite series of E-optimal POTBs with two-level factors and an infinite series of universally optimal POTBs with three-level factors. We also construct a universally optimal POTB for an \(s^t(s+1)\) experiment on blocks of size \((s+1)/2\), where \(s \equiv 3 \pmod 4\) is a prime power. Next we study optimality aspects of the “duals” of main effect plans with desirable properties. By the dual of a main effect plan we mean a design in a multi-way heterogeneity setting obtained from the plan by interchanging the roles of the block factors and the treatment factors. Specifically, we take up two series of universally optimal POTBs for symmetrical experiments constructed in Morgan and Uddin (Ann Stat 24:1185–1208, 1996). We show that the duals of these plans, viewed as multi-way designs, satisfy M-optimality. Finally, we construct another series of multi-way designs, which are also duals of main effect plans, and prove their M-optimality. This result generalizes the result of Bagchi and Shah (J Stat Plan Inf 23:397–402, 1989) for a row–column set-up. It may be noted that M-optimality includes all commonly used optimality criteria like A-, D- and E-optimality.


References

  1. Addelman S (1962) Orthogonal main effect plans for asymmetrical factorial experiments. Technometrics 4:21–46

  2. Bagchi B, Bagchi S (2001) Optimality of partial geometric designs. Ann Stat 29:577–594

  3. Bagchi S (2010) Main effect plans orthogonal through the block factor. Technometrics 52:243–249

  4. Bagchi S (2019) Inter-class orthogonal main effect plans for asymmetrical experiments. Sankhya B 81:93–122

  5. Bagchi S (2021) New plans orthogonal through the block factor. Stat Appl 19:287–306

  6. Bagchi S, Mukhopadhyay AC (1989) Optimality in the presence of two factor interactions among the nuisance factors. Commun Stat Theory Methods 18:1139–1152

  7. Bagchi S, Shah KR (1989) On the optimality of a class of row-column designs. J Stat Plan Inf 23:397–402

  8. Bose M, Bagchi S (2007) Optimal main effect plans in blocks of small size. J Stat Prob Lett 77:142–147

  9. Bhatia R (2013) Matrix analysis. Graduate texts in mathematics. Springer

  10. Cheng CS (1978) Optimal designs for the elimination of multiway heterogeneity. Ann Stat 6:1262–1272

  11. Das A, Dey A (2004) Optimal main effect plans with nonorthogonal blocks. Sankhya 66:378–384

  12. Eccleston JA, Russell KG (1977) Adjusted orthogonality in nonorthogonal designs. Biometrika 64:339–345

  13. Hedayat AS, Sloane NJA, Stufken J (1999) Orthogonal arrays: theory and applications. Springer series in statistics. Springer, New York

  14. Huang L, Wu CFJ, Yen CH (2002) The idle column method: Design construction, properties and comparisons. Technometrics 44:347–368

  15. Jacroux M (2011) On the D-optimality of orthogonal and nonorthogonal blocked main effects plans. Stat Prob Lett 81:116–120

  16. Jacroux M (2011) On the D-optimality of nonorthogonal blocked main effects plans. Sankhya B 73:62–69

  17. Jacroux M (2013) A note on the optimality of 2-level main effects plans in blocks of odd size. Stat Prob Lett 83:1163–1166

  18. Jacroux M, Kealy-Dichone B (2014) On the E-optimality of blocked main effects plans when \( n \equiv 3 \pmod 4 \). Stat Prob Lett 87:143–148

  19. Jacroux M, Kealy-Dichone B (2015) On the E-optimality of blocked main effects plans when \(n \equiv 2 \pmod 4\). Sankhya B 77:165–174

  20. Jacroux M, Jacroux T (2016) On the E-optimality of blocked main effects plans when \(n \equiv 1 \pmod 4\). Commun Stat Theory Methods 45:5584–5589

  21. Jacroux M, Kealy-Dichone B (2017) On the E-optimality of blocked main effects plans in blocks of different sizes. Commun Stat Theory Methods 46:2132–2138

  22. Ireland K, Rosen M (1982) A classical introduction to modern number theory. Springer Verlag

  23. Kiefer J (1975) Construction and optimality of generalized Youden designs. In: Srivastava JN (ed) A survey of statistical design and linear models. North-Holland, Amsterdam, p 333–353

  24. Marshall AW, Olkin I, Arnold BC (2011) Inequalities: theory of majorization and its applications. Springer series in statistics, 2nd edn. Springer, New York

  25. Morgan JP (1997) Optimal design for interacting blocks with OAVS incidence. Metrika 45:67–83

  26. Morgan JP, Uddin N (1996) Optimal blocked main effect plans with nested rows and columns and related designs. Ann Stat 24:1185–1208

  27. Mukerjee R, Dey A, Chatterjee K (2002) Optimal main effect plans with non-orthogonal blocking. Biometrika 89:225–229

  28. Mukhopadhyay AC, Mukhopadhyay S (1984) Optimality in a balanced multi-way heterogeneity set up. In: Proceedings of the Indian Statistical Institute Golden Jubilee International Conference on Statistics: Applications and New Directions, pp 466–477

  29. Nilson T, Cameron PJ (2017) Triple arrays from difference sets. J Combin Des 25:494–506

  30. Preece DA, Wallis WD, Yucas JL (2005) Paley triple arrays. Australas J Combin 33:237–246

  31. Rao CR (1946) On hypercubes of strength d and a system of confounding in factorial experiments. Bull Calcutta Math Soc 38:67

  32. SahaRay R, Dutta G (2016) On the optimality of blocked main effects plans. Int Scholar Sci Res Innovation 10:583–586

  33. Shah KR, Eccleston JA (1986) On some aspects of row-column designs. J Stat Plan Inf 15:87–95

  34. Shah KR, Sinha BK (1989) Theory of optimal designs. Lecture notes in statistics, vol 54. Springer, Berlin

  35. Yanai H, Takeuchi K, Takane Y (2011) Projection matrices, generalized inverse matrices, and singular value decomposition. Statistics for social and behavioral sciences. Springer, New York

Acknowledgements

The authors thank the reviewers and the associate editor for their suggestions which have improved the presentation of the paper. The authors also thank Professor J.P. Morgan of Virginia Tech for helpful comments on the presentation of Sections 2 and 3.

Author information

Correspondence to Sunanda Bagchi.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

This article is part of the topical collection “Special Issue: State of the art in research on design and analysis of experiments”, guest edited by John Stufken, Abhyuday Mandal, and Rakhi Singh.

Appendices

Appendix A: Results involving linear algebra

The proofs of the results in Section 3:

The results in the following lemma are well-known [see Exercise 7 of Chapter 2 of Yanai, Takeuchi and Takane [35], for instance].

Lemma 8.1

Consider matrices U, V, W with the same number of rows. Then, the following hold.

  1. (a)

    Suppose \( \mathcal{C} (V) \subseteq \mathcal{C} (W)\). Then

    $$\begin{aligned} \mathcal{C} (P_V U) = \mathcal{C} (P_W U) \Leftrightarrow (P_W - P_V)U =0. \end{aligned}$$
  2. (b)

    If \(W = [U,V]\) then \( P_W - P_V = P_Z\), where \(Z = (I - P_V) U\).
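Remark: Lemma 8.1 (b) is easy to verify numerically. The following minimal sketch (ours, not part of the paper) checks it on a random instance, using numpy's pseudo-inverse to form orthogonal projectors; the helper proj and the sizes of U and V are illustrative choices.

```python
import numpy as np

def proj(A):
    # Orthogonal projector P_A onto the column space C(A)
    return A @ np.linalg.pinv(A)

rng = np.random.default_rng(1)
U = rng.standard_normal((8, 3))
V = rng.standard_normal((8, 2))
W = np.hstack([U, V])          # W = [U, V]
Z = (np.eye(8) - proj(V)) @ U  # Z = (I - P_V) U

# Lemma 8.1 (b): P_W - P_V = P_Z
assert np.allclose(proj(W) - proj(V), proj(Z))
print("Lemma 8.1 (b) verified on a random instance")
```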

Proof of Theorem 3.1:

Taking \(W = X_{{\bar{i}}}, \; U = X_S\) and \(V = X_T\) and applying Lemma 8.1 we find that

$$\begin{aligned} P_{{\bar{i}}} = P_T + P_Z, \text{ where } Z = (I - P_T) X_S. \end{aligned}$$
(8.1)

Proof of (a): In view of (2.3), (2.5) and (8.1) we see that \(C_{i;{\bar{i}}} = C_{i;T} - X'_i P_Z X_i\). Therefore, the required necessary and sufficient condition is that \(P_Z X_i = 0\), which is equivalent to (3.1). Hence the result.

Proof of (b): In view of Lemma 2.2 the following hold.

$$\begin{aligned} SS_{i;{\bar{i}}}= & {} Y^{\prime }P_U Y, SS_{i;T} = Y^{\prime }P_V Y, \end{aligned}$$
(8.2)
$$\begin{aligned} \text{ where } U= & {} (I - P_{{\bar{i}}}) X_i, V = (I - P_T)X_i. \end{aligned}$$
(8.3)

Since the support of Y is \(R^n\), \(Y^{\prime }P_U Y = Y^{\prime }P_V Y\) w.p.1 if and only if \(P_U = P_V\). Thus, the required necessary and sufficient condition is that \(\mathcal{C} (U) = \mathcal{C} (V)\). Applying Lemma 8.1 we see that the required necessary and sufficient condition is

$$\begin{aligned} ( P_{{\bar{i}}} - P_T) X_i = 0. \end{aligned}$$
(8.4)

But in view of (8.1) this is the same as \(P_Z X_i = 0\), which is equivalent to (3.1). Hence the result. \(\square \)

The proofs of the results in Section 5:

Proof of Lemma 5.2: Eliminating \({\hat{\mu }}\) from the normal equations [see Lemma 2.1] we get a system of equations in \(\hat{\beta _i}, \; i \in \mathcal{M}\), and \({\hat{\tau }}\). These equations are as follows.

$$\begin{aligned}&\sum \limits _{ i,j \in \mathcal{M}} C_{ij;0} \hat{\beta _j} + C_{iV;0} {\hat{\tau }} = {{\mathbf {Q}}}_{i;0}, \; i \in \mathcal{M} \end{aligned}$$
(8.5)
$$\begin{aligned}&\text{ and } \sum \limits _{i \in \mathcal{M}} C_{Vi;0} \hat{\beta _i} + C_r {\hat{\tau }} = {{\mathbf {Q}}}_{V;0}. \end{aligned}$$
(8.6)

Here \(C_{ij;0}\)’s and \({{\mathbf {Q}}}_{i;0}\)’s are as in Notation 2.3 (c).

Proof of (a): By the hypothesis, we find that the individual C-matrices involving only the blocking factors are as follows.

$$\begin{aligned} C_{ii';0} = \left\{ \begin{array}{ll} (p+s)K_s &{} \text{ if } i' = i, \\ pK_s &{} \text{ otherwise } \end{array} \right. ,\; i,i'\in \mathcal{M}, \end{aligned}$$
(8.7)

Here \(p = -1\) for the type 1 setting and \(K_s\) is as in (6.6).

For a fixed i, we eliminate all \(\hat{\beta }_{i'}, \; i' \ne i\), from (8.5) by using (8.7). Then we get an equation involving only \(\hat{\beta }_i\) and \({\hat{\tau }} \). We use this equation to eliminate all the \(\hat{\beta }_i\)'s from (8.6). Then we get the reduced normal equation for \({\hat{\tau }} \), in the form \(C_d {\hat{\tau }} = Q\), where \(C_d\) is as in the statement.

Proof of (b):

By the hypothesis, the individual C-matrices are as follows.

$$\begin{aligned} C_{ii';0}= & {} \left\{ \begin{array}{ll} (s+1)K_s &{} \text{ if } i' = i, \\ K_s &{} \text{ otherwise } \end{array} \right. ,\; 1 \le i,i' \le m, \end{aligned}$$
(8.8)
$$\begin{aligned} C_{i\infty ;0}= & {} \left\{ \begin{array}{ll} sK_{s+1} &{} \text{ if } i = \infty , \\ 0 &{} \text{ otherwise } \end{array} \right. ,\; 1 \le i,i' \le m. \end{aligned}$$
(8.9)

Following the same procedure as in Case (a) we get the C-matrix as in the statement. \(\square \)

The proofs of the results in Section 6:

Proof of Lemma 6.7: Let \(X = \{x_1, \cdots, x_{s-1}\}\) be a set of orthonormal vectors in \( \langle 1_s \rangle ^{\perp }\). Let \(Z = \{ z_i = 1_h \otimes x_i, \; x_i \in X\}\). For a \(z \in Z\), \(Nz = Hx_i\) for some i. So, by Lemma 6.6 (b), \(Nz = -z\), so that \(z'N'Nz = z'z\). Let \({\tilde{\mu }}_1 \le \cdots \le {\tilde{\mu }}_{s-1}\) be the smallest \(s-1\) positive eigenvalues of \(NN'\). Then \({\tilde{\mu }}_1, \cdots , {\tilde{\mu }}_{s-1}\) are also the smallest \(s-1\) positive eigenvalues of \(N'N\). Therefore,

$$\begin{aligned} \sum _{i=1}^{s-1} {\tilde{\mu }}_i \le \sum _{i=1}^{s-1} (z'_i N'N z_i)/(z'_i z_i) = (s-1). \end{aligned}$$
(8.10)

Since N is an incidence matrix, the largest eigenvalue of \(NN'\) corresponds to the eigenvector \(1_v\). Thus, the eigenvector \(e_i\) corresponding to \({\tilde{\mu }}_i\) cannot be \(1_v\), and therefore \(e'_i 1_v = 0\) for each i. Hence the result follows from (6.5) and (8.10). \(\square \)

Proof of Lemma 6.8: Let \(\mathcal{N} (A)\) denote the null space of the matrix A and \(\nu (A)\) the nullity of A.

By (6.9)

$$\begin{aligned} C_d = rK_v - \frac{1}{s} E_\mathcal{B} E'_\mathcal{B} - \frac{1}{s(s-h)} S_{\mathcal{B}} S'_{\mathcal{B}}. \end{aligned}$$

By Lemma 5.1 (a) and the definition of \(S_\mathcal{B}\), \(1'_v E_\mathcal{B} = 0 = 1'_v S_\mathcal{B}\). Thus, \(1'_v C_d = 0\) and (0) is proved.

We now prove (i). Substituting for \(E_\mathcal{B} E'_\mathcal{B}\) from (6.5) and \(S_\mathcal{B} S'_\mathcal{B}\) from (6.6) we get the following expression for \(C_d\).

$$\begin{aligned} C_d = r K_v - \frac{1}{s}(NN' - \frac{r(s-1)}{s}J_v) - \frac{1}{s(s-h)} (HH' - \frac{(s-1)^2}{s}J_v). \end{aligned}$$

By definition of \(H = H_d\), \(\mathcal{N} (NN') \subset \mathcal{N} (HH')\). Now, let \(W = \langle 1_h \rangle ^{\perp } \otimes 1_s\). Since d is equireplicate, \(Nw = 0, \forall w \in W\). Therefore, \( \nu (NN') = \nu (N'N) \ge h-1\), implying \( \nu (HH') \ge h-1\). Let \( x \in \mathcal{N} (NN')\). Since \(1_v\) is the eigenvector of \(NN'\) corresponding to the largest eigenvalue, \(x \ne 1_v\). Therefore, \(x' 1_v = 0\). So, \(C_d x = rx\). Hence (i) follows.

Next we proceed to prove (ii), which is about the next largest eigenvalues of \(C_d\). Let \(P = a E_\mathcal{B} E'_\mathcal{B} + b S_\mathcal{B} S'_\mathcal{B} , \; a,b >0\). While proving (i), we have also proved that \(\mu _i (P) = 0,\; 0 \le i \le h-1\). Let \({\tilde{\mu }}_1 (T) \le \cdots \le {\tilde{\mu }}_{s-1} (T)\) be the smallest \(s-1\) positive eigenvalues of \(T\), where \(T = P\) or \(E_\mathcal{B} E'_\mathcal{B}\). Fix \( i : 1 \le i \le s-1\). By a well-known result [see Corollary III.2.2 of Bhatia [9], for instance] we get

$$\begin{aligned} {\tilde{\mu }}_i (P) \le a {\tilde{\mu }}_i (E_\mathcal{B} E'_\mathcal{B}) + b \mu _{v-1} (S_{\mathcal{B}} S'_{\mathcal{B}}), \end{aligned}$$

which is \(a {\tilde{\mu }}_i (E_\mathcal{B} E'_\mathcal{B}) + bh\), by Lemma 6.6 (c). So, by Lemma 6.7

$$\begin{aligned} \sum _{i= 1}^{s-1} {\tilde{\mu }}_i (P) \le (s-1) (a + bh). \end{aligned}$$
(8.11)

But P becomes \(r K_v - C_d\), if we put \(a = \frac{1}{s}, b = \frac{1}{s(s-h)}\). So, the result follows from (8.11). \(\square \)

The proof of Theorem 6.2:

We begin with a well-known result.

Lemma 8.2

Suppose \(x_1, \cdots, x_n\) are real numbers with \(\sum _{i=1}^{n} x_{i} = a\). Then the following hold. (a) \(\sum _{i=1}^{n} x^2_{i} \ge a^2/n\), with equality when \(x_i = a/n, \; \forall i\).

(b) In particular, if the \(x_i\) are integers, then \(\sum _{i=1}^{n} x^2_{i}\) is minimized when each \(x_i\) is \([a/n]\) or \([a/n] + 1\).
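Remark: Part (b) can be confirmed by brute force for small cases. The sketch below (ours; it searches only nonnegative integers, which suffices for the minimum) checks \(n = 3\), \(a = 7\).

```python
import itertools

# Among nonnegative integer triples summing to a = 7, the sum of squares
# is minimized when every entry is floor(7/3) = 2 or 2 + 1 = 3 (Lemma 8.2 (b)).
n, a = 3, 7
best = min((sum(x * x for x in t), t)
           for t in itertools.product(range(a + 1), repeat=n)
           if sum(t) == a)
print(best)  # (17, (2, 2, 3))
```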

Proof of Lemma 6.9:

  1. (a)

    Follows by straightforward computation.

  2. (b)

    Fix an arbitrary \(x \in R^p\). For \( 1 \le i \le k\), write \(A'_ix\) as \(y_i = [y_{i1} \; \cdots \; y_{iq}]^\prime \). Then, \(x' \sum _{i=1}^k A_i A'_i x = \sum _{i=1}^k y'_i y_i = \sum _{j=1}^q \sum _{i=1}^k y^2_{ij}\). Now, \(\sum _{i=1}^k y_{ij} = \sum _{l=1}^p T(l,j) x_l = z_j\), say. Therefore, by Lemma 8.2, \(\sum _{i=1}^k y^2_{ij} \ge z^2_j /k\). Since \(z_j\) is the jth entry of \(T'x\), the result follows.

  3. (c)

    \( Tr(HH') = \sum _{i=1}^{p} \sum _{j=1}^{q} (H(i,j))^2.\) Again, \(\sum _{i=1}^{p} \sum _{j=1}^{q} H(i,j) = kpr\). Therefore, by Lemma 8.2, \(\sum _{i=1}^{p} \sum _{j=1}^{q} (H(i,j))^2\) is minimized when \(H (i,j) = [rk/q] \text{ or } [rk/q] + 1, \; 1 \le i \le p, 1 \le j \le q.\) Since a sufficient condition for this is \( A_1 = A_2 = \cdots = A_k \text{ and } A_1 (i,j) = 0 \text{ or } 1, \;\; 1 \le i \le p, 1 \le j \le q\), the result follows. \(\square \)

Proof of Lemma 6.10:

By definition of \(C, \; \mu _0(C) = 0 = \gamma _0, \; \; \mu _i (C) = d - \mu _{n+1-i} (A)\) for \(i > 0\). Therefore, \(\sum _{i=1}^{\rho } \mu _{i} (C) = d \rho - tr(A) \le \rho (d - a)\). Since \((1/l)\sum _{i=1}^l \mu _{i} (C)\) is increasing in l, it follows that \(\sum _{i=1}^l \mu _{i} (C) \le l(d - a), \; 1 \le l \le \rho \). Since \(\mu _i(C) \le d \; \forall i\), the result follows. \(\square \)

Proof of Lemma 6.12:

Since d is equireplicate, \(1_v\) is an eigenvector of \(T_i\) as well as of \(T_j\) with eigenvalue r, where r is the replication number (for the treatments). Again, by the hypothesis, \(T_i T_j = r^3 J_v = T_j T_i\). Thus, \(T_i\) and \(T_j\) are commuting matrices and hence there is an orthonormal basis consisting of common eigenvectors of these two matrices. Therefore, \(\mathcal{C} (T_i) \cap \mathcal{C} (T_j) = \mathcal{C} (T_i T_j) = \mathcal{C} (J_v) = \langle \{1_v\} \rangle \). Hence the result. \(\square \)

Appendix B: Results involving finite fields

The proofs of the results in Section 4: We begin with the proof of Theorem 4.2.

Let \(C_0\) be the set of all nonzero squares of F and \(C_1\) the set of all nonzero non-squares of F.

Notation 9.1

Consider an \(m \times n\) array P with entries from F. For \(1 \le i,j\le m, \; k \in F\), let

$$\begin{aligned} d^k_{ij} = |\{l : p_{jl} - p_{il} = k \}|. \end{aligned}$$

\(C_0 P\) will denote the \(m \times (s-1)n/2\) array \( \{cP: c \in C_0\}\).

Lemma 9.1

Suppose there exists an array P as in Notation 9.1 satisfying \(\sum \limits _{ k \in C_0} d^k_{ij} = \sum \limits _{ k \in C_1} d^k_{ij} = u_{ij}\) (say). Let \(\mathcal{P}\) be a plan having the set of columns of the array \(C_0 P\) as the set of runs. Let \(\mathcal{P}^* = \mathcal{P} \oplus F\). Then, for a pair \((i,j), i\ne j,\;1 \le i,j \le m\), the incidence matrix \(N_{ij}\) of \(\mathcal{P}^*\) satisfies the following.

$$\begin{aligned} N_{ij} = w _{ij}I_s + u_{ij} J_s, \text{ where } w _{ij} = (s-1)n/2 - su_{ij}. \end{aligned}$$

Proof

\(N_{ij} (x,y) = |\{(l,\alpha ,q) : q \in C_0,\; q(p_{jl} - p_{il}) = y-x, \alpha = x - qp_{il}, \; 1 \le l \le n \}|\), which equals \(|\{(l,q) : y-x = q(p_{jl} - p_{il}), \; q \in C_0, 1 \le l \le n \}|\). So, by hypothesis,

$$\begin{aligned} \text{ if } y-x \in C_0\text{, } \text{ then } N_{ij} (x,y) = |\{ l : p_{jl} - p_{il} \in C_0\}| = \sum \limits _{ k \in C_0} d^k_{ij} = u_{ij}. \end{aligned}$$

Similarly, if \(y-x \in C_1\), then \( N_{ij} (x,y) = \sum \limits _{ k \in C_1} d^k_{ij}\). Finally, if \(y = x\), then \( N_{ij} (x,y) = \frac{s-1}{2}(n - 2 u_{ij})\). Hence the result. \(\square \)

Using the fact that when \( s \equiv 3 \pmod 4\), \(-1 \in C_1\), we can prove the following result from Lemma 9.1.

Lemma 9.2

Suppose \( s \equiv 3 \pmod 4\) is a prime power and n is a multiple of 4. Suppose there is an \(m \times n\) array P with entries \(0,1,-1\) (viewed as members of F) satisfying the following.

(a) \(p_{1,j} = 0, 1 \le j \le n\).

(b) For every ordered pair (ij), \(p_{il} - p_{jl} \in \{0,1,-1\}, \; 1 \le l \le n\).

(c) For \(k = 1, -1\), \(d^k_{ij} = \left\{ \begin{array}{ll} n/2 &{} \text{ if } (i,j) = (1,2), \\ n/4 &{} \text{ otherwise } \end{array} \right. \)

Then the plan \(\mathcal{P}^* = C_0 \mathcal{P} \oplus F\) satisfies the conditions of Theorem 4.1 with \(c = ns(s-1)/8\). Here \(C_0 \mathcal{P} \oplus F\) is to be interpreted as in Notation 4.1 (d).

Proof of Theorem 4.2: Let H be a Hadamard matrix of order q. Without loss of generality, we assume that the first row of H consists only of 1's. Write \(H = \left[ \begin{array}{c} 1'_q \\ {\tilde{H}} \end{array} \right] \). Consider the array

$$\begin{aligned} P = \left[ \begin{array}{c|c} 0_{1 \times q} &{} 0_{1 \times q} \\ J_{1 \times q} &{} - J_{1 \times q} \\ ({\tilde{H}} + J_{(q-1) \times q} )/2 &{} - ({\tilde{H}} + J_{(q-1) \times q} )/2\\ ({\tilde{H}} + J_{(q-1) \times q} )/2 &{} ({\tilde{H}} - J_{(q-1) \times q} )/2\\ \end{array} \right] . \end{aligned}$$

It is easy to check that P satisfies the conditions of Lemma 9.2. Hence the proof of Theorem 4.2 is complete in view of Theorem 4.1. \(\square \)
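Remark: The check on P can also be done by machine. The sketch below (ours) builds P from the order-4 Sylvester Hadamard matrix (an illustrative choice of H with first row all 1's) and asserts conditions (a)-(c) of Lemma 9.2, with \(n = 2q\).

```python
import numpy as np
from collections import Counter

H2 = np.array([[1, 1], [1, -1]])
H = np.kron(H2, H2)                        # Hadamard matrix of order q = 4
q = H.shape[0]
Ht = H[1:, :]                              # H-tilde: rows 2, ..., q of H
J = np.ones((q - 1, q), dtype=int)
P = np.vstack([
    np.hstack([np.zeros((1, q), int), np.zeros((1, q), int)]),
    np.hstack([np.ones((1, q), int), -np.ones((1, q), int)]),
    np.hstack([(Ht + J) // 2, -((Ht + J) // 2)]),
    np.hstack([(Ht + J) // 2, (Ht - J) // 2]),
])
m, n = P.shape                             # m = 2q rows, n = 2q columns

assert (P[0] == 0).all()                                  # condition (a)
for i in range(m):
    for j in range(m):
        if i != j:
            d = Counter(P[i] - P[j])
            assert set(d) <= {0, 1, -1}                   # condition (b)
            expected = n // 2 if {i, j} == {0, 1} else n // 4
            assert d[1] == d[-1] == expected              # condition (c)
print("P satisfies Lemma 9.2 for q =", q)
```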

The proofs of the results in Section 6.1:

We present the result in Lemma 6.2 (c) as a theorem.

Theorem 9.1

The spectrum of \(C_{d^*_1}\) is \(r^{h-1} \left( r - \frac{1}{s-h}\right) ^{s-1} (r-1)^{(h-1)(s-1)}.\)

This is the only part of this paper where matrices with complex entries occur. If A is such a matrix, then \(A^*\) will denote its conjugate transpose. To prove this theorem we need a number of tools. We begin with some notation; some of it has already appeared in the main body of the paper and is repeated here for the sake of readability.

Notation 9.2

(0) \(s = p^m = ht+1\), where p is a prime and \(m, h, t\) are integers with \(m \ge 1\) and \(h,t \ge 2\). \(F_p\) and \(F_s\) are finite fields of orders p and s, respectively. [In this section we use the notation \(F_s\) (rather than F as in the other sections) to distinguish it from the field of order p.]

(i) Addition and subtraction in \(I_h = \{0,1, \cdots, h-1\}\) will always be modulo h. \(C_i, \; i \in I_h\), will denote the cosets of the subgroup \(C_0\) of order t in \(F_s^*\), ordered in such a way that \(C_iC_j=C_{i+j}\) for \(i,j \in I_h\).

The rows and columns of every \(s \times s\) (respectively, \(h \times h\)) matrix will be indexed by \(F_s\) (respectively, \(I_h\)). Moreover, the rows and columns of every \(hs \times hs\) matrix will be indexed by \(I_h \times F_s \).

(ii) As in Notation 4.6, M will denote the \(hs \times hs\) matrix \((( M_{i-j}))_{i,j \in I_h}\), where for \(i \in I_h, M_i\) will denote the \(s \times s\) matrix given by \(M_i(x,y) = \left\{ \begin{array}{ll} 1 &{} \text{ if } y - x \in C_i, \\ 0 &{} \text{ otherwise } \end{array} \right. \)

(iii) \(\eta \) and \(\omega \) are primitive hth and pth roots of unity, respectively.

(iv) Consider the function trace \( : F_s \rightarrow F_p\) defined as follows: \(trace(x) = \sum _{i=1}^{m} x^{p^i},\; x \in F_s\). [This is \(F_p\)-linear and maps into \(F_p\), since \(x \rightarrow x^p\) is an automorphism of \(F_s\) whose fixed field is \(F_p\).]

(v) U and V are unitary matrices of orders h and s, respectively, given as follows. (A numerical sanity check of this item appears after this notation.)

$$\begin{aligned} U(i,j) = (1/\sqrt{h}) \eta ^{ij}, i,j \in I_h \text{ and } V(x,y) = (1/\sqrt{s}) \omega ^{trace(xy)}, x,y \in F_s. \end{aligned}$$

(vi) Consider the sums \(g_i\) given by \(g_i = \sum \limits _{ x \in C_i} \omega ^{-trace(x)}, i \in I_h\).

(vii) For \(k \in I_h\), \(G_k\) is the \(h \times h\) matrix given by \(G_k(i,j) = g_{i-j+k}, \; i,j \in I_h\).

(viii) For \(k \in I_h\), \(E_k\) is the \(s \times s\) diagonal matrix given by

$$\begin{aligned} E_k(x,x) = \left\{ \begin{array}{ll} - t &{} \text{ if } x =0 \\ 1 &{} \text{ if } x \in C_k \\ 0 &{} \text{ otherwise } \end{array} \right. \end{aligned}$$

(ix) W will denote the \(hs \times hs\) unitary matrix \(U \otimes V\).

(x) T is the \(h \times h\) diagonal matrix with entries \(T(l,l) = \eta ^l, \; l \in I_h\).
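Remark: As the sanity check of item (v) promised above, the sketch below (ours) takes s prime, so that \(trace(x) = x\), and verifies that U and V are unitary; the parameters \(s = 7, h = 3\) are illustrative choices.

```python
import numpy as np

# U and V are the (scaled) character tables of Z_h and (F_s, +); for prime s
# they are ordinary DFT matrices and hence unitary.
s, h = 7, 3
omega, eta = np.exp(2j * np.pi / s), np.exp(2j * np.pi / h)
V = np.array([[omega ** (x * y) for y in range(s)] for x in range(s)]) / np.sqrt(s)
U = np.array([[eta ** (i * j) for j in range(h)] for i in range(h)]) / np.sqrt(h)
assert np.allclose(V.conj().T @ V, np.eye(s))
assert np.allclose(U.conj().T @ U, np.eye(h))
print("U and V are unitary for s = 7, h = 3")
```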

We study the behaviour of M under the actions of U and V.

Lemma 9.3

\((I_h \otimes V)^* M (I_h \otimes V) = \sum \limits _{ k \in I_h} G_k \otimes E_k\).

Proof

A computation shows that we have \(V^* M_i V = Diag(\lambda _i(x),\; x \in F_s) = \sum \limits _{ k \in I_h} g_{i+k} E_k,\) where

$$\begin{aligned} \lambda _i (x) = \left\{ \begin{array}{ll} t &{} \text{ if } x =0 \\ g_{i+k} &{} \text{ if } x \in C_k, k \in I_h.\end{array} \right. \end{aligned}$$

[To verify the second equality here (as well as for later use), we need to observe that

$$\begin{aligned} \sum \limits _{ k \in I_h} g_{i+k} = \sum \limits _{ k \in I_h} g_k = \sum \limits _{ x \in F^*_s} \omega ^{-trace(x)} =-1, \text{ since } \sum \limits _{ x \in F_s} \omega ^{-trace(x)} = 0]. \end{aligned}$$

But, by the definition of M, the left hand side of the statement of this lemma is \(((V^* M_{i-j} V))_{i,j \in I_h}\). Hence the result follows from the definitions of \(G_k\) and \(E_k\). \(\square \)

Lemma 9.4

\(W^* M W\) is a diagonal matrix with the following entries. For \(i \in I_h, x \in F_s\), the (i, x)th diagonal entry of \(W^* M W\) is

$$\begin{aligned} \delta (i,x) = \left\{ \begin{array}{ll} s-1 &{} \text{ if } \;\; i=0, x =0, \\ 0 &{} \text{ if } \;\; i \ne 0, x =0, \\ \eta ^{ik}\sum \limits _{ j \in I_h} g_j \eta ^{-ij} &{} \text{ if } \;\; x \in C_k. \end{array} \right. \end{aligned}$$

Proof

Since \(W = (I \otimes V) (U \otimes I) \), Lemma 9.3 implies that

$$\begin{aligned} W^* M W = (U \otimes I)^* \left( \sum \limits _{ k \in I_h} G_k \otimes E_k \right) (U \otimes I) = \sum \limits _{ k \in I_h} (U^* G_k U) \otimes E_k. \end{aligned}$$

But one can verify that

$$\begin{aligned} U^* G_k U = \sum \limits _{ j \in I_h} g_{k-j} T^j, \; k \in I_h. \end{aligned}$$

So, \(W^* M W = \sum \limits _{ k \in I_h} \sum \limits _{ j \in I_h} g_{k-j} T^j \otimes E_k\). Now, the formulae for T and \(E_k\) imply the result. \(\square \)

Notation 9.3

(a) \(\Omega _h\) is the multiplicative group of all hth roots of unity.

(b) For \(i \in I_h\), \(\chi _i : F^*_s \rightarrow \Omega _h\) is defined by \(\chi _i (x) = \eta ^{-ij}\), if \(x \in C_j, j \in I_h\).

Lemma 9.5

\(|\sum \limits _{ j \in I_h} g_j \eta ^{-ij}|^2 = \left\{ \begin{array}{ll} 1 &{} \text{ if } i=0,\\ s &{} \text{ if } 0< i < h \end{array} \right. \)

Proof

Since \(C_jC_k = C_{j+k}\) (where the addition in the suffix is modulo h), \(\chi _i\) is a group homomorphism (character) on \(F^*_s\). From the definition of \(g_j\)’s we have

$$\begin{aligned} \sum \limits _{ j \in I_h} g_j \eta ^{-ij} = \sum \limits _{j \in I_h} \sum \limits _{x \in C_j} \omega ^{-trace(x)} \chi _i(x) = \sum \limits _{x \in F_s^*} \omega ^{-trace(x)}\chi _i(x) = g(\chi _i), \end{aligned}$$

which is the Gauss sum attached to the character \(\chi _i\). But \(|g(\chi _i)|^2 = \left\{ \begin{array}{ll} 1 &{} \text{ if } i=0,\\ s &{} \text{ if } i \ne 0 \end{array} \right. \), by a classical result on such Gauss sums [see, for instance, Chapter 10 of Ireland and Rosen [22]]. Hence the result. \(\square \)
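Remark: Lemma 9.5 can be watched in action numerically. The sketch below (ours) takes the prime \(s = 13\) with \(h = 3, t = 4\) and uses 2 as a generator of \(F_{13}^*\) (an assumption easily checked by hand); since s is prime, \(trace(x) = x\).

```python
import numpy as np

s, h = 13, 3
g = 2                                     # 2 generates F_13^*
powers = [pow(g, k, s) for k in range(s - 1)]
# C_i consists of the powers g^j with j = i (mod h), so C_i C_j = C_{i+j}
cosets = [[powers[j] for j in range(i, s - 1, h)] for i in range(h)]
omega = np.exp(-2j * np.pi / s)           # omega**x here equals w^{-trace(x)}
eta = np.exp(2j * np.pi / h)
for i in range(h):
    g_chi = sum(omega ** x * eta ** (-i * j)
                for j in range(h) for x in cosets[j])
    print(i, round(abs(g_chi) ** 2, 8))   # 1.0 for i = 0, 13.0 otherwise
```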

Putting the information from Lemmas 9.4 and 9.5 together, we get the spectrum of \(M M'\).

Lemma 9.6

\(W^* M M' W= D\), where the diagonal entries of the diagonal matrix D are as follows.

$$\begin{aligned} |\delta (i,x)|^2 = \left\{ \begin{array}{ll} (s-1)^2 &{} \text{ if } i=0, x =0 \\ 0 &{} \text{ if } i \ne 0, x =0 \\ 1 &{} \text{ if } i=0, x \ne 0, \\ s &{} \text{ if } i \ne 0, x \ne 0. \end{array} \right. \end{aligned}$$
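Remark: Lemma 9.6 is easy to confirm numerically. The sketch below (ours) builds M for \(s = 7, h = 3, t = 2\), with 3 as a generator of \(F_7^*\) (an illustrative choice), and prints the eigenvalues of \(MM'\): \((s-1)^2 = 36\) once, 0 twice, 1 six times and \(s = 7\) twelve times.

```python
import numpy as np

s, h = 7, 3
g = 3                                     # 3 generates F_7^*
powers = [pow(g, k, s) for k in range(s - 1)]
coset_of = {powers[k]: k % h for k in range(s - 1)}    # x -> i with x in C_i
blocks = [np.array([[1 if (y - x) % s and coset_of[(y - x) % s] == i else 0
                     for y in range(s)] for x in range(s)]) for i in range(h)]
M = np.block([[blocks[(i - j) % h] for j in range(h)] for i in range(h)])
eig = np.sort(np.linalg.eigvalsh(M @ M.T))
print(np.round(eig, 6))  # 0 (x2), 1 (x6), 7 (x12), 36 (x1)
```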

In order to get the spectrum of \(C_{d_1^*}\) we need the spectrum of \(HH'\).

Lemma 9.7

\(W^* HH' W\) is a diagonal matrix with the (i, x)th entry \(= \left\{ \begin{array}{ll} h(s-1)^2 &{} \text{ if } i=0, x =0 \\ h &{} \text{ if } i = 0, x \ne 0 \\ 0 &{} \text{ if } i \ne 0. \end{array} \right. \)

Proof

Let \(\triangle _h\) denote the \(h \times h\) matrix having the (0, 0)th entry 1 and all other entries 0. \(\triangle _s\) is defined in a similar manner. It is easy to verify that

$$\begin{aligned} U^* J_h U = h \triangle _h \text{ and } V^* J_s V = s \triangle _s. \end{aligned}$$

Since \( H = 1_h \otimes ( J_s - I_s )\) [recall Lemma 6.6], we see that

$$\begin{aligned} W^* HH' W = h \triangle _h \otimes (s \triangle _s - I_s)^2, \end{aligned}$$

which is a diagonal matrix with the entries as in the statement. \(\square \)

Proof of Theorem 9.1: Lemmas 9.6 and 9.7 imply the result in view of the expression for \(C_{d^*_1}\) in Lemma 6.2. \(\square \)

The proofs of the results in Section 6.3:

The following result lies at the foundation of the construction of \(d^*_3\):

Lemma 9.8

Let \(s \equiv 3 \pmod 4\) be a prime power. Then there is a subset W of \(C_0\) and a function \(f : W \rightarrow C_1\) satisfying the following.

(a) \(|W| = (s-3)/4\).

(b) For every \(\xi \in W\), \( (\xi - 1) (f(\xi ) - 1) \in C_0\).

(c) For \(\xi \ne \xi ' \in W\), \((\xi - \xi ') ( f(\xi )- f(\xi ')) \in C_0.\)

Proof

Let \(W = \{x \in C_0 : 1-x^2 \in C_0\}, \; {\tilde{W}} = \{x \in C_1 : 1-x^2 \in C_1\}\). Note that \(x \rightarrow -x\) is a bijection from W onto \((C_1 \setminus {\tilde{W}})\setminus \{-1\}\). Therefore, \(|W| = |C_1 \setminus {\tilde{W}}| -1 = (s-3)/2 - |{\tilde{W}}|\). Thus, \(|W| + |{\tilde{W}}| = (s-3)/2\). Also, \(x \rightarrow -1/x\) is a bijection from W onto \({\tilde{W}}\). Thus, \(|W| = |{\tilde{W}}|\), which proves (a).

Let \(f : W \rightarrow {\tilde{W}}\) be defined by \(f(x) = -1/x,\; x \in W\). Then, for \(\xi \in W\), \(1-\xi ^2 \in C_0\), so that \((1 -\xi ) (1 - f(\xi )) = \xi ^{-1} (1 - \xi ^2)\) which is in \(C_0\). This proves (b).

Again, for \(\xi \ne \xi ' \in W\), \((f(\xi ) - f(\xi '))(\xi - \xi ') = (\xi \xi ')^{-1} (\xi - \xi ')^2 \in C_0\), which implies (c). \(\square \)
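Remark: Here is a concrete instance of Lemma 9.8 (ours), for the prime \(s = 11\): the code computes \(W = \{x \in C_0 : 1 - x^2 \in C_0\}\) and \(f(x) = -1/x\) and checks (a)-(c).

```python
s = 11
C0 = {pow(x, 2, s) for x in range(1, s)}          # nonzero squares in F_11
W = [x for x in sorted(C0) if (1 - x * x) % s in C0]
f = {x: (-pow(x, s - 2, s)) % s for x in W}       # f(x) = -1/x (Fermat inverse)
assert len(W) == (s - 3) // 4                                       # (a)
assert all(f[x] != 0 and f[x] not in C0 for x in W)                 # f maps W into C_1
assert all(((x - 1) * (f[x] - 1)) % s in C0 for x in W)             # (b)
assert all(((x - y) * (f[x] - f[y])) % s in C0
           for x in W for y in W if x != y)                         # (c)
print("W =", W, " f =", f)  # W = [3, 5]  f = {3: 7, 5: 2}
```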

Proof of Lemma 6.11: Recall (6.10). For \(l = 0,1\), let \(\tilde{R^3_l}\) denote the set of \((w+2) \times 1\) vectors obtained from \(R^3_l\) by adjoining 0 in the \((w+2)\)th position of each member of \(R^3_l\). Let \(\mathcal{P}^0_l\) denote the plan for an \(s^{w+1} (s+1)\) experiment with \(\tilde{R^3_l}\) as the set of runs and the set of factors \(\{A_1, \cdots, A_w\} \cup \{A_\infty \} \cup \{A_0\}\). Let \(\mathcal{P}_l = \mathcal{P}^0_l \oplus F\) and \(\mathcal{P} = \bigcup \limits _{l =0,1} \mathcal{P}_l\). Needless to say, while generating \(\mathcal{P}_l\) from \(\mathcal{P}^0_l\), we make use of (4.4).

Proof of (a): Fix \(i \ne j, \; 1 \le i,j \le w\). Let \(D_l\) denote the (\(A_i\), \(A_j\)) plot difference for the plan \(\mathcal{P}^0_l, l = 0,1\) [recall Definition 4.5]. From (6.10) we see that

$$\begin{aligned} D_0 = (\xi _j - \xi _i) {\bar{C}}_0 \text{ and } D_1 = (f(\xi _j) - f(\xi _i)) {\bar{C}}_1. \end{aligned}$$

By (c) of Lemma 9.8, \(D_0 \sqcup D_1 = F \sqcup \{0\}\). Now applying Lemma 4.2 to the plan \(\mathcal{P}^0_0 \cup \mathcal{P}^0_1\) we see that

$$\begin{aligned} L_{ij} = I_s + J_s. \end{aligned}$$
(9.1)

Next fix \(i, 1 \le i \le w\). Let \(D_l\) denote the (\(A_i\), \(A_\infty \)) plot difference for the plan \(\mathcal{P}^0_l, l = 0,1\). From (6.10) and the equation next to it we see that

$$\begin{aligned} D_0 = (1 - \xi _i) {\bar{C}}_0 \text{ and } D_1 = (1 - f(\xi _i))C_1 \cup \{\infty \}, \text{ so } \text{ that } D_0 \sqcup D_1 = F^+, \end{aligned}$$

by (b) of Lemma 9.8. Thus, by Lemma 4.2, \(L_{i\infty } = J_{s\times (s+1)}\). This relation together with (9.1) completes the proof.

Proof of (b): Fix \( i, 1 \le i \le w\). Let \(D_l\) be the (\(A_0\),\(A_i\)) plot difference for the plan \( \mathcal{P}^0_l, l = 0,1\). From (6.10) we see that

$$\begin{aligned} D_0 = \xi _i {\bar{C}}_0 \text{ and } D_1 = f(\xi _i) {\bar{C}}_1, \text{ so } \text{ that } D_0 = D_1 = {\bar{C}}_0, \end{aligned}$$

by the definition of W. Thus, \(N^{(l)}_{0i} = M_0 + I_s,\; l = 0,1\), by Lemma 4.2. But from the description of \(d^*_3\), in view of Notation 6.1 we see that

$$\begin{aligned} N_i = \left[ \begin{array}{l} N_{i0} \\ N_{i1} \end{array} \right] , \text{ where } N_{il} = N^{(l)}_{0i},\; i \in \{1, \cdots w\} \cup \{\infty \}. \end{aligned}$$
(9.2)

Hence the result.

Proof of (c): Now we consider the ordered pair of factors \((A_0,A_\infty )\) of \(\mathcal{P}^0_l, l = 0,1\), and find the (\(A_0\), \(A_\infty \)) plot difference for each of \(\mathcal{P}_0\) and \(\mathcal{P}_1\). We see that \( D_0 = {\bar{C}}_0 \text{ and } D_1 = C_1 \cup \{\infty \} .\) Thus,

$$\begin{aligned} N^{(0)}_{0, \infty } = [ \begin{array}{ll} M_0 + I_s&0_{s \times 1} \end{array}], \text{ while } N^{(1)}_{0, \infty } = [ \begin{array}{ll} M_1&1_s \end{array}]. \end{aligned}$$

Now using (9.2) we get the result. \(\square \)


About this article

Cite this article

Bagchi, S., Bagchi, B. Aspects of Optimality of Plans Orthogonal Through Other Factors and Related Multiway Designs. J Stat Theory Pract 15, 78 (2021). https://doi.org/10.1007/s42519-021-00211-1
