
Study of Central Composite Design and Orthogonal Array Composite Design

Chapter in: Contemporary Experimental Design, Multivariate Analysis and Data Mining

Abstract

Response surface methodology (RSM) is an effective tool for exploring the relationships between the response and the input factors. Central composite designs (CCDs) and orthogonal array composite designs (OACDs) are useful second-order designs in response surface methodology. In this work, we consider the efficiencies of these two classes of composite designs in the general case. Assuming the second-order polynomial model, the D-efficiencies of CCDs and OACDs are studied for a general value of \(\alpha \) for the star points. Moreover, the determination of \(\alpha \) is also discussed from the perspective of a space-filling criterion.
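To make the role of \(\alpha \) concrete, the following minimal sketch (not taken from the chapter; the choice of \(k=2\) factors, three center runs, and the particular \(\alpha \) values are illustrative assumptions) builds the second-order model matrix of a small CCD and shows how \(|X^{\prime }X|\), the determinant on which D-efficiency is based, responds to the star-point distance \(\alpha \).

```python
# Minimal CCD sketch: k = 2 factors, three center runs, and the alpha values
# below are illustrative assumptions, not values from the chapter.
import numpy as np
from itertools import combinations, product

def second_order_model_matrix(design):
    """Columns: intercept, linear, quadratic, and bilinear (interaction) terms."""
    n, k = design.shape
    cols = [np.ones(n)]
    cols += [design[:, i] for i in range(k)]                                      # linear
    cols += [design[:, i] ** 2 for i in range(k)]                                 # quadratic
    cols += [design[:, i] * design[:, j] for i, j in combinations(range(k), 2)]   # bilinear
    return np.column_stack(cols)

def ccd(k, alpha, n_center):
    """Central composite design: 2^k cube runs, 2k star runs at +/- alpha, center runs."""
    cube = np.array(list(product([-1.0, 1.0], repeat=k)))
    star = np.vstack([alpha * np.eye(k), -alpha * np.eye(k)])
    center = np.zeros((n_center, k))
    return np.vstack([cube, star, center])

for alpha in (1.0, np.sqrt(2.0), 2.0):
    X = second_order_model_matrix(ccd(k=2, alpha=alpha, n_center=3))
    print(f"alpha = {alpha:.3f},  |X'X| = {np.linalg.det(X.T @ X):.1f}")
```

An OACD or a larger CCD can be compared in the same way by substituting its design matrix.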



Acknowledgements

This research was partially supported by grants from the Natural Science Foundation of China (Nos. 11571133 and 11871237).

Author information


Corresponding author

Correspondence to Jianhui Ning.


Appendix

Lemma 10.7

Let \(a\ne 0\) and \(b\ne 0\). Then

$$\begin{aligned} \begin{vmatrix}c_0 &{} c\mathbf{1 }^{\prime }_k \\ c\mathbf 1 _k &{} a\mathbf{J} _k+b\mathbf{I} _k \end{vmatrix}=b^{k-1}(bc_0+k(ac_0-c^2)). \end{aligned}$$
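As a quick sanity check, the identity can be verified numerically; the values of \(c_0\), \(c\), \(a\), \(b\), and \(k\) below are arbitrary illustrative choices.

```python
# Numerical spot check of Lemma 10.7; the parameter values are arbitrary.
import numpy as np

def lhs(c0, c, a, b, k):
    one = np.ones((k, 1))
    # Determinant of the bordered matrix [[c0, c*1'], [c*1, a*J + b*I]]
    M = np.block([[np.array([[c0]]), c * one.T],
                  [c * one, a * np.ones((k, k)) + b * np.eye(k)]])
    return np.linalg.det(M)

def rhs(c0, c, a, b, k):
    return b ** (k - 1) * (b * c0 + k * (a * c0 - c ** 2))

print(lhs(5.0, 1.5, 2.0, 3.0, 4), rhs(5.0, 1.5, 2.0, 3.0, 4))  # the two values agree
```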

Lemma 10.8

Let E and F be two \(n\times n\) nonnegative definite matrices with partitions

$$\begin{aligned} E=\begin{pmatrix}E_1 &{} \mathbf 0 \\ \mathbf 0 &{} E_2 \end{pmatrix}\ge 0,\ F=\begin{pmatrix}F_1 &{} F_3 \\ F^{\prime }_3 &{} F_2 \end{pmatrix}\ge 0, \end{aligned}$$

where \(E_1\) and \(F_1\) are \(m\times m\) matrices. Then

$$\begin{aligned} |E+F|\ge |E_2|\cdot |E_1+F_1|. \end{aligned}$$
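The inequality can likewise be illustrated numerically with randomly generated nonnegative definite matrices; the dimensions and the random seed below are arbitrary.

```python
# Random illustration of Lemma 10.8; dimensions and seed are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2
A1 = rng.standard_normal((m, m))
A2 = rng.standard_normal((n - m, n - m))
B = rng.standard_normal((n, n))

E1, E2 = A1 @ A1.T, A2 @ A2.T                      # E is block diagonal and >= 0
E = np.block([[E1, np.zeros((m, n - m))],
              [np.zeros((n - m, m)), E2]])
F = B @ B.T                                        # F >= 0, with upper-left block F1
F1 = F[:m, :m]

print(np.linalg.det(E + F) >= np.linalg.det(E2) * np.linalg.det(E1 + F1))  # True
```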

Proof of Theorem 10.2 Denote \(X_0=(\mathbf 1 _{n_0},\mathbf 0 ,\mathbf 0 ,\mathbf 0 )\) and \(X_i=(\mathbf 1 _{n_i},Q_i,L_i,B_i)\), where \(Q_i\), \(L_i\), and \(B_i\) are, respectively, the quadratic, linear, and bilinear terms of \(d_i\) in the second-order model, \(i=1,2\).

$$\begin{aligned} X^{\prime }_1X_1=\begin{pmatrix}n_1 &{} n_1\mathbf 1 ^{\prime }_k &{} \mathbf 0 &{} \mathbf 0 \\ n_1\mathbf 1 _k &{} n_1J_k &{} \mathbf 0 &{} \mathbf 0 \\ \mathbf 0 &{} \mathbf 0 &{} n_1I_k &{} \mathbf 0 \\ \mathbf 0 &{} \mathbf 0 &{} \mathbf 0 &{} n_1I_q \end{pmatrix}=\begin{pmatrix}n_1J_{k+1} &{} \mathbf 0 &{} \mathbf 0 \\ \mathbf 0 &{} n_1I_k &{} \mathbf 0 \\ \mathbf 0 &{} \mathbf 0 &{} n_1I_q\end{pmatrix}, \end{aligned}$$
$$\begin{aligned} X^{\prime }_2X_2=\begin{pmatrix}n_2 &{} \frac{2}{3}n_2\alpha ^2\mathbf 1 ^{\prime }_k &{} \mathbf 0 &{} \mathbf 0 \\ \frac{2}{3}n_2\alpha ^2\mathbf 1 _k &{} \frac{4}{9}n_2\alpha ^4J_k+\frac{2}{9}n_2\alpha ^4I_k &{} \mathbf 0 &{} Q_2^{\prime }B_2 \\ \mathbf 0 &{} \mathbf 0 &{} \frac{2}{3}n_2\alpha ^2I_k &{} L_2^{\prime }B_2 \\ \mathbf 0 &{} B_2^{\prime }Q_2 &{} B_2^{\prime }L_2 &{} B_2^{\prime }B_2 \end{pmatrix}, \end{aligned}$$

let \(Y=X^{\prime }_2X_2+X^{\prime }_0X_0\), then

$$\begin{aligned} Y=\begin{pmatrix}n_2+n_0 &{} \frac{2}{3}n_2\alpha ^2\mathbf 1 ^{\prime }_k &{} \mathbf 0 &{} \mathbf 0 \\ \frac{2}{3}n_2\alpha ^2\mathbf 1 _k &{} \frac{4}{9}n_2\alpha ^4J_k+\frac{2}{9}n_2\alpha ^4I_k &{} \mathbf 0 &{} Q_2^{\prime }B_2 \\ \mathbf 0 &{} \mathbf 0 &{} \frac{2}{3}n_2\alpha ^2I_k &{} L_2^{\prime }B_2 \\ \mathbf 0 &{} B_2^{\prime }Q_2 &{} B_2^{\prime }L_2 &{} B_2^{\prime }B_2 \end{pmatrix}, \end{aligned}$$

denote

$$\begin{aligned} B_{11}=\begin{pmatrix}n_2+n_0 &{} \frac{2}{3}n_2\alpha ^2\mathbf 1 ^{\prime }_k \\ \frac{2}{3}n_2\alpha ^2\mathbf 1 _k &{} \frac{4}{9}n_2\alpha ^4J_k+\frac{2}{9}n_2\alpha ^4I_k\end{pmatrix}, B_{13}=\begin{pmatrix}{} \mathbf 0 \\ Q_2^{\prime }B_2\end{pmatrix}, \end{aligned}$$

then

$$\begin{aligned} X^{\prime }X=X^{\prime }_1X_1+Y=\begin{pmatrix}B_{11}+n_1J_{k+1} &{} \mathbf 0 &{} B_{13} \\ \mathbf 0 &{} (\frac{2}{3}n_2\alpha ^2+n_1)I_k &{} L_2^{\prime }B_2 \\ B_{13}^{\prime } &{} B_2^{\prime }L_2 &{} n_1I_q+B_2^{\prime }B_2\end{pmatrix}, \end{aligned}$$
(10.13)

from Lemma 10.8, we get

$$\begin{aligned} |X^{\prime }X|=|X^{\prime }_1X_1+Y|\ge |n_1I_q|\cdot \begin{vmatrix} B_{11}+n_1J_{k+1}&\mathbf 0 \\ \mathbf 0&(\frac{2}{3}n_2\alpha ^2+n_1)I_k\end{vmatrix}=n_1^q\left( \frac{2}{3}n_2\alpha ^2+n_1\right) ^k|B_{11}+n_1J_{k+1}|, \end{aligned}$$

from Lemma 10.7, we have

$$\begin{aligned} |B_{11}+n_1J_{k+1}|=\left( \frac{2}{9}n_2\alpha ^2\right) ^k\left[ (1+2k\alpha ^2)n_0+n_2+n_1\left( 1+2k\alpha ^2+\frac{9kn_0}{2n_2\alpha ^2}+\frac{9k}{2\alpha ^2}-6k\right) \right] , \end{aligned}$$

therefore

$$\begin{aligned} |X^{\prime }X|\ge n^q_1\left[ \frac{(4n_2\alpha ^2+6n_1)\alpha ^2n_2}{27}\right] ^k\left[ (1+2k\alpha ^2)n_0+n_2+n_1\left( 1+2k\alpha ^2+\frac{9kn_0}{2n_2\alpha ^2}+\frac{9k}{2\alpha ^2}-6k\right) \right] , \end{aligned}$$
(10.14)

and Theorem 10.2 follows.
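The intermediate inequality obtained above from Lemma 10.8 can be checked numerically. The sketch below is illustrative only: it assumes \(d_1\) is the full \(2^3\) factorial, \(d_2\) is the full \(3^3\) factorial scaled to levels \(\{-\alpha ,0,\alpha \}\), and there are \(n_0\) center runs; it then compares \(|X^{\prime }X|\) with \(n_1^q\left( \frac{2}{3}n_2\alpha ^2+n_1\right) ^k|B_{11}+n_1J_{k+1}|\), with \(B_{11}\) read off from \(Y=X^{\prime }_2X_2+X^{\prime }_0X_0\) as in the proof.

```python
# Illustrative check of the bound from Lemma 10.8; d1, d2, n0, and alpha are
# assumptions for this example, not designs from the chapter.
import numpy as np
from itertools import combinations, product

def model_matrix(design):
    """Column order as in the proof: intercept, quadratic, linear, bilinear terms."""
    n, k = design.shape
    cols = [np.ones(n)]
    cols += [design[:, i] ** 2 for i in range(k)]                                  # quadratic
    cols += [design[:, i] for i in range(k)]                                       # linear
    cols += [design[:, i] * design[:, j] for i, j in combinations(range(k), 2)]    # bilinear
    return np.column_stack(cols)

k, alpha, n0 = 3, 1.4, 2
q = k * (k - 1) // 2
d1 = np.array(list(product([-1.0, 1.0], repeat=k)))                # 2^3 factorial
d2 = alpha * np.array(list(product([-1.0, 0.0, 1.0], repeat=k)))   # 3^3 at levels -alpha, 0, alpha
n1, n2 = len(d1), len(d2)

X1, X2 = model_matrix(d1), model_matrix(d2)
X0 = model_matrix(np.zeros((n0, k)))
X = np.vstack([X1, X2, X0])

Y = X2.T @ X2 + X0.T @ X0
B11 = Y[: k + 1, : k + 1]                          # top-left (k+1) x (k+1) block of Y
bound = (n1 ** q * ((2 / 3) * n2 * alpha ** 2 + n1) ** k
         * np.linalg.det(B11 + n1 * np.ones((k + 1, k + 1))))
print(np.linalg.det(X.T @ X) >= bound)             # expected: True
```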

Proof of Theorem 10.3 When \(s=L\), from Eq. (10.13) and Fischer's inequality, we have

$$\begin{aligned} |X^{\prime }_{(L)}X_{(L)}|\le |B_{11}+n_1J_{k+1}|\cdot |n_1I_q+B_2^{\prime }B_2|, \end{aligned}$$

because all of the diagonal elements of \(B_2^{\prime }B_2\) are \(\frac{4}{9}n_2\alpha ^4\), Hadamard's inequality gives

$$\begin{aligned} |n_1I_q+B_2^{\prime }B_2|\le \left( n_1+\frac{4}{9}n_2\alpha ^4\right) ^q, \end{aligned}$$
(10.15)

so

$$\begin{aligned} |X^{\prime }_{(L)}X_{(L)}|\le |B_{11}+n_1J_{k+1}|\cdot \left( n_1+\frac{4}{9}n_2\alpha ^4\right) ^q, \end{aligned}$$

then, using Eqs. (10.4) and (10.14), we obtain the lower bound of the \(D_L\)-efficiency. Moreover, from Fischer's inequality, we have

$$\begin{aligned} |X^{\prime }X|\le |X^{\prime }_{(L)}X_{(L)}|\cdot |X^{\prime }_LX_L|, \end{aligned}$$

so

$$\begin{aligned} \frac{|X^{\prime }X|}{|X^{\prime }_{(L)}X_{(L)}|}\le |X^{\prime }_LX_L|=\left| \left( \frac{2}{3}n_2\alpha ^2+n_1\right) I_k\right| =\left( \frac{2}{3}n_2\alpha ^2+n_1\right) ^k, \end{aligned}$$

which gives the upper bound of the \(D_L\)-efficiency. If the linear terms of \(d_2\) are orthogonal to the bilinear terms of \(d_2\), then

$$\begin{aligned} \frac{|X^{\prime }X|}{|X^{\prime }_{(L)}X_{(L)}|}=|X^{\prime }_LX_L|, \end{aligned}$$

and the upper bound of \(D_L\)-efficiency is achieved.

When \(s=B\), from Eq. (10.13),

$$\begin{aligned} |X^{\prime }_{(B)}X_{(B)}|=\begin{vmatrix} B_{11}+n_1J_{k+1}&\mathbf 0 \\ \mathbf 0&(\frac{2}{3}n_2\alpha ^2+n_1)I_k\end{vmatrix}=|B_{11}+n_1J_{k+1}|\cdot \left( \frac{2}{3}n_2\alpha ^2+n_1\right) ^k, \end{aligned}$$

and then the lower bound of the \(D_B\)-efficiency follows from Eqs. (10.4) and (10.14).

When \(s=Q\), from Eq. (10.13) and Fischer's inequality,

$$\begin{aligned} \begin{aligned} |X^{\prime }_{(Q)}X_{(Q)}|&=\begin{vmatrix} N&\mathbf 0&\mathbf 0 \\ \mathbf 0&(\frac{2}{3}n_2\alpha ^2+n_1)I_k&L_2^{\prime }B_2 \\ \mathbf 0&B_2^{\prime }L_2&n_1I_q+B_2^{\prime }B_2\end{vmatrix}\le N\left( \frac{2}{3}n_2\alpha ^2+n_1\right) ^k|n_1I_q+B_2^{\prime }B_2|\\&\le N\left( \frac{2}{3}n_2\alpha ^2+n_1\right) ^k \left( n_1+\frac{4}{9}n_2\alpha ^4\right) ^q, \end{aligned} \end{aligned}$$

and then the lower bound of the \(D_Q\)-efficiency follows from Eq. (10.4), Theorem 10.2, and Eq. (10.15).
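Both determinant bounds used in this proof, Fischer's inequality \(|M|\le |M_{11}|\cdot |M_{22}|\) for a nonnegative definite \(M\) and the Hadamard-type step behind Eq. (10.15), are easy to illustrate numerically; the matrix below is an arbitrary random example.

```python
# Illustration of the two determinant bounds used above (arbitrary random matrix).
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((6, 6))
M = G @ G.T                       # nonnegative definite
m = 2
M11, M22 = M[:m, :m], M[m:, m:]   # diagonal blocks of M

print(np.linalg.det(M) <= np.linalg.det(M11) * np.linalg.det(M22))   # Fischer: True
print(np.linalg.det(M) <= np.prod(np.diag(M)))                       # Hadamard: True
```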


Copyright information

© 2020 Springer Nature Switzerland AG


Cite this chapter

Qiu, S., Xie, M., Qin, H., Ning, J. (2020). Study of Central Composite Design and Orthogonal Array Composite Design. In: Fan, J., Pan, J. (eds) Contemporary Experimental Design, Multivariate Analysis and Data Mining. Springer, Cham. https://doi.org/10.1007/978-3-030-46161-4_10
