
Abstract

This chapter presents a framework for the design and analysis of experiments. First, the general principles of design, including confounding, signal-to-noise ratio, randomisation, and blocking, are considered. Second, the commonly encountered factorial and fractional factorial designs are analysed in detail. Both the analysis and the design of such experiments, including model determination, replicates, confounding patterns, and resolution, are explored. Appropriate methods for analysing such experiments using computers, including the development of orthogonal and orthonormal bases, are presented. Although the results focus on 2^k-factorial designs, higher-order designs are also considered, and the procedure for their analysis is explained. Detailed examples and cases are given. Third, methods for the analysis of curvature, or quadratic terms, in a model are examined using factorial designs with centre-point replicates. Finally, the ideas behind response surface methodologies, such as central composite design and optimal design, are briefly explored. Examples drawn from a wide range of applications are considered. By the end of this chapter, the reader should be able to design and analyse factorial, fractional factorial, and curvature experiments and apply basic response surface methodologies using appropriate computational assistance.


Notes

  1. Determining an orthogonal or orthonormal basis for an arbitrary level is explained fully in Sect. 4.7.

  2. Pointwise multiplication of two vectors, also called the Schur or Hadamard product and denoted in this work by ⊙ (U+2299), is defined entrywise, that is, \( {z}_k={x}_k{y}_k \), where k indexes the entries of the vectors (see the sketch following these notes).

  3. Determining an orthogonal basis for an arbitrary level is explained fully in Sect. 4.7.

  4. The form of the polynomials is similar to the standard, discrete Gram polynomials.

  5. Note that \( \gamma_{12} \) must always equal zero given the set-up of the problem.

  6. This will leave the factorial component unchanged.
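
As a small illustration of note 2, the following sketch uses Python with NumPy (a choice made here for illustration; the chapter does not prescribe a language) to form the Hadamard product of two coded factor columns, which for a two-level design gives the corresponding interaction column. The numerical values are illustrative only.

```python
import numpy as np

# Coded columns of a 2^2 design (levels -1 and +1); illustrative values only.
x1 = np.array([-1, -1, 1, 1])
x2 = np.array([-1, 1, -1, 1])

# Hadamard (pointwise) product z_k = x_k * y_k from note 2; for coded
# two-level factors this is exactly the interaction column x1*x2.
x12 = x1 * x2
print(x12)  # [ 1 -1 -1  1]
```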



Appendix A4: Nonmatrix Approach to the Analysis of 2^k-Factorial Design Experiments

It will be assumed that a 2^k-factorial experiment has been designed with \( n_R \) full replicates. Furthermore, it will be assumed that all the factors have been coded so that −1 and +1 represent the lower and upper levels in the experiment. The same notation as presented in Chap. 4 will be used. Thus, instead of calculating inverses and transposes, the following simplifications hold for a 2^k-factorial experiment:

$$ {\mathcal{A}}^T\mathcal{A}={2}^k{\mathcal{I}}_k, $$
(4.104)

where \( {\mathcal{I}}_k \) is the k × k identity matrix,

$$ {\left({\mathcal{A}}^T\mathcal{A}\right)}^{-1}={2}^{-k}{\mathcal{I}}_k $$
(4.105)
$$ \widehat{\overrightarrow{\beta}}={2}^{-k}{\mathcal{A}}^T\overrightarrow{y} $$
(4.106)

If \( \overline{\mathcal{A}} \) is used, then the results are

$$ {\overline{\mathcal{A}}}^T\overline{\mathcal{A}}={2}^k{n}_R{\mathcal{I}}_k $$
(4.107)
$$ {\left({\overline{\mathcal{A}}}^T\overline{\mathcal{A}}\right)}^{-1}={2}^{-k}{\left({n}_R\right)}^{-1}{\mathcal{I}}_k $$
(4.108)
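
To make the simplification concrete, here is a minimal sketch in Python with NumPy (the language and the simulated numbers are assumptions for illustration, not the book's example). It builds the full model matrix for a hypothetical 2^2 design with three replicates, estimates the coefficients via Eqs. (4.106)–(4.108) without forming any inverse, and checks the result against ordinary least squares.

```python
import numpy as np
from itertools import product

# Hypothetical 2^2 example with n_R = 3 replicates per treatment.
k, n_R = 2, 3

# Coded runs in standard order: each row is (x1, x2) with levels -1 / +1.
runs = np.array(list(product([-1, 1], repeat=k)))          # shape (2^k, k)

# Full model matrix A: intercept, main effects, and the two-factor interaction.
A = np.column_stack([np.ones(2**k), runs[:, 0], runs[:, 1],
                     runs[:, 0] * runs[:, 1]])

# Replicated matrix A_bar stacks A once per replicate; y holds the responses
# in the same order (simulated here purely for illustration).
A_bar = np.tile(A, (n_R, 1))
rng = np.random.default_rng(0)
y = A_bar @ np.array([10.0, 2.0, -1.5, 0.5]) + rng.normal(0, 0.3, size=2**k * n_R)

# Because A_bar^T A_bar = 2^k * n_R * I (cf. Eqs. (4.107)-(4.108)),
# the coefficient estimates need no matrix inversion.
beta_hat = A_bar.T @ y / (2**k * n_R)

# Same answer as ordinary least squares, as a check.
beta_ls, *_ = np.linalg.lstsq(A_bar, y, rcond=None)
print(beta_hat)
print(np.allclose(beta_hat, beta_ls))   # True
```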

The sum of squares due to errors, SSE, can be computed using the following formula:

$$ SSE=\left({n}_R-1\right){\displaystyle \sum_{i=1}^{2^k}{s}_i^2}, $$
(4.109)

where \( s_i \) is the standard deviation of the replicates for treatment i. Thus the standard deviation, \( \widehat{\sigma} \), can be determined as

$$ \widehat{\sigma}=\sqrt{\frac{SSE}{2^k\left({n}_R-1\right)}}=\sqrt{\frac{{\displaystyle \sum_{i=1}^{2^k}{s}_i^2}}{2^k}} $$
(4.110)

The effect due to each variable can be determined from

$$ \mathrm{Effect}=2\widehat{\beta} $$
(4.111)
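
The remaining quantities follow directly from the replicate standard deviations. The sketch below (again Python with NumPy and simulated data, which are assumptions for illustration) computes SSE, \( \widehat{\sigma} \), and the effects for a hypothetical 2^2 design with three replicates, following Eqs. (4.109)–(4.111).

```python
import numpy as np
from itertools import product

# Hypothetical 2^2 example with n_R = 3 replicates per treatment;
# y_rep[i, j] is replicate j of treatment i (simulated for illustration).
k, n_R = 2, 3
runs = np.array(list(product([-1, 1], repeat=k)))
A = np.column_stack([np.ones(2**k), runs[:, 0], runs[:, 1],
                     runs[:, 0] * runs[:, 1]])
rng = np.random.default_rng(0)
y_rep = (A @ np.array([10.0, 2.0, -1.5, 0.5]))[:, None] \
        + rng.normal(0, 0.3, size=(2**k, n_R))

# Eq. (4.109): SSE from the per-treatment sample standard deviations.
s = y_rep.std(axis=1, ddof=1)              # s_i for each treatment
SSE = (n_R - 1) * np.sum(s**2)

# Eq. (4.110): sigma_hat with 2^k (n_R - 1) degrees of freedom.
sigma_hat = np.sqrt(SSE / (2**k * (n_R - 1)))

# Eqs. (4.106) and (4.111): beta_hat from the treatment means, then
# Effect = 2 * beta_hat for each factor and interaction column.
beta_hat = A.T @ y_rep.mean(axis=1) / 2**k
effects = 2 * beta_hat[1:]                 # exclude the intercept term
print(SSE, sigma_hat, effects)
```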

Copyright information

© 2015 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Shardt, Y.A.W. (2015). Design of Experiments. In: Statistics for Chemical and Process Engineers. Springer, Cham. https://doi.org/10.1007/978-3-319-21509-9_4
