# Two Level Factorial Experiments

Chapter 8: Two Level Factorial Experiments

Two level factorial experiments are factorial experiments in which each factor is investigated at only two levels. The early stages of experimentation usually involve the investigation of a large number of potential factors to discover the "vital few" factors. Two level factorial experiments are used during these stages to quickly filter out unwanted effects so that attention can then be focused on the important ones.

## 2^k Designs

Factorial experiments in which all combinations of the levels of the factors are run are usually referred to as full factorial experiments. Full factorial two level experiments are also referred to as ${\displaystyle {2}^{k}\,\!}$ designs where ${\displaystyle k\,\!}$ denotes the number of factors being investigated in the experiment. In Weibull++ DOE folios, these designs are referred to as 2 Level Factorial Designs as shown in the figure below.

A full factorial two level design with ${\displaystyle k\,\!}$ factors requires ${\displaystyle {{2}^{k}}\,\!}$ runs for a single replicate. For example, a two level experiment with three factors will require ${\displaystyle 2\times 2\times 2={{2}^{3}}=8\,\!}$ runs. The choice of the two levels of factors used in two level experiments depends on the factor; some factors naturally have two levels. For example, if gender is a factor, then male and female are the two levels. For other factors, the limits of the range of interest are usually used. For example, if temperature is a factor that varies from ${\displaystyle 45^{\circ }C\,\!}$ to ${\displaystyle 90^{\circ }C\,\!}$, then the two levels used in the ${\displaystyle {2}^{k}\,\!}$ design for this factor would be ${\displaystyle 45^{\circ }C\,\!}$ and ${\displaystyle 90^{\circ }C\,\!}$.

The two levels of the factor in the ${\displaystyle {2}^{k}\,\!}$ design are usually represented as ${\displaystyle -1\,\!}$ (for the first level) and ${\displaystyle 1\,\!}$ (for the second level). Note that this representation is reversed from the coding used in General Full Factorial Designs for the indicator variables that represent two level factors in ANOVA models. For ANOVA models, the first level of the factor was represented using a value of ${\displaystyle 1\,\!}$ for the indicator variable, while the second level was represented using a value of ${\displaystyle -1\,\!}$. For details on the notation used for two level experiments refer to Notation.
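For quantitative factors, the mapping between natural levels and the coded ${\displaystyle -1\,\!}$/${\displaystyle 1\,\!}$ values can be written as a small helper. The following Python sketch is illustrative (the `coded` function is not part of any Weibull++ API) and uses the temperature range from the example above:

```python
def coded(value, low, high):
    """Map a natural factor level onto the coded -1 (low) to +1 (high) scale."""
    return (2 * value - (high + low)) / (high - low)

# Temperature factor investigated between 45 C and 90 C:
print(coded(45, 45, 90))    # low level  -> -1.0
print(coded(90, 45, 90))    # high level -> +1.0
```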

### The 2^2 Design

The simplest of the two level factorial experiments is the ${\displaystyle {2}^{2}\,\!}$ design where two factors (say factor ${\displaystyle A\,\!}$ and factor ${\displaystyle B\,\!}$) are investigated at two levels. A single replicate of this design will require four runs (${\displaystyle {{2}^{2}}=2\times 2=4\,\!}$). The effects investigated by this design are the two main effects, ${\displaystyle A\,\!}$ and ${\displaystyle B,\,\!}$ and the interaction effect ${\displaystyle AB\,\!}$. The treatments for this design are shown in figure (a) below, where letters are used to represent the treatments. The presence of a letter indicates the high level of the corresponding factor and its absence indicates the low level. For example, (1) represents the treatment combination where all factors are at the low level, the level represented by ${\displaystyle -1\,\!}$; ${\displaystyle a\,\!}$ represents the treatment combination where factor ${\displaystyle A\,\!}$ is at the high level, the level of ${\displaystyle 1\,\!}$, while the remaining factor (in this case, factor ${\displaystyle B\,\!}$) is at the low level. Similarly, ${\displaystyle b\,\!}$ represents the treatment combination where factor ${\displaystyle B\,\!}$ is at the high level while factor ${\displaystyle A\,\!}$ is at the low level, and ${\displaystyle ab\,\!}$ represents the treatment combination where both factors ${\displaystyle A\,\!}$ and ${\displaystyle B\,\!}$ are at the high level. Figure (b) below shows the design matrix for the ${\displaystyle {2}^{2}\,\!}$ design. It can be noted that the sum of the terms resulting from the product of any two columns of the design matrix is zero. As a result, the ${\displaystyle {2}^{2}\,\!}$ design is an orthogonal design. In fact, all ${\displaystyle {2}^{k}\,\!}$ designs are orthogonal designs.
This property of the ${\displaystyle {2}^{k}\,\!}$ designs offers a great advantage in the analysis because of the simplifications that result from orthogonality. These simplifications are explained later on in this chapter. The ${\displaystyle {2}^{2}\,\!}$ design can also be represented geometrically using a square with the four treatment combinations lying at the four corners, as shown in figure (c) below.
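The orthogonality claim is easy to verify numerically. A minimal Python sketch, assuming NumPy is available, builds the ${\displaystyle {2}^{2}\,\!}$ design matrix in standard order and checks that distinct columns have zero dot products:

```python
import numpy as np

# Design matrix for the 2^2 design in standard order: (1), a, b, ab.
# Columns: intercept, A, B and the AB interaction (product of the A and B columns).
A = np.array([-1, 1, -1, 1])
B = np.array([-1, -1, 1, 1])
X = np.column_stack([np.ones(4), A, B, A * B])

# Orthogonality: the dot product of any two distinct columns is zero,
# so X'X is a diagonal matrix (here, 4 times the identity).
gram = X.T @ X
assert np.allclose(gram, 4 * np.eye(4))
```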

### The 2^3 Design

The ${\displaystyle {2}^{3}\,\!}$ design is a two level factorial experiment design with three factors (say factors ${\displaystyle A\,\!}$, ${\displaystyle B\,\!}$ and ${\displaystyle C\,\!}$). This design tests three (${\displaystyle k=3\,\!}$) main effects, ${\displaystyle A\,\!}$, ${\displaystyle B\,\!}$ and ${\displaystyle C\,\!}$; three (${\displaystyle {\binom {k}{2}}={\binom {3}{2}}=3\,\!}$) two factor interaction effects, ${\displaystyle AB\,\!}$, ${\displaystyle BC\,\!}$ and ${\displaystyle AC\,\!}$; and one (${\displaystyle {\binom {k}{3}}={\binom {3}{3}}=1\,\!}$) three factor interaction effect, ${\displaystyle ABC\,\!}$. The design requires eight runs per replicate. The eight treatment combinations corresponding to these runs are ${\displaystyle (1)\,\!}$, ${\displaystyle a\,\!}$, ${\displaystyle b\,\!}$, ${\displaystyle ab\,\!}$, ${\displaystyle c\,\!}$, ${\displaystyle ac\,\!}$, ${\displaystyle bc\,\!}$ and ${\displaystyle abc\,\!}$. Note that the treatment combinations are written in such an order that factors are introduced one by one, with each new factor being combined with the preceding terms. This order of writing the treatments is called the standard order or Yates' order. The ${\displaystyle {2}^{3}\,\!}$ design is shown in figure (a) below. The design matrix for the ${\displaystyle {2}^{3}\,\!}$ design is shown in figure (b). The design matrix can be constructed by following the standard order for the treatment combinations to obtain the columns for the main effects and then multiplying the main effects columns to obtain the interaction columns.

The ${\displaystyle {2}^{3}\,\!}$ design can also be represented geometrically using a cube with the eight treatment combinations lying at the eight corners as shown in the figure above.
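The design matrix can also be built programmatically in the same way: generate the main effect columns in standard order, then multiply them to obtain the interaction columns. A minimal Python sketch, assuming NumPy is available (the variable names are illustrative):

```python
import numpy as np
from itertools import product

# Main-effect columns of the 2^3 design in standard (Yates') order:
# product() varies its last element fastest, so reversing each tuple
# makes factor A change fastest, giving (1), a, b, ab, c, ac, bc, abc.
runs = np.array([t[::-1] for t in product((-1, 1), repeat=3)])
A, B, C = runs.T

# Interaction columns are element-wise products of the main-effect columns.
X = np.column_stack([np.ones(8), A, B, A * B, C, A * C, B * C, A * B * C])

# All 2^k designs are orthogonal: X'X is diagonal.
assert np.allclose(X.T @ X, 8 * np.eye(8))
```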

## Analysis of 2^k Designs

The ${\displaystyle {2}^{k}\,\!}$ designs are a special category of the factorial experiments where all the factors are at two levels. The fact that these designs contain factors at only two levels and are orthogonal greatly simplifies their analysis even when the number of factors is large. The use of ${\displaystyle {2}^{k}\,\!}$ designs in investigating a large number of factors calls for a revision of the notation used previously for the ANOVA models. The case for revised notation is made stronger by the fact that the ANOVA and multiple linear regression models are identical for ${\displaystyle {2}^{k}\,\!}$ designs because all factors are only at two levels. Therefore, the notation of the regression models is applied to the ANOVA models for these designs, as explained next.

### Notation

Based on the notation used in General Full Factorial Designs, the ANOVA model for a two level factorial experiment with three factors would be as follows:

{\displaystyle {\begin{aligned}Y=&\mu +{{\tau }_{1}}\cdot {{x}_{1}}+{{\delta }_{1}}\cdot {{x}_{2}}+{{(\tau \delta )}_{11}}\cdot {{x}_{1}}{{x}_{2}}+{{\gamma }_{1}}\cdot {{x}_{3}}\\&+{{(\tau \gamma )}_{11}}\cdot {{x}_{1}}{{x}_{3}}+{{(\delta \gamma )}_{11}}\cdot {{x}_{2}}{{x}_{3}}+{{(\tau \delta \gamma )}_{111}}\cdot {{x}_{1}}{{x}_{2}}{{x}_{3}}+\epsilon \end{aligned}}\,\!}

where:

• ${\displaystyle \mu \,\!}$ represents the overall mean
• ${\displaystyle {{\tau }_{1}}\,\!}$ represents the independent effect of the first factor (factor ${\displaystyle A\,\!}$) out of the two effects ${\displaystyle {{\tau }_{1}}\,\!}$ and ${\displaystyle {{\tau }_{2}}\,\!}$
• ${\displaystyle {{\delta }_{1}}\,\!}$ represents the independent effect of the second factor (factor ${\displaystyle B\,\!}$) out of the two effects ${\displaystyle {{\delta }_{1}}\,\!}$ and ${\displaystyle {{\delta }_{2}}\,\!}$
• ${\displaystyle {{(\tau \delta )}_{11}}\,\!}$ represents the independent effect of the interaction ${\displaystyle AB\,\!}$ out of the other interaction effects
• ${\displaystyle {{\gamma }_{1}}\,\!}$ represents the independent effect of the third factor (factor ${\displaystyle C\,\!}$) out of the two effects ${\displaystyle {{\gamma }_{1}}\,\!}$ and ${\displaystyle {{\gamma }_{2}}\,\!}$
• ${\displaystyle {{(\tau \gamma )}_{11}}\,\!}$ represents the effect of the interaction ${\displaystyle AC\,\!}$ out of the other interaction effects
• ${\displaystyle {{(\delta \gamma )}_{11}}\,\!}$ represents the effect of the interaction ${\displaystyle BC\,\!}$ out of the other interaction effects
• ${\displaystyle {{(\tau \delta \gamma )}_{111}}\,\!}$ represents the effect of the interaction ${\displaystyle ABC\,\!}$ out of the other interaction effects

and ${\displaystyle \epsilon \,\!}$ is the random error term.

The notation for a linear regression model having three predictor variables with interactions is:

{\displaystyle {\begin{aligned}Y=&{{\beta }_{0}}+{{\beta }_{1}}\cdot {{x}_{1}}+{{\beta }_{2}}\cdot {{x}_{2}}+{{\beta }_{12}}\cdot {{x}_{1}}{{x}_{2}}+{{\beta }_{3}}\cdot {{x}_{3}}\\&+{{\beta }_{13}}\cdot {{x}_{1}}{{x}_{3}}+{{\beta }_{23}}\cdot {{x}_{2}}{{x}_{3}}+{{\beta }_{123}}\cdot {{x}_{1}}{{x}_{2}}{{x}_{3}}+\epsilon \end{aligned}}\,\!}

The notation for the regression model is much more convenient, especially for the case when a large number of higher order interactions are present. In two level experiments, the ANOVA model requires only one indicator variable to represent each factor for both qualitative and quantitative factors. Therefore, the notation for the multiple linear regression model can be applied to the ANOVA model of the experiment that has all the factors at two levels. For example, for the experiment of the ANOVA model given above, ${\displaystyle {{\beta }_{0}}\,\!}$ can represent the overall mean instead of ${\displaystyle \mu \,\!}$, and ${\displaystyle {{\beta }_{1}}\,\!}$ can represent the independent effect, ${\displaystyle {{\tau }_{1}}\,\!}$, of factor ${\displaystyle A\,\!}$. Other main effects can be represented in a similar manner. The notation for the interaction effects is considerably simplified (e.g., ${\displaystyle {{\beta }_{123}}\,\!}$ can be used to represent the three factor interaction effect, ${\displaystyle {{(\tau \delta \gamma )}_{111}}\,\!}$).

As mentioned earlier, it is important to note that the coding for the indicator variables for the ANOVA models of two level factorial experiments is reversed from the coding followed in General Full Factorial Designs. Here ${\displaystyle -1\,\!}$ represents the first level of the factor while ${\displaystyle 1\,\!}$ represents the second level. This is because for a two level factor a single variable is needed to represent the factor for both qualitative and quantitative factors. For quantitative factors, using ${\displaystyle -1\,\!}$ for the first level (which is the low level) and 1 for the second level (which is the high level) keeps the coding consistent with the numerical value of the factors. The change in coding between the two coding schemes does not affect the analysis except that signs of the estimated effect coefficients will be reversed (i.e., numerical values of ${\displaystyle {{\hat {\tau }}_{1}}\,\!}$, obtained based on the coding of General Full Factorial Designs, and ${\displaystyle {{\hat {\beta }}_{1}}\,\!}$, obtained based on the new coding, will be the same but their signs would be opposite).

${\displaystyle {\text{Factor }}A{\text{ Coding (two level factor)}}\,\!}$

${\displaystyle {\begin{matrix}{\text{Previous Coding}}&{}&{}&{}&{\text{Coding for }}{{\text{2}}^{k}}{\text{ Designs}}\\{}&{}&{}&{}&{}\\Effect{\text{ }}{{\tau }_{1}}\ \ :\ \ {{x}_{1}}=1{\text{ }}&{}&{}&{}&Effect{\text{ }}{{\tau }_{1}}{\text{ (or }}-{{\beta }_{1}}{\text{)}}\ \ :\ \ {{x}_{1}}=-1{\text{ }}\\Effect{\text{ }}{{\tau }_{2}}\ \ :\ \ {{x}_{1}}=-1{\text{ }}&{}&{}&{}&Effect{\text{ }}{{\tau }_{2}}{\text{ (or }}{{\beta }_{1}}{\text{)}}\ \ :\ \ {{x}_{1}}=1{\text{ }}\\\end{matrix}}\,\!}$

In summary, the ANOVA model for the experiments with all factors at two levels is different from the ANOVA models for other experiments in terms of the notation in the following two ways:

• The notation of the regression models is used for the effect coefficients.
• The coding of the indicator variables is reversed.

### Special Features

Consider the design matrix, ${\displaystyle X\,\!}$, for the ${\displaystyle {2}^{3}\,\!}$ design discussed above. The ${\displaystyle {{({{X}^{\prime }}X)}^{-1}}\,\!}$ matrix is:

${\displaystyle {{({{X}^{\prime }}X)}^{-1}}=\left[{\begin{matrix}0.125&0&0&0&0&0&0&0\\0&0.125&0&0&0&0&0&0\\0&0&0.125&0&0&0&0&0\\0&0&0&0.125&0&0&0&0\\0&0&0&0&0.125&0&0&0\\0&0&0&0&0&0.125&0&0\\0&0&0&0&0&0&0.125&0\\0&0&0&0&0&0&0&0.125\\\end{matrix}}\right]\,\!}$

Notice that, because the ${\displaystyle X\,\!}$ matrix is orthogonal, ${\displaystyle {{({{X}^{\prime }}X)}^{-1}}\,\!}$ simplifies to a diagonal matrix, which can be written as:

{\displaystyle {\begin{aligned}{{({{X}^{\prime }}X)}^{-1}}=&0.125\cdot I=&{\frac {1}{8}}\cdot I=&{\frac {1}{{2}^{3}}}\cdot I\end{aligned}}\,\!}

where ${\displaystyle I\,\!}$ represents the identity matrix of the same order as the design matrix, ${\displaystyle X\,\!}$. Since there are eight observations per replicate of the ${\displaystyle {2}^{3}\,\!}$ design, the ${\displaystyle {{({{X}^{\prime }}X)}^{-1}}\,\!}$ matrix for ${\displaystyle m\,\!}$ replicates of this design can be written as:

${\displaystyle {{({{X}^{\prime }}X)}^{-1}}={\frac {1}{({{2}^{3}}\cdot m)}}\cdot I\,\!}$

The ${\displaystyle {{({{X}^{\prime }}X)}^{-1}}\,\!}$ matrix for any ${\displaystyle {2}^{k}\,\!}$ design can now be written as:

${\displaystyle {{({{X}^{\prime }}X)}^{-1}}={\frac {1}{({{2}^{k}}\cdot m)}}\cdot I\,\!}$

Then the variance-covariance matrix for the ${\displaystyle {2}^{k}\,\!}$ design is:

{\displaystyle {\begin{aligned}C=&{{\hat {\sigma }}^{2}}\cdot {{({{X}^{\prime }}X)}^{-1}}=&M{{S}_{E}}\cdot {{({{X}^{\prime }}X)}^{-1}}=&{\frac {M{{S}_{E}}}{({{2}^{k}}\cdot m)}}\cdot I\end{aligned}}\,\!}

Note that the variance-covariance matrix for the ${\displaystyle {2}^{k}\,\!}$ design is also a diagonal matrix. Therefore, the estimated effect coefficients (${\displaystyle {{\beta }_{1}}\,\!}$, ${\displaystyle {{\beta }_{2}}\,\!}$, ${\displaystyle {{\beta }_{12}},\,\!}$ etc.) for these designs are uncorrelated. This implies that the terms in the ${\displaystyle {2}^{k}\,\!}$ design (main effects, interactions) are independent of each other. Consequently, the extra sum of squares for each of the terms in these designs is independent of the sequence of terms in the model, and also independent of the presence of other terms in the model. As a result the sequential and partial sum of squares for the terms are identical for these designs and will always add up to the model sum of squares. Multicollinearity is also not an issue for these designs.

It can also be noted from the equation given above that, in addition to the ${\displaystyle C\,\!}$ matrix being diagonal, all diagonal elements of the ${\displaystyle C\,\!}$ matrix are identical. This means that the variance (or its square root, the standard error) of every estimated effect coefficient is the same. The standard error, ${\displaystyle se({{\hat {\beta }}_{j}})\,\!}$, for all the coefficients is:

{\displaystyle {\begin{aligned}se({{\hat {\beta }}_{j}})=&{\sqrt {{C}_{jj}}}=&{\sqrt {\frac {M{{S}_{E}}}{({{2}^{k}}\cdot m)}}}{\text{ }}for{\text{ }}all{\text{ }}j\end{aligned}}\,\!}

This property is used to construct the normal probability plot of effects in ${\displaystyle {2}^{k}\,\!}$ designs and identify significant effects using graphical techniques. For details on the normal probability plot of effects in a Weibull++ DOE folio, refer to Normal Probability Plot of Effects.
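These properties are easy to check numerically. The following Python sketch, assuming NumPy is available, builds the design matrix for ${\displaystyle m=2\,\!}$ replicates of the ${\displaystyle {2}^{3}\,\!}$ design, confirms that ${\displaystyle {{({{X}^{\prime }}X)}^{-1}}\,\!}$ equals ${\displaystyle I/({{2}^{k}}\cdot m)\,\!}$, and computes the common standard error for an illustrative error mean square value:

```python
import numpy as np
from itertools import product

k, m = 3, 2
runs = np.array([t[::-1] for t in product((-1, 1), repeat=k)])
A, B, C = runs.T
X1 = np.column_stack([np.ones(2**k), A, B, A*B, C, A*C, B*C, A*B*C])
X = np.vstack([X1] * m)                      # m replicates of the 2^3 design

XtX_inv = np.linalg.inv(X.T @ X)
assert np.allclose(XtX_inv, np.eye(2**k) / (2**k * m))

MSE = 18.4375                                # illustrative error mean square
se = np.sqrt(MSE / (2**k * m))               # identical for every coefficient
```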

#### Example

To illustrate the analysis of a full factorial ${\displaystyle {2}^{k}\,\!}$ design, consider a three factor experiment to investigate the effect of honing pressure, number of strokes and cycle time on the surface finish of automobile brake drums. Each of these factors is investigated at two levels. The honing pressure is investigated at levels of 200 ${\displaystyle psi\,\!}$ and 400 ${\displaystyle psi\,\!}$, the two levels for the number of strokes are 3 and 5, and the two levels of the cycle time are 3 and 5 seconds. The design for this experiment is set up in a Weibull++ DOE folio as shown in the first two following figures. It is decided to run two replicates for this experiment. The surface finish data collected from each run (using randomization) and the complete design are shown in the third following figure. The analysis of the experiment data is explained next.

The applicable model using the notation for ${\displaystyle {2}^{k}\,\!}$ designs is:

{\displaystyle {\begin{aligned}Y=&{{\beta }_{0}}+{{\beta }_{1}}\cdot {{x}_{1}}+{{\beta }_{2}}\cdot {{x}_{2}}+{{\beta }_{12}}\cdot {{x}_{1}}{{x}_{2}}+{{\beta }_{3}}\cdot {{x}_{3}}\\&+{{\beta }_{13}}\cdot {{x}_{1}}{{x}_{3}}+{{\beta }_{23}}\cdot {{x}_{2}}{{x}_{3}}+{{\beta }_{123}}\cdot {{x}_{1}}{{x}_{2}}{{x}_{3}}+\epsilon \end{aligned}}\,\!}

where the indicator variable, ${\displaystyle {{x}_{1}}\,\!}$, represents factor ${\displaystyle A\,\!}$ (honing pressure), ${\displaystyle {{x}_{1}}=-1\,\!}$ represents the low level of 200 ${\displaystyle psi\,\!}$ and ${\displaystyle {{x}_{1}}=1\,\!}$ represents the high level of 400 ${\displaystyle psi\,\!}$. Similarly, ${\displaystyle {{x}_{2}}\,\!}$ and ${\displaystyle {{x}_{3}}\,\!}$ represent factors ${\displaystyle B\,\!}$ (number of strokes) and ${\displaystyle C\,\!}$ (cycle time), respectively. ${\displaystyle {{\beta }_{0}}\,\!}$ is the overall mean, while ${\displaystyle {{\beta }_{1}}\,\!}$, ${\displaystyle {{\beta }_{2}}\,\!}$ and ${\displaystyle {{\beta }_{3}}\,\!}$ are the effect coefficients for the main effects of factors ${\displaystyle A\,\!}$, ${\displaystyle B\,\!}$ and ${\displaystyle C\,\!}$, respectively. ${\displaystyle {{\beta }_{12}}\,\!}$, ${\displaystyle {{\beta }_{13}}\,\!}$ and ${\displaystyle {{\beta }_{23}}\,\!}$ are the effect coefficients for the ${\displaystyle AB\,\!}$, ${\displaystyle AC\,\!}$ and ${\displaystyle BC\,\!}$ interactions, while ${\displaystyle {{\beta }_{123}}\,\!}$ represents the ${\displaystyle ABC\,\!}$ interaction.

If the subscripts for the run (${\displaystyle i\,\!}$ ; ${\displaystyle i=\,\!}$ 1 to 8) and replicates (${\displaystyle j\,\!}$ ; ${\displaystyle j=\,\!}$ 1,2) are included, then the model can be written as:

{\displaystyle {\begin{aligned}{{Y}_{ij}}=&{{\beta }_{0}}+{{\beta }_{1}}\cdot {{x}_{ij1}}+{{\beta }_{2}}\cdot {{x}_{ij2}}+{{\beta }_{12}}\cdot {{x}_{ij1}}{{x}_{ij2}}+{{\beta }_{3}}\cdot {{x}_{ij3}}\\&+{{\beta }_{13}}\cdot {{x}_{ij1}}{{x}_{ij3}}+{{\beta }_{23}}\cdot {{x}_{ij2}}{{x}_{ij3}}+{{\beta }_{123}}\cdot {{x}_{ij1}}{{x}_{ij2}}{{x}_{ij3}}+{{\epsilon }_{ij}}\end{aligned}}\,\!}

To investigate how the given factors affect the response, the following hypothesis tests need to be carried out:

${\displaystyle {{H}_{0}}\ \ :\ \ {{\beta }_{1}}=0\,\!}$
${\displaystyle {{H}_{1}}\ \ :\ \ {{\beta }_{1}}\neq 0\,\!}$

This test investigates the main effect of factor ${\displaystyle A\,\!}$ (honing pressure). The statistic for this test is:

${\displaystyle {{({{F}_{0}})}_{A}}={\frac {M{{S}_{A}}}{M{{S}_{E}}}}\,\!}$

where ${\displaystyle M{{S}_{A}}\,\!}$ is the mean square for factor ${\displaystyle A\,\!}$ and ${\displaystyle M{{S}_{E}}\,\!}$ is the error mean square. Hypotheses for the other main effects, ${\displaystyle B\,\!}$ and ${\displaystyle C\,\!}$, can be written in a similar manner.

${\displaystyle {{H}_{0}}\ \ :\ \ {{\beta }_{12}}=0\,\!}$
${\displaystyle {{H}_{1}}\ \ :\ \ {{\beta }_{12}}\neq 0\,\!}$

This test investigates the two factor interaction ${\displaystyle AB\,\!}$. The statistic for this test is:

${\displaystyle {{({{F}_{0}})}_{AB}}={\frac {M{{S}_{AB}}}{M{{S}_{E}}}}\,\!}$

where ${\displaystyle M{{S}_{AB}}\,\!}$ is the mean square for the interaction ${\displaystyle AB\,\!}$ and ${\displaystyle M{{S}_{E}}\,\!}$ is the error mean square. Hypotheses for the other two factor interactions, ${\displaystyle AC\,\!}$ and ${\displaystyle BC\,\!}$, can be written in a similar manner.

${\displaystyle {{H}_{0}}\ \ :\ \ {{\beta }_{123}}=0\,\!}$
${\displaystyle {{H}_{1}}\ \ :\ \ {{\beta }_{123}}\neq 0\,\!}$

This test investigates the three factor interaction ${\displaystyle ABC\,\!}$. The statistic for this test is:

${\displaystyle {{({{F}_{0}})}_{ABC}}={\frac {M{{S}_{ABC}}}{M{{S}_{E}}}}\,\!}$

where ${\displaystyle M{{S}_{ABC}}\,\!}$ is the mean square for the interaction ${\displaystyle ABC\,\!}$ and ${\displaystyle M{{S}_{E}}\,\!}$ is the error mean square. To calculate the test statistics, it is convenient to express the ANOVA model in the form ${\displaystyle y=X\beta +\epsilon \,\!}$.

#### Expression of the ANOVA Model as ${\displaystyle y=X\beta +\epsilon \,\!}$

In matrix notation, the ANOVA model can be expressed as:

${\displaystyle y=X\beta +\epsilon \,\!}$

where:

${\displaystyle y=\left[{\begin{matrix}{{Y}_{11}}\\{{Y}_{21}}\\.\\{{Y}_{81}}\\{{Y}_{12}}\\.\\{{Y}_{82}}\\\end{matrix}}\right]=\left[{\begin{matrix}90\\90\\.\\90\\86\\.\\80\\\end{matrix}}\right]{\text{ }}X=\left[{\begin{matrix}1&-1&-1&1&-1&1&1&-1\\1&1&-1&-1&-1&-1&1&1\\.&.&.&.&.&.&.&.\\1&1&1&1&1&1&1&1\\1&-1&-1&1&-1&1&1&-1\\.&.&.&.&.&.&.&.\\1&1&1&1&1&1&1&1\\\end{matrix}}\right]\,\!}$

${\displaystyle \beta =\left[{\begin{matrix}{{\beta }_{0}}\\{{\beta }_{1}}\\{{\beta }_{2}}\\{{\beta }_{12}}\\{{\beta }_{3}}\\{{\beta }_{13}}\\{{\beta }_{23}}\\{{\beta }_{123}}\\\end{matrix}}\right]{\text{ }}\epsilon =\left[{\begin{matrix}{{\epsilon }_{11}}\\{{\epsilon }_{21}}\\.\\{{\epsilon }_{81}}\\{{\epsilon }_{12}}\\.\\.\\{{\epsilon }_{82}}\\\end{matrix}}\right]\,\!}$

#### Calculation of the Extra Sum of Squares for the Factors

Knowing the matrices ${\displaystyle y\,\!}$, ${\displaystyle X\,\!}$ and ${\displaystyle \beta \,\!}$, the extra sum of squares for the factors can be calculated. These are used to calculate the mean squares that are used to obtain the test statistics. Since the experiment design is orthogonal, the partial and sequential extra sum of squares are identical. The extra sum of squares for each effect can be calculated as shown next. As an example, the extra sum of squares for the main effect of factor ${\displaystyle A\,\!}$ is:

{\displaystyle {\begin{aligned}S{{S}_{A}}=&Model{\text{ }}Sum{\text{ }}of{\text{ }}Squares-Sum{\text{ }}of{\text{ }}Squares{\text{ }}of{\text{ }}model{\text{ }}excluding{\text{ }}the{\text{ }}main{\text{ }}effect{\text{ }}of{\text{ }}A\\=&{{y}^{\prime }}[H-(1/16)J]y-{{y}^{\prime }}[{{H}_{{\tilde {\ }}A}}-(1/16)J]y\end{aligned}}\,\!}

where ${\displaystyle H\,\!}$ is the hat matrix and ${\displaystyle J\,\!}$ is the matrix of ones. The matrix ${\displaystyle {{H}_{{\tilde {\ }}A}}\,\!}$ can be calculated using ${\displaystyle {{H}_{{\tilde {\ }}A}}={{X}_{{\tilde {\ }}A}}{{(X_{{\tilde {\ }}A}^{\prime }{{X}_{{\tilde {\ }}A}})}^{-1}}X_{{\tilde {\ }}A}^{\prime }\,\!}$ where ${\displaystyle {{X}_{{\tilde {\ }}A}}\,\!}$ is the design matrix, ${\displaystyle X\,\!}$, excluding the second column that represents the main effect of factor ${\displaystyle A\,\!}$. Thus, the sum of squares for the main effect of factor ${\displaystyle A\,\!}$ is:

{\displaystyle {\begin{aligned}S{{S}_{A}}=&{{y}^{\prime }}[H-(1/16)J]y-{{y}^{\prime }}[{{H}_{{\tilde {\ }}A}}-(1/16)J]y\\=&654.4375-549.375\\=&105.0625\end{aligned}}\,\!}

Similarly, the extra sum of squares for the interaction effect ${\displaystyle AB\,\!}$ is:

{\displaystyle {\begin{aligned}S{{S}_{AB}}=&{{y}^{\prime }}[H-(1/16)J]y-{{y}^{\prime }}[{{H}_{{\tilde {\ }}AB}}-(1/16)J]y\\=&654.4375-636.375\\=&18.0625\end{aligned}}\,\!}

The extra sum of squares for other effects can be obtained in a similar manner.
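The hat-matrix computation above can be sketched numerically. In the following Python example the response values are synthetic (not the brake drum data from the figures); it verifies that, because the design is orthogonal, the extra sum of squares for factor ${\displaystyle A\,\!}$ reduces to ${\displaystyle ({{2}^{k}}\cdot m)\cdot \hat {\beta }_{1}^{2}\,\!}$:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
k, m = 3, 2
runs = np.array([t[::-1] for t in product((-1, 1), repeat=k)])
A, B, C = runs.T
X1 = np.column_stack([np.ones(2**k), A, B, A*B, C, A*C, B*C, A*B*C])
X = np.vstack([X1] * m)
y = rng.normal(85.0, 3.0, size=2**k * m)     # synthetic response values

n = len(y)
J = np.ones((n, n))
hat = lambda M: M @ np.linalg.inv(M.T @ M) @ M.T

# Extra sum of squares for A: model SS minus SS of the model without A.
H = hat(X)
X_noA = np.delete(X, 1, axis=1)              # drop the column for factor A
SS_A = y @ (H - J/n) @ y - y @ (hat(X_noA) - J/n) @ y

# Orthogonality means this equals (2^k * m) * beta_A^2.
beta = np.linalg.solve(X.T @ X, X.T @ y)
assert np.isclose(SS_A, (2**k * m) * beta[1]**2)
```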

#### Calculation of the Test Statistics

Knowing the extra sum of squares, the test statistic for the effects can be calculated. For example, the test statistic for the interaction ${\displaystyle AB\,\!}$ is:

{\displaystyle {\begin{aligned}{{({{f}_{0}})}_{AB}}=&{\frac {M{{S}_{AB}}}{M{{S}_{E}}}}\\=&{\frac {S{{S}_{AB}}/dof(S{{S}_{AB}})}{S{{S}_{E}}/dof(S{{S}_{E}})}}\\=&{\frac {18.0625/1}{147.5/8}}\\=&0.9797\end{aligned}}\,\!}

where ${\displaystyle M{{S}_{AB}}\,\!}$ is the mean square for the ${\displaystyle AB\,\!}$ interaction and ${\displaystyle M{{S}_{E}}\,\!}$ is the error mean square. The ${\displaystyle p\,\!}$ value corresponding to the statistic, ${\displaystyle {{({{f}_{0}})}_{AB}}=0.9797\,\!}$, based on the ${\displaystyle F\,\!}$ distribution with one degree of freedom in the numerator and eight degrees of freedom in the denominator is:

{\displaystyle {\begin{aligned}p{\text{ }}value=&1-P(F\leq {{({{f}_{0}})}_{AB}})\\=&1-0.6487\\=&0.3513\end{aligned}}\,\!}

Assuming that the desired significance level is 0.1, since the ${\displaystyle p\,\!}$ value is greater than 0.1, it can be concluded that the interaction between honing pressure and number of strokes does not affect the surface finish of the brake drums. Tests for other effects can be carried out in a similar manner. The results are shown in the ANOVA Table in the following figure. The values S, R-sq and R-sq(adj) in the figure indicate how well the model fits the data. The value of S represents the standard error of the model, R-sq represents the coefficient of multiple determination and R-sq(adj) represents the adjusted coefficient of multiple determination. For details on these values refer to Multiple Linear Regression Analysis.
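The test statistic and ${\displaystyle p\,\!}$ value for the ${\displaystyle AB\,\!}$ interaction can be reproduced in a few lines. The following Python sketch assumes SciPy is available and uses the sums of squares reported above:

```python
from scipy.stats import f

SS_AB, dof_AB = 18.0625, 1       # interaction sum of squares (1 dof)
SS_E, dof_E = 147.5, 8           # error sum of squares (8 dof)

F0 = (SS_AB / dof_AB) / (SS_E / dof_E)   # F0 = MS_AB / MS_E, about 0.9797
p_value = f.sf(F0, dof_AB, dof_E)        # survival function = 1 - CDF, about 0.3513

# p_value > 0.1, so AB is not significant at the 0.1 level.
```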

#### Calculation of Effect Coefficients

The estimates of the effect coefficients can also be obtained:

{\displaystyle {\begin{aligned}{\hat {\beta }}=&{{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y\\=&\left[{\begin{matrix}86.4375\\2.5625\\-4.9375\\1.0625\\-1.0625\\2.4375\\-1.3125\\-0.1875\\\end{matrix}}\right]\end{aligned}}\,\!}

The coefficients and related results are shown in the Regression Information table above. In the table, the Effect column displays the effects, which are simply twice the coefficients. The Standard Error column displays the standard error, ${\displaystyle se({{\hat {\beta }}_{j}})\,\!}$. The Low CI and High CI columns display the confidence interval on the coefficients. The interval shown is the 90% interval as the significance is chosen as 0.1. The T Value column displays the ${\displaystyle t\,\!}$ statistic, ${\displaystyle {{t}_{0}}\,\!}$, corresponding to the coefficients. The P Value column displays the ${\displaystyle p\,\!}$ value corresponding to the ${\displaystyle t\,\!}$ statistic. (For details on how these results are calculated, refer to General Full Factorial Designs). Plots of residuals can also be obtained from the DOE folio to ensure that the assumptions related to the ANOVA model are not violated.

#### Model Equation

From the analysis results in the figure in the Calculation of Effect Coefficients section above, it is seen that the effects ${\displaystyle A\,\!}$, ${\displaystyle B\,\!}$ and ${\displaystyle AC\,\!}$ are significant. In a DOE folio, the ${\displaystyle p\,\!}$ values for the significant effects are displayed in red in the ANOVA Table for easy identification. Using the values of the estimated effect coefficients, the model for the present ${\displaystyle {2}^{3}\,\!}$ design in terms of the coded values can be written as:

{\displaystyle {\begin{aligned}{\hat {y}}=&{{\beta }_{0}}+{{\beta }_{1}}\cdot {{x}_{1}}+{{\beta }_{2}}\cdot {{x}_{2}}+{{\beta }_{13}}\cdot {{x}_{1}}{{x}_{3}}\\=&86.4375+2.5625{{x}_{1}}-4.9375{{x}_{2}}+2.4375{{x}_{1}}{{x}_{3}}\end{aligned}}\,\!}

To make the model hierarchical, the main effect, ${\displaystyle C\,\!}$, needs to be included in the model (because the interaction ${\displaystyle AC\,\!}$ is included in the model). The resulting model is:

${\displaystyle {\hat {y}}=86.4375+2.5625{{x}_{1}}-4.9375{{x}_{2}}+1.0625{{x}_{3}}+2.4375{{x}_{1}}{{x}_{3}}\,\!}$

This equation can be viewed in a DOE folio, as shown in the following figure, using the Show Analysis Summary icon in the Control Panel. The equation shown in the figure will match the hierarchical model once the required terms are selected using the Select Effects icon.
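In coded units the hierarchical model is simple to evaluate. A minimal Python sketch (the function name is illustrative, not part of the DOE folio) encodes the fitted equation:

```python
def surface_finish(x1, x2, x3):
    """Predicted surface finish from the hierarchical model, in coded units.

    x1: honing pressure (-1 = 200 psi, +1 = 400 psi)
    x2: number of strokes (-1 = 3, +1 = 5)
    x3: cycle time (-1 = 3 s, +1 = 5 s)
    """
    return 86.4375 + 2.5625*x1 - 4.9375*x2 + 1.0625*x3 + 2.4375*x1*x3

print(surface_finish(1, 1, 1))      # all factors high -> 87.5625
print(surface_finish(-1, -1, -1))   # all factors low  -> 90.1875
```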

## Replicated and Repeated Runs

In the case of replicated experiments, it is important to note the difference between replicated runs and repeated runs. Both repeated and replicated runs are multiple response readings taken at the same factor levels. However, repeated runs are response observations taken at the same time or in succession, while replicated runs are response observations recorded in a random order. Therefore, replicated runs include more variation than repeated runs. For example, a baker who wants to investigate the effect of two factors on the quality of cakes will have to bake four cakes to complete one replicate of a ${\displaystyle {2}^{2}\,\!}$ design. Assume that the baker bakes eight cakes in all. If, for each of the four treatments of the ${\displaystyle {2}^{2}\,\!}$ design, the baker selects one treatment at random and then bakes two cakes for this treatment at the same time, then this is a case of two repeated runs. If, however, the baker bakes all eight cakes in random order, then the eight cakes represent two sets of replicated runs. For repeated measurements, the average values of the response for each treatment should be entered into a DOE folio as shown in the following figure (a) when the two cakes for a particular treatment are baked together. For replicated measurements, when all the cakes are baked randomly, the data is entered as shown in the following figure (b).

## Unreplicated 2^k Designs

If a factorial experiment is run only for a single replicate then it is not possible to test hypotheses about the main effects and interactions as the error sum of squares cannot be obtained. This is because the number of observations in a single replicate equals the number of terms in the ANOVA model. Hence the model fits the data perfectly and no degrees of freedom are available to obtain the error sum of squares.

However, sometimes it is only possible to run a single replicate of the ${\displaystyle {2}^{k}\,\!}$ design because of constraints on resources and time. In the absence of the error sum of squares, hypothesis tests to identify significant factors cannot be conducted. A number of methods of analyzing information obtained from unreplicated ${\displaystyle {2}^{k}\,\!}$ designs are available. These include pooling higher order interactions, using the normal probability plot of effects or including center point replicates in the design.

### Pooling Higher Order Interactions

One of the ways to deal with unreplicated ${\displaystyle {2}^{k}\,\!}$ designs is to use the sum of squares of some of the higher order interactions as the error sum of squares provided these higher order interactions can be assumed to be insignificant. By dropping some of the higher order interactions from the model, the degrees of freedom corresponding to these interactions can be used to estimate the error mean square. Once the error mean square is known, the test statistics to conduct hypothesis tests on the factors can be calculated.
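As a sketch of this approach, the following Python example analyzes an unreplicated ${\displaystyle {2}^{3}\,\!}$ design with hypothetical response values, pooling the ${\displaystyle ABC\,\!}$ sum of squares into the error term:

```python
import numpy as np
from itertools import product

# Unreplicated 2^3 design in standard order; hypothetical responses.
runs = np.array([t[::-1] for t in product((-1, 1), repeat=3)])
A, B, C = runs.T
X = np.column_stack([np.ones(8), A, B, A*B, C, A*C, B*C, A*B*C])
y = np.array([9.0, 11.0, 8.5, 12.0, 9.5, 11.5, 8.0, 12.5])

beta = (X.T @ y) / 8          # orthogonal design: X'X = 8I
SS = 8 * beta[1:] ** 2        # sum of squares for each of the 7 effects (1 dof each)

# Pool the three factor interaction ABC into the error term.
SS_E, dof_E = SS[-1], 1
MS_E = SS_E / dof_E
F_A = SS[0] / MS_E            # test statistic for the main effect of A
```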

### Normal Probability Plot of Effects

Another way to use unreplicated ${\displaystyle {2}^{k}\,\!}$ designs to identify significant effects is to construct the normal probability plot of the effects. As mentioned in Special Features, the standard error for all effect coefficients in the ${\displaystyle {2}^{k}\,\!}$ designs is the same. Therefore, on a normal probability plot of effect coefficients, all non-significant effect coefficients (with ${\displaystyle \beta =0\,\!}$) will fall along the straight line representative of the normal distribution, N(${\displaystyle 0,{{\sigma }^{2}}/({{2}^{k}}\cdot m)\,\!}$). Effect coefficients that show large deviations from this line will be significant since they do not come from this normal distribution. Similarly, since effects ${\displaystyle =2\times \,\!}$ effect coefficients, all non-significant effects will also follow a straight line on the normal probability plot of effects. For replicated designs, the Effects Probability plot of a DOE folio plots the normalized effect values (or the T Values) on the standard normal probability line, N(0,1). However, in the case of unreplicated ${\displaystyle {2}^{k}\,\!}$ designs, ${\displaystyle {{\sigma }^{2}}\,\!}$ remains unknown since ${\displaystyle M{{S}_{E}}\,\!}$ cannot be obtained. Lenth's method is used in this case to estimate the variance of the effects. For details on Lenth's method, please refer to Montgomery (2001). The DOE folio then uses this variance value to plot effects along the N(0, Lenth's effect variance) line. The method is illustrated in the following example.

#### Example

Vinyl panels, used as instrument panels in a certain automobile, are seen to develop defects after a certain amount of time. To investigate the issue, it is decided to carry out a two level factorial experiment. Potential factors to be investigated in the experiment are vacuum rate (factor ${\displaystyle A\,\!}$), material temperature (factor ${\displaystyle B\,\!}$), element intensity (factor ${\displaystyle C\,\!}$) and pre-stretch (factor ${\displaystyle D\,\!}$). The two levels of the factors used in the experiment are shown below.

With a ${\displaystyle {2}^{4}\,\!}$ design requiring 16 runs per replicate, it is only feasible for the manufacturer to run a single replicate.

The experiment design and data, collected as percent defects, are shown in the following figure. Since the present experiment design contains only a single replicate, it is not possible to obtain an estimate of the error sum of squares, ${\displaystyle S{{S}_{E}}\,\!}$. It is decided to use the normal probability plot of effects to identify the significant effects. The effect values for each term are obtained as shown in the following figure.

Lenth's method uses these values to estimate the variance. As described in [Lenth, 1989], if all effects are arranged in ascending order, using their absolute values, then ${\displaystyle {{s}_{0}}\,\!}$ is defined as 1.5 times the median value:

{\displaystyle {\begin{aligned}{{s}_{0}}=&1.5\cdot median(\left|effect\right|)\\=&1.5\cdot 2\\=&3\end{aligned}}\,\!}

Using ${\displaystyle {{s}_{0}}\,\!}$, the "pseudo standard error" (${\displaystyle PSE\,\!}$) is calculated as 1.5 times the median value of all effects that are less than 2.5 ${\displaystyle {{s}_{0}}\,\!}$ :

{\displaystyle {\begin{aligned}PSE=&1.5\cdot median(\left|effect\right|\ \ :\ \ \left|effect\right|<2.5{{s}_{0}})\\=&1.5\cdot 1.5\\=&2.25\end{aligned}}\,\!}

Using ${\displaystyle PSE\,\!}$ as an estimate of the effect variance, the effect variance is 2.25. Knowing the effect variance, the normal probability plot of effects for the present unreplicated experiment can be constructed as shown in the following figure. The line on this plot is the line N(0, 2.25). The plot shows that the effects ${\displaystyle A\,\!}$, ${\displaystyle D\,\!}$ and the interaction ${\displaystyle AD\,\!}$ do not follow the distribution represented by this line. Therefore, these effects are significant.

The significant effects can also be identified by comparing individual effect values to the margin of error (i.e., the threshold value) using the Pareto chart (see the third following figure). If the required significance is 0.1, then:

${\displaystyle margin{\text{ }}of{\text{ }}error={{t}_{\alpha /2,d}}\cdot PSE\,\!}$

The ${\displaystyle t\,\!}$ statistic, ${\displaystyle {{t}_{\alpha /2,d}}\,\!}$, is calculated at a significance of ${\displaystyle \alpha /2\,\!}$ (for the two-sided hypothesis) with degrees of freedom ${\displaystyle d\,\!}$ equal to one third of the number of effects. With the 15 effects of the ${\displaystyle {2}^{4}\,\!}$ design, ${\displaystyle d=15/3=5\,\!}$. Thus:

{\displaystyle {\begin{aligned}margin{\text{ }}of{\text{ }}error=&{{t}_{0.05,5}}\cdot PSE\\=&2.015\cdot 2.25\\=&4.534\end{aligned}}\,\!}

The value of 4.534 is shown as the critical value line in the third following figure. All effects with absolute values greater than the margin of error can be considered to be significant. These effects are ${\displaystyle A\,\!}$, ${\displaystyle D\,\!}$ and the interaction ${\displaystyle AD\,\!}$. Therefore, the vacuum rate, the pre-stretch and their interaction have a significant effect on the defects of the vinyl panels.
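The two-step Lenth calculation can be reproduced with a short Python function. The effect values in the example call are hypothetical, chosen only to exercise the trimming step; they are not the vinyl-panel effects.

```python
from statistics import median

def lenth_pse(effects):
    """Lenth's pseudo standard error for an unreplicated design:
    s0 = 1.5 * median(|effect|), then PSE = 1.5 * median of the
    |effect| values that fall below 2.5 * s0."""
    abs_effects = [abs(e) for e in effects]
    s0 = 1.5 * median(abs_effects)
    trimmed = [a for a in abs_effects if a < 2.5 * s0]
    return 1.5 * median(trimmed)

# Hypothetical effects: one clearly active effect among small ones
print(lenth_pse([-0.5, 1.0, -1.5, 2.0, 2.5, -3.0, 12.0]))  # → 2.625
```

Multiplying the returned ${\displaystyle PSE\,\!}$ by ${\displaystyle {{t}_{\alpha /2,d}}\,\!}$ gives the margin of error.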

### Center Point Replicates

Another method of dealing with unreplicated ${\displaystyle {2}^{k}\,\!}$ designs that only have quantitative factors is to use replicated runs at the center point. The center point is the response corresponding to the treatment exactly midway between the two levels of all factors. Running multiple replicates at this point provides an estimate of pure error. Although running multiple replicates at any treatment level can provide an estimate of pure error, the other advantage of running center point replicates in the ${\displaystyle {2}^{k}\,\!}$ design is in checking for the presence of curvature. The test for curvature investigates whether the model between the response and the factors is linear and is discussed in Using Center Point Replicates to Test Curvature.

#### Example: Use Center Point to Get Pure Error

Consider a ${\displaystyle {2}^{2}\,\!}$ experiment design to investigate the effect of two factors, ${\displaystyle A\,\!}$ and ${\displaystyle B\,\!}$, on a certain response. The energy consumed when the treatments of the ${\displaystyle {2}^{2}\,\!}$ design are run is considerably larger than the energy consumed for the center point run (because at the center point the factors are at their middle levels). Therefore, the analyst decides to run only a single replicate of the design and augment the design by five replicated runs at the center point as shown in the following figure. The design properties for this experiment are shown in the second following figure. The complete experiment design is shown in the third following figure. The center points can be used in the identification of significant effects as shown next.

Since the present ${\displaystyle {2}^{2}\,\!}$ design is unreplicated, there are no degrees of freedom available to calculate the error sum of squares. By augmenting this design with five center points, the response values at the center points, ${\displaystyle y_{i}^{c}\,\!}$, can be used to obtain an estimate of pure error, ${\displaystyle S{{S}_{PE}}\,\!}$. Let ${\displaystyle {{\bar {y}}^{c}}\,\!}$ represent the average response for the five replicates at the center. Then:

${\displaystyle S{{S}_{PE}}=Sum{\text{ }}of{\text{ }}Squares{\text{ }}for{\text{ }}center{\text{ }}points\,\!}$

{\displaystyle {\begin{aligned}S{{S}_{PE}}=&{\underset {i=1}{\overset {5}{\mathop {\sum } }}}\,{{(y_{i}^{c}-{{\bar {y}}^{c}})}^{2}}\\=&{{(25.2-25.26)}^{2}}+...+{{(25.3-25.26)}^{2}}\\=&0.052\end{aligned}}\,\!}

Then the corresponding mean square is:

{\displaystyle {\begin{aligned}M{{S}_{PE}}=&{\frac {S{{S}_{PE}}}{degrees{\text{ }}of{\text{ }}freedom}}\\=&{\frac {0.052}{5-1}}\\=&0.013\end{aligned}}\,\!}

Alternatively, ${\displaystyle M{{S}_{PE}}\,\!}$ can be directly obtained by calculating the variance of the response values at the center points:

{\displaystyle {\begin{aligned}M{{S}_{PE}}=&{{s}^{2}}\\=&{\frac {{\underset {i=1}{\overset {5}{\mathop {\sum } }}}\,{{(y_{i}^{c}-{{\bar {y}}^{c}})}^{2}}}{5-1}}\end{aligned}}\,\!}
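These pure error calculations are easy to verify; the sketch below uses the five center-point responses from this example (25.2, 25.3, 25.4, 25.1, 25.3):

```python
center = [25.2, 25.3, 25.4, 25.1, 25.3]        # replicated center-point responses
ybar = sum(center) / len(center)               # average center response, 25.26
ss_pe = sum((y - ybar) ** 2 for y in center)   # pure error sum of squares
ms_pe = ss_pe / (len(center) - 1)              # pure error mean square, 4 df

print(round(ss_pe, 3), round(ms_pe, 3))        # → 0.052 0.013
```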

Once ${\displaystyle M{{S}_{PE}}\,\!}$ is known, it can be used as the error mean square, ${\displaystyle M{{S}_{E}}\,\!}$, to carry out the test of significance for each effect. For example, to test the significance of the main effect of factor ${\displaystyle A,\,\!}$ the sum of squares corresponding to this effect is obtained in the usual manner by considering only the four runs of the original ${\displaystyle {2}^{2}\,\!}$ design.

{\displaystyle {\begin{aligned}S{{S}_{A}}=&{{y}^{\prime }}[H-(1/4)J]y-{{y}^{\prime }}[{{H}_{{\tilde {\ }}A}}-(1/4)J]y\\=&0.5625\end{aligned}}\,\!}

Then, the test statistic to test the significance of the main effect of factor ${\displaystyle A\,\!}$ is:

{\displaystyle {\begin{aligned}{{({{f}_{0}})}_{A}}=&{\frac {M{{S}_{A}}}{M{{S}_{E}}}}\\=&{\frac {0.5625/1}{0.052/4}}\\=&43.2692\end{aligned}}\,\!}

The ${\displaystyle p\,\!}$ value corresponding to the statistic, ${\displaystyle {{({{f}_{0}})}_{A}}=43.2692\,\!}$, based on the ${\displaystyle F\,\!}$ distribution with one degree of freedom in the numerator and four degrees of freedom in the denominator is:

{\displaystyle {\begin{aligned}p{\text{ }}value=&1-P(F\leq {{({{f}_{0}})}_{A}})\\=&1-0.9972\\=&0.0028\end{aligned}}\,\!}

Assuming that the desired significance is 0.1, since ${\displaystyle p\,\!}$ value < 0.1, it can be concluded that the main effect of factor ${\displaystyle A\,\!}$ significantly affects the response. This result is displayed in the ANOVA table as shown in the following figure. Tests for the significance of the other factors can be carried out in a similar manner.
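The quoted ${\displaystyle p\,\!}$ value can be checked without tables. When the numerator degrees of freedom equal one and the denominator degrees of freedom equal four, the upper tail of the ${\displaystyle F\,\!}$ distribution reduces to a closed-form incomplete beta integral; the function below is a hand-derived verification aid, not the routine a DOE folio uses.

```python
import math

def f_upper_tail_1_4(f0):
    """P(F > f0) for the F distribution with (1, 4) degrees of freedom.
    Uses P(F > f0) = I_x(2, 1/2) with x = 4 / (4 + f0), where the
    regularized incomplete beta I_x(2, 1/2) has a closed form."""
    x = 4.0 / (4.0 + f0)
    integral = 4.0 / 3.0 - 2.0 * math.sqrt(1.0 - x) + (2.0 / 3.0) * (1.0 - x) ** 1.5
    return 0.75 * integral

print(round(f_upper_tail_1_4(43.2692), 4))  # → 0.0028
```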

### Using Center Point Replicates to Test Curvature

Center point replicates can also be used to check for curvature in replicated or unreplicated ${\displaystyle {2}^{k}\,\!}$ designs. The test for curvature investigates whether the model between the response and the factors is linear. The way the DOE folio handles center point replicates is similar to its handling of blocks. The center point replicates are treated as an additional factor in the model. The factor is labeled as Curvature in the results of the DOE folio. If Curvature turns out to be a significant factor in the results, then this indicates the presence of curvature in the model.

#### Example: Use Center Point to Test Curvature

To illustrate the use of center point replicates in testing for curvature, consider again the data of the single replicate ${\displaystyle {2}^{2}\,\!}$ experiment from a preceding figure (labeled "${\displaystyle 2^{2}\,\!}$ design augmented by five center point runs"). Let ${\displaystyle {{x}_{1}}\,\!}$ be the indicator variable to indicate if the run is a center point:

${\displaystyle {\begin{matrix}{{x}_{1}}=0&{}&{\text{Center point run}}\\{{x}_{1}}=1&{}&{\text{Other run}}\\\end{matrix}}\,\!}$

If ${\displaystyle {{x}_{2}}\,\!}$ and ${\displaystyle {{x}_{3}}\,\!}$ are the indicator variables representing factors ${\displaystyle A\,\!}$ and ${\displaystyle B\,\!}$, respectively, then the model for this experiment is:

${\displaystyle Y={{\beta }_{0}}+{{\beta }_{1}}\cdot {{x}_{1}}+{{\beta }_{2}}\cdot {{x}_{2}}+{{\beta }_{3}}\cdot {{x}_{3}}+{{\beta }_{23}}\cdot {{x}_{2}}{{x}_{3}}\,\!}$

To investigate the presence of curvature, the following hypotheses need to be tested:

{\displaystyle {\begin{aligned}&{{H}_{0}}:&{{\beta }_{1}}=0{\text{ (Curvature is absent)}}\\&{{H}_{1}}:&{{\beta }_{1}}\neq 0\end{aligned}}\,\!}

The test statistic to be used for this test is:

${\displaystyle {{({{F}_{0}})}_{curvature}}={\frac {M{{S}_{curvature}}}{M{{S}_{E}}}}\,\!}$

where ${\displaystyle M{{S}_{curvature}}\,\!}$ is the mean square for Curvature and ${\displaystyle M{{S}_{E}}\,\!}$ is the error mean square.

Calculation of the Sum of Squares

The ${\displaystyle X\,\!}$ matrix and ${\displaystyle y\,\!}$ vector for this experiment are:

${\displaystyle X=\left[{\begin{matrix}1&1&-1&-1&1\\1&1&1&-1&-1\\1&1&-1&1&-1\\1&1&1&1&1\\1&0&0&0&0\\1&0&0&0&0\\1&0&0&0&0\\1&0&0&0&0\\1&0&0&0&0\\\end{matrix}}\right]{\text{ }}y=\left[{\begin{matrix}24.6\\25.4\\25.0\\25.7\\25.2\\25.3\\25.4\\25.1\\25.3\\\end{matrix}}\right]\,\!}$

The sum of squares can now be calculated. For example, the error sum of squares is:

{\displaystyle {\begin{aligned}&S{{S}_{E}}=&{{y}^{\prime }}[I-H]y\\&=&0.052\end{aligned}}\,\!}

where ${\displaystyle I\,\!}$ is the identity matrix and ${\displaystyle H\,\!}$ is the hat matrix. It can be seen that this is equal to ${\displaystyle S{{S}_{PE{\text{ }}}}\,\!}$ (the sum of squares due to pure error) because of the replicates at the center point, as obtained in the example. The number of degrees of freedom associated with ${\displaystyle S{{S}_{E}}\,\!}$, ${\displaystyle dof(S{{S}_{E}})\,\!}$ is four. The extra sum of squares corresponding to the center point replicates (or Curvature) is:

{\displaystyle {\begin{aligned}&S{{S}_{Curvature}}=&Model{\text{ }}Sum{\text{ }}of{\text{ }}Squares-\\&&Sum{\text{ }}of{\text{ }}Squares{\text{ }}of{\text{ }}model{\text{ }}excluding{\text{ }}the{\text{ }}center{\text{ }}point\\&=&{{y}^{\prime }}[H-(1/9)J]y-{{y}^{\prime }}[{{H}_{{\tilde {\ }}Curvature}}-(1/9)J]y\end{aligned}}\,\!}

where ${\displaystyle H\,\!}$ is the hat matrix and ${\displaystyle J\,\!}$ is the matrix of ones. The matrix ${\displaystyle {{H}_{{\tilde {\ }}Curvature}}\,\!}$ can be calculated using ${\displaystyle {{H}_{{\tilde {\ }}Curvature}}={{X}_{{\tilde {\ }}Curv}}{{(X_{{\tilde {\ }}Curv}^{\prime }{{X}_{{\tilde {\ }}Curv}})}^{-1}}X_{{\tilde {\ }}Curv}^{\prime }\,\!}$ where ${\displaystyle {{X}_{{\tilde {\ }}Curv}}\,\!}$ is the design matrix, ${\displaystyle X\,\!}$, excluding the second column that represents the center point. Thus, the extra sum of squares corresponding to Curvature is:

{\displaystyle {\begin{aligned}&S{{S}_{Curvature}}=&{{y}^{\prime }}[H-(1/9)J]y-{{y}^{\prime }}[{{H}_{{\tilde {\ }}Curvature}}-(1/9)J]y\\&=&0.7036-0.6875\\&=&0.0161\end{aligned}}\,\!}
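The two matrix expressions can be reproduced numerically. The sketch below assumes numpy is available and uses the ${\displaystyle X\,\!}$ matrix and ${\displaystyle y\,\!}$ vector given above; column index 1 of ${\displaystyle X\,\!}$ is the Curvature column.

```python
import numpy as np

X = np.array([[1, 1, -1, -1,  1],
              [1, 1,  1, -1, -1],
              [1, 1, -1,  1, -1],
              [1, 1,  1,  1,  1],
              [1, 0,  0,  0,  0],
              [1, 0,  0,  0,  0],
              [1, 0,  0,  0,  0],
              [1, 0,  0,  0,  0],
              [1, 0,  0,  0,  0]], dtype=float)
y = np.array([24.6, 25.4, 25.0, 25.7, 25.2, 25.3, 25.4, 25.1, 25.3])

def hat(M):
    """Hat matrix M (M'M)^-1 M'."""
    return M @ np.linalg.inv(M.T @ M) @ M.T

H = hat(X)                              # full model
H_nc = hat(np.delete(X, 1, axis=1))     # model without the Curvature column
J = np.ones((9, 9))                     # matrix of ones

ss_e = y @ (np.eye(9) - H) @ y                            # y'[I - H]y
ss_curv = y @ (H - J / 9) @ y - y @ (H_nc - J / 9) @ y    # extra sum of squares

print(round(float(ss_e), 3), round(float(ss_curv), 4))    # → 0.052 0.0161
```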

This extra sum of squares can be used to test for the significance of curvature. The corresponding mean square is:

{\displaystyle {\begin{aligned}&M{{S}_{Curvature}}=&{\frac {Sum{\text{ }}of{\text{ }}squares{\text{ }}corresponding{\text{ }}to{\text{ }}Curvature}{degrees{\text{ }}of{\text{ }}freedom}}\\&=&{\frac {0.0161}{1}}\\&=&0.0161\end{aligned}}\,\!}

Calculation of the Test Statistic

Knowing the mean squares, the statistic to check the significance of curvature can be calculated.

{\displaystyle {\begin{aligned}&{{({{f}_{0}})}_{Curvature}}=&{\frac {M{{S}_{Curvature}}}{M{{S}_{E}}}}\\&=&{\frac {0.0161/1}{0.052/4}}\\&=&1.24\end{aligned}}\,\!}

The ${\displaystyle p\,\!}$ value corresponding to the statistic, ${\displaystyle {{({{f}_{0}})}_{Curvature}}=1.24\,\!}$, based on the ${\displaystyle F\,\!}$ distribution with one degree of freedom in the numerator and four degrees of freedom in the denominator is:

{\displaystyle {\begin{aligned}&p{\text{ }}value=&1-P(F\leq {{({{f}_{0}})}_{Curvature}})\\&=&1-0.6713\\&=&0.3287\end{aligned}}\,\!}

Assuming that the desired significance is 0.1, since ${\displaystyle p\,\!}$ value > 0.1, it can be concluded that curvature does not exist for this design. This result is shown in the ANOVA table in the figure above. The surface of the fitted model based on these results, along with the observed response values, is shown in the figure below.

## Blocking in 2k Designs

Blocking can be used in the ${\displaystyle {2}^{k}\,\!}$ designs to deal with cases when replicates cannot be run under identical conditions. Randomized complete block designs that were discussed in Randomization and Blocking in DOE for factorial experiments are also applicable here. At times, even with just two levels per factor, it is not possible to run all treatment combinations for one replicate of the experiment under homogeneous conditions. For example, each replicate of the ${\displaystyle {2}^{2}\,\!}$ design requires four runs. If each run requires two hours and testing facilities are available for only four hours per day, two days of testing would be required to run one complete replicate. Blocking can be used to separate the treatment runs on the two different days. Blocks that do not contain all treatments of a replicate are called incomplete blocks. In incomplete block designs, the block effect is confounded with certain effect(s) under investigation. For the ${\displaystyle {2}^{2}\,\!}$ design, assume that treatments ${\displaystyle (1)\,\!}$ and ${\displaystyle ab\,\!}$ were run on the first day and treatments ${\displaystyle a\,\!}$ and ${\displaystyle b\,\!}$ were run on the second day. Then, the incomplete block design for this experiment is:

${\displaystyle {\begin{matrix}{\text{Block 1}}&{}&{\text{Block 2}}\\\left[{\begin{matrix}(1)\\ab\\\end{matrix}}\right]&{}&\left[{\begin{matrix}a\\b\\\end{matrix}}\right]\\\end{matrix}}\,\!}$

For this design the block effect may be calculated as:

{\displaystyle {\begin{aligned}&Block{\text{ }}Effect=&Average{\text{ }}response{\text{ }}for{\text{ }}Block{\text{ }}1-\\&&Average{\text{ }}response{\text{ }}for{\text{ }}Block{\text{ }}2\\&=&{\frac {(1)+ab}{2}}-{\frac {a+b}{2}}\\&=&{\frac {1}{2}}[(1)+ab-a-b]\end{aligned}}\,\!}

The ${\displaystyle AB\,\!}$ interaction effect is:

{\displaystyle {\begin{aligned}&AB=&Average{\text{ }}response{\text{ }}at{\text{ }}{{A}_{\text{high}}}{\text{-}}{{B}_{\text{high}}}{\text{ }}and{\text{ }}{{A}_{\text{low}}}{\text{-}}{{B}_{\text{low}}}-\\&&Average{\text{ }}response{\text{ }}at{\text{ }}{{A}_{\text{low}}}{\text{-}}{{B}_{\text{high}}}{\text{ }}and{\text{ }}{{A}_{\text{high}}}{\text{-}}{{B}_{\text{low}}}\\&=&{\frac {ab+(1)}{2}}-{\frac {b+a}{2}}\\&=&{\frac {1}{2}}[(1)+ab-a-b]\end{aligned}}\,\!}

The two equations given above show that, in this design, the ${\displaystyle AB\,\!}$ interaction effect cannot be distinguished from the block effect because the formulas to calculate these effects are the same. In other words, the ${\displaystyle AB\,\!}$ interaction is said to be confounded with the block effect, and it is not possible to say whether the effect calculated based on these equations is due to the ${\displaystyle AB\,\!}$ interaction effect, the block effect or both. In incomplete block designs some effects are always confounded with the blocks. Therefore, it is important to design these experiments in such a way that the important effects are not confounded with the blocks. In most cases, the experimenter can assume that higher order interactions are unimportant. In this case, it is better to use incomplete block designs that confound these higher order effects with the blocks. One way to construct incomplete block designs is to use defining contrasts:

${\displaystyle L={{\alpha }_{1}}{{q}_{1}}+{{\alpha }_{2}}{{q}_{2}}+...+{{\alpha }_{k}}{{q}_{k}}\,\!}$

where the ${\displaystyle {{\alpha }_{i}}\,\!}$s are the exponents of the factors in the effect that is to be confounded with the block effect, and the ${\displaystyle {{q}_{i}}\,\!}$s are values based on the level of the ${\displaystyle i\,\!}$th factor in the treatment that is to be allocated to a block. For ${\displaystyle {2}^{k}\,\!}$ designs, the ${\displaystyle {{\alpha }_{i}}\,\!}$s are either 0 or 1, and the ${\displaystyle {{q}_{i}}\,\!}$s take a value of 0 for the low level of the ${\displaystyle i\,\!}$th factor and a value of 1 for the high level of the factor in the treatment under consideration. As an example, consider the ${\displaystyle {2}^{2}\,\!}$ design where the interaction effect ${\displaystyle AB\,\!}$ is confounded with the block. There are two factors, so ${\displaystyle k=2\,\!}$, with ${\displaystyle i=1\,\!}$ representing factor ${\displaystyle A\,\!}$ and ${\displaystyle i=2\,\!}$ representing factor ${\displaystyle B\,\!}$. Therefore:

${\displaystyle L={{\alpha }_{1}}{{q}_{1}}+{{\alpha }_{2}}{{q}_{2}}\,\!}$

The value of ${\displaystyle {{\alpha }_{1}}\,\!}$ is one because the exponent of factor ${\displaystyle A\,\!}$ in the confounded interaction ${\displaystyle AB\,\!}$ is one. Similarly, the value of ${\displaystyle {{\alpha }_{2}}\,\!}$ is one because the exponent of factor ${\displaystyle B\,\!}$ in the confounded interaction ${\displaystyle AB\,\!}$ is also one. Therefore, the defining contrast for this design can be written as:

{\displaystyle {\begin{aligned}&L=&{{\alpha }_{1}}{{q}_{1}}+{{\alpha }_{2}}{{q}_{2}}\\&=&1\cdot {{q}_{1}}+1\cdot {{q}_{2}}\\&=&{{q}_{1}}+{{q}_{2}}\end{aligned}}\,\!}

Once the defining contrast is known, it can be used to allocate treatments to the blocks. For the ${\displaystyle {2}^{2}\,\!}$ design, there are four treatments ${\displaystyle (1)\,\!}$, ${\displaystyle a\,\!}$, ${\displaystyle b\,\!}$ and ${\displaystyle ab\,\!}$. Assume that ${\displaystyle L=0\,\!}$ represents block 2 and ${\displaystyle L=1\,\!}$ represents block 1. In order to decide which block the treatment ${\displaystyle (1)\,\!}$ belongs to, the levels of factors ${\displaystyle A\,\!}$ and ${\displaystyle B\,\!}$ for this run are used. Since factor ${\displaystyle A\,\!}$ is at the low level in this treatment, ${\displaystyle {{q}_{1}}=0\,\!}$. Similarly, since factor ${\displaystyle B\,\!}$ is also at the low level in this treatment, ${\displaystyle {{q}_{2}}=0\,\!}$. Therefore:

{\displaystyle {\begin{aligned}&L=&{{q}_{1}}+{{q}_{2}}\\&=&0+0=0{\text{ (mod 2)}}\end{aligned}}\,\!}

Note that the value of ${\displaystyle L\,\!}$ used to decide the block allocation is "mod 2" of the original value, i.e., the remainder when ${\displaystyle L\,\!}$ is divided by 2 (1 when ${\displaystyle L\,\!}$ is odd and 0 when it is even). Based on the value of ${\displaystyle L\,\!}$, treatment ${\displaystyle (1)\,\!}$ is assigned to block 2. The other treatments can be assigned using the following calculations:

{\displaystyle {\begin{aligned}&(1):&{\text{ }}L=0+0=0=0{\text{ (mod 2)}}\\&a:&{\text{ }}L=1+0=1=1{\text{ (mod 2)}}\\&b:&{\text{ }}L=0+1=1=1{\text{ (mod 2)}}\\&ab:&{\text{ }}L=1+1=2=0{\text{ (mod 2)}}\end{aligned}}\,\!}

Therefore, to confound the interaction ${\displaystyle AB\,\!}$ with the block effect in the ${\displaystyle {2}^{2}\,\!}$ incomplete block design, treatments ${\displaystyle (1)\,\!}$ and ${\displaystyle ab\,\!}$ (with ${\displaystyle L=0\,\!}$) should be assigned to block 2 and treatment combinations ${\displaystyle a\,\!}$ and ${\displaystyle b\,\!}$ (with ${\displaystyle L=1\,\!}$) should be assigned to block 1.
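The defining contrast calculation is mechanical enough to automate. A minimal Python sketch, using the standard lowercase-letter treatment names (where `(1)` denotes all factors at the low level):

```python
def contrast(treatment, confounded="ab"):
    """Defining contrast L (mod 2) for the effect named by `confounded`.
    q_i is 1 when the factor's letter appears in the treatment name
    (factor at the high level) and 0 otherwise."""
    return sum(1 for letter in confounded if letter in treatment) % 2

for t in ["(1)", "a", "b", "ab"]:
    print(t, contrast(t))   # → (1) 0, a 1, b 1, ab 0
```

Treatments with ${\displaystyle L=0\,\!}$ go to one block and those with ${\displaystyle L=1\,\!}$ to the other, exactly as in the allocation above.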

#### Example: Two Level Factorial Design with Two Blocks

This example illustrates how treatments can be allocated to two blocks for an unreplicated ${\displaystyle {2}^{k}\,\!}$ design. Consider the unreplicated ${\displaystyle {2}^{4}\,\!}$ design to investigate the four factors affecting the defects in automobile vinyl panels discussed in Normal Probability Plot of Effects. Assume that the 16 treatments required for this experiment were run by two different operators with each operator conducting 8 runs. This experiment is an example of an incomplete block design. The analyst in charge of this experiment assumed that the interaction ${\displaystyle ABCD\,\!}$ was not significant and decided to allocate treatments to the two operators so that the ${\displaystyle ABCD\,\!}$ interaction was confounded with the block effect (the two operators are the blocks). The allocation scheme to assign treatments to the two operators can be obtained as follows.
The defining contrast for the ${\displaystyle {2}^{4}\,\!}$ design where the ${\displaystyle ABCD\,\!}$ interaction is confounded with the blocks is:

${\displaystyle L={{q}_{1}}+{{q}_{2}}+{{q}_{3}}+{{q}_{4}}\,\!}$

The treatments can be allocated to the two operators using the values of the defining contrast. Assume that ${\displaystyle L=0\,\!}$ represents block 2 and ${\displaystyle L=1\,\!}$ represents block 1. Then the value of the defining contrast for treatment ${\displaystyle a\,\!}$ is:

${\displaystyle a\ \ :\ \ {\text{ }}L=1+0+0+0=1=1{\text{ (mod 2)}}\,\!}$

Therefore, treatment ${\displaystyle a\,\!}$ should be assigned to Block 1 or the first operator. Similarly, for treatment ${\displaystyle ab\,\!}$ we have:

${\displaystyle ab\ \ :\ \ {\text{ }}L=1+1+0+0=2=0{\text{ (mod 2)}}\,\!}$

Therefore, ${\displaystyle ab\,\!}$ should be assigned to Block 2 or the second operator. Other treatments can be allocated to the two operators in a similar manner to arrive at the allocation scheme shown in the figure below. In a DOE folio, to confound the ${\displaystyle ABCD\,\!}$ interaction for the ${\displaystyle {2}^{4}\,\!}$ design into two blocks, the number of blocks is specified as shown in the figure below. Then the interaction ${\displaystyle ABCD\,\!}$ is entered in the Block Generator window (second following figure), which is available using the Block Generator button in the following figure. The design generated by the Weibull++ DOE folio is shown in the third of the following figures. This design matches the allocation scheme of the preceding figure.

For the analysis of this design, the sum of squares for all effects is calculated assuming no blocking. Then, to account for blocking, the sum of squares corresponding to the ${\displaystyle ABCD\,\!}$ interaction is treated as the sum of squares due to both the blocks and ${\displaystyle ABCD\,\!}$. In the DOE folio, this is done by displaying this sum of squares as the sum of squares due to the blocks. This is shown in the following figure, where the sum of squares in question is obtained as 72.25 and is displayed against Block. The interaction ${\displaystyle ABCD\,\!}$, which is confounded with the blocks, is not displayed. Since the design is unreplicated, one of the methods to analyze unreplicated designs mentioned in Unreplicated ${\displaystyle 2^{k}\,\!}$ designs has to be used to identify significant effects.

### Unreplicated 2k Designs in 2p Blocks

A single replicate of the ${\displaystyle {2}^{k}\,\!}$ design can be run in up to ${\displaystyle {2}^{p}\,\!}$ blocks where ${\displaystyle p<k\,\!}$. The number of effects confounded with the blocks equals the degrees of freedom associated with the block effect.

If two blocks are used (the block effect has two levels), then one (${\displaystyle 2-1=1\,\!}$) effect is confounded with the blocks. If four blocks are used, then three (${\displaystyle 4-1=3\,\!}$) effects are confounded with the blocks, and so on. For example, an unreplicated ${\displaystyle {2}^{4}\,\!}$ design may be confounded in ${\displaystyle {2}^{2}\,\!}$ (four) blocks using two contrasts, ${\displaystyle {{L}_{1}}\,\!}$ and ${\displaystyle {{L}_{2}}\,\!}$. Let ${\displaystyle AC\,\!}$ and ${\displaystyle BD\,\!}$ be the effects to be confounded with the blocks. Corresponding to these two effects, the contrasts are respectively:

{\displaystyle {\begin{aligned}&{{L}_{1}}=&{{q}_{1}}+{{q}_{3}}\\&{{L}_{2}}=&{{q}_{2}}+{{q}_{4}}\end{aligned}}\,\!}

Based on the values of ${\displaystyle {{L}_{1}}\,\!}$ and ${\displaystyle {{L}_{2}},\,\!}$ the treatments can be assigned to the four blocks as follows:

${\displaystyle {\begin{matrix}{\text{Block 4}}&{}&{\text{Block 3}}&{}&{\text{Block 2}}&{}&{\text{Block 1}}\\{{L}_{1}}=0,{{L}_{2}}=0&{}&{{L}_{1}}=1,{{L}_{2}}=0&{}&{{L}_{1}}=0,{{L}_{2}}=1&{}&{{L}_{1}}=1,{{L}_{2}}=1\\{}&{}&{}&{}&{}&{}&{}\\\left[{\begin{matrix}(1)\\ac\\bd\\abcd\\\end{matrix}}\right]&{}&\left[{\begin{matrix}a\\c\\abd\\bcd\\\end{matrix}}\right]&{}&\left[{\begin{matrix}b\\abc\\d\\acd\\\end{matrix}}\right]&{}&\left[{\begin{matrix}ab\\bc\\ad\\cd\\\end{matrix}}\right]\\\end{matrix}}\,\!}$

Since the block effect has three degrees of freedom, three effects are confounded with the block effect. In addition to ${\displaystyle AC\,\!}$ and ${\displaystyle BD\,\!}$, the third effect confounded with the block effect is their generalized interaction, ${\displaystyle (AC)(BD)=ABCD\,\!}$. In general, when an unreplicated ${\displaystyle {2}^{k}\,\!}$ design is confounded in ${\displaystyle {2}^{p}\,\!}$ blocks, ${\displaystyle p\,\!}$ contrasts (${\displaystyle {{L}_{1}},{{L}_{2}}...{{L}_{p}}\,\!}$) are needed. The ${\displaystyle p\,\!}$ effects used to define these contrasts are selected such that none of them is the generalized interaction of the others. The ${\displaystyle {2}^{p}\,\!}$ blocks can then be assigned the treatments using the ${\displaystyle p\,\!}$ contrasts. The remaining ${\displaystyle {{2}^{p}}-(p+1)\,\!}$ effects that are confounded with the blocks are obtained as the generalized interactions of the ${\displaystyle p\,\!}$ selected effects. In the statistical analysis of these designs, the sums of squares are computed as if no blocking were used. Then the block sum of squares is obtained by adding the sums of squares for all the effects confounded with the blocks.
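Under the same lowercase treatment-naming convention used earlier, the four-block allocation can be sketched by evaluating both contrasts for every treatment; the final assertion illustrates that the generalized interaction ${\displaystyle ABCD\,\!}$ is constant within each block, i.e., also confounded:

```python
from itertools import combinations

letters = "abcd"
treatments = ["(1)"] + ["".join(c) for n in range(1, 5)
                        for c in combinations(letters, n)]

def contrast(treatment, confounded):
    # q_i = 1 when the factor letter appears in the treatment name
    return sum(1 for f in confounded if f in treatment) % 2

blocks = {}
for t in treatments:
    key = (contrast(t, "ac"), contrast(t, "bd"))   # (L1, L2)
    blocks.setdefault(key, []).append(t)

print(blocks[(0, 0)])   # → ['(1)', 'ac', 'bd', 'abcd']

# ABCD takes a single value inside every block: it is confounded too.
assert all(len({contrast(t, "abcd") for t in blk}) == 1 for blk in blocks.values())
```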

#### Example: 2 Level Factorial Design with Four Blocks

This example illustrates how a DOE folio obtains the sum of squares when treatments for an unreplicated ${\displaystyle {2}^{k}\,\!}$ design are allocated among four blocks. Consider again the unreplicated ${\displaystyle {2}^{4}\,\!}$ design used to investigate the defects in automobile vinyl panels presented in Normal Probability Plot of Effects. Assume that the 16 treatments needed to complete the experiment were run by four operators. Therefore, there are four blocks. Assume that the treatments were allocated to the blocks using the generators mentioned in the previous section, i.e., treatments were allocated among the four operators by confounding the effects, ${\displaystyle AC\,\!}$ and ${\displaystyle BD,\,\!}$ with the blocks. These effects can be specified as Block Generators as shown in the following figure. (The generalized interaction of these two effects, interaction ${\displaystyle ABCD\,\!}$, will also get confounded with the blocks.) The resulting design is shown in the second following figure and matches the allocation scheme obtained in the previous section.

The sum of squares in this case can be obtained by calculating the sum of squares for each of the effects assuming there is no blocking. Once the individual sums of squares have been obtained, the block sum of squares can be calculated. The block sum of squares is the sum of the sums of squares of the effects ${\displaystyle AC\,\!}$, ${\displaystyle BD\,\!}$ and ${\displaystyle ABCD\,\!}$, since these effects are confounded with the block effect. As shown in the second following figure, this sum of squares is 92.25 and is displayed against Block. The interactions ${\displaystyle AC\,\!}$, ${\displaystyle BD\,\!}$ and ${\displaystyle ABCD\,\!}$, which are confounded with the blocks, are not displayed. Since the present design is unreplicated, one of the methods to analyze unreplicated designs mentioned in Unreplicated ${\displaystyle 2^{k}\,\!}$ designs has to be used to identify significant effects.

## Variability Analysis

For replicated two level factorial experiments, the DOE folio provides the option of conducting variability analysis (using the Variability Analysis icon under the Data menu). The analysis is used to identify the treatment that results in the least amount of variation in the product or process being investigated. Variability analysis is conducted by treating the standard deviation of the response for each treatment of the experiment as an additional response. The standard deviation for a treatment is obtained by using the replicated response values at that treatment run. As an example, consider the ${\displaystyle {2}^{3}\,\!}$ design shown in the following figure, where each run is replicated four times. A variability analysis can be conducted for this design. The DOE folio calculates eight standard deviation values corresponding to the treatments of the design (see the second following figure). Then the design is analyzed as an unreplicated ${\displaystyle {2}^{3}\,\!}$ design with the standard deviations (displayed as Y Standard Deviation in the second following figure) as the response. The normal probability plot of effects identifies ${\displaystyle AC\,\!}$ as the effect that influences variability (see the third following figure). Based on the effect coefficients obtained in the fourth following figure, the model for Y Std. is:

{\displaystyle {\begin{aligned}&{\text{Y Std}}{\text{.}}=&0.6779+0.2491\cdot AC\\&=&0.6779+0.2491{{x}_{1}}{{x}_{3}}\end{aligned}}\,\!}

Based on the model, the experimenter has two choices to minimize variability (by minimizing Y Std.). The first choice is that ${\displaystyle {{x}_{1}}\,\!}$ should be ${\displaystyle 1\,\!}$ (i.e., ${\displaystyle A\,\!}$ should be set at the high level) and ${\displaystyle {{x}_{3}}\,\!}$ should be ${\displaystyle -1\,\!}$ (i.e., ${\displaystyle C\,\!}$ should be set at the low level). The second choice is that ${\displaystyle {{x}_{1}}\,\!}$ should be ${\displaystyle -1\,\!}$ (i.e., ${\displaystyle A\,\!}$ should be set at the low level) and ${\displaystyle {{x}_{3}}\,\!}$ should be ${\displaystyle 1\,\!}$ (i.e., ${\displaystyle C\,\!}$ should be set at the high level). The experimenter can select the more feasible choice.
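The conclusion can be read directly off the fitted model; a quick check using the coefficients above:

```python
def y_std(x1, x3):
    """Fitted variability model: Y Std. = 0.6779 + 0.2491 * x1 * x3."""
    return 0.6779 + 0.2491 * x1 * x3

# Both settings with x1 * x3 = -1 give the same minimum predicted Y Std.
print(round(y_std(1, -1), 4), round(y_std(-1, 1), 4))  # → 0.4288 0.4288
```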