General Full Factorial Designs
Experiments with two or more factors are encountered frequently. The best way to carry out such experiments is by using full factorial experiments. These are experiments in which all combinations of factors are investigated in each replicate of the experiment. Full factorial experiments are the only means to completely and systematically study interactions between factors in addition to identifying significant factors. One-factor-at-a-time experiments (where each factor is investigated separately by keeping all the remaining factors constant) do not reveal the interaction effects between the factors. Further, in one-factor-at-a-time experiments, full randomization is not possible.
To illustrate full factorial experiments, consider an experiment where the response is investigated for two factors, A and B. Assume that the response is studied at two levels of factor A, with $A_{\text{low}}$ representing the lower level of A and $A_{\text{high}}$ representing the higher level. Similarly, let $B_{\text{low}}$ and $B_{\text{high}}$ represent the two levels of factor B that are being investigated in this experiment. Since there are two factors with two levels each, a total of $2 \times 2 = 4$ combinations exist ($A_{\text{low}}$-$B_{\text{low}}$, $A_{\text{low}}$-$B_{\text{high}}$, $A_{\text{high}}$-$B_{\text{low}}$, $A_{\text{high}}$-$B_{\text{high}}$). Thus, four runs are required for each replicate if a factorial experiment is to be carried out in this case. Assume that the response values for each of these four possible combinations are obtained as shown in the next table.
Investigating Factor Effects
The effect of factor A on the response can be obtained by taking the difference between the average response when A is high and the average response when A is low. The change in the response due to a change in the level of a factor is called the main effect of the factor. The main effect of A, as per the response values in the table above, is:

$$A = \bar{y}_{A_{\text{high}}} - \bar{y}_{A_{\text{low}}} = 20$$
Therefore, when A is changed from the lower level to the higher level, the response increases by 20 units. A plot of the response for the two levels of A at different levels of B is shown next. The plot shows that a change in the level of A leads to an increase in the response by 20 units regardless of the level of B. Therefore, no interaction exists in this case, as indicated by the parallel lines on the plot.
The main effect of B can be obtained as:

$$B = \bar{y}_{B_{\text{high}}} - \bar{y}_{B_{\text{low}}}$$
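As a simple illustration, the following Python sketch computes both main effects for a single replicate of a 2×2 design. The response values in the dictionary are hypothetical placeholders (chosen so that the effect of A is 20 units at either level of B, as in the no-interaction case above); they are not the values from the table.

```python
# Main effects for a single replicate of a 2x2 full factorial design.
# The response values below are hypothetical placeholders, not the
# values from the table discussed in the text.
responses = {
    ("low", "low"): 20.0,
    ("high", "low"): 40.0,
    ("low", "high"): 30.0,
    ("high", "high"): 50.0,
}

def main_effect(factor_index, data):
    """Average response at the high level minus average at the low level."""
    high = [y for levels, y in data.items() if levels[factor_index] == "high"]
    low = [y for levels, y in data.items() if levels[factor_index] == "low"]
    return sum(high) / len(high) - sum(low) / len(low)

print("Main effect of A:", main_effect(0, responses))  # 20.0
print("Main effect of B:", main_effect(1, responses))  # 10.0
```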
Investigating Interactions
Now assume that the response values for each of the four treatment combinations were obtained as shown next.
The main effect of A in this case is:

$$A = \bar{y}_{A_{\text{high}}} - \bar{y}_{A_{\text{low}}} = 0$$

It appears that A does not have an effect on the response. However, a plot of the response of A at different levels of B shows that the response does change with the levels of A, but the effect of A on the response depends on the level of B (see the figure below). Therefore, an interaction between A and B exists in this case (as indicated by the non-parallel lines of the figure). The interaction effect between A and B can be calculated as follows:

$$AB = \frac{(\text{effect of } A \text{ at } B_{\text{high}}) - (\text{effect of } A \text{ at } B_{\text{low}})}{2}$$
Note that in this case, if a one-factor-at-a-time experiment were used to investigate the effect of factor A on the response, it would lead to incorrect conclusions. For example, if the response at factor A were studied by holding B constant at its lower level, then the main effect of A would be obtained as $y_{A_{\text{high}},B_{\text{low}}} - y_{A_{\text{low}},B_{\text{low}}} = 20$, indicating that the response increases by 20 units when the level of A is changed from low to high. On the other hand, if the response at factor A were studied by holding B constant at its higher level, then the main effect of A would be obtained as $y_{A_{\text{high}},B_{\text{high}}} - y_{A_{\text{low}},B_{\text{high}}} = -20$, indicating that the response decreases by 20 units when the level of A is changed from low to high.
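The contrast between the factorial and one-factor-at-a-time conclusions can be reproduced with a short sketch. The response values below are again hypothetical, chosen only to match the pattern described above (an effect of +20 units at the low level of B and -20 units at the high level of B):

```python
# Interaction in a 2x2 design, using hypothetical response values chosen
# to reproduce the pattern in the text: the effect of A is +20 at the low
# level of B and -20 at the high level of B, so the main effect averages to 0.
responses = {
    ("low", "low"): 20.0,
    ("high", "low"): 40.0,
    ("low", "high"): 50.0,
    ("high", "high"): 30.0,
}

def effect_of_A_at(b_level, data):
    """Change in response when A goes from low to high, holding B fixed."""
    return data[("high", b_level)] - data[("low", b_level)]

effect_at_b_low = effect_of_A_at("low", responses)    # +20: OFAT at low B
effect_at_b_high = effect_of_A_at("high", responses)  # -20: OFAT at high B

main_effect_A = (effect_at_b_low + effect_at_b_high) / 2   # 0: A appears inert
interaction_AB = (effect_at_b_high - effect_at_b_low) / 2  # -20: strong interaction

print(effect_at_b_low, effect_at_b_high, main_effect_A, interaction_AB)
```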
Analysis of General Factorial Experiments
In Weibull++ DOE folios, factorial experiments are referred to as factorial designs. The experiments explained in this section are referred to as general factorial designs. This is done to distinguish these experiments from the other factorial designs supported by Weibull++ DOE folios (see the figure below).
The other designs (such as the two level full factorial designs that are explained in Two Level Factorial Experiments) are special cases of these experiments in which factors are limited to a specified number of levels. The ANOVA model for the analysis of factorial experiments is formulated as shown next. Assume a factorial experiment in which the effect of two factors, A and B, on the response is being investigated. Let there be $n_a$ levels of factor A and $n_b$ levels of factor B. The ANOVA model for this experiment can be stated as:

$$Y_{ijk} = \mu + \tau_i + \delta_j + (\tau\delta)_{ij} + \epsilon_{ijk}$$
where:
- $\mu$ represents the overall mean effect
- $\tau_i$ is the effect of the $i$th level of factor A ($i = 1, 2, \ldots, n_a$)
- $\delta_j$ is the effect of the $j$th level of factor B ($j = 1, 2, \ldots, n_b$)
- $(\tau\delta)_{ij}$ represents the interaction effect between A and B
- $\epsilon_{ijk}$ represents the random error terms (which are assumed to be normally distributed with a mean of zero and variance of $\sigma^2$)
- and the subscript $k$ denotes the replicates ($k = 1, 2, \ldots, m$)

Since the effects $\tau_i$, $\delta_j$ and $(\tau\delta)_{ij}$ represent deviations from the overall mean, the following constraints exist:

$$\sum_{i=1}^{n_a} \tau_i = 0$$

$$\sum_{j=1}^{n_b} \delta_j = 0$$

$$\sum_{i=1}^{n_a} (\tau\delta)_{ij} = 0 \ \text{ for all } j \qquad \text{and} \qquad \sum_{j=1}^{n_b} (\tau\delta)_{ij} = 0 \ \text{ for all } i$$
Hypothesis Tests in General Factorial Experiments
These tests are used to check whether each of the factors investigated in the experiment is significant or not. For the previous example, with two factors, A and B, and their interaction, AB, the statements for the hypothesis tests can be formulated as follows:

$$H_0: \tau_1 = \tau_2 = \ldots = \tau_{n_a} = 0 \quad \text{(main effect of } A \text{ is absent)}$$
$$H_1: \tau_i \neq 0 \text{ for at least one } i$$

$$H_0: \delta_1 = \delta_2 = \ldots = \delta_{n_b} = 0 \quad \text{(main effect of } B \text{ is absent)}$$
$$H_1: \delta_j \neq 0 \text{ for at least one } j$$

$$H_0: (\tau\delta)_{ij} = 0 \text{ for all } i, j \quad \text{(interaction } AB \text{ is absent)}$$
$$H_1: (\tau\delta)_{ij} \neq 0 \text{ for at least one } i, j$$
The test statistics for the three tests are as follows:
1) $(F_0)_A = \dfrac{MS_A}{MS_E}$, where $MS_A$ is the mean square due to factor A and $MS_E$ is the error mean square.

2) $(F_0)_B = \dfrac{MS_B}{MS_E}$, where $MS_B$ is the mean square due to factor B and $MS_E$ is the error mean square.

3) $(F_0)_{AB} = \dfrac{MS_{AB}}{MS_E}$, where $MS_{AB}$ is the mean square due to the interaction AB and $MS_E$ is the error mean square.
The tests are identical to the partial $F$ test explained in Multiple Linear Regression Analysis. The sum of squares for these tests (to obtain the mean squares) are calculated by splitting the model sum of squares into the extra sum of squares due to each factor. The extra sum of squares calculated for each of the factors may either be partial or sequential. For the present example, if the extra sum of squares used is sequential, then the model sum of squares can be written as:

$$SS_{TR} = SS_A + SS_B + SS_{AB}$$

where $SS_{TR}$ represents the model sum of squares, $SS_A$ represents the sequential sum of squares due to factor A, $SS_B$ represents the sequential sum of squares due to factor B, and $SS_{AB}$ represents the sequential sum of squares due to the interaction AB.
The mean squares are obtained by dividing the sum of squares by the associated degrees of freedom. Once the mean squares are known, the test statistics can be calculated. For example, the test statistic to test the significance of factor A (or the hypothesis $H_0: \tau_1 = \tau_2 = \ldots = \tau_{n_a} = 0$) can then be obtained as:

$$(F_0)_A = \frac{MS_A}{MS_E} = \frac{SS_A/\mathrm{dof}(SS_A)}{SS_E/\mathrm{dof}(SS_E)}$$

Similarly, the test statistics to test the significance of factor B and the interaction AB can be respectively obtained as:

$$(F_0)_B = \frac{MS_B}{MS_E} = \frac{SS_B/\mathrm{dof}(SS_B)}{SS_E/\mathrm{dof}(SS_E)}$$
$$(F_0)_{AB} = \frac{MS_{AB}}{MS_E} = \frac{SS_{AB}/\mathrm{dof}(SS_{AB})}{SS_E/\mathrm{dof}(SS_E)}$$
It is recommended to conduct the test for interactions before conducting the test for the main effects. This is because, if an interaction is present, then the main effect of the factor depends on the level of the other factors and looking at the main effect is of little value. However, if the interaction is absent then the main effects become important.
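Outside of a Weibull++ DOE folio, the same three tests can be carried out with a general-purpose statistics package. The sketch below uses Python's statsmodels (an assumption, not part of the DOE folio); the data frame layout and response values are randomly generated placeholders and only illustrate the required structure (two categorical factors and a response column). Passing typ=1 to anova_lm returns the sequential sums of squares discussed above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Two-factor ANOVA with sequential (Type I) sums of squares, using
# statsmodels rather than the DOE folio. The responses are randomly
# generated placeholders; only the data layout matters here.
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "A": ["a1", "a2", "a3"] * 6,                    # 3 levels of factor A
    "B": ["b1"] * 9 + ["b2"] * 9,                   # 2 levels of factor B
    "y": rng.normal(loc=18.0, scale=1.0, size=18),  # placeholder responses
})

# C() marks a column as categorical; A*B expands to A + B + A:B.
model = smf.ols("y ~ C(A) * C(B)", data=data).fit()
print(sm.stats.anova_lm(model, typ=1))  # sequential SS, F and p values
```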
Example
Consider an experiment to investigate the effect of speed and type of fuel additive used on the mileage of a sport utility vehicle. Three speeds and two types of fuel additives are investigated. Each of the treatment combinations is replicated three times. The mileage values observed are displayed in the table below.
The experimental design for the data is shown in the figure below.
In the figure, the factor Speed is represented as factor A and the factor Fuel Additive is represented as factor B. The experimenter would like to investigate if speed, fuel additive or the interaction between speed and fuel additive affects the mileage of the sport utility vehicle. In other words, the following hypotheses need to be tested:

$$H_0: \tau_1 = \tau_2 = \tau_3 = 0 \quad \text{(no main effect of factor } A \text{, speed)}$$
$$H_1: \tau_i \neq 0 \text{ for at least one } i$$

$$H_0: \delta_1 = \delta_2 = 0 \quad \text{(no main effect of factor } B \text{, fuel additive)}$$
$$H_1: \delta_j \neq 0 \text{ for at least one } j$$

$$H_0: (\tau\delta)_{ij} = 0 \text{ for all } i, j \quad \text{(no interaction } AB \text{)}$$
$$H_1: (\tau\delta)_{ij} \neq 0 \text{ for at least one } i, j$$
The test statistics for the three tests are:
1. $(F_0)_A = \dfrac{MS_A}{MS_E}$, where $MS_A$ is the mean square for factor A and $MS_E$ is the error mean square
2. $(F_0)_B = \dfrac{MS_B}{MS_E}$, where $MS_B$ is the mean square for factor B and $MS_E$ is the error mean square
3. $(F_0)_{AB} = \dfrac{MS_{AB}}{MS_E}$, where $MS_{AB}$ is the mean square for the interaction AB and $MS_E$ is the error mean square
The ANOVA model for this experiment can be written as:
$$Y_{ijk} = \mu + \tau_i + \delta_j + (\tau\delta)_{ij} + \epsilon_{ijk}$$

where $\tau_i$ represents the $i$th treatment of factor A (speed), with $i$ = 1, 2, 3; $\delta_j$ represents the $j$th treatment of factor B (fuel additive), with $j$ = 1, 2; and $(\tau\delta)_{ij}$ represents the interaction effect. In order to calculate the test statistics, it is convenient to express the ANOVA model of the equation given above in the form $y = X\beta + \epsilon$. This can be done as explained next.
Expression of the ANOVA Model as y = Xβ + ε
Since the effects $\tau_i$, $\delta_j$ and $(\tau\delta)_{ij}$ represent deviations from the overall mean, the following constraints exist. Constraints on $\tau_i$ are:

$$\tau_1 + \tau_2 + \tau_3 = 0$$

Therefore, only two of the $\tau_i$ effects are independent. Assuming that $\tau_1$ and $\tau_2$ are independent, $\tau_3 = -(\tau_1 + \tau_2)$. (The null hypothesis to test the significance of factor A can be rewritten using only the independent effects as $H_0: \tau_1 = \tau_2 = 0$.) The DOE folio displays only the independent effects because only these effects are important to the analysis. The independent effects, $\tau_1$ and $\tau_2$, are displayed as A[1] and A[2] respectively because these are the effects associated with factor A (speed).
Constraints on $\delta_j$ are:

$$\delta_1 + \delta_2 = 0$$

Therefore, only one of the $\delta_j$ effects is independent. Assuming that $\delta_1$ is independent, $\delta_2 = -\delta_1$. (The null hypothesis to test the significance of factor B can be rewritten using only the independent effect as $H_0: \delta_1 = 0$.) The independent effect $\delta_1$ is displayed as B:B in the DOE folio.
Constraints on $(\tau\delta)_{ij}$ are:

$$(\tau\delta)_{11} + (\tau\delta)_{21} + (\tau\delta)_{31} = 0$$
$$(\tau\delta)_{12} + (\tau\delta)_{22} + (\tau\delta)_{32} = 0$$
$$(\tau\delta)_{11} + (\tau\delta)_{12} = 0$$
$$(\tau\delta)_{21} + (\tau\delta)_{22} = 0$$
$$(\tau\delta)_{31} + (\tau\delta)_{32} = 0$$

The five equations given above represent four constraints, as only four of these five equations are independent. Therefore, only two out of the six $(\tau\delta)_{ij}$ effects are independent. Assuming that $(\tau\delta)_{11}$ and $(\tau\delta)_{21}$ are independent, the other four effects can be expressed in terms of these effects. (The null hypothesis to test the significance of the interaction AB can be rewritten using only the independent effects as $H_0: (\tau\delta)_{11} = (\tau\delta)_{21} = 0$.) The effects $(\tau\delta)_{11}$ and $(\tau\delta)_{21}$ are displayed as A[1]B and A[2]B respectively in the DOE folio.
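For reference, the dependent effects can be written in terms of the independent effects; this follows directly from the constraint equations above:

$$\tau_3 = -(\tau_1 + \tau_2), \qquad \delta_2 = -\delta_1$$
$$(\tau\delta)_{12} = -(\tau\delta)_{11}, \qquad (\tau\delta)_{22} = -(\tau\delta)_{21}$$
$$(\tau\delta)_{31} = -\left[(\tau\delta)_{11} + (\tau\delta)_{21}\right], \qquad (\tau\delta)_{32} = (\tau\delta)_{11} + (\tau\delta)_{21}$$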
The regression version of the ANOVA model can be obtained using indicator variables, similar to the case of the single factor experiment in Fitting ANOVA Models. Since factor A has three levels, two indicator variables, $x_1$ and $x_2$, are required, which need to be coded as shown next:

$$x_1 = \begin{cases} 1 & \text{first level of } A \\ 0 & \text{second level of } A \\ -1 & \text{third level of } A \end{cases} \qquad x_2 = \begin{cases} 0 & \text{first level of } A \\ 1 & \text{second level of } A \\ -1 & \text{third level of } A \end{cases}$$

Factor B has two levels and can be represented using one indicator variable, $x_3$, as follows:

$$x_3 = \begin{cases} 1 & \text{first level of } B \\ -1 & \text{second level of } B \end{cases}$$

The interaction AB will be represented by all possible terms resulting from the product of the indicator variables representing factors A and B. There are two such terms here: $x_1 x_3$ and $x_2 x_3$. The regression version of the ANOVA model can finally be obtained as:

$$Y = \mu + \tau_1 x_1 + \tau_2 x_2 + \delta_1 x_3 + (\tau\delta)_{11}\, x_1 x_3 + (\tau\delta)_{21}\, x_2 x_3 + \epsilon$$
In matrix notation this model can be expressed as:

$$y = X\beta + \epsilon$$

where $y$ is the vector of observed response values, $X$ is the design matrix whose columns correspond to the intercept, $x_1$, $x_2$, $x_3$, $x_1 x_3$ and $x_2 x_3$, $\beta = [\,\mu \;\; \tau_1 \;\; \tau_2 \;\; \delta_1 \;\; (\tau\delta)_{11} \;\; (\tau\delta)_{21}\,]'$ is the vector of model coefficients, and $\epsilon$ is the vector of random error terms. The vector $y$ is obtained by substituting the response values from the above table.
Knowing $y$, $X$ and $\beta$, the sum of squares for the ANOVA model and the extra sum of squares for each of the factors can be calculated. These are used to calculate the mean squares that are used to obtain the test statistics.
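A minimal sketch of how the design matrix X could be assembled in Python, using the indicator-variable coding given above. The run order shown is an assumption (replicates listed consecutively), and the response vector y is not reproduced here.

```python
import numpy as np

# Effect-coded design matrix for the 3x2 factorial with 3 replicates.
# Row order here is hypothetical (replicates listed one after another);
# the actual folio may list the runs in a different (e.g., randomized) order.
def row(a_level, b_level):
    """One row of X: intercept, x1, x2, x3, x1*x3, x2*x3."""
    x1 = {1: 1, 2: 0, 3: -1}[a_level]
    x2 = {1: 0, 2: 1, 3: -1}[a_level]
    x3 = {1: 1, 2: -1}[b_level]
    return [1, x1, x2, x3, x1 * x3, x2 * x3]

X = np.array([row(a, b) for _ in range(3)      # 3 replicates
                        for a in (1, 2, 3)     # 3 levels of A (speed)
                        for b in (1, 2)])      # 2 levels of B (fuel additive)
print(X.shape)  # (18, 6): 18 runs, 6 model coefficients
```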
Calculation of Sum of Squares for the Model
The model sum of squares, $SS_{TR}$, for the regression version of the ANOVA model can be obtained as:

$$SS_{TR} = y'\left[H - \left(\tfrac{1}{18}\right)J\right]y$$

where $H$ is the hat matrix and $J$ is the matrix of ones. Since five effect terms ($\tau_1$, $\tau_2$, $\delta_1$, $(\tau\delta)_{11}$ and $(\tau\delta)_{21}$) are used in the model, the number of degrees of freedom associated with $SS_{TR}$ is five ($\mathrm{dof}(SS_{TR}) = 5$).
The total sum of squares, $SS_T$, can be calculated as:

$$SS_T = y'\left[I - \left(\tfrac{1}{18}\right)J\right]y$$

Since there are 18 observed response values, the number of degrees of freedom associated with the total sum of squares is 17 ($\mathrm{dof}(SS_T) = 17$). The error sum of squares can now be obtained:

$$SS_E = SS_T - SS_{TR}$$
Since there are three replicates of the full factorial experiment, all of the error sum of squares is pure error. (This can also be seen from the preceding figure, where each treatment combination of the full factorial design is repeated three times.) The number of degrees of freedom associated with the error sum of squares is:

$$\mathrm{dof}(SS_E) = \mathrm{dof}(SS_T) - \mathrm{dof}(SS_{TR}) = 17 - 5 = 12$$
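The hat-matrix expressions above translate directly into code. The following sketch assumes that the 18×6 design matrix X and the response vector y are already available (the mileage values are not reproduced here):

```python
import numpy as np

# Model, total and error sums of squares via the hat matrix.
# X is the 18x6 effect-coded design matrix, y the vector of 18 responses.
def anova_sums_of_squares(X, y):
    n = len(y)
    H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix
    J = np.ones((n, n))                    # matrix of ones
    I = np.eye(n)
    ss_model = y @ (H - J / n) @ y         # SS_TR, dof = 5 here
    ss_total = y @ (I - J / n) @ y         # SS_T,  dof = n - 1 = 17
    ss_error = ss_total - ss_model         # SS_E,  dof = 12
    return ss_model, ss_total, ss_error
```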
Calculation of Extra Sum of Squares for the Factors
The sequential sum of squares for factor A can be calculated as:

$$SS_A = y'\left[\tilde{H}_A - \left(\tfrac{1}{18}\right)J\right]y$$

where $\tilde{H}_A = X_A (X_A' X_A)^{-1} X_A'$ and $X_A$ is the matrix containing only the first three columns of the $X$ matrix.
Since there are two independent effects ($\tau_1$, $\tau_2$) for factor A, the degrees of freedom associated with $SS_A$ are two ($\mathrm{dof}(SS_A) = 2$).
Similarly, the sequential sum of squares for factor B is obtained as the extra sum of squares due to B, given that A is already in the model:

$$SS_B = y'\left[\tilde{H}_{AB} - \tilde{H}_A\right]y$$

where $\tilde{H}_{AB}$ is the hat matrix based on the first four columns of the $X$ matrix. Since there is one independent effect, $\delta_1$, for factor B, the number of degrees of freedom associated with $SS_B$ is one ($\mathrm{dof}(SS_B) = 1$).
The sequential sum of squares for the interaction AB is:

$$SS_{AB} = y'\left[H - \tilde{H}_{AB}\right]y$$

Since there are two independent interaction effects, $(\tau\delta)_{11}$ and $(\tau\delta)_{21}$, the number of degrees of freedom associated with $SS_{AB}$ is two ($\mathrm{dof}(SS_{AB}) = 2$).
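A companion sketch for the sequential extra sums of squares, again assuming X and y are available; the column slices follow the order of the model terms given earlier (intercept, x1, x2, x3, x1x3, x2x3):

```python
import numpy as np

# Sequential (Type I) extra sums of squares, entering A, then B, then AB.
def hat(M):
    """Hat (projection) matrix for a given model matrix."""
    return M @ np.linalg.inv(M.T @ M) @ M.T

def sequential_sums_of_squares(X, y):
    n = len(y)
    J = np.ones((n, n))
    H_A = hat(X[:, :3])   # intercept, x1, x2  -> factor A only
    H_AB = hat(X[:, :4])  # ... plus x3        -> factors A and B
    H = hat(X)            # full model, including the interaction terms
    ss_a = y @ (H_A - J / n) @ y    # SS_A,  dof = 2
    ss_b = y @ (H_AB - H_A) @ y     # SS_B,  dof = 1
    ss_ab = y @ (H - H_AB) @ y      # SS_AB, dof = 2
    return ss_a, ss_b, ss_ab
```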
Calculation of the Test Statistics
Knowing the sums of squares, the test statistic for each of the factors can be calculated. Analyzing the interaction first, the test statistic for the interaction AB is:

$$(F_0)_{AB} = \frac{MS_{AB}}{MS_E} = \frac{SS_{AB}/2}{SS_E/12}$$

The $p$ value corresponding to this statistic, based on the $F$ distribution with 2 degrees of freedom in the numerator and 12 degrees of freedom in the denominator, is:

$$p \text{ value} = P\left(F_{2,12} > (F_0)_{AB}\right)$$
Assuming that the desired significance level is 0.1, since the $p$ value > 0.1, we fail to reject $H_0: (\tau\delta)_{ij} = 0$ and conclude that the interaction between speed and fuel additive does not significantly affect the mileage of the sport utility vehicle. The DOE folio displays this result in the ANOVA table, as shown in the following figure. In the absence of the interaction, the analysis of the main effects becomes important.
The test statistic for factor A is:

$$(F_0)_A = \frac{MS_A}{MS_E} = \frac{SS_A/2}{SS_E/12}$$

The $p$ value corresponding to this statistic, based on the $F$ distribution with 2 degrees of freedom in the numerator and 12 degrees of freedom in the denominator, is:

$$p \text{ value} = P\left(F_{2,12} > (F_0)_A\right)$$
Since the $p$ value < 0.1, $H_0: \tau_1 = \tau_2 = \tau_3 = 0$ is rejected and it is concluded that factor A (or speed) has a significant effect on the mileage.
The test statistic for factor B is:

$$(F_0)_B = \frac{MS_B}{MS_E} = \frac{SS_B/1}{SS_E/12}$$

The $p$ value corresponding to this statistic, based on the $F$ distribution with 1 degree of freedom in the numerator and 12 degrees of freedom in the denominator, is:

$$p \text{ value} = P\left(F_{1,12} > (F_0)_B\right)$$
Since the $p$ value < 0.1, $H_0: \delta_1 = \delta_2 = 0$ is rejected and it is concluded that factor B (or fuel additive type) has a significant effect on the mileage.
Therefore, it can be concluded that speed and fuel additive type affect the mileage of the vehicle significantly. The results are displayed in the ANOVA table of the following figure.
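For reference, each of the three F tests amounts to dividing a mean square by the error mean square and evaluating the upper tail of the corresponding F distribution. The sketch below uses scipy; the mean-square values in the example call are placeholders, not the values from the folio's ANOVA table.

```python
from scipy import stats

# p value for an F statistic, as used for each of the three tests above.
# The mean squares passed in the example call are placeholders.
def f_test(ms_effect, ms_error, dof_effect, dof_error):
    f0 = ms_effect / ms_error
    p_value = stats.f.sf(f0, dof_effect, dof_error)  # P(F > f0)
    return f0, p_value

# Example call for the interaction test (2 and 12 degrees of freedom):
f0, p = f_test(ms_effect=1.5, ms_error=1.0, dof_effect=2, dof_error=12)
print(f0, p)
```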
Calculation of Effect Coefficients
Results for the effect coefficients of the regression version of the ANOVA model are displayed in the Regression Information table in the following figure. Calculations of the results in this table are discussed next. The effect coefficients can be calculated as follows:

$$\hat{\beta} = (X'X)^{-1}X'y$$

Therefore, the estimates $\hat{\mu}$, $\hat{\tau}_1$, $\hat{\tau}_2$, etc., are obtained. As mentioned previously, these coefficients are displayed as Intercept, A[1] and A[2] respectively, depending on the name of the factor used in the experimental design. The standard error for each of these estimates is obtained using the diagonal elements of the variance-covariance matrix $C$:

$$C = \hat{\sigma}^2 (X'X)^{-1} = MS_E\, (X'X)^{-1}$$

For example, the standard error for $\hat{\tau}_1$ is:

$$se(\hat{\tau}_1) = \sqrt{C_{22}}$$

where $C_{22}$ is the diagonal element of $C$ corresponding to $\hat{\tau}_1$.
Then the $t$ statistic for $\hat{\tau}_1$ can be obtained as:

$$t_0 = \frac{\hat{\tau}_1}{se(\hat{\tau}_1)}$$

The $p$ value corresponding to this statistic is:

$$p \text{ value} = 2 \times P\left(T_{12} > |t_0|\right)$$

Confidence intervals on $\tau_1$ can also be calculated. The 90% limits on $\tau_1$ are:

$$\hat{\tau}_1 \pm t_{0.05,12}\, se(\hat{\tau}_1)$$

The lower and upper values given by this expression are the 90% limits on $\tau_1$. Results for other coefficients are obtained in a similar manner.
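The calculations in this subsection can be collected into a single routine. The sketch below assumes X and y are available and mirrors the steps above: estimate the coefficients, obtain standard errors from the diagonal of $MS_E (X'X)^{-1}$, and form t statistics, p values and confidence limits.

```python
import numpy as np
from scipy import stats

# Coefficient estimates, standard errors, t statistics, p values and
# confidence limits from the regression form of the ANOVA model.
# Assumes the 18x6 design matrix X and response vector y already exist.
def coefficient_table(X, y, confidence=0.90):
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y                # (X'X)^-1 X'y
    residuals = y - X @ beta_hat
    dof_error = n - p                           # 18 - 6 = 12 here
    mse = residuals @ residuals / dof_error     # error mean square
    se = np.sqrt(np.diag(mse * XtX_inv))        # standard errors
    t0 = beta_hat / se                          # t statistics
    p_values = 2 * stats.t.sf(np.abs(t0), dof_error)
    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, dof_error)
    lower, upper = beta_hat - t_crit * se, beta_hat + t_crit * se
    return beta_hat, se, t0, p_values, lower, upper
```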
Least Squares Means
The estimated mean response corresponding to the $i$th level of any factor is obtained using the adjusted estimated mean, which is also called the least squares mean. For example, the mean response corresponding to the first level of factor A is $\mu + \tau_1$. An estimate of this is $\hat{\mu} + \hat{\tau}_1$. Similarly, the estimated mean response at the third level of factor A is $\hat{\mu} + \hat{\tau}_3$, or equivalently $\hat{\mu} - \hat{\tau}_1 - \hat{\tau}_2$.
Residual Analysis
As in the case of single factor experiments, plots of residuals can also be used to check for model adequacy in factorial experiments. Box-Cox transformations are also available in Weibull++ DOE folios for factorial experiments.
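As a closing sketch, the residuals can be obtained from the fitted regression model and examined with a normal probability plot and a residuals-versus-fitted plot (matplotlib and scipy are assumed here, outside of the DOE folio):

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Basic residual diagnostics for the fitted model: a normal probability
# plot and residuals versus fitted values. Assumes X and y already exist.
def residual_plots(X, y):
    beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
    fitted = X @ beta_hat
    residuals = y - fitted

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
    stats.probplot(residuals, dist="norm", plot=ax1)  # normality check
    ax1.set_title("Normal probability plot")
    ax2.scatter(fitted, residuals)
    ax2.axhline(0.0, linestyle="--")
    ax2.set_xlabel("Fitted value")
    ax2.set_ylabel("Residual")
    ax2.set_title("Residuals vs. fitted")
    plt.tight_layout()
    plt.show()
```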