To develop the best possible products and processes cost-effectively, the testing process must be planned carefully so that all relevant factors and potential factor interactions are considered without wasting time, effort or money. Design of Experiments (DOE) is widely used across the natural and social sciences to develop experimentation strategies that maximize learning while using minimal resources.
Designed experiments are usually carried out in five stages: planning, screening, optimization, robustness testing and verification. These stages are described next.
Careful and thorough planning prior to embarking upon the process of testing and data collection can save time and resources by eliminating unnecessary work and by helping the analyst to avoid making costly mistakes. Well-planned experiments are easy to execute and analyze, whereas experiments that are poorly planned may result in data sets that are difficult or impossible to analyze with even the most sophisticated statistical tools or, if they can be analyzed, give inconclusive results. Essential points to consider during the planning stage include:
Objective: What do you hope to learn through the experiment?
Team: Individuals from different disciplines related to the product or process should be chosen in order to incorporate the broadest possible range of knowledge.
Factors and Responses: The team should identify a pool of factors that will be investigated and the response(s) that will be measured. For each response, the team should identify a goal. This might be to minimize or maximize the response, to bring it as close as possible to a target value, to minimize variability or some combination of these.
The goal of the screening stage is to determine which factors out of the pool of potential factors identified during the planning stage are important enough to examine in greater detail (i.e., to extract the "vital few" from the "trivial many"). To this end, screening experiments are carried out to identify factors with a significant effect on the measured response(s). Typically, these experiments are efficient designs that require only a few runs and focus on the main effects of factors rather than on interactions between the factors. These experiments, in conjunction with prior knowledge of the process, help in eliminating unimportant factors and focusing attention on the factors that require more detailed analysis.
In DOE, factorial designs are well suited for use in the screening stage.
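To illustrate the screening idea, the following sketch builds a two-level full factorial design in coded units and estimates each factor's main effect. The three factors (A, B, C) and the response model are hypothetical, chosen only so the arithmetic is easy to follow; in practice the responses would be measured, not simulated.

```python
from itertools import product

# Hypothetical 2^3 full factorial in coded units:
# -1 = low level, +1 = high level, for three candidate factors A, B, C.
design = list(product([-1, 1], repeat=3))  # 8 runs

# Simulated responses from an assumed true model y = 10 + 3A - 2B + 0.1C
# (in a real study these would be the measured response values).
y = [10 + 3*a - 2*b + 0.1*c for a, b, c in design]

def main_effect(j):
    """Average response at the high level minus average at the low level."""
    high = [yi for run, yi in zip(design, y) if run[j] == 1]
    low = [yi for run, yi in zip(design, y) if run[j] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

effects = {name: round(main_effect(j), 3) for j, name in enumerate("ABC")}
```

Here A and B show large main effects (6 and -4 in coded units) while C's effect is small (0.2), so C would be a candidate for elimination before the optimization stage.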
Once the important factors have been identified, the objective is to determine the settings of these factors that, taken together, will yield the desired outcome. The analyst will need to consider the goal set for each response and, in cases where multiple responses are measured, may need to consider the relative importance of each response when determining optimal solutions.
Response surface method designs are often used in the optimization stage, when the focus is on the nature of the relationship between the response and the factors rather than on the identification of important factors. The purpose of response surface methods is to examine this relationship, or "surface."
In DOE, after analyzing data from an experiment with at least two factors (one of which must be quantitative), the optimization folio can be used to optimize the factor settings.
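As a minimal sketch of the optimization idea, the code below fits a second-order (quadratic) response surface to a small design in coded units and solves for the stationary point. The design, the assumed true surface, and its maximum at (0.3, -0.2) are all hypothetical; real responses would come from the experiment.

```python
import numpy as np

# Hypothetical 3x3 design in coded units (enough runs for a full quadratic).
levels = [-1.0, 0.0, 1.0]
x1, x2 = np.meshgrid(levels, levels)
x1, x2 = x1.ravel(), x2.ravel()  # 9 runs

# Simulated responses from an assumed true surface with a maximum at (0.3, -0.2).
y = 50 - 2*(x1 - 0.3)**2 - 3*(x2 + 0.2)**2

# Model matrix for y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2,
# fitted by least squares.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
b0, b1, b2, b11, b22, b12 = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point of the fitted surface: solve grad(y) = 0.
H = np.array([[2*b11, b12], [b12, 2*b22]])
optimum = np.linalg.solve(H, -np.array([b1, b2]))
```

Because the fitted surface is concave here, the stationary point is the maximizing factor setting; with multiple responses, the same fitted surfaces would feed into a weighted or desirability-based trade-off instead.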
While the experimental environment can be carefully controlled, it is likely that there will be factors that affect the product or process in the application environment and are beyond the control of the analyst. Such factors, referred to as noise or uncontrollable factors, may include humidity, ambient temperature, variation in material, etc. The goal of robustness testing is to identify these factors and ensure that the product or process is made as insensitive, or robust, as possible to them.
In DOE, you can use robust parameter designs for robustness testing. In addition, response variability analysis is available for two level factorial and Plackett-Burman designs.
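The crossed-array idea behind robust parameter design can be sketched as follows: each candidate control-factor setting is run across the levels of a noise factor, and the setting with the smallest noise-induced spread is preferred. The single control factor A, the noise factor N, and the response model are hypothetical and chosen so the sensitivity to noise depends on A.

```python
from statistics import mean, pstdev

# Hypothetical crossed array: one control factor A and one noise factor N,
# each at two coded levels. Assumed true behavior: y = 20 + 2A + (3 - 2A)N,
# so the noise sensitivity depends on the control setting.
control_levels = [-1, 1]
noise_levels = [-1, 1]

def response(a, n):
    return 20 + 2*a + (3 - 2*a)*n

# For each control setting, summarize location and spread across the noise runs.
summary = {}
for a in control_levels:
    ys = [response(a, n) for n in noise_levels]
    summary[a] = (mean(ys), pstdev(ys))

# The robust setting minimizes the spread induced by the noise factor.
robust_setting = min(control_levels, key=lambda a: summary[a][1])
```

In this toy case the high level of A cuts the noise-induced standard deviation from 5 to 1, so it is the robust choice; any shift in mean response it causes would then be corrected with factors that affect location but not spread.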
In the verification stage, the analyst confirms the results drawn from the previous phases by performing a few follow-up experimental runs to see if the observed response values are close to the predicted value(s). The goal is to validate the best settings that have been determined, making sure that the product or process functions as desired and that all objectives are met.
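A confirmation check of this kind can be as simple as comparing the mean of a few follow-up runs against the model's prediction. The predicted value, the acceptance margin, and the observed runs below are all assumed for illustration; in practice the margin would typically come from a prediction interval.

```python
from statistics import mean

# Hypothetical verification data: predicted response at the chosen settings,
# an assumed acceptance margin, and a few confirmation runs.
predicted = 52.0
margin = 1.0                          # assumed, e.g. from a prediction interval
confirmation_runs = [51.6, 52.3, 51.9]  # assumed observed responses

observed_mean = mean(confirmation_runs)
confirmed = abs(observed_mean - predicted) <= margin
```

If `confirmed` is false, the model or the chosen settings should be revisited before the settings are adopted.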
The ReliaWiki resource portal provides more information about Design of Experiments at http://www.reliawiki.org/index.php/DOE_Overview.