Overview of Experimental Design and the DOE Workflow
A six-step framework provides the structure for designing an experiment, running the experimental trials, and analyzing the results. Sound engineering and process knowledge are critical to all of these steps.
Figure 3.2 Framework for Experimental Design
Framework for Experimental Design
You perform the first three steps in the DOE platforms. The end result is a design that can be run in your work environment. For a detailed description of the workflow for these three steps, see “The DOE Workflow: Describe, Specify, Design”.
Describe
Determine the goal of your experiment. Identify responses and factors.
Your goal might be to identify active factors, to find optimal factor settings, or to build a predictive model.
Specify
Determine or specify an assumed model that you believe adequately describes the physical situation.
Your assumed model is an initial model that ideally contains all the effects that you want to estimate. In some platforms, you can explicitly build the model of interest. In others, the model is implicit in the choices that you make. For example, in the Screening Design platform, you might select a model with a given resolution. The resolution of the design determines which effects are confounded. Confounding of effects potentially leads to ambiguity about which effect is truly active.
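Confounding can be made concrete with a small sketch. The following Python fragment (an illustration, not JMP output; JMP reports aliasing for you) builds a resolution IV 2^(4-1) fractional factorial using the generator D = ABC, and verifies that the contrast column for the main effect of D is identical to the column for the ABC interaction, so the two effects cannot be estimated separately:

```python
# Build a 2^(4-1) fractional factorial with generator D = ABC and show
# that the main effect of D is confounded (aliased) with ABC.
from itertools import product

runs = []
for a, b, c in product([-1, 1], repeat=3):
    d = a * b * c          # generator: D = ABC
    runs.append((a, b, c, d))

# The contrast column for D equals the column for the ABC interaction,
# so a design of this resolution cannot separate the two effects.
d_col = [r[3] for r in runs]
abc_col = [r[0] * r[1] * r[2] for r in runs]
print(d_col == abc_col)  # True: D is aliased with ABC
```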
Design
Generate a design that is consistent with your assumed model. Evaluate this design to understand its strengths and limitations, and to ensure that it provides the information that you need, given your model and goals.
The Design Evaluation or Design Diagnostics outline in the design generation platform gives you insight into the properties of your design.
The next step is the data collection phase, where the experiment is run under controlled conditions.
Collect
Conduct each of the trials and record the response values.
After you run your experiment, scripts in the generated data table help you fit a model using platforms such as Fit Model and Screening. Depending on your goal, the model can help you identify active effects or find optimal settings.
Fit
Fit your assumed model to the experimental data.
Use the JMP modeling platforms to fit and refine your model. In some situations, you might need to augment the design and perform additional runs to resolve model ambiguity.
Predict
Use your refined model to address your experimental goals.
Determine which effects are active, find factor levels to optimize responses, or build a predictive model.
Designed experiments are typically used sequentially to construct process knowledge. A design strategy often begins with a screening design to narrow the list of potentially active factors. Then the identified factors are studied in designs that focus on building a better understanding of interactions and quadratic effects. Sometimes there is a need to augment a design to resolve ambiguities relating to the factors responsible for effects. The steps outlined in this section relate to conducting and analyzing a single experiment. However, you may require a sequence of experiments to achieve your goals.
The example in “The Coffee Strength Experiment” explicitly illustrates the steps in the DOE workflow process. It also shows how to use a data table script to analyze your experimental data. Many examples in the Design of Experiments Guide illustrate both the workflow that supports a good design and the analysis of the experimental data from the study.
The Coffee Strength Experiment
This example illustrates the following steps:
Define the Study and Goals
Your employer is a local mid-size coffee roaster. You need to address the strength of individually brewed twelve-ounce cups of coffee. Your goal is to determine which factors have an effect on coffee strength and to find optimal settings for those factors.
Response
The response is coffee Strength. It is measured as total dissolved solids, using a refractometer. The coffee is brewed using a single-cup coffee dripper and measured five minutes after the liquid is released from the grounds.
Previous studies indicate that a strength reading of 1.3 is most desirable, though the strength is still acceptable if it falls between 1.2 and 1.4.
Factors
Four factors are identified for the study: Grind, Temperature, Time, and Charge. Coffee is brewed at three stations in the work area. To account for variation due to brewing location, Station is included in the study as a blocking factor. The following describes the factors:
Grind is the coarseness of the grind. Grind is set at two levels, Medium and Coarse.
Temperature is the temperature in degrees Fahrenheit of the water measured immediately before pouring it over the grounds. Temperature is set at 195 and 205 degrees Fahrenheit.
Time is the brewing time in minutes. Time is set at 3 or 4 minutes.
Charge is the amount of coffee placed in the cone filter, measured in grams of coffee beans per ounce of water. Charge is set at 1.6 and 2.4.
Station is the location where the coffee is brewed. The three stations are labeled as 1, 2, and 3.
Table 3.1 summarizes information about the factors and their settings. The factors and levels are also given in the Coffee Factors.jmp sample data table, located in the Design Experiment folder.
 
Table 3.1 Factors and Range of Settings for Coffee Experiment

Factor        Role          Range of Settings
Grind         Categorical   Medium, Coarse
Temperature   Continuous    195 - 205
Time          Continuous    3 - 4
Charge        Continuous    1.6 - 2.4
Station       Blocking      1, 2, 3
Note the following:
Grind is categorical with two levels.
Temperature, Time, and Charge are continuous.
Station is a blocking factor with three levels.
All factors can be varied and reset for each run. There are no hard-to-change factors for this experiment.
The apparatus used in running the coffee experiment is shown in Figure 3.3. This is the setup at one of the three brewing stations. The two other stations have the same type of equipment.
Figure 3.3 Coffee Experiment Apparatus
Coffee Experiment Apparatus
Number of Runs
Based on the resources and time available, you determine that you can conduct 12 runs in all. Since there are three stations, you conduct 4 runs at each station.
Create the Design
Create the design following the steps in the design workflow process outlined in “The DOE Workflow: Describe, Specify, Design”:
Define Responses and Factors
In the first outlines that appear, enter information about your response and factors.
Responses
1. Select DOE > Custom Design.
2. Double-click Y under Response Name and type Strength.
Note that the default Goal is Maximize. Your goal is to find factor settings that enable you to brew coffee with a target strength of 1.3, within limits of 1.2 and 1.4.
3. Click on the default Goal of Maximize and change it to Match Target.
Figure 3.4 Selection of Match Target as the Goal
Selection of Match Target as the Goal
4. Click under Lower Limit and type 1.2.
5. Click under Upper Limit and type 1.4.
6. Leave the area under Importance blank.
Because there is only one response, that response is given Importance 1 by default.
The completed Responses outline appears in Figure 3.5.
Factors
Enter factors either manually or from a pre-existing table that contains the factors and settings. If you are designing a new experiment, you must first enter the factors manually. Once you have saved the factors to a data table using the Save Factors option, you can load them using the saved table.
For this example, you can choose either option. See “Entering Factors Manually” or see “Entering Factors Using Load Factors”.
Entering Factors Manually
1. Click Add Factor > Categorical > 2 Level.
2. Type Grind over the default Name of X1.
Note that Role is set to Categorical, as requested. The Changes attribute is set to Easy by default, indicating that Grind settings can be reset for every run.
3. Click the default Values, L1 and L2, and change them to Coarse and Medium.
4. Type 3 next to Add N Factors. Then click Add Factor > Continuous.
5. Type the following names and values over the default entries:
Temperature (195 and 205)
Time (3 and 4)
Charge (1.6 and 2.4)
6. Click Add Factor > Blocking > 4 runs per block.
Recall that your run budget allows for 12 runs. You want to balance these runs among the three stations.
7. Type Station over the default Name of X5.
Notice that Role is set to Blocking and that only one setting for Values appears. This is because JMP cannot determine the number of blocks until the desired number of runs is specified. Once you specify the Number of Runs in the Design Generation outline, JMP updates the number of levels for Station to what is required.
The completed Factors outline is shown in Figure 3.5.
Figure 3.5 Completed Responses and Factors Outlines
Completed Responses and Factors Outlines
8. Click Continue.
The following outlines are added to the Custom Design window:
Define Factor Constraints (not used in this example)
Model
Alias Terms
Design Generation
Entering Factors Using Load Factors
To enter factors using a table containing factor information, proceed as follows:
1. From the Custom Design red triangle menu, select Load Factors.
2. Select Help > Sample Data Library and open Design Experiment/Coffee Factors.jmp.
After loading the factors, the Custom Design window is updated. The following outlines are added to the Custom Design window:
Define Factor Constraints (not used in this example)
Model
Alias Terms
Design Generation
Define Factor Constraints
The Define Factor Constraints outline appears once you have entered your factors manually and clicked Continue, or once you have loaded the factors from the factor table. Adding factor constraints, if you have any, is part of the Responses and Factors step. Since there are no constraints on factor settings for this design, leave this outline unchanged.
Specify the Model
Model Outline
Figure 3.6 shows the Model outline. The Model outline is where you specify your assumed model, which contains the effects that you want to estimate. See “Specify”. The list that appears by default shows all main effects as Necessary, indicating that the design is capable of estimating all main effects. Because your main interest at this point is in the main effects of the factors, you do not add any effects to the Model outline.
Figure 3.6 Model Outline with Main Effects Only
Model Outline with Main Effects Only
Steps to Duplicate Results (Optional)
Because the Custom Design algorithm begins with a random starting design, your design might differ from the one shown in Figure 3.8. To obtain a design with exactly the same runs, perform the following steps before generating your design:
1. From the Custom Design red triangle menu, select Set Random Seed.
2. Type 569534903.
3. Click OK.
4. From the Custom Design red triangle menu, select Number of Starts.
5. Type 100.
6. Click OK.
Note: Setting the Random Seed and Number of Starts reproduces the exact design shown in this example. However, the rows in the design table might be in a different order. In constructing a design on your own, these steps are not necessary.
Generate the Design
In the Design Generation outline, you can enter additional details about the structure and size of your design. The Default design is shown as having 12 runs. Recall that your design budget allows for 12 runs (“Number of Runs”).
Figure 3.7 Design Generation Outline
Design Generation Outline
1. Click Make Design.
The Design and Design Evaluation outlines are added to the Custom Design window. The Output Options panel also appears.
The Design outline shows the design (Figure 3.8). If you did not follow the steps in “Steps to Duplicate Results (Optional)”, your design might be different from the one in Figure 3.8. This is because the algorithm begins with a random starting design.
Figure 3.8 Design for Coffee Experiment
Design for Coffee Experiment
Evaluate the Design
The Design Evaluation outline provides various ways to evaluate your design. This is an important topic, but for simplicity, it is not covered in the context of this example. See the “Evaluate Designs” chapter.
Make the Table
Specify the order of runs in your data table using the Output Options panel. The default selection, Randomize within Blocks, is appropriate. This selection arranges the runs in a random order for each Station.
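As a sketch of what Randomize within Blocks does (using hypothetical run labels, not the actual design table), the following Python fragment shuffles the run order separately within each Station:

```python
# Hypothetical illustration of Randomize within Blocks: the run order is
# shuffled independently within each Station, so each block's four runs
# appear in random order while runs never move between stations.
import random

random.seed(1)  # fixed seed so the example is reproducible
runs_by_station = {1: ["r1", "r2", "r3", "r4"],
                   2: ["r5", "r6", "r7", "r8"],
                   3: ["r9", "r10", "r11", "r12"]}
for station, runs in runs_by_station.items():
    random.shuffle(runs)          # randomize order within this block only
    print(station, runs)
```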
Figure 3.9 Output Options
Output Options
1. Click Make Table.
The data table shown in Figure 3.10 opens. Keep in mind that, if you did not follow the steps in “Steps to Duplicate Results (Optional)”, your design table might be different. Your design table represents another optimal design.
Figure 3.10 Custom Design Table
Custom Design Table
Note the asterisks in the Columns panel to the right of the factors and response. These indicate column properties that have been saved to the columns in the data table. These column properties are used in the analysis of the data. For more information, see “Factors” and “Factor Column Properties”.
Run the Experiment
At this point, you perform the experiment. At each Station, four runs are conducted in the order shown in the design table. Equipment and material are reset between runs. For example, if two consecutive runs require water at 195 degrees, separate 12-ounce batches of water are heated to 195 degrees after the heating container cools. The Strength measurements are recorded.
Your design and the experimental results for Strength are given in the Coffee Data.jmp sample data table (Figure 3.11), located in the Design Experiment folder.
Figure 3.11 Coffee Design with Strength Results
Coffee Design with Strength Results
Analyze the Data
The Custom Design platform facilitates the task of data analysis by saving a Model script to the design table that it creates. See Figure 3.10. Run this script after you conduct your experiment and enter your data. The script opens a Fit Model window containing the effects that you specified in the Model outline of the Custom Design window.
Fit the Model
1. Select Help > Sample Data Library and open Design Experiment/Coffee Data.jmp.
In the Table panel, notice the Model script created by Custom Design.
2. Click the green triangle next to the Model script.
The Model Specification window shows the effects that you specified in the Model outline.
Figure 3.12 Model Specification Window
Model Specification Window
3. Select the Keep dialog open option.
4. Click Run.
Analyze the Model
The Effect Summary and Actual by Predicted Plot reports give high-level information about the model.
Figure 3.13 Effect Summary and Actual by Predicted Plot for Full Model
Effect Summary and Actual by Predicted Plot for Full Model
Note the following:
The Actual by Predicted Plot shows no evidence of lack of fit.
The model is significant, as indicated by the Actual by Predicted Plot. The notation P = 0.0041, shown below the plot, gives the significance level of the overall model test.
The Effect Summary report shows that Charge, Station, and Time are significant at the 0.05 level.
The Effect Summary report also shows that Temperature and Grind are not significant.
Reduce the Model
Because Temperature and Grind appear not to be active, they contribute random noise to the model. Refit the model without these effects to obtain more precise estimates of the model parameters associated with the active effects.
1. In the Model Specification window, select Temperature and Grind in the Construct Model Effects list.
2. Click Remove.
3. Change the Emphasis to Effect Screening.
The Effect Screening emphasis presents reports (such as the Prediction Profiler) that are useful for analyzing experimental designs.
4. Click Run.
Figure 3.14 Partial Results for Reduced Model
Partial Results for Reduced Model
Note the following:
The Effect Tests report shows that all three effects remain significant.
The Scaled Estimates report further indicates that the Station[1] and Station[3] means differ significantly from the average response of Strength.
Note that the Estimates that appear in the Parameter Estimates report are identical to their counterparts in the Scaled Estimates report. This is because the effects are coded. See “Coding” in the “Column Properties” appendix.
The estimate of the Station[3] effect only appears in the Scaled Estimates report, where nominal factors are expanded to show estimates for all their levels.
The Parameter Estimates report gives estimates for the model coefficients where the model is specified in terms of the coded effects.
Explore the Model
The Prediction Profiler appears at the bottom of the report.
Figure 3.15 Prediction Profiler
Prediction Profiler
Recall that, in designing your experiment, you set a response Goal of Match Target with limits of 1.2 and 1.4. JMP uses this information to construct a desirability function to reflect your specifications. For more details, see “Factors”.
Note the following in Figure 3.15:
The first two plots in the top row of the graph show how Strength varies for one of the factors, given the setting of the other factor. For example, when Charge is 2, the line in the plot for Time shows how predicted Strength changes with Time.
The values to the left of the top row of plots give the Predicted Strength (in red) and a confidence interval for the mean Strength for the selected factor settings.
The right-most plot in the top row shows the desirability function for Strength. The desirability function indicates that the target of 1.3 is most desirable. Desirability decreases as you move away from that target. Desirability is close to 0 at the limits of 1.2 and 1.4.
The plots in the bottom row show the desirability trace for each factor at the setting of the other factor.
The value to the left of the bottom row of plots gives the Desirability of the response value for the selected factor settings.
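The idea behind the Match Target desirability function can be sketched in Python. The linear ramp below is a simplification for illustration only (JMP's actual desirability curves are smooth functions); it uses the target of 1.3 and limits of 1.2 and 1.4 from this example:

```python
# Simplified Match Target desirability for Strength: desirability peaks
# at the target (1.3) and falls to 0 at the lower and upper limits
# (1.2 and 1.4). JMP's real desirability functions are smooth; this
# linear ramp only illustrates the shape described in the text.
def desirability(y, low=1.2, target=1.3, high=1.4):
    if y <= low or y >= high:
        return 0.0
    if y <= target:
        return (y - low) / (target - low)    # rising side of the ramp
    return (high - y) / (high - target)      # falling side of the ramp

print(desirability(1.3))             # 1.0 at the target
print(round(desirability(1.25), 3))  # 0.5 halfway between limit and target
print(desirability(1.45))            # 0.0 outside the limits
```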
Explore various factor settings by dragging the red dashed vertical lines in the columns for Time and Charge. Since there are no interactions in the model, the profiler indicates that increasing Charge increases Strength. Also, Strength seems to be more sensitive to changes in Charge than to changes in Time.
Since Station is a blocking factor, it does not appear in the Prediction Profiler. However, you might like to see how predicted Strength varies by Station. To include Station in the Prediction Profiler, follow these steps:
1. From the Prediction Profiler red triangle menu, select Reset Factor Grid.
A Factor Settings window appears with columns for Time, Charge, and Station. Under Station, notice that the box corresponding to Show is not selected. This indicates that Station is not shown in the Prediction Profiler.
2. Select the box under Station in the row corresponding to Show.
3. Deselect the box under Station in the row corresponding to Lock Factor Setting.
Figure 3.16 Factor Settings Window
Factor Settings Window
4. Click OK.
Plots for Station appear in the Prediction Profiler.
5. Click in either plot above Station to insert a dashed red vertical line.
6. Move the dashed red vertical line to Station 1.
Figure 3.17 Prediction Profiler Showing Results for Station 1
Prediction Profiler Showing Results for Station 1
7. Move the dashed red vertical line to Station 3.
Figure 3.18 Prediction Profiler Showing Results for Station 3
Prediction Profiler Showing Results for Station 3
The predicted Strength in the center of the design region for Station 1 is 1.44. For Station 3, the predicted Strength is about 1.18. The magnitude of the difference indicates that you need to address Station variability. Better control of Station variation should lead to more consistent Strength. Once Station consistency is achieved, you can determine common optimal settings for Time and Charge.
The process that you used to construct the design for the coffee experiment followed the steps in the DOE workflow. The next section describes the DOE workflow in more detail.
The DOE Workflow: Describe, Specify, Design
The DOE platforms are structured as a series of steps that present the workflow that is intrinsic to designing experiments. Once you complete each step, you click Continue to move to the next step. The elements described in this section are common to nine of the design of experiments platforms. These are the platforms that are addressed in this section:
Custom Design
Definitive Screening Design
Screening Design
Response Surface Design
Full Factorial Design
Mixture Design
Covering Array
Space Filling Design
Taguchi Arrays
Three special-purpose platforms differ substantially: Choice Designs, Accelerated Life Test Design, and Nonlinear Design. These three platforms are not addressed in this section.
This section describes the steps in the DOE workflow. It also discusses their implementation in the various design platforms.
Define Responses and Factors
In the Describe step of the experimental design framework:
You identify the responses and factors of interest.
You determine your goals for the experiment. Do you want to maximize the response, or hit a target? What is that target? Or do you simply want to identify which factors have an effect on the response?
You identify factor settings that describe your experimental range or design space.
When they open, most of the JMP DOE platforms display outlines where you can list your responses and your factors. The Responses outline is common across platforms. There you insert your responses and additional information, such as the response goal, lower limit, upper limit, and importance.
The Factors outline varies across platforms. This is to accommodate the types of factors and specific design situations that each platform addresses. In certain platforms, once responses and factors are entered, a Define Factor Constraints outline appears after you click Continue. In this outline, you can constrain the values of the factors that are available for the design.
Figure 3.19 shows the Responses and Factors outline using the Custom Design platform for constructing the design in the Box Corrosion Split-Plot.jmp sample data table, located in the Design Experiment folder. Also shown is the Define Factor Constraints outline, which appears once you click Continue and enables you to specify restrictions that your factor settings must satisfy.
Figure 3.19 Responses and Factors for Box Corrosion Split-Plot Experiment
Responses and Factors for Box Corrosion Split-Plot Experiment
Specify the Model
Once you have filled in the Responses and Factors outlines, click Continue. This brings you to the next phase of design construction, where you either explicitly or implicitly choose an assumed model.
The Custom Design platform enables you to explicitly specify the model that you want to fit. The design that is generated is optimal for this model. The other design platforms do not allow you to explicitly specify your model. For example, in the Screening Design platform, one option enables you to choose from a list of full factorial, fractional factorial, and Plackett-Burman designs. The aliasing relationships in these designs implicitly define the models that you can fit.
In Custom Design, when you click Continue after filling in the Responses and Factors, you see the Model outline. An example, for the design used in the Box Corrosion Split-Plot.jmp sample data table, is shown in Figure 3.20. The assumed model requires that the Furnace Temp and Coating main effects, and their interaction, be estimable. The design that is generated guarantees estimability of these effects.
In most other platforms, clicking Continue gives you a collection of designs to choose from. In Full Factorial, Continue takes you directly to Output Options, since the design is determined once the Factors outline is completed.
Figure 3.20 Model Outline for Box Corrosion Split-Plot Experiment
Model Outline for Box Corrosion Split-Plot Experiment
Generate the Design
Most of the DOE platforms give you some control over the size of the final design. In Custom Design, you can specify the number of runs and, when appropriate, the number of center points and replicate runs. In other platforms, you have various degrees of flexibility. Often you can specify the number of center points, replicate runs, or replicates of the design.
Once you have specified your options in terms of the number of runs, click Make Design. The DOE window is updated to show your design in a Design outline.
The Design outline for a 24-run custom design for the Box Corrosion Split-Plot.jmp experiment is shown in Figure 3.21. Because Changes for Furnace Temp was specified as Hard, a Whole Plots factor is constructed to represent the random blocks of settings for Furnace Temp.
Figure 3.21 Design Outline for Box Corrosion Split-Plot Experiment
Design Outline for Box Corrosion Split-Plot Experiment
Evaluate the Design
When you click Make Design, in most platforms, a Design Evaluation outline appears. Here you can explore the design that you created in terms of the following: its power to detect effects, its prediction variance, its estimation efficiency, its aliasing relationships, the correlations between effects, and other design efficiency measures. The Design Evaluation outline for a Custom Design is shown in Figure 3.22. Design Evaluation is covered in the “Evaluate Designs” chapter.
For some platforms, other types of design diagnostics are appropriate. For example, Space Filling Design provides a Design Diagnostics outline with metrics specific to space-filling designs. Covering Array provides a Metrics outline with measures that are specific to coverage.
Figure 3.22 Design Evaluation Outline in Custom Design
Design Evaluation Outline in Custom Design
Make the Table
Most platforms provide an Output Options node or panel. Depending on the platform and the design, you can use the Output Options panel to specify additional design structure. For example, you can specify the number of runs, center points, replicates, or the order in which you want the design runs to appear in the generated data table.
The Output Options panel shown in Figure 3.23 is for the experiment in the Wine.jmp sample data table, located in the Design Experiment folder. In this example, you can choose various Run Order options and construct the design data table. Or, you can choose to go Back and restructure your design.
Figure 3.23 Output Options Panel for Wine Experiment
Output Options Panel for Wine Experiment
Principles and Guidelines for Experimental Design
Certain principles underlie the design of experiments and the analysis of experimental data. The principles of effect hierarchy, effect heredity, and effect sparsity relate primarily to model selection. These principles help you reduce the set of possible models in searching for a best model. For details, see Hamada and Wu, 1992, Wu and Hamada, 2009, and Goos and Jones, 2011.
Effect Hierarchy
In regression modeling, the principle of effect hierarchy maintains that main (first-order) effects tend to account for the largest amounts of variation in the response. Second-order effects, that is, interaction effects and quadratic terms, are next in terms of accounting for variation. Then come higher-order terms, in hierarchical order.
Here are the implications for modeling: main effects are more likely to be important than second-order effects; second-order effects are more likely to be important than third-order effects; and so on, for higher-order terms.
Effect Heredity
The principle of effect heredity relates to the inclusion in the model of lower-order components of higher-order effects. The motivation for this principle is observational evidence that factors with small main effects tend not to have significant interaction effects.
Strong effect heredity requires that all lower-order components of a model effect be included in the model. Suppose that a three-way interaction (ABC) is in the model. Then all of its component main effects and two-way interactions (A, B, C, AB, AC, BC) must also be in the model.
Weak effect heredity requires that only a sequence of lower-order components of a model effect be included. If a three-way interaction is in the model, then the model must contain one of the factors involved and one two-way interaction involving that factor. Suppose that the three-way interaction ABC is in the model. Then if B and BC are also in the model, the model satisfies weak effect heredity.
For continuous factors, effect heredity ensures that the model is invariant to changes in the location and scale of the factors.
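The two heredity rules can be expressed as a short check. The following Python sketch (an illustration, not a JMP feature) represents each model term as a set of factor names and tests strong heredity; weak heredity would relax the requirement from all lower-order components to a chain of them:

```python
# Strong effect heredity: every lower-order component of every term in
# the model must itself be in the model (ABC requires A, B, C, AB, AC, BC).
from itertools import combinations

def lower_order_terms(term):
    """All proper non-empty sub-terms, e.g. ABC -> A, B, C, AB, AC, BC."""
    return [frozenset(c)
            for r in range(1, len(term))
            for c in combinations(sorted(term), r)]

def strong_heredity(model):
    model = {frozenset(t) for t in model}
    return all(sub in model
               for term in model
               for sub in lower_order_terms(term))

full = [{"A"}, {"B"}, {"C"}, {"A", "B"}, {"A", "C"}, {"B", "C"},
        {"A", "B", "C"}]
print(strong_heredity(full))                # True: all components present
print(strong_heredity([{"A"}, {"A", "B"}]))  # False: main effect B missing
```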
Effect Sparsity
The principle of effect sparsity asserts that most of the variation in the response is explained by a relatively small number of effects. Screening designs, where many effects are studied, rely heavily on effect sparsity. Experience shows that the number of runs used in a screening design should be at least twice the number of effects that are likely to be significant.
Center Points, Replicate Runs, and Testing
Several DOE platforms enable you to add center points (for continuous factors), replicate runs, or full replicates of the design. Here is some background on adding design points.
Adding Center Points
Center points for continuous factors enable you to test for lack of fit due to nonlinear effects. Testing for lack of fit helps you determine whether the error variance estimate has been inflated due to a missing model term. This can be a wise investment of runs.
You can replicate runs solely at center points, or you can replicate other design runs. JMP uses replicate runs to construct a model-independent error estimate (a pure-error estimate). This pure-error estimate enables you to test for lack of fit.
Be aware that center points do not help you obtain more precise estimates of model effects. They enable you to test for evidence of curvature, but do not identify the responsible nonlinear effects.
To identify the source of curvature, you must set continuous factors at a minimum of three levels. Definitive screening designs are three-level designs with the ability to detect and identify any factors causing strong nonlinear effects on the response. For details, see Chapter 7, “Definitive Screening Designs”.
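A small numeric sketch, with made-up response values, shows how a center point reveals curvature without identifying its source:

```python
# Illustration with hypothetical data: for a factor run at -1 and +1, a
# straight-line fit predicts the average of the two level means at the
# center of the range. A center-point mean far from that average signals
# a missing nonlinear (quadratic) term.
y_low, y_high = 1.18, 1.42        # mean response at the two factor levels
y_center = 1.21                    # mean response at the center point

linear_prediction_at_center = (y_low + y_high) / 2   # midpoint prediction
curvature = y_center - linear_prediction_at_center   # evidence of curvature
print(round(curvature, 2))  # -0.09
```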
Adding Replicate Runs
If your run budget allows, you can either replicate runs or distribute new runs optimally within the design space. Adding replicate runs adds precision for some estimates and improves the power of the lack of fit test. However, for a given run budget, adding replicate runs generally lowers the ability of the design to estimate model effects. You are not able to estimate as many terms as you could by distributing the runs optimally within the design space.
Testing for Lack of Fit
Designed experiments are typically constructed to require as few runs as possible, consistent with the goals of the experiment. With too few runs, only extremely large effects can be detected. For a given effect, the t-test statistic is the ratio of the estimated effect to its standard error. If there is only one error degree of freedom (df), the critical value of the test exceeds 12. So, for such a nearly saturated design to detect an effect, the effect must be very large.
A similar observation applies to the lack-of-fit test. The power of this test to detect lack of fit depends on the number of degrees of freedom in the numerator and denominator. If you have only 1 df of each kind, you need an F value that exceeds 161 to declare significance at the 0.05 level. If you have 2 df of each kind, the F value must exceed 19. For the test to be significant in this second case, the lack-of-fit mean square must be 19 times larger than the pure-error mean square. The lack-of-fit test is also sensitive to outliers.
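The critical values quoted above can be verified with the closed forms that happen to exist for these small degrees of freedom: t with 1 df is the Cauchy distribution, F(1,1) is its square, and F(2,2) has CDF x/(1+x):

```python
# Check the quoted critical values using closed-form quantiles.
import math

# Two-sided 0.05 critical t with 1 df: the Cauchy 0.975 quantile.
t_crit = math.tan(math.pi * (0.975 - 0.5))
# 0.05 critical F with (1, 1) df: the square of the t critical value.
f_1_1 = t_crit ** 2
# 0.05 critical F with (2, 2) df: CDF is x/(1+x), so the quantile is p/(1-p).
f_2_2 = 0.95 / 0.05

print(round(t_crit, 2))  # 12.71
print(round(f_1_1, 1))   # 161.4
print(round(f_2_2, 1))   # 19.0
```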
For details about the Lack of Fit test, see the Standard Least Squares chapter in the Fitting Linear Models book.
Determining the Number of Runs
In industrial applications, each run is often very costly, so there is incentive to minimize the number of runs. To estimate the fixed effects of interest, you need only as many runs as there are terms in the model. To determine whether the effects are active, you need a reasonable estimate of the error variance. Unless you already have a good estimate of this variance, consider adding at least 4 runs to the number required to estimate the model terms.