Overview of Comparing Designs
The Compare Designs platform, which is an extension of the Evaluate Design platform, enables you to easily compare two or three designs. To compare the performance of one or two designs relative to another, you select a reference design that is treated as the base design. You can specify effects in the Model outline, and effects of interest in the Alias Terms outline.
The Design Evaluation report shows diagnostic results and plots covering these areas:
Power analysis
Prediction variance
Fraction of design space
Relative estimation efficiency
Alias matrix diagnostics
Correlations among effects (including confounding)
Relative efficiency measures for the overall designs
Examples of Comparing Designs
This section contains three examples:
Designs of Same Run Size
In this example, you compare two 13-run designs, each for six factors. One is a 12-run Plackett-Burman (PB) design augmented with a single center point. The other is a Definitive Screening Design (DSD).
Comparison in Terms of Main Effects Only
First, compare the two designs assuming that the model to be estimated contains only the main effects.
1. Select Help > Sample Data, click Open the Sample Scripts Directory, and select Compare Same Run Size.jsl.
2. Right-click in the script window and select Run Script.
Two 13-run design tables are constructed: Definitive Screening Design and Plackett-Burman. You want to compare these two designs. Because the Plackett-Burman table is active, it is the reference design to which you compare the DSD.
3. In the Plackett-Burman data table, select DOE > Design Diagnostics > Compare Designs.
4. Select Definitive Screening Design from the Compare ‘Plackett-Burman’ with list.
5. Select X1 through X6 in the Plackett-Burman panel and in the Definitive Screening Design panel.
6. Open the Match Columns outline and click Match.
Figure 16.2 Launch Window with Matched Columns
This defines the correspondence between the factors in your two designs.
7. Click OK.
The reference design is the Plackett-Burman design. In the Design Evaluation outline, comparison metrics compare the PB to the DSD. The designs are compared relative to power, prediction variance, estimation efficiency, aliasing, and design efficiency measures.
Figure 16.3 Power Analysis for PB and DSD Comparison
In terms of power, prediction variance, and estimation efficiency, the PB design outperforms the DSD. Figure 16.3 shows the Power Analysis report with the default settings for the significance level, Anticipated RMSE, and coefficients. For tests for the main effects, the PB design has higher power than does the DSD.
Figure 16.4 Fraction of Design Space Plot for PB and DSD Comparison
The Fraction of Design Space plot indicates that the PB design has smaller prediction variance than the DSD over the entire design space.
You conclude that, if you suspect that only main effects are active, the PB design is preferable.
Comparison in Terms of Two-Way Interactions
Now suppose you suspect that some two-way interactions might be active. The analysis below shows that if those two-way interactions are actually active, then the PB design might be less desirable than the DSD.
1. In the Absolute Correlations report, open the Color Map on Correlations report and the color map reports under it.
Figure 16.5 Color Maps for PB and DSD Comparison
The Color Map on Correlations plots in Figure 16.5 show that the PB design aliases main effects with two-way interactions. In contrast, the DSD does not alias main effects with two-way interactions.
To gain more insight on how the designs compare if some two-way interactions are active, add two-way interactions in the Model outline.
2. In the Factors outline, select X1 through X3.
3. In the Model outline, select Interactions > 2nd.
Figure 16.6 Power Analysis for PB and DSD Comparison with Interactions
The Term list shows the three two-way interactions. If these two-way interactions are active, then the DSD has higher power than the PB design across all effects.
Figure 16.7 Prediction Variance for PB and DSD Comparison with Interactions
The DSD also outperforms the PB design in terms of prediction variance with the three interactions in the model. You can explore the other reports to see that the DSD is preferred when there are potentially active interactions.
Designs of Different Run Sizes
In this example, compare three designs with run sizes 16, 20, and 24. The designs are constructed for main effect models. Use the Compare Designs platform to determine whether the potential benefits of using a larger run size are worth the additional cost in resources.
1. Select Help > Sample Data, click Open the Sample Scripts Directory, and select Compare Three Run Sizes.jsl.
2. Right-click in the script window and select Run Script.
Three design tables are constructed using Custom Design, with only main effects as entries in the Model outline:
16-Run Design
20-Run Design
24-Run Design
You want to compare these three designs. Notice that the 16-Run Design table is active.
3. In the 16-Run Design table, select DOE > Design Diagnostics > Compare Designs.
4. From the Compare ‘16-Run Design’ with list, select 20-Run Design and 24-Run Design.
Panels for each of these designs are added to the launch window. JMP automatically matches the columns in the order in which they appear in the three design tables.
5. Click OK.
Figure 16.8 Power Analysis Comparison
All three designs have high power for detecting main effects if the coefficients are on the order of the Anticipated RMSE.
Figure 16.9 Fraction of Design Space Comparison
As expected, the 24-run design is superior to the other two designs in terms of prediction variance over the entire design space. The 20-run design is superior to the 16-run design.
6. In the Absolute Correlations report, open the Color Map on Correlations report and the three color map reports under it.
Figure 16.10 Color Map on Correlations Comparison
For the 16-run design, the Color Map on Correlations indicates that there is confounding of some main effects with some two-factor interactions, and confounding of two-factor interactions.
For the 20-run design, the Color Map on Correlations indicates that there are some large correlations between some main effects and some two-factor interactions, and between some two-factor interactions.
The 24-run design shows only moderate correlations between main effects and two-factor interactions, and between two-factor interactions.
Figure 16.11 Absolute Correlations Comparison
The Absolute Correlations table summarizes the information shown in the Color Maps on Correlations. Recall that the model for all three designs consists of only main effects and the Alias Matrix contains two-factor interactions.
For the 16-run design, the Model x Alias portion of the table indicates that there are nine confoundings of main effects with two-factor interactions. The Alias x Alias portion indicates that six two-factor interactions are confounded.
Figure 16.12 Design Diagnostics Comparison
The Design Diagnostics report compares the efficiency of the 16-run design to both the 20-run and 24-run designs in terms of several efficiency measures. Relative efficiency values that exceed 1 indicate that the reference design is preferable for the given measure. Values less than 1 indicate that the design being compared to the reference design is preferable. The 16-run design has lower efficiency than the other two designs across all metrics, indicating that the larger designs are preferable.
7. In the Factors outline, select X1 through X3.
8. In the Model outline, select Interactions > 2nd.
An Inestimable Terms window appears, telling you that the 16-run design cannot fit one of the effects that you just added to the model (X1*X2).
9. Click OK.
The other two effects, X1*X3 and X2*X3, are added to the Compare Design report. You can examine the report to compare the designs if the two interactions are active.
Split Plot Designs with Different Numbers of Whole Plots
In this example, compare two split-plot designs with different numbers of whole plots. The designs are for three factors:
A continuous hard-to-change factor
A continuous easy-to-change factor
A three-level categorical easy-to-change factor
The designs include all two-factor interactions in the assumed model. You can afford 20 runs and want to compare using 4 or 8 whole plots.
Launch Compare Designs
1. Select Help > Sample Data, click Open the Sample Scripts Directory, and select Compare Split Plots.jsl.
2. Right-click in the script window and select Run Script.
Two design tables are constructed using Custom Design:
4 Whole Plots
8 Whole Plots
You want to compare these two designs. Notice that the 4 Whole Plots table is active.
3. In the 4 Whole Plots table, select DOE > Design Diagnostics > Compare Designs.
4. From the Compare ‘4 Whole Plots’ with list, select 8 Whole Plots.
A panel for this design is added to the launch window. JMP automatically matches the columns in the order in which they appear in the two design tables.
Figure 16.13 Completed Launch Window
5. Click OK.
6. Open the Matching Specification outline under Reference Design: 20 run ‘4 Whole Plots’.
Figure 16.14 Matching Specification for Split-Plot Designs
Notice that the Whole Plots column is entered as part of the design. This is necessary because Compare Designs needs to know the whole plot structure.
Examine the Report
The Design Evaluation report provides various diagnostics that compare the two designs.
Figure 16.15 Power Analysis for Two Split-Plot Designs
The Power Analysis report shows that the power for the whole-plot factor, X1, is much smaller for the four whole-plot design (0.19) than for the eight whole-plot design (0.497). However, the four whole-plot design has higher power to detect split-plot effects, especially the interaction of the two split-plot factors, X2*X3 (0.797 compared to 0.523). Notice that the power for the combined effect X2*X3 is given under the color bar and legend.
Figure 16.16 Relative Estimation Efficiency Comparing Split-Plot Designs
The Relative Estimation Efficiency report shows the relative estimation efficiency for X1 to be 0.778. This indicates that the standard error for X1 is notably larger for the four whole-plot design than for the eight whole-plot design.
Open the Relative Std Error of Estimates report. You can see that the relative standard error for X1 is 0.553 in the four whole-plot design, compared to 0.43 in the eight whole-plot design.
In the Relative Estimation Efficiency report, the relative estimation efficiency for X2*X3 2 is 1.449, indicating that the standard error for the parameter associated with X2*X3 2 is notably larger for the eight whole-plot design than for the four whole-plot design.
The Power Analysis and the Relative Estimation Efficiency reports indicate that the choice between the designs revolves around the importance of detecting the whole-plot effect, X1. The eight whole-plot design gives you a better chance of detecting a whole-plot effect. The four whole-plot design is somewhat better for detecting split-plot effects involving the categorical variable.
Compare Designs Launch Window
Launch the Compare Designs platform by selecting DOE > Design Diagnostics > Compare Designs. All open data tables appear in the list at the left. The active data table and its columns appear in a Source Columns panel. The design in the initial Source Columns panel is the reference design, namely, the design to which other designs are compared. When you add designs to compare to the reference design, their columns appear in panels under the reference design panel.
Figure 16.17 shows the launch window for the three designs in “Designs of Different Run Sizes”.
Figure 16.17 Compare Designs Launch Window
Design Table Selection
Select one or two design tables from the list on the left.
To compare two designs to the reference design, you must select their design tables simultaneously from the list on the left.
To replace a design (or designs) in the Source Columns list, select the desired table (or tables) from the list at the left. The design table (or tables) under the reference design table are replaced.
Note: The reference design table can be compared to itself, which can be useful when exploring the assignment of design columns to factors.
Match Columns
Specify which columns in each of the design tables correspond to each other in the Match Columns panel. To match columns, select the columns to match in each of the design table Source Columns lists, and then click Match.
Figure 16.18 Selection of Columns for Matching
To match single columns in each list, select the single column in each list, and then click Match.
To match several columns that appear in the correct matching order in each list, select them in each list. Click the Match button. They are matched in their list order. See Figure 16.18. In this example, Feed Rate is matched with X1, and Catalyst is matched with X3.
If the lists contain the same numbers of columns and your desired match order is their order of appearance in the lists, you do not have to click Match. When you click OK to run the launch window, JMP matches the columns automatically in their order of appearance. You can review the matching in the report’s Matching Specification outline.
Compare Designs Window: Specify Model and Alias Terms
The Compare Designs window consists of two sets of outlines:
Specify which effects are in the model and which effects are potentially active using the Factors, Model, and Alias Terms outlines.
Compare the designs using the diagnostics in the Design Evaluation outlines. Changes that you make in the Model and Alias Terms outlines are updated in the Design Evaluation report.
The Compare Designs report uses the column names from the reference design.
This section describes the Reference Design, Factors, Model, and Alias Terms outlines. See “Compare Designs Window: Design Evaluation” for a description of the Design Evaluation outlines.
Reference Design
The name of the window for the reference design appears in the outline title. The Matching Specification outline lists the specifications that you entered in the launch window.
Factors
Use the Factors outline to add effects to the Model and Alias Terms lists.
The Factors outline lists the factors and their coded values, using the column names from the reference design. Because they are not factors, whole plot and subplot columns do not appear in the Factors outline. However, they are required for the analysis.
Model
Add or remove effects to compare your designs for the effects that you believe should be in the model. The Model outline initially lists effects that are in the Model script of the reference design table and that are estimated by all designs being compared. If there is no Model script in the reference design table, the Model outline shows only the main effects that can be estimated by all designs being compared. For details about how to add and remove effects, see “Model” in the “Evaluate Designs” chapter.
Note: If any of the designs are supersaturated, meaning that the number of parameters to be estimated exceeds the number of runs, the Model outline lists only a set of effects that can be estimated.
Alias Terms
Add or remove effects to compare your designs for effects that might be active. The Alias Terms outline initially contains all two-factor interactions that are not in the Model outline. The effects in this outline impact the calculations in the Alias Matrix Summary and Absolute Correlations outline. See “Alias Matrix Summary” and “Absolute Correlations”.
For details about how to add and remove effects, see “Alias Terms” in the “Evaluate Designs” chapter.
Compare Designs Window: Design Evaluation
The Design Evaluation report consists of the outlines described in the following sections.
Color Dashboard
Several of the Design Evaluation outlines show values colored according to a color bar. The colors are applied to diagnostic measures and they help you see which values (and designs) reflect good or bad behavior. You can edit the legend values to apply colors that reflect your definitions of good and bad behavior.
Figure 16.19 Color Dashboard
You can modify the color bar by selecting these two options in the red triangle menu for the outline or by right-clicking the color bar:
Show Legend Values
Shows or hides the values that appear under the color bar.
Edit Legend Values
Specify the values that define the colors.
Power Analysis
Power is the probability of detecting an active effect of a given size. The Power Analysis report helps you evaluate and compare the ability of your designs to detect effects of practical importance. For each of your designs, the Power Analysis report calculates the power of tests for the effects in the Model outline.
The Power Analysis report gives the power of tests for individual model parameters and for whole effects. It also provides a Power Plot and a Power versus Sample Size plot.
Power depends on the number of runs, the significance level, and the estimated error variation. For details about how power is calculated, see “Power Calculations” in the “Technical Details” appendix.
Figure 16.20 Power Analysis Outline for Three Designs
Figure 16.20 shows the Power Analysis outline for the three designs constructed in “Designs of Different Run Sizes”. Two two-way interactions have been added to the Model outline.
Power Analysis Report
When you specify values for the Significance Level and Anticipated RMSE, they are used to calculate the power of the tests for the model parameters. Enter coefficient values that reflect differences that you want to detect as Anticipated Coefficients. To update the results for all designs, click Apply Changes to Anticipated Coefficients.
Significance Level
The probability of rejecting the null hypothesis of no effect when it is true. The power calculations update immediately when you enter a value.
Anticipated RMSE
An estimate of the square root of the error variation. The power calculations update immediately when you enter a value.
The power values are colored according to a color gradient that appears under the Apply Changes to Anticipated Coefficients button. You can control the color legend using the options in the Power Analysis red triangle menu. See “Color Dashboard”.
For details about the Power Plots, see “Power Plot”.
Note: If the design is supersaturated, meaning that the number of parameters to be estimated exceeds the number of runs, the Power Analysis outline lists only a set of effects that can be estimated.
Tests for Individual Parameters
The Term column contains a list of model terms. For each term, the Anticipated Coefficient column contains a value for that term. The Power value is the power of the test that the coefficient for the term is zero, assuming that the true coefficient equals the Anticipated Coefficient, given the design and the terms in the Model outline.
Term
The model term associated with the coefficient being tested.
Anticipated Coefficient
A value for the coefficient associated with the model term. This value is used in the calculations for Power. When you set a new value in the Anticipated Coefficient column, click Apply Changes to Anticipated Coefficients to update the Power calculations.
Note: The anticipated coefficients have default values of 1 for continuous effects. They have alternating values of 1 and –1 for categorical effects.
Power
The probability of rejecting the null hypothesis of no effect when the true coefficient value is given by the specified Anticipated Coefficient.
For a coefficient associated with a numeric factor, the change in the mean response (based on the model) is twice the coefficient value.
For a coefficient associated with a categorical factor, the change in the mean response (based on the model) across the levels of the factor equals twice the absolute value of the anticipated coefficient.
Calculations use the specified Significance Level and Anticipated RMSE. For details about the power calculation, see “Power for a Single Parameter” in the “Technical Details” appendix. A computational sketch of this calculation appears after this list.
Apply Changes to Anticipated Coefficients
When you set a new value in the Anticipated Coefficient column, click Apply Changes to Anticipated Coefficients to update the Power values.
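The following sketch illustrates how a Power value like the ones above can be computed for a single parameter, using the standard noncentral-F calculation. It is written in Python purely for illustration (it is not JSL and not JMP's implementation), and the design matrix, Anticipated Coefficient, and Anticipated RMSE are assumed values. JMP's exact formulas are in "Power for a Single Parameter" in the "Technical Details" appendix.

```python
import numpy as np
from scipy import stats

# Hypothetical 8-run, three-factor main-effects design (illustration only).
X = np.array([
    [1, -1, -1, -1],
    [1, -1, -1,  1],
    [1, -1,  1, -1],
    [1, -1,  1,  1],
    [1,  1, -1, -1],
    [1,  1, -1,  1],
    [1,  1,  1, -1],
    [1,  1,  1,  1],
], dtype=float)                      # columns: intercept, X1, X2, X3

alpha = 0.05                         # Significance Level
rmse = 1.0                           # Anticipated RMSE
beta = 1.0                           # Anticipated Coefficient for X1
n, p = X.shape

# Variance of the X1 estimate relative to the error variance.
rel_var = np.linalg.inv(X.T @ X)[1, 1]

# Noncentrality parameter and power of the test that the X1 coefficient is zero.
ncp = beta**2 / (rmse**2 * rel_var)
f_crit = stats.f.ppf(1 - alpha, 1, n - p)
power = 1 - stats.ncf.cdf(f_crit, 1, n - p, ncp)
print(f"Power for X1: {power:.3f}")
```

Increasing the anticipated coefficient, the run size, or the significance level raises the computed power, which is the behavior you see when you edit those fields in the report.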
Tests for Categorical Effects with More Than Two Levels
If your model contains a categorical effect with more than two levels, then the following columns appear below the Apply Changes to Anticipated Coefficients button:
Effect
The categorical effect.
Power
The power calculation for a test of no effect. The null hypothesis for the test is that all model parameters corresponding to the effect are zero. The difference to be detected is defined by the values in the Anticipated Coefficient column that correspond to the model terms for the effect. The power calculation reflects the differences in response means determined by the anticipated coefficients.
Calculations use the specified Significance Level and Anticipated RMSE. For details about the power calculation, see “Power for a Categorical Effect” in the “Technical Details” appendix.
Power Plot
The Power Plot shows the power values from the Power Analysis in graphical form. The plot shows the power for each effect and for each design in a side-by-side bar chart.
Figure 16.21 Power Plot for Three Designs
The Power Plot in Figure 16.21 is for the three designs constructed in “Designs of Different Run Sizes”. Two two-way interactions have been added to the Model outline.
Power versus Sample Size
The Power versus Sample Size profiler appears only when the designs that you are comparing differ in run size. The profiler enables you to see how sample size affects power for each effect in the model. It conveys the same information as the Power Plot, but in a different format. The power values at integer sample sizes are connected with line segments.
Figure 16.22 Power versus Sample Size Profiler for Three Designs
The Power versus Sample Size profiler in Figure 16.22 is for the three designs constructed in “Designs of Different Run Sizes”. Two two-way interactions have been added to the Model outline. Notice that the power for X4 increases more dramatically with sample size than does the power for other factors.
Prediction Variance Profile
The Prediction Variance Profile outline shows profilers of the relative variance of prediction for each design being compared. Each plot shows the relative variance of prediction as a function of each factor at fixed values of the other factors.
To find the maximum value of the relative prediction variance over the design space for all designs, select the Optimization and Desirability > Maximize Desirability option from the red triangle next to Prediction Variance Profile. For more details, see “Maximize Desirability”.
Figure 16.23 Prediction Variance Profile for Three Designs
The Prediction Variance Profile plot in Figure 16.23 is for the three designs constructed in “Designs of Different Run Sizes”. Two two-way interactions, X1*X3 and X2*X3, have been added to the Model outline. The initial value for each continuous factor in the plot is the midpoint of its design settings. The Variance values to the left indicate that, as the number of runs increases, the variance decreases at the center point.
Relative Prediction Variance
For given settings of the factors, the prediction variance is the product of the error variance and a quantity that depends on the design and the factor settings. Before you run your experiment, the error variance is unknown, so the prediction variance is also unknown. However, the ratio of the prediction variance to the error variance is not a function of the error variance. This ratio, called the relative prediction variance, depends only on the design and the factor settings. Consequently, the relative variance of prediction can be calculated before acquiring the data. For details, see “Relative Prediction Variance” in the “Technical Details” appendix.
After you run your experiment and fit a least squares model, you can estimate the error variance using the mean squared error (MSE) of the model fit. You can then estimate the actual variance of prediction at any setting by multiplying the relative variance of prediction at that setting by the MSE.
Ideally, the prediction variance is small throughout the design space. Generally, the relative prediction variance decreases as the sample size increases. In comparing designs, a design with lower prediction variance on average is preferable.
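The following Python sketch (for illustration only, not JMP code) computes the relative prediction variance for an assumed two-factor design using the standard form f(x)'(X'X)^-1 f(x), where f(x) is the model-expanded row for a factor setting, and then scales it by an assumed MSE to estimate the actual prediction variance.

```python
import numpy as np

# Hypothetical 2x2 factorial with an intercept + main-effects model (illustration only).
X = np.array([
    [1, -1, -1],
    [1, -1,  1],
    [1,  1, -1],
    [1,  1,  1],
], dtype=float)

XtX_inv = np.linalg.inv(X.T @ X)

def relative_prediction_variance(x1, x2):
    """Relative prediction variance f(x)' (X'X)^-1 f(x) at a factor setting."""
    f = np.array([1.0, x1, x2])      # model-expanded row for this setting
    return f @ XtX_inv @ f

v_center = relative_prediction_variance(0, 0)
print(f"Relative prediction variance at the center point: {v_center:.3f}")

# After the experiment, multiply by the MSE of the fitted model to estimate
# the actual prediction variance at that setting.
mse = 2.5                            # assumed mean squared error
print(f"Estimated prediction variance at the center point: {v_center * mse:.3f}")
```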
Maximize Desirability
You can also evaluate a design or compare designs in terms of the maximum relative prediction variance. Select the Optimization and Desirability > Maximize Desirability option from the red triangle next to Prediction Variance Profile. JMP uses a desirability function that maximizes the relative prediction variance. The value of the Variance in the Prediction Variance Profile is the worst (least desirable from a design point of view) value of the relative prediction variance.
Figure 16.24 Prediction Variance Profile Showing Maximum Variance for Three Designs
Figure 16.24 shows the Prediction Variance Profile after Maximize Desirability was selected for the three designs constructed in “Designs of Different Run Sizes”. As expected, the maximum relative prediction variance decreases as the run size increases. The plot also shows values of the factors that give this worst-case relative variance. However, keep in mind that many settings can lead to this same maximum relative variance.
Fraction of Design Space Plot
The Fraction of Design Space Plot shows the proportion of the design space over which the relative prediction variance lies below a given value.
Figure 16.25 Fraction of Design Space Plot for Three Designs
Figure 16.25 shows the Fraction of Design Space plot for the three designs constructed in “Designs of Different Run Sizes”. Note the following:
The X axis in the plot represents the proportion of the design space, ranging from 0 to 100%.
The Y axis represents relative prediction variance values.
For a point $(x, y)$ that falls on a given curve, the value $x$ is the proportion of design space with variance less than or equal to $y$.
Red dotted crosshairs mark the value that bounds the relative prediction variance for 50% of design space for the reference design.
Figure 16.25 shows that the relative prediction variance for the 24-run design is uniformly smaller than for the other two designs. The 20-run design has uniformly smaller prediction variance than the 16-run design. The red dotted crosshairs indicate that the relative prediction variance for the 20-run design is less than about 0.23 over about 50% of the design space.
You can use the crosshairs tool to find the maximum relative prediction variance that corresponds to any Fraction of Space value. For example, use the crosshairs tool to see that for the 24-run design, 90% of the prediction variance values are below approximately 0.20.
Note: Plots for the same design might vary slightly, since Monte Carlo sampling of the design space is used in constructing the Fraction of Design Space Plot.
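The Python sketch below (illustration only) shows the kind of Monte Carlo summary that underlies a Fraction of Design Space curve for an assumed design: sample the design space at random, compute the relative prediction variance at each point, and read off quantiles. JMP's sampling scheme may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2^3 full factorial with an intercept + main-effects model (illustration only).
runs = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)], dtype=float)
X = np.column_stack([np.ones(len(runs)), runs])
XtX_inv = np.linalg.inv(X.T @ X)

# Sample the cubic design space at random and compute the relative prediction variance v(x).
pts = rng.uniform(-1, 1, size=(5000, 3))
F = np.column_stack([np.ones(len(pts)), pts])             # model-expanded rows
v = np.einsum("ij,jk,ik->i", F, XtX_inv, F)

# Plotting the sorted variances against the cumulative fraction (1/N, 2/N, ..., 1)
# reproduces the Fraction of Design Space curve.
v_sorted = np.sort(v)
fraction = np.arange(1, len(v) + 1) / len(v)

print(f"Variance bounding 50% of the design space: {np.quantile(v, 0.50):.3f}")
print(f"Variance bounding 90% of the design space: {np.quantile(v, 0.90):.3f}")
```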
Relative Estimation Efficiency
The Relative Estimation Efficiency report compares designs in terms of the standard errors of parameter estimates for parameters in the assumed model. The standard errors control the length of confidence intervals for the parameter estimates. This report provides an efficiency ratio and the relative standard errors.
The relative estimation efficiency values are colored according to a color gradient shown under the table of relative estimation efficiency values. You can control the color legend using the options in the Relative Estimation Efficiency red triangle menu. See “Color Dashboard”.
Figure 16.26 Relative Estimation Efficiency Comparing Two Split-Plot Designs
Figure 16.26 shows the Relative Estimation Efficiency outline for the split-plot designs compared in “Split Plot Designs with Different Numbers of Whole Plots”.
Relative Estimation Efficiency
For a given term, the estimation efficiency of the reference design relative to a comparison design is the relative standard error of the term for the comparison design divided by the relative standard error of the term for the reference design. A value less than one indicates that the reference design is less efficient than the comparison design. A value greater than one indicates that the reference design is more efficient.
Relative Standard Error of Estimates
The Relative Std Error of Estimates report gives the ratio of the standard deviation of a parameter’s estimate to the error standard deviation. These values indicate how large the standard errors of the model’s parameter estimates are, relative to the error standard deviation. For the ith parameter estimate, the Relative Std Error of Estimate is defined as follows:
$\sqrt{\left[(X'X)^{-1}\right]_{ii}}$
where:
$\left[(X'X)^{-1}\right]_{ii}$ is the ith diagonal entry of $(X'X)^{-1}$.
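The following Python sketch (illustration only, not JMP code) computes these relative standard errors for two assumed one-factor designs and then forms the relative estimation efficiency described above.

```python
import numpy as np

def relative_std_errors(X):
    """Square roots of the diagonal of (X'X)^-1: standard errors relative to the error SD."""
    return np.sqrt(np.diag(np.linalg.inv(X.T @ X)))

# Two hypothetical main-effects designs for one factor (illustration only):
# a 4-run reference design and a 6-run comparison design.
X_ref  = np.column_stack([np.ones(4), [-1.0, -1.0, 1.0, 1.0]])
X_comp = np.column_stack([np.ones(6), [-1.0, -1.0, -1.0, 1.0, 1.0, 1.0]])

se_ref = relative_std_errors(X_ref)
se_comp = relative_std_errors(X_comp)

# Estimation efficiency of the reference design relative to the comparison design:
# comparison standard error divided by reference standard error.
rel_efficiency = se_comp / se_ref
print("Relative std errors (reference): ", np.round(se_ref, 3))
print("Relative std errors (comparison):", np.round(se_comp, 3))
print("Relative estimation efficiency:  ", np.round(rel_efficiency, 3))
```

Because the 6-run design estimates each coefficient more precisely, the efficiency ratios are less than one, matching the interpretation given above.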
Alias Matrix Summary
The alias matrix addresses the issue of how terms that are not included in the model affect the estimation of the model terms, if they are indeed active. In the Alias Terms outline, you list potentially active effects that are not in your assumed model but that might bias the estimates of model terms. The alias matrix entries represent the degree of bias imparted to model parameters by the Alias Terms effects. See “Alias Terms” and “Alias Matrix”.
The Alias Matrix Summary table lists the terms in the assumed model. These are the terms that correspond to effects listed in the Model outline. Given a design, for each entry in the Term column, the square root of the mean of the squared alias matrix entries for the terms corresponding to effects in the Alias Terms outline is computed. This value is reported in the Root Mean Squared Values column for the given design. For an example, see “Example of Calculation of Alias Matrix Summary Values”.
Note: The Alias Matrix Summary report appears only if there are effects in the Alias Terms list.
Figure 16.27 Alias Matrix Summary for Two Designs
Figure 16.27 shows the Alias Matrix Summary report for the Plackett-Burman and Definitive Screening designs constructed in “Designs of Same Run Size”, with only main effects in the Model outline. All two-factor interactions are in the Alias Terms list. The table shows that, for the Definitive Screening Design, main effects are uncorrelated with two-factor interactions.
The Root Mean Squared Values are colored according to a color gradient shown under the Alias Matrix Summary table. You can control the color legend using the options in the Alias Matrix Summary red triangle menu. See “Color Dashboard”.
Alias Matrix
The rows of the Alias Matrix are the terms corresponding to the model effects listed in the Model outline. The columns are terms corresponding to effects listed in the Alias Terms outline. The entry in a given row and column indicates the degree to which the alias term affects the parameter estimate corresponding to the model term.
In evaluating your design, you ideally want one of two situations to occur relative to any entry in the Alias Matrix. Either the entry is small or, if it is not small, the effect of the alias term is small so that the bias is small. If you suspect that the alias term might have a substantial effect, then that term should be included in the model or you should consider an alias-optimal design. In fact, alias-optimality is driven by the squared values of the alias matrix.
For additional background on the Alias Matrix, see “The Alias Matrix” in the “Technical Details” appendix. See also Lekivetz, R. (2014).
Example of Calculation of Alias Matrix Summary Values
This example illustrates the calculation of the values that appear in the Alias Matrix Summary outline. In this example, you compare the two designs assuming that only main effects are active.
1. Select Help > Sample Data, click Open the Sample Scripts Directory, and select Compare Same Run Size.jsl.
2. Right-click in the script window and select Run Script.
Two 13-run design tables are constructed:
Definitive Screening Design
Plackett-Burman
You are interested only in the Plackett-Burman design. This is the active table.
3. From the Plackett-Burman table, select DOE > Design Diagnostics > Evaluate Design.
4. Select X1 through X6 and click X, Factor.
5. Click OK.
6. Open the Alias Terms outline to confirm that all two-factor interactions are in the Alias Terms list.
7. Open the Alias Matrix outline.
For each model term listed in the Effect column, the entry in that row for a given column indicates the degree to which the alias term affects the parameter estimate corresponding to the model term.
For example, to obtain the Alias Matrix Summary entry in Figure 16.27 corresponding to X1, square the entries in the row for X1 of the Alias Matrix, average them, and take the square root. You obtain 0.2722.
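The Python sketch below (illustration only) carries out the same style of calculation for an assumed regular half fraction. It forms the standard alias matrix (X'X)^-1 X'Z, where Z holds the alias-term columns, and then takes the root mean square of each row.

```python
import numpy as np

def alias_matrix(X, Z):
    """Standard alias matrix (X'X)^-1 X'Z: bias of model estimates due to active alias terms."""
    return np.linalg.inv(X.T @ X) @ X.T @ Z

def root_mean_squared_values(A):
    """Root mean squared value of each row of the alias matrix (one value per model term)."""
    return np.sqrt(np.mean(A**2, axis=1))

# Hypothetical 2^(3-1) half fraction with defining relation I = X1*X2*X3 (illustration only).
runs = np.array([[-1, -1,  1],
                 [-1,  1, -1],
                 [ 1, -1, -1],
                 [ 1,  1,  1]], dtype=float)
X = np.column_stack([np.ones(4), runs])                    # model: intercept + main effects
Z = np.column_stack([runs[:, 0] * runs[:, 1],              # alias terms: X1*X2
                     runs[:, 0] * runs[:, 2],              #              X1*X3
                     runs[:, 1] * runs[:, 2]])             #              X2*X3

A = alias_matrix(X, Z)
print("Alias matrix:\n", np.round(A, 3))
print("Root Mean Squared Values:", np.round(root_mean_squared_values(A), 4))
```

In this half fraction, each main effect is completely aliased with one two-factor interaction, so its alias matrix row contains a single entry of 1 and its root mean squared value is sqrt(1/3) ≈ 0.577.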
Absolute Correlations
The Absolute Correlations report summarizes information about correlations between model terms and alias terms.
Figure 16.28 Absolute Correlations Report for Three Designs
Figure 16.28 shows the Absolute Correlations report for the three designs constructed in “Designs of Different Run Sizes”, with only main effects in the Model outline.
Absolute Correlations Table
The table in the Absolute Correlations report is divided into three sections:
Model x Model considers correlations between terms corresponding to effects in the Model list.
Model x Alias considers correlations between terms corresponding to effects in the Model list and terms corresponding to effects in the Alias list.
Alias x Alias considers correlations between terms corresponding to effects in the Alias list.
Note: If there are no alias terms, only the Model x Model section appears.
For each section of the report, the following are given:
Average Correlation
The average of the correlations for all pairs of terms considered in this section of the report.
Number of Confoundings
The number of pairs of terms that are confounded with each other.
Number of Terms
The total number of pairs of terms considered in this section of the report.
The values in the Absolute Correlations table are colored according to a color gradient shown under the table. You can control the color legend using the options in the Absolute Correlations red triangle menu. See “Color Dashboard”.
Color Map on Correlations
The Color Map on Correlations outline shows plots for each of the designs. The cells of the color map are identified above the map. There are cells for all terms that correspond to effects that appear in either the Model outline or the Alias Terms outline. Each cell is colored according to the absolute value of the correlation between the two terms.
By default, the absolute magnitudes of the correlations are represented by a blue to gray to red intensity color theme. In general terms, the color map for a good design shows a lot of blue off the diagonal, indicating orthogonality or small correlations between distinct terms. Large absolute correlations among effects inflate the standard errors of estimates.
To see the absolute value of the correlation between two effects, place your pointer over the corresponding cell. To change the color theme for the entire plot, right-click in the plot and select Color Theme.
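The following Python sketch (illustration only) computes the absolute correlations among model and alias term columns for an assumed half-fraction design and lists the completely confounded pairs. These are the quantities summarized by the color map and the Absolute Correlations table.

```python
import numpy as np

# Hypothetical 2^(3-1) half fraction (I = X1*X2*X3): main effects plus all two-factor
# interactions (illustration only).
runs = np.array([[-1, -1,  1],
                 [-1,  1, -1],
                 [ 1, -1, -1],
                 [ 1,  1,  1]], dtype=float)
terms = {
    "X1": runs[:, 0], "X2": runs[:, 1], "X3": runs[:, 2],
    "X1*X2": runs[:, 0] * runs[:, 1],
    "X1*X3": runs[:, 0] * runs[:, 2],
    "X2*X3": runs[:, 1] * runs[:, 2],
}
names = list(terms)
M = np.column_stack([terms[t] for t in names])
corr = np.abs(np.corrcoef(M, rowvar=False))     # absolute correlations (the color map values)

# Pairs of distinct terms with absolute correlation 1 are completely confounded.
confounded = [(names[i], names[j])
              for i in range(len(names)) for j in range(i + 1, len(names))
              if np.isclose(corr[i, j], 1.0)]
print("Confounded pairs:", confounded)
```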
Absolute Correlations and Color Map on Correlations Example
Figure 16.28 shows the Absolute Correlations report for the three designs constructed in “Designs of Different Run Sizes”. The Model outline contains only main effects, so the Alias Terms outline contains all two-factor interactions. All main effects and two-way interactions are shown in the color maps.
In the Color Map on Correlations for the 16-run design, the red cells off the main diagonal indicate that the corresponding terms have correlation one and therefore are completely confounded. There are nine instances where model terms (main effects) are confounded with alias terms (two-factor interactions), and six instances where alias terms are confounded with each other. This is shown in the report under Pairwise Confoundings.
The color maps for the 20- and 24-run designs have no off-diagonal cells that are solid red. It follows that these designs show no instances of confounding between any pair of main or two-way interaction effects. However, it is interesting to note that the 20- and 24-run designs both have a higher Average Correlation for Model x Alias terms than does the 16-run design. Although the 16-run design shows confounding, the average amount of correlation is less than for the 20- and 24-run designs.
Design Diagnostics
The Design Diagnostics outline shows D-, G-, A-, and I-efficiencies for the reference design relative to the comparison designs. It also shows the Additional Run Size. Given two designs, the one with the higher relative efficiency measure is better.
Figure 16.29 Design Diagnostics for Three Designs
Figure 16.29 shows the Design Diagnostics report for the three designs constructed in “Designs of Different Run Sizes”, with only main effects in the Model outline.
The values in the Design Diagnostics table are colored according to a color gradient shown under the table. You can control the color legend using the options in the Design Diagnostics red triangle menu. See “Color Dashboard”.
Efficiency and Additional Run Size
Relative efficiencies for each of D-, G-, A-, and I-efficiency are shown in the Design Diagnostics report. These are obtained by computing each design’s efficiency value and then taking the appropriate ratio. The descriptions of the relative efficiency measures are given in “Relative Efficiency Measures”.
Additional Run Size is the number of runs in the reference design minus the number of runs in the comparison design. If your reference design has more runs than your comparison design, then the Additional Run Size tells you how many additional runs you need to achieve the efficiency of the reference design.
Relative Efficiency Measures
Notation
X is the model matrix
p is the number of terms, including the intercept, in the model
$v(\mathbf{x})$ is the relative prediction variance at the point $\mathbf{x}$. See “Relative Prediction Variance” in the “Technical Details” appendix.
Relative Efficiencies
The relative efficiency of the reference design (Ref) to the comparison design (Comp) is given by the following expressions:
D Efficiency
EffRef / EffComp, where Eff for each design is given as follows:
$\mathrm{Eff} = \left|X'X\right|^{1/p}$
G Efficiency
EffComp / EffRef, where Eff for each design is given as follows:
$\mathrm{Eff} = \max_{\mathbf{x} \in D} v(\mathbf{x})$
Here, D denotes the design region.
Note: G-Efficiency is calculated using Monte Carlo sampling of the design space. The reported value is based on the larger of $p/n$, where $n$ is the number of runs, or the prediction variance from the Monte Carlo sampling. Therefore, calculations for the same design might vary slightly.
A Efficiency
EffComp / EffRef, where Eff for each design is given as follows:
$\mathrm{Eff} = \mathrm{trace}\left[(X'X)^{-1}\right]$
I Efficiency
EffComp / EffRef, where Eff for each design is given as follows:
$\mathrm{Eff} = \dfrac{\int_D v(\mathbf{x})\,d\mathbf{x}}{\int_D d\mathbf{x}}$
For details of the calculation, see Section 4.3.5 in Goos and Jones, 2011.
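The following Python sketch (illustration only, not JMP code) computes the four relative efficiencies from the per-design quantities sketched above for two assumed designs, a 16-run and a 24-run replicated factorial. The exact expressions that JMP uses, including any scaling, are given in the references cited in this section.

```python
import numpy as np

rng = np.random.default_rng(1)

def efficiency_summaries(X, n_mc=20000):
    """Per-design quantities behind the relative efficiencies (intercept + main-effects model)."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    pts = rng.uniform(-1, 1, size=(n_mc, p - 1))            # Monte Carlo sample of the cubic region
    F = np.column_stack([np.ones(n_mc), pts])
    v = np.einsum("ij,jk,ik->i", F, XtX_inv, F)              # relative prediction variances
    return {
        "D": np.linalg.det(X.T @ X) ** (1 / p),              # larger is better
        "A": np.trace(XtX_inv),                               # smaller is better
        "G": max(v.max(), p / n),                             # smaller is better; floored at p/n
        "I": v.mean(),                                        # smaller is better
    }

# Hypothetical reference and comparison designs: a 2^3 factorial replicated to 16 and 24 runs.
full = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)], dtype=float)
X_ref  = np.column_stack([np.ones(16), np.tile(full, (2, 1))])
X_comp = np.column_stack([np.ones(24), np.tile(full, (3, 1))])

ref, comp = efficiency_summaries(X_ref), efficiency_summaries(X_comp)
print("Relative D-efficiency:", round(ref["D"] / comp["D"], 3))
for m in ("G", "A", "I"):
    print(f"Relative {m}-efficiency:", round(comp[m] / ref[m], 3))
```

All four ratios come out below one here, consistent with the larger comparison design being preferable.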
Compare Designs Options
Advanced Options > Split Plot Variance Ratio
Specify the ratio of the whole plot variance (and the subplot variance, if present) to the error variance. Before setting this value, you must define a hard-to-change factor for your split-plot design, or hard-to-change and very-hard-to-change factors for your split-split-plot design. Then you can enter one or two positive numbers for the variance ratios, depending on whether you have specified a split-plot or a split-split-plot design.
Advanced Options > Set Delta for Power
Specify the difference in the mean response that you want to detect for model effects. See “Set Delta for Power” in the “Custom Designs” chapter.