Evaluate Design Launch Window
To launch the Evaluate Design platform, open the data table of interest and select DOE > Design Diagnostics > Evaluate Design. The example in Figure 15.10 uses the Bounce Data.jmp sample data table, located in the Design Experiment folder.
Figure 15.10 Evaluate Design Launch Window
The launch window contains the following options:
Y, Response
Enter the response column or columns. Entering a response is optional. Response values are not used in evaluating the design. Responses must be numeric.
X, Factor
Enter the factor columns. Factors can be of any Data Type or Modeling Type.
Evaluate Design Window
The Evaluate Design window consists of two parts. See Figure 15.11, where all outline nodes are closed.
The Factors, Model, Alias Terms, and Design outlines define the model and design.
The Design Evaluation outline provides results that describe the properties of your design.
Figure 15.11 Evaluate Design Window Showing All Possible Outlines
The Factors, Model, Alias Terms, and Design outlines contain information that you enter about the factors, assumed model, potentially aliased effects of interest, and the actual design. JMP populates these outlines using your selections in the launch window and the design table. However, you can modify the effects in the Model and Alias Terms outlines. These outlines are described in the sections that follow.
Once you have made your specifications, the Design Evaluation outlines are updated. You can open these outlines to see reports or control windows that provide information about your design. These outlines are also described in the sections that follow.
Factors
The Factors outline lists the factors entered in the launch window. You can select factors here to construct effects in the Model outline.
Model
If the data table contains a script called Model or Fit Model, the Model outline contains the effects specified in that script. Otherwise, the Model outline contains only main effects.
Figure 15.12 shows the Model outline for the Bounce Data.jmp data table, found in the Design Experiment folder. The Model script in the data table contains response surface effects for the three factors Silica, Silane, and Sulfur. Consequently, the Model outline contains the main effects, two-way interactions, and quadratic effects for these three factors.
Figure 15.12 Model Outline for Bounce Data.jmp
You can add effects to the Model outline using the following buttons:
Main Effects
Adds main effects for all factors in the model.
Interactions
Adds interaction effects. If no factors are selected in the Factors outline, select 2nd, 3rd, 4th, or 5th to add all appropriate interactions up to that order. Add interactions up to a given order for specific factors by selecting the factor names in the Factors outline, selecting Interactions, and then specifying the appropriate order. Interactions between non-mixture and mixture factors, and interactions with blocking and constant factors, are not added.
RSM
Adds interaction and quadratic terms up to the second order (response surface model terms) for continuous factors. Categorical factors are not included in RSM terms. Main effects for non-mixture factors that interact with all the mixture factors are removed.
Cross
Adds specific interaction terms. Select factor names in the Factors outline and effect names in the Model outline. Click Cross to add the crossed terms to the Model outline.
Powers
Adds polynomial terms. If no factor names are selected in the Factors outline, adds polynomial terms for all continuous factors. If factor names are selected in the Factors outline, adds polynomial terms for only those factors. Select 2nd, 3rd, 4th, or 5th to add polynomial terms of that order.
Scheffé Cubic
Adds Scheffé cubic terms for all mixture factors. These terms are used to specify a mixture model with third-degree polynomial terms.
Remove Term
Removes selected effects.
Alias Terms
It is possible that effects not included in your assumed model are active. In the Alias Terms outline, list potentially active effects that are not in your assumed model but might bias the estimates of model terms. The Alias Matrix entries represent the degree of bias imparted to model parameters by the effects that you specified in the Alias Terms outline. For details, see “The Alias Matrix” in the “Technical Details” appendix.
By default, the Alias Terms outline includes all two-way interaction effects that are not in your Model outline (with the exception of terms involving blocking factors). Add or remove terms using the buttons. For a description of how to use these buttons to add effects to the Alias Terms table, see “Model”.
In the Evaluate Design platform, the Alias Matrix outline updates immediately to reflect changes to the effects in the Alias Terms outline. In the Custom Design platform, you must click Make Design after modifying the effects in the Alias Terms outline. Other DOE platforms that construct designs have no Alias Terms outline. However, the Alias Matrix outline, containing appropriate effects, appears under Design Evaluation after you construct the design.
Design
The Design outline shows the design runs for the factors that you have specified in the launch window. You can easily view the design as you explore its properties in the Design Evaluation outline.
Design Evaluation
Design Evaluation within the Evaluate Design platform is based on your design and the specifications that you make in the Model and Alias Terms outlines. Several DOE Design platforms provide a Design Evaluation outline: Custom, Definitive Screening, Screening, Response Surface, and Mixture with Optimal design type. Design Evaluation within these platforms is based on the design that you construct.
The Design Evaluation outline contains eight headings:
Power Analysis
The Power Analysis outline calculates the power of tests for the parameters in your model. Power is the probability of detecting an active effect of a given size. The Power Analysis outline helps you evaluate the ability of your design to detect effects of practical importance. Power depends on the number of runs, the significance level, and the estimated error variation. In particular, you can determine if additional runs are necessary.
This section covers the following topics:
Power Analysis Overview
Power is calculated for the effects listed in the Model outline. These include continuous, discrete numeric, categorical, blocking, mixture, and covariate factors. The tests are for individual model parameters and for whole effects. For details on how power is calculated, see “Power Calculations” in the “Technical Details” appendix.
Power is the probability of rejecting the null hypothesis of no effect at specified values of the model parameters. In practice, your interest is not in the values of the model parameters, but in detecting differences in the mean response of practical importance. In the Power Analysis outline, you can compute Anticipated Responses for specified values of the Anticipated Coefficients. This helps you to determine the coefficient values associated with the differences you want to detect in the mean response.
Figure 15.13 shows the Power Analysis outline for the design in the Coffee Data.jmp sample data table, found in the Design Experiment folder. The model specified in the Model script is a main effects only model.
Figure 15.13 Power Analysis for Coffee Data.jmp
In the Power Analysis outline, you can:
Specify coefficient values that reflect differences that you want to detect. You enter these as Anticipated Coefficients in the top part of the outline.
Specify anticipated response values and apply these to determine the corresponding Anticipated Coefficients. You specify Anticipated Responses in the Design and Anticipated Responses panel.
Power Analysis Details
Specify values for the Significance Level and Anticipated RMSE. These are used to calculate the power of the tests for the model parameters.
Significance Level
The probability of rejecting the hypothesis of no effect, if it is true. The power calculations update immediately when you enter a value.
Anticipated RMSE
An estimate of the square root of the error variation. The power calculations update immediately when you enter a value.
The top portion of the Power Analysis report opens with default values for the Anticipated Coefficients. See Figure 15.13. The default values are based on Delta. For details, see “Advanced Options > Set Delta for Power”.
Note: If the design is supersaturated, meaning that the number of parameters to be estimated exceeds the number of runs, the anticipated coefficients are set to 0.
Figure 15.14 shows the top portion of the Power Analysis report where values have been specified for the Anticipated Coefficients. These values reflect the differences you want to detect.
Figure 15.14 Possible Specification of Anticipated Coefficients for Coffee Data.jmp
Tests for Individual Parameters
The Term column contains a list of model terms. For each term, the Anticipated Coefficient column contains a value for that term. The value in the Power column is the power of a test that the coefficient for the term is 0 if the true value of the coefficient is given by the Anticipated Coefficient.
Term
The model term associated with the coefficient being tested.
Note: The order in which model terms appear in the Power Analysis report may not be identical to their order in the Parameter Estimates report obtained using Standard Least Squares. This difference can only occur when the model contains an interaction with more than one degree of freedom.
Anticipated Coefficient
A value for the coefficient associated with the model term. This value is used in the calculations for Power. These values are also used to calculate the Anticipated Response column in the Design and Anticipated Responses outline. When you set a new value in the Anticipated Coefficient column, click Apply Changes to Anticipated Coefficients to update the Power and Anticipated Response columns.
Note: The anticipated coefficients have default values of 1 for continuous effects. They have alternating values of 1 and –1 for categorical effects. You can specify a value for Delta by selecting Advanced Options > Set Delta for Power from the red triangle menu. If you change the value of Delta, the values of the anticipated coefficients are updated so that their absolute values are one-half of Delta. For details, see “Advanced Options > Set Delta for Power”.
Power
Probability of rejecting the null hypothesis of no effect when the true coefficient value is given by the specified Anticipated Coefficient. For a coefficient associated with a numeric factor, the change in the mean response (based on the model) is twice the coefficient value. For a coefficient associated with a categorical factor, the change in the mean response (based on the model) across the levels of the factor equals twice the absolute value of the anticipated coefficient.
Calculations use the specified Significance Level and Anticipated RMSE. For details of the power calculation, see “Power for a Single Parameter” in the “Technical Details” appendix; a sketch of the calculation follows this list.
Apply Changes to Anticipated Coefficients
When you set a new value in the Anticipated Coefficient column, click Apply Changes to Anticipated Coefficients to update the Power and Anticipated Response columns.
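To make the single-parameter calculation concrete, here is a minimal sketch in Python using the standard noncentral-F formulation of two-sided t-test power, with the Anticipated RMSE playing the role of the error standard deviation. It assumes a coded model matrix and illustrates the general approach only; the function name and example design are hypothetical, not JMP's internal implementation.

```python
import numpy as np
from scipy import stats

def power_single_parameter(X, i, coef, rmse, alpha=0.05):
    """Approximate power for the test that the i-th model parameter is zero,
    when its true value equals `coef` (the anticipated coefficient)."""
    n, p = X.shape
    xtx_inv = np.linalg.inv(X.T @ X)
    # Noncentrality: (coef / SE_i)^2, with SE_i = rmse * sqrt([(X'X)^-1]_ii)
    ncp = (coef / (rmse * np.sqrt(xtx_inv[i, i]))) ** 2
    df_error = n - p
    f_crit = stats.f.ppf(1 - alpha, 1, df_error)
    return 1 - stats.ncf.cdf(f_crit, 1, df_error, ncp)

# Example: a 2^3 full factorial with a main-effects model (intercept + 3 factors)
levels = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
X = np.column_stack([np.ones(8), levels])
print(power_single_parameter(X, i=1, coef=1.0, rmse=1.0, alpha=0.05))
```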
Tests for Categorical Effects with More Than Two Levels
If your model contains a categorical effect with more than two levels, then the following columns appear below the Apply Changes to Anticipated Coefficients button:
Effect
The categorical effect.
Power
The power calculation for a test of no effect. The null hypothesis for the test is that all model parameters corresponding to the effect are zero. The difference to be detected is defined by the values in the Anticipated Coefficient column that correspond to the model terms for the effect. The power calculation reflects the differences in response means determined by the anticipated coefficients.
Calculations use the specified Significance Level and Anticipated RMSE. For details of the power calculation, see “Power for a Categorical Effect” in the “Technical Details” appendix.
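For a whole categorical effect, the analogous sketch uses the multiple-degree-of-freedom noncentral-F calculation. Again this is a hedged illustration of the general method described above, with hypothetical names; `idx` holds the column indices of the effect's terms in the model matrix.

```python
import numpy as np
from scipy import stats

def power_effect(X, idx, coefs, rmse, alpha=0.05):
    """Approximate power of the F test that all parameters for one effect
    (the columns of X listed in idx) are zero, at the anticipated
    coefficient values in coefs."""
    n, p = X.shape
    xtx_inv = np.linalg.inv(X.T @ X)
    V = xtx_inv[np.ix_(idx, idx)]              # covariance block for the effect
    b = np.asarray(coefs, dtype=float)
    ncp = b @ np.linalg.solve(V, b) / rmse**2  # noncentrality parameter
    df1, df2 = len(idx), n - p
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return 1 - stats.ncf.cdf(f_crit, df1, df2, ncp)
```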
Design and Anticipated Responses Outline
The Design and Anticipated Responses outline shows the design preceded by an Anticipated Response column. Each entry in the first column is the Anticipated Response corresponding to the design settings. The Anticipated Response is calculated using the Anticipated Coefficients.
Figure 15.15 shows the Design and Anticipated Responses outline corresponding to the specification of Anticipated Coefficients given in Figure 15.14.
Figure 15.15 Anticipated Responses for Coffee Data.jmp
In the Anticipated Response column, you can specify a value for each setting of the factors. These values reflect the differences you want to detect.
Click Apply Changes to Anticipated Responses to update both the Anticipated Coefficient and Power columns.
Anticipated Response
The response value obtained using the Anticipated Coefficient values as coefficients in the model. When the outline first appears, the calculation of Anticipated Response values is based on the default values in the Anticipated Coefficient column. When you set new values in the Anticipated Response column, click Apply Changes to Anticipated Responses to update the Anticipated Coefficient and Power columns.
Design
The columns to the right of the Anticipated Response column show the factor settings for all runs in your design.
Apply Changes to Anticipated Responses
When you set new values in the Anticipated Response column, click Apply Changes to Anticipated Responses to update the Anticipated Coefficient and Power columns.
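The relationship between the two panels is linear: anticipated responses are the model matrix applied to the anticipated coefficients, and coefficients implied by edited responses come from a least squares solve. A small sketch, using an illustrative two-factor design rather than any JMP internals:

```python
import numpy as np

# A 2^2 design with intercept, as a stand-in model matrix.
X = np.column_stack([np.ones(4), [-1, -1, 1, 1], [-1, 1, -1, 1]])

# Anticipated responses from anticipated coefficients
# (the Apply Changes to Anticipated Coefficients direction).
anticipated_coefs = np.array([0.0, 0.05, 0.05])
anticipated_responses = X @ anticipated_coefs

# Implied coefficients from edited responses
# (the Apply Changes to Anticipated Responses direction): a least squares fit.
edited = anticipated_responses + 0.1
implied_coefs, *_ = np.linalg.lstsq(X, edited, rcond=None)
```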
Power Analysis for Coffee Experiment
Consider the design in the Coffee Data.jmp data table. Suppose that you are interested in the power of your design to detect effects of various magnitudes on Strength. Recall that Grind is a two-level categorical factor, Temperature, Time, and Charge are continuous factors, and Station is a three-level categorical (blocking) factor.
In this example, ignore the role of Station as a blocking factor. You are interested in the effect of Station on Strength. Since Station is a three-level categorical factor, it is represented by two terms in the Parameters list: Station 1 and Station 2.
Specifically, you are interested in the probability of detecting the following changes in the mean Strength:
A change of 0.10 units as you vary Grind from Coarse to Medium.
A change of 0.10 units or more as you vary Temperature, Time, and Charge from their low to high levels.
An increase due to each of Stations 1 and 2 of 0.10 units beyond the overall anticipated mean. This corresponds to a decrease due to Station 3 of 0.20 units from the overall anticipated mean.
You set 0.05 as your Significance Level. Your estimate of the standard deviation of Strength for fixed design settings is 0.1 and you enter this as the Anticipated RMSE.
Figure 15.16 shows the Power Analysis node with these values entered. Specifically, you specify the Significance Level, Anticipated RMSE, and the value of each Anticipated Coefficient.
When you click Apply Changes to Anticipated Coefficients, the Anticipated Response values are updated to reflect the model you have specified.
Figure 15.16 Power Analysis Outline with User Specifications in Anticipated Coefficients Panel
Recall that Temperature is a continuous factor with coded levels of -1 and 1. Consider the test whose null hypothesis is that Temperature has no effect on Strength. Figure 15.16 shows that the power of this test to detect a difference of 0.10 (=2*0.05) units across the levels of Temperature is only 0.291.
Now consider the test for the whole Station effect, where Station is a three-level categorical factor. Consider the test whose null hypothesis is that Station has no effect on Strength. This is the usual F test for a categorical factor provided in the Effect Tests report when you run Analyze > Fit Model. (See the Standard Least Squares chapter in the Fitting Linear Models book.)
The Power of this test is shown directly beneath the Apply Changes to Anticipated Coefficients button. The entries under Anticipated Coefficients for the model terms Station 1 and Station 2 are both 0.10. These settings imply that the effect of both stations is to increase Strength by 0.10 units above the overall anticipated mean. For these settings of the Station 1 and Station 2 coefficients, the effect of Station 3 on Strength is to decrease it by 0.20 units from the overall anticipated mean. Figure 15.16 shows that the power of the test to detect a difference of at least this magnitude is 0.888.
Prediction Variance Profile
The Prediction Variance Profile outline shows a profiler of the relative variance of prediction. Select the Optimization and Desirability > Maximize Desirability option from the red triangle next to Prediction Variance Profile to find the maximum value of the relative prediction variance over the design space. For details, see “Maximize Desirability”.
The Prediction Variance Profile plots the relative variance of prediction as a function of each factor at fixed values of the other factors. Figure 15.17 shows the Prediction Variance Profile for the Bounce Data.jmp data table, located in the Design Experiment folder.
Figure 15.17 Prediction Variance Profiler
Relative Prediction Variance
For given settings of the factors, the prediction variance is the product of the error variance and a quantity that depends on the design and the factor settings. Before you run your experiment, the error variance is unknown, so the prediction variance is also unknown. However, the ratio of the prediction variance to the error variance is not a function of the error variance. This ratio, called the relative prediction variance, depends only on the design and the factor settings. Consequently, the relative variance of prediction can be calculated before acquiring the data. For details, see “Relative Prediction Variance” in the “Technical Details” appendix.
After you run your experiment and fit a least squares model, you can estimate the error variance using the mean squared error (MSE) of the model fit. You can then estimate the actual prediction variance at any setting by multiplying the relative variance of prediction at that setting by the MSE.
It is ideal for the prediction variance to be small throughout the design space. Generally, the relative prediction variance decreases as the sample size increases. In comparing designs, you may want to place the prediction variance profilers for two designs side by side. A design with lower prediction variance on average is preferred.
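As a sketch of the quantity being profiled (assuming a coded model matrix X and a model-term vector f_x for the setting of interest; the names are illustrative):

```python
import numpy as np

def relative_prediction_variance(X, f_x):
    """Relative prediction variance f(x)' (X'X)^{-1} f(x). The error variance
    cancels, so this is computable before any data are collected."""
    return f_x @ np.linalg.solve(X.T @ X, f_x)

# After running the experiment, an estimate of the actual prediction variance
# at a setting is mse * relative_prediction_variance(X, f_x).
```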
Maximize Desirability
You can also evaluate a design or compare designs in terms of the maximum relative prediction variance. Select the Optimization and Desirability > Maximize Desirability option from the red triangle next to Prediction Variance Profile. JMP uses a desirability function that maximizes the relative prediction variance. The value of the Variance displayed in the Prediction Variance Profile is the worst (least desirable from a design point of view) value of the relative prediction variance.
Figure 15.18 shows the Prediction Variance Profile after Maximize Desirability was selected. The plot is for the Bounce Data.jmp sample data table, located in the Design Experiment folder. The largest value of the relative prediction variance is 1.395833. The plot also shows values of the factors that give this worst-case relative variance. However, keep in mind that many settings can lead to this same relative variance. See “Prediction Variance Surface”.
Figure 15.18 Prediction Variance Profile Showing Maximum Variance
Fraction of Design Space Plot
The Fraction of Design Space Plot shows the proportion of the design space over which the relative prediction variance lies below a given value. Figure 15.19 shows the Fraction of Design Space plot for the Bounce Data.jmp sample data table, located in the Design Experiment folder.
Figure 15.19 Fraction of Design Space Plot
The X axis in the plot represents the proportion of the design space, ranging from 0 to 100%. The Y axis represents relative prediction variance values. For a point (x, y) that falls on the blue curve, the value x is the proportion of design space with variance less than or equal to y. Red dotted crosshairs mark the value that bounds the relative prediction variance for 50% of the design space.
Figure 15.19 shows that the minimum relative prediction variance is slightly less than 0.3, while the maximum is below 1.4. (The actual maximum is 1.395833, as shown in Figure 15.18.) The red dotted crosshairs indicate that the relative prediction variance is less than 0.34 over about 50% of the design space. You can use the crosshairs tool to find the maximum relative prediction variance that corresponds to any Fraction of Space value. Use the crosshairs tool in Figure 15.19 to see that 90% of the prediction variance values are below approximately 0.55.
Note: Monte Carlo sampling of the design space is used in constructing the Fraction of Design Space Plot. Therefore, plots for the same design may vary slightly.
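The note above suggests how such a plot can be approximated. Here is a minimal Monte Carlo sketch under the assumption of a cuboidal, coded design region and a main-effects model; the design and helper names are illustrative, not drawn from JMP:

```python
import numpy as np

rng = np.random.default_rng(1)

# A 2^3 full factorial with a main-effects model, as a stand-in design.
levels = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
X = np.column_stack([np.ones(8), levels])

def fds_curve(X, expand, n_points=5000):
    """Sample uniform points in the coded region, compute the relative
    prediction variance at each, and sort; plotting the sorted values
    against fractions 0..1 gives the Fraction of Design Space curve."""
    xtx_inv = np.linalg.inv(X.T @ X)
    pts = rng.uniform(-1, 1, size=(n_points, X.shape[1] - 1))
    v = np.array([f @ xtx_inv @ f for f in (expand(p) for p in pts)])
    return np.sort(v)

v = fds_curve(X, expand=lambda p: np.concatenate(([1.0], p)))
print("50% of the design space has relative variance below", v[len(v) // 2])
```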
Prediction Variance Surface
The Prediction Variance Surface report plots the relative prediction variance surface as a function of any two design factors. Figure 15.20 shows the Prediction Variance Surface outline for the Bounce Data.jmp sample data table, located in the Design Experiment folder. Show or hide the controls by selecting Control Panel on the red triangle menu. See “Control Panel”.
Figure 15.20 Prediction Variance Surface
When there are two or more factors, the Prediction Variance Surface outline shows a plot of the relative prediction variance, computed from the relative prediction variance formula, as a function of any two factors. Drag the plot to rotate it and change the perspective.
Control Panel
The Control Panel consists of the following:
Response Grid Slider
The Grid check box superimposes a grid that shows constant values of Variance. The value of the Variance is shown in the text box. The slider enables you to adjust the placement of the grid. Alternatively, you can enter a Variance value in the text box. Click outside the box to update the plot.
Independent Variables
This panel enables you to select which two factors are used as axes for the plot and to specify the settings for factors not used as axes. Select a factor for each of the X and Y axes by clicking in the appropriate column. Use the sliders and text boxes to specify values for each factor not selected for an axis. The plot shows the three-dimensional slice of the surface at the specified values of the factors that are not used as axes in the plot. Move the sliders to see different slices.
Each grid check box activates a grid for the corresponding factor. Use the sliders to adjust the placement of each grid.
Lock Z Scale locks the z-axis to its current values. This is useful when moving the sliders for factors that are not on an axis.
Appearance
The Resolution slider affects how many points are evaluated for a formula. Too coarse a resolution means that a function with a sharp change might not be represented very well. But setting the resolution high can make evaluating and displaying the surface slower.
The Orthographic projection check box shows a projection of the plot in two dimensions.
The Contour menu controls the placement of contour curves. A contour curve is a set of points whose Response values are constant. You can turn the contours Off (the default) or place the contours Below, Above, or On Surface.
Estimation Efficiency
This report gives the Fractional Increase in CI (Confidence Interval) Length and Relative Std (Standard) Error of Estimate for each parameter estimate in the model. Figure 15.21 shows the Estimation Efficiency outline for the Bounce Data.jmp sample data table, located in the Design Experiment folder.
Figure 15.21 Estimation Efficiency Outline
Fractional Increase in CI Length
The Fractional Increase in CI Length compares the length of a parameter’s confidence interval as given by the current design to the length of such an interval given an ideal design:
The length of the ideal confidence interval for the parameter is subtracted from the length of its actual confidence interval.
This difference is then divided by the length of the ideal confidence interval.
For an orthogonal D-optimal design, the fractional increase is zero. In selecting a design, you would like the fractional increase in confidence interval length to be as small as possible.
The Ideal Design
The covariance matrix for the ordinary least squares estimator is \(\sigma^2 (X'X)^{-1}\). The diagonal elements of \((X'X)^{-1}\) are the relative variances (the variances divided by \(\sigma^2\)) of the parameter estimates. For two-level designs and using the effects coding convention (see “Coding” in the “Column Properties” appendix), the minimum value of the relative variance for any parameter estimate is 1/n, where n is the number of runs. This occurs when all the effects for the design are orthogonal and the design is D-optimal.
Let \(\hat{\beta}\) denote the vector of parameter estimates. The ideal design, which may not exist, is a design whose covariance matrix is given as follows:
\[ \operatorname{Cov}(\hat{\beta}) = \frac{\sigma^2}{n} I \]
where \(I\) is the identity matrix whose dimension equals the number of model terms and \(\sigma\) is the standard deviation of the response.
If an orthogonal D-optimal design exists, it is the ideal design. However, the definition above extends the idea of an ideal design to situations where a design that is both orthogonal and D-optimal does not exist.
The definition is also appropriate for designs with multi-level categorical factors. The orthogonal coding used for categorical factors allows such designs to have the ideal covariance matrix. For a Custom Design, you can view the coding matrix by selecting Save X Matrix from the options in the Custom Design window, making the design table, and looking at the script Model Matrix that is saved to the design table.
Fractional Increase in Length of Confidence Interval
Note that, in the ideal design, the standard error for each parameter estimate would be given as follows:
\[ SE_{ideal} = \frac{\sigma}{\sqrt{n}} \]
The length of a confidence interval is determined by the standard error. The Fractional Increase in Confidence Interval Length is the difference between the standard error of the given design and the standard error of the ideal design, divided by the standard error of the ideal design.
Specifically, for the ith parameter estimate, the Fractional Increase in Confidence Interval Length is defined as follows:
\[ \frac{\sigma \sqrt{\left[(X'X)^{-1}\right]_{ii}} - \sigma/\sqrt{n}}{\sigma/\sqrt{n}} = \sqrt{n \left[(X'X)^{-1}\right]_{ii}} - 1 \]
where
σ2 is the unknown response variance,
X is the model matrix for the given design, defined in “The Alias Matrix” in the “Technical Details” appendix,
\(\left[(X'X)^{-1}\right]_{ii}\) is the ith diagonal entry of \((X'X)^{-1}\), and
n is the number of runs.
Relative Std Error of Estimate
The Relative Std Error of Estimate gives the ratio of the standard deviation of a parameter’s estimate to the error standard deviation. These values indicate how large the standard errors of the model’s parameter estimates are, relative to the error standard deviation. For the ith parameter estimate, the Relative Std Error of Estimate is defined as follows:
\[ \sqrt{\left[(X'X)^{-1}\right]_{ii}} \]
where
\(\left[(X'X)^{-1}\right]_{ii}\) is the ith diagonal entry of \((X'X)^{-1}\).
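Both quantities follow directly from the diagonal of \((X'X)^{-1}\). A minimal sketch, assuming a coded model matrix; the function name is illustrative:

```python
import numpy as np

def estimation_efficiency(X):
    """For each parameter: the fractional increase in CI length
    sqrt(n * [(X'X)^-1]_ii) - 1, which is 0 for an orthogonal D-optimal
    design, and the relative standard error sqrt([(X'X)^-1]_ii)."""
    n = X.shape[0]
    d = np.diag(np.linalg.inv(X.T @ X))
    return np.sqrt(n * d) - 1, np.sqrt(d)
```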
Alias Matrix
The Alias Matrix addresses the issue of how terms that are not included in the model affect the estimation of the model terms, if they are indeed active. In the Alias Terms outline, you list potentially active effects that are not in your assumed model but that might bias the estimates of model terms. The Alias Matrix entries represent the degree of bias imparted to model parameters by the Alias Terms effects. See “Alias Terms”.
The rows of the Alias Matrix are the terms corresponding to the model effects listed in the Model outline. The columns are terms corresponding to effects listed in the Alias Terms outline. The entry in a given row and column indicates the degree to which the alias term affects the parameter estimate corresponding to the model term.
In evaluating your design, you ideally want one of two situations to occur relative to any entry in the Alias Matrix. Either the entry is small or, if it is not small, the effect of the alias term is small so that the bias will be small. If you suspect that the alias term may have a substantial effect, then that term should be included in the model or you should consider an alias optimal design.
For details on how the Alias Matrix is computed, see “The Alias Matrix” in the “Technical Details” appendix. See also Lekivetz, R. (2014).
Note the following:
If the design is orthogonal for the assumed model, then the entries in the Alias Matrix correspond to the absolute correlations in the Color Map on Correlations.
Depending on the complexity of the design, it is possible to have alias matrix entries greater than 1 or less than -1.
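The bias mechanism can be written compactly. A sketch, assuming X holds the model-term columns and Z the alias-term columns; this is the standard alias matrix formula, not code drawn from JMP:

```python
import numpy as np

def alias_matrix(X, Z):
    """Alias matrix A = (X'X)^{-1} X'Z. If the alias-term effects beta_Z are
    active, the least squares estimates are biased: E[b] = beta + A @ beta_Z,
    so each entry A[i, j] is the bias multiplier of alias term j on model
    parameter i."""
    return np.linalg.solve(X.T @ X, X.T @ Z)
```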
Alias Matrix Examples
Consider the Coffee Data.jmp sample data table, located in the Design Experiment folder. The design assumes a main effects model. You can see this by running the Model script in the data table. Consequently, in the Evaluate Design window’s Model outline, only the Intercept and five main effects appear. The Alias Terms outline contains the two-way interactions. The Alias Matrix is shown in Figure 15.22.
Figure 15.22 Alias Matrix for Coffee Data.jmp
The Alias Matrix shows the Model terms in the first column defining the rows. The two-way interactions in the Alias Terms are listed across the top, defining the columns. Consider the model effect Temperature for example. If the Grind*Time interaction is the only active two-way interaction, the estimate for the coefficient of Temperature is biased by 0.333 times the true value of the Grind*Time effect. If other interactions are active, then the value in the Alias Matrix indicates the additional amount of bias incurred by the Temperature coefficient estimate.
Consider the Bounce Data.jmp sample data table, located in the Design Experiment folder. The Model script contains all two-way interactions. Consequently, the Evaluate Design window shows all main effects and two-way interactions in the Model outline. The three two-way interactions are automatically added to the list of Alias Terms. Therefore, the Alias Matrix shows a column for each of these three interactions (Figure 15.23). Notice that the only non-zero entries in the Alias Matrix correspond to the bias impact of the two-way interactions on themselves. These entries are 1s, which is expected because the two-way interactions are already in the model.
Figure 15.23 Alias Matrix for Bounce Data.jmp
Color Map on Correlations
The Color Map on Correlations shows the absolute value of the correlation between any two effects that appear in either the Model or the Alias Terms outline. The cells of the color map are identified above the map. There is a cell for each effect in the Model outline and a cell for each effect in the Alias Terms outline.
By default, the absolute magnitudes of the correlations are represented by a blue to gray to red intensity color theme. In general terms, the color map for a good design shows a lot of blue off the diagonal, indicating orthogonality or small correlations between distinct terms. Large absolute correlations among effects inflate the standard errors of estimates.
To see the absolute value of the correlation between two effects, hover your cursor over the corresponding cell. To change the color theme for the entire plot, right click in the plot and select Color Theme.
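Computationally, the map is the absolute column-correlation matrix of the combined model-term and alias-term columns. A sketch, with the intercept column excluded since a constant column has no correlation; the name is illustrative:

```python
import numpy as np

def correlation_map(M):
    """Absolute correlations between the columns of M, where M holds the
    model-term and alias-term columns (without the intercept). Render with
    a heat map tool, e.g. matplotlib's imshow."""
    return np.abs(np.corrcoef(M, rowvar=False))
```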
Color Map Example
Figure 15.24 shows the Color Map on Correlations for the Bounce Data.jmp sample data table, found in the Design Experiment folder. The deep red coloring indicates absolute correlations of one. Note that there are red cells on the diagonal, showing correlations of model terms with themselves.
All other cells are either deep blue or light blue. The light blue squares correspond to correlations between quadratic terms. To see this, hover your cursor over each of the light blue squares. The absolute correlations of quadratic terms with each other are small, 0.0714.
From the perspective of correlation, this is a good design. When effects are highly correlated, it is more difficult to determine which is responsible for an effect on the response.
Figure 15.24 Color Map on Correlations
Design Diagnostics
The Design Diagnostics outline shows D-, G-, and A-efficiencies and the average variance of prediction. These diagnostics are not shown for designs that include factors with Changes set to Hard or Very Hard or effects with Estimability designated as If Possible.
When the Design Diagnostics outline appears in a DOE platform other than Evaluate Design, the Design Creation Time gives the amount of time required to create the design. In the Evaluate Design platform, Design Creation Time gives the amount of time required for the platform to calculate results.
Figure 15.25 shows the Design Diagnostics outline for the Bounce Data.jmp sample data table, found in the Design Experiment folder.
Figure 15.25 Design Diagnostics Outline
Caution: The efficiency measures should not be interpreted on their own; use them to compare designs. Given two designs, the one with the higher efficiency measure is better. Although the maximum efficiency is 100 for any criterion, an efficiency of 100 is impossible for many design problems.
Notation
The descriptions of the efficiency measures given below use the following notation:
X is the model matrix
n is the number of runs in the design
p is the number of terms, including the intercept, in the model
\(v(\mathbf{x})\) is the relative prediction variance at the point \(\mathbf{x}\). See “Relative Prediction Variance” in the “Technical Details” appendix.
\(v_{max}\) is the maximum relative prediction variance over the design region
D Efficiency
The efficiency of the design relative to that of an ideal orthogonal design in terms of the D-optimality criterion. A design is D-optimal if it minimizes the volume of the joint confidence region for the vector of regression coefficients:
\[ D = 100 \times \frac{\left| X'X \right|^{1/p}}{n} \]
G Efficiency
The efficiency of the design relative to that of an ideal orthogonal design in terms of the G-optimality criterion. A design is G-optimal if it minimizes the maximum relative prediction variance over the design region:
\[ G = 100 \times \frac{\sqrt{p/n}}{\sigma_M} \]
where, letting D denote the design region,
\[ \sigma_M = \max_{\mathbf{x} \in D} \sqrt{v(\mathbf{x})} \]
Note: G-Efficiency is calculated using Monte Carlo sampling of the design space. Therefore, calculations for the same design may vary slightly.
A Efficiency
The efficiency of the design relative to that of an ideal orthogonal design in terms of the A-optimality criterion. A design is A-optimal if it minimizes the sum of the variances of the regression coefficients:
\[ A = 100 \times \frac{p}{n \cdot \operatorname{trace}\left[(X'X)^{-1}\right]} \]
Average Variance of Prediction
At a point \(\mathbf{x}\) in the design space, the relative prediction variance is defined as:
\[ v(\mathbf{x}) = \mathbf{f}(\mathbf{x})' (X'X)^{-1} \mathbf{f}(\mathbf{x}) \]
where \(\mathbf{f}(\mathbf{x})\) is the vector of model terms evaluated at \(\mathbf{x}\).
Note that this is the prediction variance divided by the error variance. For details of the calculation, see Section 4.3.5 in Goos and Jones, 2011.
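Putting the formulas above together, here is a minimal sketch of the three efficiency measures, assuming a coded model matrix X and a maximum relative prediction variance v_max obtained, for example, from the Monte Carlo sample used for the Fraction of Design Space plot (the average variance of prediction is then simply the mean of that sample):

```python
import numpy as np

def efficiencies(X, v_max):
    """D-, G-, and A-efficiency for model matrix X with n runs and p terms.
    Each equals 100 for an ideal orthogonal design."""
    n, p = X.shape
    xtx = X.T @ X
    d_eff = 100 * np.linalg.det(xtx) ** (1 / p) / n
    g_eff = 100 * np.sqrt(p / n) / np.sqrt(v_max)
    a_eff = 100 * p / (n * np.trace(np.linalg.inv(xtx)))
    return d_eff, g_eff, a_eff
```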
Design Creation Time
Design Creation Time gives the amount of time required for the Evaluate Design platform to calculate results.
Evaluate Design Options
The Evaluate Design red triangle menu contains the following options:
Advanced Options > Split Plot Variance Ratio
Specify the ratio of the whole plot variance (and, if present, the subplot variance) to the error variance. Before setting this value, you must define a hard-to-change factor for your split-plot design, or hard-to-change and very-hard-to-change factors for your split-split-plot design. Then you can enter one or two positive numbers for the variance ratios, depending on whether you have specified a split-plot or a split-split-plot design.
Advanced Options > Set Delta for Power
Specify a value for the difference you want to detect that is applied to Anticipated Coefficients in the Power Analysis report. The Anticipated Coefficient values are set to Delta/2 for continuous effects. For categorical effects, they are alternating values of Delta/2 and –Delta/2. For additional details on power analysis, see “Power Analysis”.
By default, Delta is set to two. Consequently, the Anticipated Coefficient default values are 1 for continuous effects and alternating values of 1 and –1 for categorical effects. The default values that are entered as Anticipated Coefficients when Delta is 2 ensure these properties:
The power calculation for a numeric effect assumes a change of Delta in the response mean due to linear main effects as the factor changes from the lowest setting to the highest setting in the design region.
The power calculation for the parameter associated with a two-level categorical factor assumes a change of Delta in the response mean across the levels of the factor.
The power calculation for a categorical effect with more than two levels is based on the multiple degree of freedom F-test for the null hypothesis that all levels have the same response mean. Power is calculated at the values of the response means that are determined by the Anticipated Coefficients. Various configurations of the Anticipated Coefficients can define a difference in levels as large as Delta. However, the power values for such configurations will differ based on the Anticipated Coefficients for the other levels.
Save Script to Script Window
Creates a script that reproduces the Evaluate Design window and places it in an open script window.