Attribute Gauge Charts Overview
Before you create an attribute gauge chart, your data should be formatted using the following guidelines:
In order to compare agreement among raters, each rater in the data table must be in a separate column. These columns are then assigned to the Y, Response role in the launch window. In Figure 9.2, each rater (A, B, and C) is in a separate column.
Responses in the different columns can be character (pass or fail) or numeric (0 or 1). In Figure 9.2, rater responses are numeric (0 for pass, 1 for fail). All response columns must have the same data type.
Any other variables of interest that you might want to use as X, Grouping variables should appear stacked in one column each (see the Part column in Figure 9.2). You can also define a Standard column, which produces reports that compare raters with the standard. The Standard column and response columns must have the same data type.
Figure 9.2 Attribute Gauge Data
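If you assemble data outside JMP before importing it, the following Python sketch (illustrative only; the ratings are invented) builds a small table in the shape that Figure 9.2 describes: one column per rater, plus Part and Standard columns whose data type matches the rating columns.

```python
# A minimal sketch (not JMP code) of data laid out for an attribute gauge analysis:
# one column per rater (A, B, C), plus a grouping column (Part) and a Standard column.
# All rating columns and the Standard column share one data type (here, 0/1 integers).
import pandas as pd

data = pd.DataFrame({
    "Part":     [1, 1, 1, 2, 2, 2],   # each part appears once per repeat rating
    "Standard": [0, 0, 0, 1, 1, 1],   # known reference value for the part
    "A":        [0, 0, 0, 1, 1, 1],   # rater A's ratings (0 = pass, 1 = fail)
    "B":        [0, 0, 1, 1, 1, 1],   # rater B's ratings
    "C":        [0, 0, 0, 1, 0, 1],   # rater C's ratings
})
print(data)
```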
Example of an Attribute Gauge Chart
Suppose that you have data containing pass or fail ratings for parts. Three raters, identified as A, B, and C, each noted a 0 (pass) or a 1 (fail) for 50 parts, three times each. You want to examine how effective the raters were in correctly classifying the parts, and how well the raters agreed with each other and with themselves over the course of the ratings.
1. Select Help > Sample Data Library and open Attribute Gauge.jmp.
2. Select Analyze > Quality and Process > Variability / Attribute Gauge Chart.
3. For Chart Type, select Attribute.
4. Select A, B, and C and click Y, Response.
5. Select Standard and click Standard.
6. Select Part and click X, Grouping.
7. Click OK.
Figure 9.3 Example of an Attribute Chart
The first chart (Part) shows how well the raters agreed with each other for each part. For example, here you can see that the percent agreement dropped for parts 6, 12, 14, 21, 22, and so on. These parts might have been more difficult to categorize.
The second chart (Rater) shows each rater’s agreement with him or herself and the other raters for a given part, summed up over all of the parts. In this example, it looks like the performance of the raters is relatively similar. Rater C had the lowest agreement, but the difference is not major (about 89% instead of 91%).
Launch the Variability/Attribute Gauge Chart Platform
Launch the Variability/Attribute Gauge Chart platform by selecting Analyze > Quality and Process > Variability/Attribute Gauge Chart. Set the Chart Type to Attribute.
Figure 9.4 The Variability/Attribute Gauge Chart Launch Window
Chart Type
Choose between a variability gauge analysis (for a continuous response) or an attribute gauge analysis (for a categorical response, usually “pass” or “fail”).
Note: The content in this chapter covers only the Attribute chart type. For details about the Variability chart type, see the “Variability Gauge Charts” chapter.
Specify Alpha
Specify the alpha level used by the platform.
Y, Response
Specify the columns of ratings given by each rater. You must specify more than one rating column.
Standard
Specify a standard or reference column that contains the “true” or known values for the part. In the report window, an Effectiveness Report and an additional section in the Agreement Comparisons report appear, which compare the raters with the standard.
X, Grouping
Specify the classification columns that group the measurements. If the factors form a nested hierarchy, specify the higher terms first.
Freq
Identifies the data table column whose values assign a frequency to each row. Can be useful when you have summarized data.
By
Identifies a column that creates a report consisting of separate analyses for each level of the variable.
For more information about the launch window, see the Get Started chapter in the Using JMP book.
The Attribute Gauge Chart and Reports
The attribute gauge chart plots the % Agreement, which is a measure of rater agreement for every part in the study. The agreement for each part is calculated by comparing the ratings for every pair of raters across all ratings of that part. See “Statistical Details for Attribute Gauge Charts”.
Follow the instructions in “Example of an Attribute Gauge Chart” to produce the results shown in Figure 9.5.
Figure 9.5 Attribute Gauge Chart
The first chart in Figure 9.5 uses all X grouping variables (in this case, the Part) on the x-axis. The second chart uses all Y variables on the x-axis (typically, and in this case, the Rater).
In the first graph, you can look for parts with a low % Agreement value, and investigate to determine why raters do not agree about the measurement of that particular part.
In the second graph, you can look for raters with a low % Agreement value, and investigate to determine why they do not agree with the other raters or with themselves.
For information about additional options, see “Attribute Gauge Platform Options”.
Agreement Reports
Note: The Kappa value is a statistic that expresses agreement. The closer the Kappa value is to 1, the more agreement there is. A Kappa value closer to 0 indicates less agreement.
The Agreement Report shows agreement summarized for each rater and overall agreement. This report is a numeric form of the data presented in the second chart in the Attribute Gauge Chart report. See Figure 9.5.
The Agreement Comparisons report shows each rater compared with all other raters, using Kappa statistics. The rater is compared with the standard only if you have specified a Standard variable in the launch window.
The Agreement within Raters report shows the number of items that were inspected. The confidence intervals are score confidence intervals, as suggested by Agresti and Coull (1998). The Number Matched is the number of inspected items for which the rater agreed with him or herself on every inspection of that item. The Rater Score is the Number Matched divided by the Number Inspected.
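As a rough illustration of these quantities (not JMP's code; the data and the helper function are invented for the example), the following Python sketch counts the items whose repeat ratings by one rater all match, forms the Rater Score, and computes a score (Wilson) confidence interval of the kind Agresti and Coull (1998) recommend.

```python
# Hedged sketch: Number Matched, Rater Score, and a Wilson score interval
# for agreement within one rater. Data are hypothetical.
from math import sqrt

# Repeat ratings by one rater: item -> list of that rater's ratings of the item
ratings = {1: [0, 0, 0], 2: [1, 1, 0], 3: [1, 1, 1], 4: [0, 0, 0]}

number_inspected = len(ratings)
number_matched = sum(1 for r in ratings.values() if len(set(r)) == 1)
rater_score = number_matched / number_inspected          # 3/4 = 0.75

def wilson_interval(successes, n, z=1.96):
    """Score (Wilson) confidence interval for a binomial proportion."""
    p_hat = successes / n
    center = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

print(number_matched, rater_score, wilson_interval(number_matched, number_inspected))
```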
The Agreement across Categories report shows the agreement in classification over that which would be expected by chance. It assesses the agreement between a fixed number of raters when classifying items.
Figure 9.6 Agreement Reports
Effectiveness Report
The Effectiveness Report appears only if you have specified a Standard variable in the launch window. For a description of a Standard variable, see “Launch the Variability/Attribute Gauge Chart Platform”. This report compares every rater with the standard.
Figure 9.7 Effectiveness Report
The Agreement Counts table shows cell counts on the number correct and incorrect for every level of the standard. In Figure 9.7, the standard variable has two levels, 0 and 1. Rater A had 45 correct responses and 3 incorrect responses for level 0, and 97 correct responses and 5 incorrect responses for level 1.
Effectiveness is defined as the number of correct decisions divided by the total number of opportunities for a decision. For example, suppose that rater A rated every part three times and that, on the sixth part, one of the three decisions did not match the standard (for example, pass, pass, fail). The other two decisions are still counted as correct decisions. This definition of effectiveness differs from the MSA 3rd edition, in which all three opportunities for rater A on part six would be counted as incorrect. Counting each inspection separately gives you more information about the overall inspection process.
In the Effectiveness table, 95% confidence intervals are given for the effectiveness. These are score confidence intervals, which have been shown to provide improved coverage probability, particularly when observations lie near the boundaries (Agresti and Coull 1998).
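As a small illustration of this counting rule (not JMP's code; the part, ratings, and standard value are invented), the sketch below scores each individual decision against the standard, so that two correct decisions and one incorrect decision on a part contribute 2/3 rather than counting the whole part as incorrect.

```python
# Hedged sketch of the effectiveness calculation: every individual decision is an
# opportunity, and each decision is scored against the standard on its own.
# Values are invented for illustration.
standard = {6: "pass"}                       # known value for part 6
decisions = {6: ["pass", "pass", "fail"]}    # rater A's three decisions on part 6

correct = sum(d == standard[part] for part, ds in decisions.items() for d in ds)
opportunities = sum(len(ds) for ds in decisions.values())
effectiveness = correct / opportunities      # 2/3, not 0 as under the MSA rule
print(correct, opportunities, effectiveness)
```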
The Misclassifications table shows the incorrect labeling. The rows represent the levels of the standard or accepted reference value. The columns contain the levels given by the raters.
Conformance Report
The Conformance Report shows the probability of false alarms and the probability of misses. The Conformance Report appears only when the rating has two levels (such as pass or fail, or 0 or 1).
The following descriptions apply (a short sketch after these definitions illustrates the two probabilities):
False Alarm
The part is determined to be non-conforming, when it actually is conforming.
Miss
The part is determined to be conforming, when it actually is not conforming.
P(False Alarms)
The number of parts that have been incorrectly judged to be nonconforming divided by the total number of parts that actually are conforming.
P(Miss)
The number of parts that have been incorrectly judged to be conforming divided by the total number of parts that are actually nonconforming.
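To make the two probabilities concrete, here is a hedged Python sketch (with invented data) that tallies them from (standard, rating) pairs, treating “fail” as the non-conforming category.

```python
# Hedged sketch: P(False Alarms) and P(Miss) from (standard, rating) pairs.
# "fail" is treated as the non-conforming category; data are invented.
pairs = [("pass", "pass"), ("pass", "fail"), ("pass", "pass"),
         ("fail", "fail"), ("fail", "pass"), ("fail", "fail")]

conforming = [(s, r) for s, r in pairs if s == "pass"]
nonconforming = [(s, r) for s, r in pairs if s == "fail"]

p_false_alarm = sum(r == "fail" for _, r in conforming) / len(conforming)   # 1/3
p_miss = sum(r == "pass" for _, r in nonconforming) / len(nonconforming)    # 1/3
print(p_false_alarm, p_miss)
```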
The Conformance Report red triangle menu contains the following options:
Change Conforming Category
Reverses the response category that is considered conforming.
Calculate Escape Rate
Calculates the Escape Rate, which is the probability that a non-conforming part is produced and not detected. The Escape Rate is calculated as the probability that the process will produce a non-conforming part times the probability of a miss. You specify the probability that the process will produce a non-conforming part, also called the Probability of Nonconformance.
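For example, if you enter a Probability of Nonconformance of 0.01 and the P(Miss) is 0.05, the Escape Rate is 0.01 × 0.05 = 0.0005 (these values are purely illustrative).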
Note: Missing values are treated as a separate category in this platform. To avoid this separate category, exclude rows of missing values in the data table.
Attribute Gauge Platform Options
The Attribute Gauge red triangle menu contains the following options:
Attribute Gauge Chart
Shows or hides the gauge attribute chart and the efficiency chart.
Show Agreement Points
Shows or hides the agreement points on the charts.
Connect Agreement Points
Connects the agreement points in the charts.
Agreement by Rater Confid Intervals
Shows or hides the agreement by rater confidence intervals on the efficiency chart.
Show Agreement Group Means
Shows or hides the agreement group means on the gauge attribute chart. This option is available when you specify more than one X, Grouping variable.
Show Agreement Grand Mean
Shows or hides the overall agreement mean on the gauge attribute chart.
Show Effectiveness Points
Shows or hides the effectiveness points on the charts.
Connect Effectiveness Points
Draws lines between the effectiveness points in the charts.
Effectiveness by Rater Confid Intervals
Shows or hides confidence intervals on the second chart in the Attribute Gauge Chart report. See Figure 9.5.
Effectiveness Report
Shows or hides the Effectiveness report. This report compares every rater with the standard, using the Kappa statistic.
See the JMP Reports chapter in the Using JMP book for more information about the following options:
Local Data Filter
Shows or hides the local data filter that enables you to filter the data used in a specific report.
Redo
Contains options that enable you to repeat or relaunch the analysis. In platforms that support the feature, the Automatic Recalc option immediately reflects the changes that you make to the data table in the corresponding report window.
Save Script
Contains options that enable you to save a script that reproduces the report to several destinations.
Save By-Group Script
Contains options that enable you to save a script that reproduces the platform report for all levels of a By variable to several destinations. Available only when a By variable is specified in the launch window.
Statistical Details for Attribute Gauge Charts
For the first chart in Figure 9.5 that plots all X, Grouping variables on the x-axis, the % Agreement is calculated as follows:
\[
\%\,\text{Agreement}_i \;=\; \frac{\displaystyle\sum_{j=1}^{k} \binom{x_{ij}}{2}}{\displaystyle\binom{N_i}{2}}
\]
where \(x_{ij}\) is the number of the \(N_i\) ratings of subject \(i\) that fall into level \(j\).
For the second chart in Figure 9.5 that plots all Y, Response variables on the x-axis, the % Agreement is calculated as follows:
\[
\%\,\text{Agreement}_{\text{rater}} \;=\; \frac{\displaystyle\sum_{i=1}^{n}\sum_{l=1}^{r_i} \bigl(\text{number of the other } N_i - 1 \text{ ratings of subject } i \text{ that match the rater's } l\text{th rating}\bigr)}{\displaystyle\sum_{i=1}^{n} r_i\,(N_i - 1)}
\]
Note the following:
n = number of subjects (grouping variables)
r_i = number of reps for subject i (i = 1, ..., n)
m = number of raters
k = number of levels
N_i = m × r_i, the number of ratings on subject i (i = 1, ..., n). This includes responses for all raters and repeat ratings on a part. For example, if subject i is measured 3 times by each of 3 raters, then N_i = 3 × 3 = 9.
For example, consider the following table of data for three raters, each having three replicates for one subject.
 
Table 9.1 Three Replicates for Raters A, B, and C

Replicate   A   B   C
1           1   1   1
2           1   1   0
3           0   0   0
Using this table, you can make these calculations:
For the single subject, the nine ratings consist of five 1s and four 0s, so:
\[
\%\,\text{Agreement} = \frac{\binom{5}{2} + \binom{4}{2}}{\binom{9}{2}} = \frac{10 + 6}{36} \approx 0.44
\]
\[
\%\,\text{Agreement [rater A]} = \%\,\text{Agreement [rater B]} = \frac{11}{24} \approx 0.458
\]
and
\[
\%\,\text{Agreement [rater C]} = \frac{10}{24} \approx 0.417
\]
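The following Python sketch (for illustration; it is not JMP code) applies the two formulas to the data in Table 9.1 and reproduces these values.

```python
# Hedged sketch (not JMP code) of the two % Agreement calculations for Table 9.1.
# ratings[rater] = that rater's ratings of the single subject, in replicate order.
from math import comb

ratings = {"A": [1, 1, 0], "B": [1, 1, 0], "C": [1, 0, 0]}
all_ratings = [value for reps in ratings.values() for value in reps]   # N_i = 9 ratings

# First chart: matching pairs of ratings divided by all pairs of ratings on the subject
level_counts = {level: all_ratings.count(level) for level in set(all_ratings)}
subject_agreement = sum(comb(c, 2) for c in level_counts.values()) / comb(len(all_ratings), 2)
print(round(subject_agreement, 3))                      # 16/36 = 0.444

# Second chart: each of the rater's ratings is compared with the other N_i - 1 ratings,
# and the total number of matches is divided by r_i * (N_i - 1)
for rater, reps in ratings.items():
    matches = 0
    for value in reps:
        others = all_ratings.copy()
        others.remove(value)                            # exclude this rating itself
        matches += others.count(value)
    denominator = len(reps) * (len(all_ratings) - 1)
    print(rater, round(matches / denominator, 3))       # A, B: 11/24 = 0.458; C: 10/24 = 0.417
```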
Statistical Details for the Agreement Report
The simple Kappa coefficient is a measure of inter-rater agreement.
\[
\hat{\kappa} = \frac{P_0 - P_e}{1 - P_e}
\]
where
\[
P_0 = \sum_{i} p_{ii}
\]
is the observed proportion of agreement, and
\[
P_e = \sum_{i} p_{i \cdot}\, p_{\cdot i}
\]
is the proportion of agreement expected by chance. Here \(p_{ij}\) denotes the proportion of subjects placed in level \(i\) by the first rating and level \(j\) by the second rating, and \(p_{i \cdot}\) and \(p_{\cdot j}\) are the corresponding row and column marginal proportions.
If you view the two response variables as two independent ratings of the n subjects, the Kappa coefficient equals +1 when there is complete agreement of the raters. When the observed agreement exceeds chance agreement, the Kappa coefficient is positive, and its magnitude reflects the strength of agreement. Although unusual in practice, Kappa is negative when the observed agreement is less than the chance agreement. The minimum value of Kappa is between -1 and 0, depending on the marginal proportions.
Estimate the asymptotic variance of the simple Kappa coefficient with the following equation:
\[
\widehat{\operatorname{var}}(\hat{\kappa}) = \frac{A + B - C}{(1 - P_e)^2\, n}
\]
where
\[
A = \sum_{i} p_{ii}\,\bigl[1 - (p_{i \cdot} + p_{\cdot i})(1 - \hat{\kappa})\bigr]^2
\]
\[
B = (1 - \hat{\kappa})^2 \sum_{i \neq j} p_{ij}\,(p_{\cdot i} + p_{j \cdot})^2
\]
and
\[
C = \bigl[\hat{\kappa} - P_e\,(1 - \hat{\kappa})\bigr]^2
\]
The Kappas are plotted and the standard errors are also given.
Note: The Kappa statistics in the Attribute Chart platform are shown even when the levels of the variables are unbalanced.
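As an illustration of the formula above (not JMP's code; the 2×2 counts are invented), the following Python sketch computes the simple Kappa from a two-way table of two ratings.

```python
# Hedged sketch: simple Kappa from a square contingency table of two ratings.
# Counts are invented for illustration.
import numpy as np

counts = np.array([[40, 5],
                   [10, 45]], dtype=float)   # rows: first rating's levels, cols: second rating's levels
p = counts / counts.sum()                    # cell proportions p_ij

p0 = np.trace(p)                             # observed agreement, sum of p_ii
pe = float(p.sum(axis=1) @ p.sum(axis=0))    # chance agreement, sum of p_i. * p_.i
kappa = (p0 - pe) / (1 - pe)
print(round(kappa, 3))
```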
Categorical Kappa statistics (Fleiss 1981) are found in the Agreement Across Categories report.
Given the following assumptions:
n = number of subjects (grouping variables)
m = number of raters
k = number of levels
r_i = number of reps for subject i (i = 1, ..., n)
N_i = m × r_i, the number of ratings on subject i (i = 1, 2, ..., n). This includes responses for all raters and repeat ratings on a part. For example, if subject i is measured 3 times by each of 2 raters, then N_i = 3 × 2 = 6.
x_ij = number of ratings of subject i (i = 1, 2, ..., n) that fall into level j (j = 1, 2, ..., k)

The individual category Kappa is as follows:
\[
\hat{\kappa}_j = 1 - \frac{\displaystyle\sum_{i=1}^{n} x_{ij}\,(N_i - x_{ij})}{\displaystyle\sum_{i=1}^{n} N_i\,(N_i - 1)\;\hat{p}_j\,\hat{q}_j}
\qquad\text{where}\quad
\hat{p}_j = \frac{\sum_{i=1}^{n} x_{ij}}{\sum_{i=1}^{n} N_i},
\quad \hat{q}_j = 1 - \hat{p}_j
\]
The overall Kappa is as follows:
\[
\hat{\kappa} = \frac{\displaystyle\sum_{j=1}^{k} \hat{p}_j\,\hat{q}_j\,\hat{\kappa}_j}{\displaystyle\sum_{j=1}^{k} \hat{p}_j\,\hat{q}_j}
\]
The variances of \(\hat{\kappa}_j\) and \(\hat{\kappa}\) are as follows:
\[
\operatorname{var}(\hat{\kappa}_j) = \frac{2}{n\,N\,(N - 1)}
\]
\[
\operatorname{var}(\hat{\kappa}) = \frac{2}{\bigl(\sum_{j=1}^{k} \hat{p}_j \hat{q}_j\bigr)^{2}\, n\,N\,(N - 1)}
\left[\Bigl(\sum_{j=1}^{k} \hat{p}_j \hat{q}_j\Bigr)^{2} - \sum_{j=1}^{k} \hat{p}_j \hat{q}_j\,(\hat{q}_j - \hat{p}_j)\right]
\]
The standard errors of \(\hat{\kappa}_j\) and \(\hat{\kappa}\) are shown only when there is an equal number of ratings per subject (that is, N_i = N for all i = 1, ..., n).
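The following Python sketch (illustrative only; the x_ij counts are invented and assume an equal number of ratings per subject) computes the individual category and overall Kappa statistics from the formulas above.

```python
# Hedged sketch of the categorical (Fleiss) Kappa statistics. x[i][j] is the number
# of the N_i ratings of subject i that fall in level j; counts are invented.
import numpy as np

x = np.array([[4, 2],      # subject 1: 4 ratings in level 0, 2 in level 1
              [6, 0],
              [1, 5],
              [3, 3]], dtype=float)
n, k = x.shape
N = x.sum(axis=1)                          # N_i, ratings per subject (here all equal 6)

p = x.sum(axis=0) / N.sum()                # p_j, overall proportion of ratings in each level
q = 1 - p

kappa_j = 1 - (x * (N[:, None] - x)).sum(axis=0) / ((N * (N - 1)).sum() * p * q)
kappa_overall = (p * q * kappa_j).sum() / (p * q).sum()
print(np.round(kappa_j, 3), round(float(kappa_overall), 3))
```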