CHAPTER 23

Product/Process Comparison: Statistical Tests of Significance

Scientific laws are not advanced by the principle of authority or justified by faith or medieval philosophy; statistics is the only court of appeal to new knowledge

– P. C. Mahalanobis

SYNOPSIS

A statistical test of significance is a technique of statistical analysis for finding out whether a certain observed result is due to ‘chance’ or to some deliberate cause, which one may know or may have to find out. This is crucial for recognising the ‘cause/factor’ that led to a result and for establishing that the result was not due to chance. A general rule is specified for conducting the tests. A number of worked examples are given to illustrate the use of the tests in almost all commonly encountered types of comparisons between two samples.

Statistical significance

Any result achieved may or may not be significant from the technical, economic or knowledge point of view. For example, a new drug formulation for a certain disease may not have any technical significance or add value to knowledge by way of giving newer insights into the curative process. But if it is claimed to be efficacious against the disease, the data on efficacy should show that the efficacy found is not due to chance. Assessing whether the efficacy found is due to chance is accomplished by subjecting the data to a process of statistical analysis called a ‘statistical test of significance’. What is the meaning, logic and reasoning behind this? The following example from common experience brings this out.

The sun rises in the east every day at about 6.00 AM, the time of rising varying over a range of, say, 5.55 to 6.05 AM. This is a normal occurrence and causes no surprise as long as the time of rising is within the range 5.55 to 6.05 AM.

Suppose the sun on a certain day rises at 7.00 AM. Does it not cause surprise? Definitely it does. Why? The reason is that the normal limit is 6.05 AM for sunrise but it happens at 7.00 AM, an unusual happening. It is unusual against the accepted background. It is also significant as it is unusual. This is the essence of statistical logic.

The expressions unusual and significant are expressed as probability, a value ranging from the minimum ‘zero’, indicating an event that cannot happen, to a maximum of ‘one’, indicating an event that is certain to happen. The event of the sun rising at 7.00 AM is unusual against the background of the rising time of 5.55 to 6.05 AM and hence it is significant. Unusual means its occurrence is rare. Rare occurrence means the probability of its occurrence is low. Thus, statistically, for an event to be significant, the probability of its occurrence must be low against a given background.

Test of significance specifies the background against which significance is assessed. Background is in the nature of statistical distributions (statistical laws of chance or laws of variation due to common causes applied to different types of situations) and the probability of occurrence of an event is assessed on the basis of statistical law.

Thus the constituents of a test of statistical significance are

  • Background against which the significance of a result on hand is to be tested. Background as well as result are to be numerically stated.
  • Statistical law governing the background.
  • Compare the result against the background and obtain the result of comparison computed as a difference or a ratio as specified beforehand.
  • Obtain the probability of getting the result of comparison due to chance.
  • Decision: accept that the result is obtained from the background when the probability of the result is not low, a case of ‘statistically’ not significant.
  • Decision: accept that the ‘result’ is not obtained from the background, a case of ‘statistically’ significant, when probability of getting the result is ‘low’.

This logic is developed into a procedure for applying tests of significance to different types of situations. Prior to this, it is necessary to appreciate the linkage of tests of significance to the continual improvement exercise. This is explained as follows. From the past data, it is known that the average yield is 95 per cent and its standard deviation (s.d.) is 4.8 per cent. One can encounter two situations, (A) and (B), as discussed in the following sections.

Situation A

Recent data spread over a 2-week period covering 25 batches shows an average yield of 97.5 per cent. This is an interesting result of technical significance. To accept this, it becomes necessary to know the statistical significance of the result; that is whether it is due to any special cause or common cause belonging to the background quantified as 95 per cent average and 4.8 per cent standard deviation (s.d.).

In the continual improvement exercise, one lands in interesting situations of this type during the data analysis stage. When the result obtained is found to be statistically significant, it gives the confidence to probe further into the possible technological factors responsible for such a result and come up with an appropriate conjecture/hypothesis. Once certain conjectures are formulated, this leads to situation B.

Situation B

As per the conjecture, yield improves when the infeed concentration is maintained at 36–37 per cent instead of the present level of 34–35 per cent. Accordingly, a dozen trials are conducted maintaining the concentration at 36–37 per cent. The average yield is found to be 98.5 per cent. Again, the same question as stated in situation A arises, and the test of significance indicates that the result of 98.5 per cent does not belong to the ‘background’ of 95 per cent average and 4.8 per cent s.d.; hence the result is not due to common causes but due to a special cause, in this case, the concentration at a new level.

In the continual improvement exercise, this phase is one of experimentation, trying out the conjectures well founded on the past data or special data that might have been collected based on the conjecture.

Experiments will have to be planned properly and the data of the experiment will have to be analysed in a manner that fits the design adopted in the experiments. These two together constitute another important area: design of experiments (DOE), the subject matter of Chapter 25.

Statistical laws, tests associated with statistical law: single- or double-sided test

The different statistical laws used in the tests of significance and the type of issues handled by each law are given in Table 23.1.

TABLE 23.1 List of Different Statistical Laws Commonly used and the Types of Enquiries Associated with Them

Statistical law   Types of enquiries associated with the statistical law

Normal law (bell-shaped curve)
a) Comparing a result (average) with a standard value, the standard and its s.d. both being known beforehand.
   Illustrative example: Exercise 1
b) Comparing the averages of two samples when the s.d. is known.
   Illustrative example: Exercise 2

t-distribution
a) Comparing a result (average) with a ‘standard’ value when the s.d. is not known.
   Illustrative example: Exercise 3
b) Comparing the averages of two samples when the s.d. of the background is not known.
   Illustrative example: Exercise 4
c) Testing of a linear relationship.
   Illustrative example: Chapter 24

χ²-distribution (chi-square distribution)
a) Comparing the s.d. based on a sample with the background (standard) s.d.
   Illustrative example: Exercise 5
b) Comparing frequencies distributed over different classifications.
   Illustrative example: Chapter 24

F-distribution
a) Comparing the variability of two samples.
   Illustrative example: Exercises 6 and 7
b) Comparing the average values of more than two samples.
   Illustrative example: Chapter 24

‘Outliers’ distribution
a) To test whether a suspected observation in a set of data belongs to it.
   Illustrative example: Exercise 8

Normal law for attribute data
a) Comparing a new level of ‘performance’ with the ‘standard’.
   Illustrative example: Exercise 9
b) Comparing two ‘performances’.
   Illustrative example: Exercise 10

Statistical significance: probability

Probability is a number. Its minimum value is zero and its maximum value is 1. ‘Zero’ refers to an event that cannot occur and ‘one’ to an event that is certain to occur. Values closer to zero correspond to events whose probability of occurrence is low; hence they are also referred to as rare events. Thus, if in a given frame of experience, knowledge or background a situation were to occur, and if its probability of occurrence is judged to be low, such a situation is to be perceived as a rare one in the given framework. Hence, some special causes/features not in the framework must have operated for that situation to happen, and this rarity is termed ‘statistically significant’. This statistical significance is judged on the basis of probability. The probability values chosen for judging significance are termed levels of significance, α (alpha). The commonly used levels of significance are α = 0.05 and 0.01.

Single- and double-sided tests

In Table 23.2, the meaning and application of single- and double-sided tests are explained using normal law for the purpose of illustration. The same explanation holds good for other statistical tests of significance also.

TABLE 23.2 Single- and Double-Sided Tests


 

Figure 23.1 Single-sided: higher the better (example: yield, life, purity)

Single-sided

Is the difference of 1% ‘really’ greater than zero? Thus, the direction of interest is ‘higher the better’, to the right as shown in Figure 23.1. The level of significance is 0.01, and this means that the probability of getting the observed result (in the present case, an increase in yield of 1.0%) by chance alone is 0.01 or less. The value Z0.01 shown in Figure 23.1 is obtained from the normal Table A, given in the statistical tables, corresponding to the level of significance 0.01. For judging the result on its statistical significance, note the following:

  • The result, in our example difference of +1.0%, is statistically significant when it falls in the shaded area of 0.01 (shown in Figure 23.1).

Working rule

The rule to judge the ‘statistical significance’ of a result is as follows; it has been set out as a full procedure in Table 23.3.

  1. From the data compute the test statistic specified
  2. Obtain from the statistical tables the value corresponding to the level of significance
  3. Compare (1) with (2)
  4. If the value of (1) is greater than (2), the result is statistically significant
  5. If the value of (1) is less than (2), the result is not statistically significant

Figure 23.2 Single-sided: the lower the value the better (example: rejection, rework, variability, impurity)

Single-sided

 

Figure 23.3 Double-sided: closer to the target is important

Double-sided

Test procedure

The test procedure for assessing the statistical significance of a result, covering single- and double-sided tests, is given in Table 23.3. Statistical tables for each type of statistical law mentioned in Table 23.2 are given in Annexure 23A.

The value to be computed from the data, referred to in Tables 23.2 and 23.3 as the ‘test statistic’, is appropriate to the type of enquiry associated with the statistical law. The statistic to be computed is given and explained in each of the illustrative examples 1 to 10. Each of the 10 exercises is worked as per the nine steps stated in Table 23.3; a short sketch of the working rule in code follows the table.

TABLE 23.3 Steps for Carrying out the Test of Statistical Significance

Step no. Description
1. State what is to be tested. This is postulated on the basis that the observed result of the sample does not indicate any change from the present. If it were to be one of postulating that the result represents a change, where is the need for a test?
2. Identify the statistical law relevant to (1) using the information in Table 23.1
3. Know the test statistic to be computed corresponding to (1) and (2)
4. Compute the value of statistic on the basis of data
5. Know the type of test: single-sided higher, single-sided lower or double-sided
6. Choose the level of significance, normally 0.05 or 0.01
7. Obtain the value from the appropriate table of the statistical law
8. Compare the value of the statistic obtained in (4) with that obtained in (7)
9. Conclude that the result is statistically significant if the value of the statistic is beyond the value obtained from the table
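
To make the working rule concrete, the following is a minimal sketch in Python (not part of the original text); the function name and its arguments are illustrative assumptions, and the comparison it makes is the one described in steps 8 and 9 of Table 23.3.

```python
# A minimal sketch of the working rule (steps 8 and 9 of Table 23.3):
# compare the computed test statistic with the critical ("table") value.
def judge_significance(statistic, critical_value, test_type):
    """Return True when the result is statistically significant.

    test_type is 'single-higher', 'single-lower' or 'double-sided'.
    For a double-sided test, critical_value is the (positive) value
    already looked up at alpha/2.
    """
    if test_type == "single-higher":
        return statistic > critical_value          # beyond the table value, upper side
    if test_type == "single-lower":
        return statistic < -abs(critical_value)    # beyond the table value, lower side
    if test_type == "double-sided":
        return abs(statistic) > critical_value
    raise ValueError("unknown test type")

# Example: computed Z = 2.60 against Z(0.01) = 2.33 from Table A
print(judge_significance(2.60, 2.33, "single-higher"))   # True -> significant
```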

Conclusion

Before one proceeds with gaining a grip over the use of the tools of statistical significance as well as DOE by getting familiar with the examples furnished in this chapter and in Chapters 24 and 25, the authors would like to mention the following points, based on the general attitude towards tools and techniques they have come across:

  1. The present day workforce is far more educated than what it used to be. They are eager to enlarge their knowledge base as applicable to their area of work.
  2. Tools and techniques that are considered tough, theoretical and inapplicable as on date turn out to be easily applicable sooner rather than later. Hence it is a retrograde step if training is confined to only the 7 (old and/or new) tools and 5S, as found in the practice of many organisations as well as consultants. This may serve the interests of the latter well, but definitely not those of the former.
  3. The tools dealt with in this book are the minimum one must be equipped with.
  4. There are no such things as techniques for the ordinary. In fact, the natural path is that techniques thought of as advanced sooner rather than later lose their intellectual shine and find themselves in the hands of many ordinary users. A good example at one end is that of the Indian Army, where jawans handle with ease, good thought and competence many sophisticated defence systems, and at the other, a middle-class household which handles with ease and competence a conglomeration of sophisticated kitchenware such as the microwave oven. With these observations, we reiterate our contention that all the tools and techniques, soft as well as hard, covered in this book are a minimum-must.

Illustrative example 1

Population mean (standard) = 95%
Population s.d. = 4.8%
Sample mean xbar = 97.5%
Sample size n = 25
Level of significance = 0.01

Step no. as per Table 23.3 Answer
1. Is 97.5% statistically higher than 95.0%, assuming that it is not so?
2. Normal law
3. Test statistic denoted by Z is Z = (xbar − μ)/(σ/√n)
where μ = 95.0%, σ = 4.8%, n = 25 and xbar = 97.5%
4. Computed value of test statistic
Z = (97.5 − 95.0)/(4.8/√25) = 2.5/0.96 = 2.60
5. The type of test: single-sided, higher
6. Level of significance = α = 0.01
7. Value of Z0.01 obtained from ‘normal’ distribution Table A is 2.33
8. Value of statistic computed from the data is greater than that obtained from the table
9. There is reason to believe that the average yield of the sample is higher than the existing one. Any special reason for this needs to be probed. Alternately, if the result obtained from the sample is due to any deliberate action, it shows that the action taken is a factor affecting the yield
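
The arithmetic of this example can be cross-checked in a few lines of Python. This is a minimal sketch, not from the book; it assumes the SciPy library is available to supply the critical value that the book reads from Table A.

```python
import math
from scipy.stats import norm

mu, sigma = 95.0, 4.8        # background (population) mean and s.d.
xbar, n = 97.5, 25           # sample mean and sample size
alpha = 0.01

z = (xbar - mu) / (sigma / math.sqrt(n))   # step 4: test statistic
z_crit = norm.ppf(1 - alpha)               # step 7: Z(0.01), about 2.33

print(round(z, 2), round(z_crit, 2), z > z_crit)   # 2.6 2.33 True -> significant
```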

Illustrative example 2

Breaking strengths obtained from two similar processes, A and B, are as follows. The process s.d. is known to be 2.5. Do the two processes differ?

Process A   9, 4, 10, 7, 9, 10, 11, 12
Process B   14, 9, 13, 12, 13, 8, 10, 11, 12, 14

Step no. as per Table 23.3 Description
1. The average strength of processes A and B are 9.0 and 11.6 respectively. Are these two averages statistically different on the assumption that they are not different?
2. Normal law
3. Test statistic denoted by Z is
Z = (x2bar − x1bar)/(σ√(1/n1 + 1/n2))
where x1bar is the average of the first sample = 9.0, n1 the size of the first sample = 8, x2bar and n2 are, respectively, 11.6 and 10 for the second sample, and σ the s.d. of the population (standard) = 2.5
4. Computed value of statistic is
Z = (11.6 − 9.0)/(2.5 × √(1/8 + 1/10)) = 2.6/1.19 = 2.19
5. The type of test: two-sided
6. Level of significance = α = 0.01; For two-sided test, α = 0.005
7. Value of Z0.005 obtained from Normal table (Table A) is 2.57
8. Value of statistic computed from the data is not higher than that obtained from the table
9. There is reason to believe that the two processes give rise to the same level of breaking strength
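
A similar sketch for the two-sample comparison with known s.d. is given below; again SciPy is assumed for the table look-up, and the variable names are illustrative.

```python
import math
from statistics import mean
from scipy.stats import norm

process_a = [9, 4, 10, 7, 9, 10, 11, 12]
process_b = [14, 9, 13, 12, 13, 8, 10, 11, 12, 14]
sigma, alpha = 2.5, 0.01

# two-sample Z statistic with the population s.d. known
z = abs(mean(process_a) - mean(process_b)) / (
    sigma * math.sqrt(1 / len(process_a) + 1 / len(process_b)))
z_crit = norm.ppf(1 - alpha / 2)   # double-sided: Z(0.005), about 2.58

print(round(z, 2), round(z_crit, 2), z > z_crit)   # 2.19 2.58 False -> not significant
```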

Illustrative example 3

Sample average = 65 psi
Sample size = 11
Standard = 60 psi
Sample s.d. = 10 psi

Step no. as per Table 23.3 Answer
1. Do the sample mean of 65 psi and the population (standard) mean of 60 psi statistically differ from each other, on the assumption that they do not?
2. t-distribution with an associated degree of freedom (d.f.)
3. Test statistic denoted by t is t = (xbar − μ)/(s/√n), with d.f. = n − 1
where xbar is the sample average = 65 psi, μ the standard mean = 60 psi, n the sample size = 11 and s the sample s.d. = 10 psi
4. Computed value of statistic
t = (65 − 60)/(10/√11) = 5/3.02 = 1.66
5. The type of test: double-sided
6. Level of significance = α = 0.01, α for double-sided = 0.005
7. Value of t0.005 with 10 d.f., i.e. t0.005, 10 = 3.169, from the table of the t-distribution (Table B)
8. The computed value of the statistic is less than that obtained from the table
9. There is reason to believe that the sample value of the average is as good as the existing standard value
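
A corresponding sketch for the one-sample t test, with the critical value taken from SciPy's t-distribution instead of Table B (an assumption of tooling, not part of the original text):

```python
import math
from scipy.stats import t

xbar, mu, s, n = 65.0, 60.0, 10.0, 11
alpha = 0.01                               # double-sided, so alpha/2 per tail

t_stat = (xbar - mu) / (s / math.sqrt(n))
t_crit = t.ppf(1 - alpha / 2, df=n - 1)    # t(0.005, 10), about 3.17

print(round(t_stat, 2), round(t_crit, 2), abs(t_stat) > t_crit)   # 1.66 3.17 False
```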

Illustrative example 4

Octane numbers of two fuels, Fuel A and B, are as follows. Do they differ from each other?

Fuel A   81, 84, 79, 76, 82, 85, 88, 84, 80, 79, 82, 81
Fuel B   76, 74, 78, 79, 80, 79, 82, 76, 80, 79, 82, 78

Step no. as per Table 23.3 Answer
1. The average octane numbers of fuels A and B are 81.8 and 78.7, respectively. Are these two statistically different, assuming that they are not?
2. t-distribution
3. Test statistic denoted by t is t = (x1bar − x2bar)/(sp√(1/n1 + 1/n2)), with d.f. = n1 + n2 − 2
where sp² = [(n1 − 1)s1² + (n2 − 1)s2²]/(n1 + n2 − 2) is the pooled variance
and s1² and s2² are the variances of the first and second samples, respectively: s1² = 10.2, s2² = 6.06
4. Computed value of test statistic is t = 2.17
5. The type of test: two-sided
6. Level of significance = α = 0.01; α for double-sided = 0.005
7. Value of t0.005; 22 from t table is 2.819
8. Value of test statistic 2.17 is less than that obtained from the table, 2.819
9. There is no reason to believe that the samples differ from one another w.r.t. octane value
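
The pooled two-sample t test is also available ready-made in SciPy. The sketch below is an assumption of tooling rather than part of the book: scipy.stats.ttest_ind with equal_var=True computes the pooled-variance statistic with d.f. = n1 + n2 − 2 and reports a two-sided p-value, which is compared with α = 0.01 instead of looking up Table B.

```python
from scipy.stats import ttest_ind

fuel_a = [81, 84, 79, 76, 82, 85, 88, 84, 80, 79, 82, 81]
fuel_b = [76, 74, 78, 79, 80, 79, 82, 76, 80, 79, 82, 78]

# equal_var=True gives the pooled-variance t statistic with d.f. = n1 + n2 - 2
t_stat, p_value = ttest_ind(fuel_a, fuel_b, equal_var=True)
print(round(t_stat, 2), round(p_value, 4), p_value < 0.01)   # significant only if p < alpha
```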

Illustrative example 5

Sample size = 30; s.d. based on the sample = 0.73 g; population (past data) s.d. = 1.00 g. Is the sample variation lower than the population s.d.?

Step no. as per Table 23.3 Answer
1. Is there reason to believe that the sample s.d. is statistically less than that of the population s.d. assuming it not to be so?
2. χ2-distribution with associated d.f.
3. Test statistic denoted by χ² is
χ² = (n − 1)s²/σ²
where s² is the sample variance, σ² the population variance and n the sample size
4. Computed value of test statistic is
χ² = 29 × (0.73)²/(1.00)² = 15.45
5. The type of test: single-sided lower (smaller the better)
6. Level of significance = α = 0.01
7. Value of χ² for α = 0.01 on the lower side of the distribution with 29 d.f., from the table of χ² (Table C), is 14.256
8. Computed value of χ2 is not less than the lower value obtained from the table
9. There is no reason to believe that the system that generated the sample is superior in its variability to the existing one
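
A sketch of the same chi-square comparison in Python, with the lower 1% point taken from SciPy instead of Table C (tooling assumed, not part of the original text):

```python
from scipy.stats import chi2

n, s, sigma = 30, 0.73, 1.00
alpha = 0.01

chi2_stat = (n - 1) * s**2 / sigma**2      # about 15.45
chi2_lower = chi2.ppf(alpha, df=n - 1)     # lower 1% point, about 14.26

print(round(chi2_stat, 2), round(chi2_lower, 2), chi2_stat < chi2_lower)
# 15.45 14.26 False -> variability not shown to be lower
```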

A typical situation to be encountered in calibration is as follows:

  1. A new instrument is to be purchased on the claim that its precision is superior to that of the existing instrument. How to establish the claim? The above example (5) deals with this.
  2. An instrument is repaired or overhauled. The agency claims that its precision has improved. How to establish this? The following example (6) deals with this.

Illustrative example 6

Sample 1: s1² = 0.1690, n1 = 10

Sample 2: s2² = 0.2890, n2 = 10

Is there reason to believe that the variability in the second sample is higher than that in the first sample?

Step no. as per Table 23.3 Answer
1. Is there reason to believe that the variance of the second sample is statistically higher than that of the first, assuming that it is not?
2. F-distribution
3. Test statistic denoted by F is
F = s2²/s1² (the larger variance divided by the smaller)
with d.f.1 and d.f.2 corresponding to the degrees of freedom of the higher variance followed by the lower variance
4. Computed value of
F = 0.2890/0.1690 = 1.71
5. The type of test: single-sided higher
6. Level of significance = α = 0.01
7. Value of F0.01, 9, 9 = 5.35 from F-distribution (Table E)
8. Computed value of statistic F is lower than that obtained from the table of F-distribution
9. There is no reason to accept that variability in the system from which the second sample is obtained is higher than that corresponding to the first sample
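
The single-sided F comparison can likewise be sketched in Python; SciPy supplies the value otherwise read from Table E, and the variable names are illustrative.

```python
from scipy.stats import f

s1_sq, n1 = 0.1690, 10
s2_sq, n2 = 0.2890, 10
alpha = 0.01

f_stat = s2_sq / s1_sq                               # larger variance on top, about 1.71
f_crit = f.ppf(1 - alpha, dfn=n2 - 1, dfd=n1 - 1)    # F(0.01; 9, 9), about 5.35

print(round(f_stat, 2), round(f_crit, 2), f_stat > f_crit)   # 1.71 5.35 False
```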

Illustrative example 7

One criterion in evaluating oral anaesthetics for use in general dentistry is the variability in the length of time between injection and complete loss of sensation in the patient, termed the effect delay time. Data on two types of anaesthetics are given below, and it is intended to know whether the two types differ in their effect delay time.

Type   Sample size (n)   Sample variance (s²)
A 31 1296
B 41 784
Step no. Answer
1. Is there reason to believe that the variabilities of the two types are statistically different from each other, assuming that they are not?
2. F-distribution
3. Test statistic F is given by
F = sA²/sB² (the larger variance divided by the smaller)
with d.f.1 the degrees of freedom of the higher variance followed by d.f.2 the degrees of freedom of the lower variance
4. Computed value of
F = 1296/784 = 1.65
5. Type of test: two-sided test, lower as well as higher
6. Level of significance α = 0.02; α for double-sided = 0.01
7.
  1. Value of F0.01, 30, 40 corresponding to the higher side (upper tail) is 2.20
  2. Value of F0.99, 30, 40 corresponding to the lower side (lower tail) is obtained as follows as per statistical rule

In general,
F(1 − α; ν1, ν2) = 1/F(α; ν2, ν1)
or
F0.99, 30, 40 = 1/F0.01, 40, 30
From the above relation it follows that
F0.99, 30, 40 = 1/2.30 = 0.43
where 2.30 is the value obtained from the F-distribution corresponding to F0.01, 40, 30
8. The computed value of the statistic F on the basis of the data is 1.65. This value lies between 0.43, the lower limit, and 2.20, the upper limit, i.e. in the region of non-significance
9. There is reason to accept that the variabilities of the two types are not statistically different

Note: Only in the case of the F-distribution does the manipulation shown above have to be adopted to obtain the lower-tail value from the F tables.
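
The following sketch reproduces the two-sided F test, including the reciprocal rule used above to obtain the lower-tail limit; SciPy is assumed for the F-table values, and the variable names are illustrative.

```python
from scipy.stats import f

s_a_sq, n_a = 1296, 31
s_b_sq, n_b = 784, 41
alpha = 0.01                                   # 0.01 in each tail (overall 0.02)

f_stat = s_a_sq / s_b_sq                                   # about 1.65
upper = f.ppf(1 - alpha, dfn=n_a - 1, dfd=n_b - 1)         # F(0.01; 30, 40), about 2.20
lower = 1.0 / f.ppf(1 - alpha, dfn=n_b - 1, dfd=n_a - 1)   # 1 / F(0.01; 40, 30), about 0.43

print(round(f_stat, 2), round(lower, 2), round(upper, 2))
print(lower < f_stat < upper)                  # True -> region of non-significance
```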

Illustrative example 8

Dixon test for outlying observations   This procedure has the advantage that an estimate of the s.d. is not needed to use it. The steps are as follows:

Step 1. Rank the data in the order of increasing numerical value, i.e.,

X1 < X2 < X3 < … < Xn−1 < Xn

Step 2. Decide whether the smallest X1, or the largest Xn, is suspected to be an outlier

Step 3. Select the risk you are willing to take for false rejection

Step 4. Compute the appropriate ratio (statistic) depending on the sample size and on whether the smallest or the largest value is suspect:

τ10 = (X2 − X1)/(Xn − X1) for the smallest, or (Xn − Xn−1)/(Xn − X1) for the largest (n = 3 to 7)
τ11 = (X2 − X1)/(Xn−1 − X1) for the smallest, or (Xn − Xn−1)/(Xn − X2) for the largest (n = 8 to 10)
τ21 = (X3 − X1)/(Xn−1 − X1) for the smallest, or (Xn − Xn−2)/(Xn − X2) for the largest (n = 11 to 13)
τ22 = (X3 − X1)/(Xn−2 − X1) for the smallest, or (Xn − Xn−2)/(Xn − X3) for the largest (n = 14 to 25)

Step 5. Compare the ratio (statistic) calculated with the values in the table. If the calculated ratio is greater than the tabulated value, rejection may be made with the tabulated risk

Example   Given the ranked data set:

10.45, 10.47, 10.47, 10.48, 10.49, 10.50, 10.50, 10.52, 10.53, 10.58

The value 10.58 is the suspect. Sample size is 10. Hence, τ11 is to be chosen.

  1. Calculate τ11 as per the formula in step 4
    τ11 = (Xn − Xn−1)/(Xn − X2) = (10.58 − 10.53)/(10.58 − 10.47) = 0.05/0.11 = 0.455
  2. At 5 per cent risk of false rejection, τ11 = 0.477 for n = 10 (Table F). The calculated value 0.455 is less than this.
  3. Conclusion: No reason to reject the value 10.58.
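
A sketch of the Dixon τ11 calculation in Python follows; the critical value 0.477 is the one quoted from Table F, and the variable names are illustrative.

```python
data = sorted([10.45, 10.47, 10.47, 10.48, 10.49, 10.50, 10.50,
               10.52, 10.53, 10.58])

# tau-11 for the largest value (used for n = 8 to 10): (Xn - Xn-1) / (Xn - X2)
tau11 = (data[-1] - data[-2]) / (data[-1] - data[1])
critical = 0.477                     # Table F, n = 10, 5% risk of false rejection

print(round(tau11, 3), tau11 > critical)   # 0.455 False -> no reason to reject 10.58
```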

Grubbs test for outlying observations   This test is useful for making decisions on the rejection of outliers. The procedure for using it is as follows:

Step 1. Rank the data in the order of increasing numerical value.

X1 < X2 < X3 < … < Xn−1 < Xn

Step 2. Decide whether the smallest X1, or the largest Xn, is suspected to be an outlier.

Step 3. Estimate the standard deviation, s, of the data set (using all data).

Step 4. Calculate the appropriate value of T as follows:

T = (Xn − xbar)/s if the largest value Xn is the suspect, or T = (xbar − X1)/s if the smallest value X1 is the suspect, where xbar is the mean of all the data

Step 5. Select the risk you are willing to take for false rejection.

Step 6. Compare T with the values tabulated in the table depending on n and the acceptable risk. If T is larger than the tabulated value, rejection may be made with the associated risk.

Example   Given the ranked data set:

10.45, 10.47, 10.47, 10.48, 10.49, 10.50, 10.50, 10.52, 10.53, 10.57

The value 10.57 is the suspect.

  1. Calculate xbar and s
    xbar = 10.498, s = 0.035
  2. Calculate T
    T = (Xn − xbar)/s = (10.57 − 10.498)/0.035 ≈ 2.06
  3. Compare T with values in Table G
    At 5% risk, for n = 10, T = 2.176.
  4. Conclusion: no reason to reject 10.57.
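
A corresponding sketch of the Grubbs statistic; the standard library statistics module is used for the mean and the (n − 1)-denominator s.d., and the critical value 2.176 is the one quoted from Table G.

```python
from statistics import mean, stdev

data = [10.45, 10.47, 10.47, 10.48, 10.49, 10.50, 10.50, 10.52, 10.53, 10.57]

xbar = mean(data)                 # about 10.498
s = stdev(data)                   # sample s.d. with n - 1 in the denominator
T = (max(data) - xbar) / s        # about 2.06

critical = 2.176                  # Table G, n = 10, 5% risk
print(round(T, 2), T > critical)  # 2.06 False -> no reason to reject 10.57
```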

Illustrative example 9

In the context of Six Sigma, the following example would be more instructive. The process yield at the existing level is 98 per cent; this is taken as the existing standard. Certain measures were taken to improve the yield, and one hundred batches were made with the new measures. The yield from these batches was, on an average, 99 per cent. Defining a batch with a yield of less than 98 per cent as defective, 16 batches were found to be defective. On this basis, can it be concluded that the measures taken did contribute to the increased yield? The defect rate corresponding to the 98 per cent standard was 20 per cent.

Step no. as per Table 23.3 Answer
1. Is there reason to believe in the efficacy of new measures to improve yield, assuming it not to be so?
2. Normal distribution
3. Test statistic Z is given by
Z = (r/n − p)/√(pq/n)
where r is the number non-conforming in the sample = 16, n the sample size = 100, p the proportion not conforming to the standard = 0.20 and q = 1 − p = 0.80
4. Computed value of
Z = (0.16 − 0.20)/√(0.20 × 0.80/100) = −0.04/0.04 = −1.0
5. Type of test: one-sided, lower
6. Level of significance α = 0.05
7. Value of Z0.05 from the table on the lower side is −1.645
8. The calculated value of Z (−1.0) is greater than the value obtained from the table (−1.645), i.e. it does not fall in the lower rejection region
9. There is no reason to believe that the actions taken are effective in improving the yield
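
A sketch of this one-sided proportion test in Python, with SciPy assumed for the lower-side critical value otherwise read from Table A:

```python
import math
from scipy.stats import norm

n, r = 100, 16            # batches made, batches below the 98% yield standard
p, q = 0.20, 0.80         # standard defect rate and its complement
alpha = 0.05

z = (r / n - p) / math.sqrt(p * q / n)   # (0.16 - 0.20) / 0.04 = -1.0
z_crit = norm.ppf(alpha)                 # lower-side Z(0.05), about -1.645

print(round(z, 2), round(z_crit, 3), z < z_crit)   # -1.0 -1.645 False -> not significant
```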

Illustrative example 10

Two new formulations A and B are meant to reduce the blood pressure level. They were tried on a set of similar laboratory animals. The results are as follows. Is there reason to believe that the two formulations differ in their effect on reducing blood pressure level?

Sample No. of animals No. with favourable response
A 100 (n1) 71 (f1)
B 90 (n2) 58 (f2)
Step no. as per Table 23.3 Answer
1. Is there reason to believe that the two compounds differ in their ability to reduce blood pressure level assuming it not to be so?
2. Normal distribution
3. Test statistic Z is given by
Z = (p1 − p2)/√(pq(1/n1 + 1/n2))
where p1 = f1/n1 = 0.71 is the proportion with favourable response in the first sample of size n1, p2 = f2/n2 = 0.644 in the second sample of size n2, p = (f1 + f2)/(n1 + n2) = 0.679 is the pooled proportion, and q = 1 − p
4. Calculated value of Z is derived as follows:
Z = (0.71 − 0.644)/√(0.679 × 0.321 × (1/100 + 1/90)) = 0.066/0.068 = 0.97
5. Type of test: two-sided
6. Level of significance α = 0.05; for a two-sided test, α = 0.025
7. Value of Z0.025 from Table A is 1.96
8. The calculated value of Z is less than the corresponding value obtained from the table
9. There is reason to believe that the two formulations do not differ in their effect on reducing the blood pressure
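
Finally, a sketch of the two-proportion comparison with the pooled proportion; SciPy is assumed for the critical value Z0.025, and the variable names are illustrative.

```python
import math
from scipy.stats import norm

n1, f1 = 100, 71
n2, f2 = 90, 58
alpha = 0.05

p1, p2 = f1 / n1, f2 / n2             # 0.71 and about 0.644
p = (f1 + f2) / (n1 + n2)             # pooled proportion, about 0.679
q = 1 - p

z = (p1 - p2) / math.sqrt(p * q * (1 / n1 + 1 / n2))
z_crit = norm.ppf(1 - alpha / 2)      # Z(0.025) = 1.96

print(round(z, 2), round(z_crit, 2), abs(z) > z_crit)   # 0.97 1.96 False -> no difference
```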

Annexure

Statistical tables

Ref. Title
Table A Standard normal probability distribution
Table B t-distribution
Table C Area in the right tail of χ2-distribution
Table D Values of F for F-distribution with 0.05 of the area in the right of tail
Table E Values of F for F-distribution with 0.01 of the area in the right of tail
Table F Critical values of criteria based on order statistics for testing an outlier
Table G Critical values of T1 and T2 for testing an outlier
Table H Table of constants and formulas for control charts

Table A

Areas under the standard normal probability distribution between the mean and the positive values of z are shown here.

ch23-ufig2

Example: To find the area under the curve between the mean and a point 2.2 standard deviations to the right of the mean, look up the value opposite 2.2 in the table; 0.4861 of the area under the curve lies between the mean and a z-value of 2.2.

ch23-utab1

Table B: t-Distribution

ch23-ufig3

Example: To find the value of t which corresponds to an area of 0.10 in both tails of the distribution combined, when there are 19 degrees of freedom, look under the 0.10 column and proceed down to the 19 degrees of freedom row; the value is 1.729.

ch23-utab2

Table C

Area in the right tail of a Chi-square (χ2) distribution is shown here.

ch23-ufig4

Example: In a Chi-square distribution with 11 degrees of freedom, if we want to find the appropriate Chi-square value for 0.20 of the area under the curve (the shaded area in the right tail) we look under the 0.20 column in the table and proceed down to the 11 degrees of freedom row; the appropriate Chi-square value is 14.631.

ch23-utab3

Table D

Values of F for F-distribution with 0.05 of the area in the right tail are shown here.

ch23-ufig5

Example: For a test at a significance level of 0.05 where we have 15 degrees of freedom for the numerator and 6 degrees of freedom for the denominator, the appropriate F value is found by looking under the 15 degrees of freedom column and proceeding down to the 6 degrees of freedom row; there we find the appropriate F value to be 3.94.

ch23-utab4

Table E

Values of F for F-distribution with 0.01 of the area in the right tail.

ch23-ufig6

Example: For a test at a significance level of 0.01, where we have 7 degrees of freedom for the numerator and 5 degrees of freedom for the denominator, the appropriate F value is found by looking under the 7 degrees of freedom column and proceeding down to the 5 degrees of freedom row; there we find the appropriate F value to be 10.5.

ch23-utab5

Table F

Critical values of criteria based on order statistics for testing an outlier.

ch23-utab6

Table G

Critical values of T1 or T2 for testing an outlier.

Sample size, n Significance level
5% 1%
3 1.153 1.155
4 1.463 1.492
5 1.672 1.749
6 1.822 1.944
7 1.938 2.097
8 2.032 2.221
9 2.110 2.323
10 2.176 2.410
11 2.234 2.485
12 2.285 2.550
13 2.331 2.607
14 2.371 2.659
15 2.409 2.705
16 2.443 2.747
17 2.475 2.785
18 2.504 2.821
19 2.532 2.854
20 2.557 2.884
21 2.580 2.912
22 2.603 2.939
23 2.624 2.963
24 2.644 2.987
25 2.663 3.009
30 2.745 3.103
35 2.811 3.178
40 2.866 3.240
45 2.914 3.292
50 2.956 3.336

Table H

Table of constants and formulas for control charts.

ch23-utab7
ch23-ufig7

Summary of statistical tests of significance

The summary of statistical tests of significance dealt with in this chapter and in Chapter 24 is as follows.

ch23-utab8