Chapter 7. Solving Problems: Improve and Control

A well-defined problem is said to be half solved. During the first three phases of DMAIC, we gained knowledge about the process activities, process performance, process problems, and potential causal relationships between the problems and their sources. In Chapter 6, “Understanding Problems,” we arrived at a clear definition of the problems and a reduced number of critical variables that form the likely recipe of the solution.

In this chapter, we work with the remaining key variables and develop a recipe for improvement. To reduce the number of variables further and develop an optimized recipe, one can draw on one’s knowledge of the process, or experiment to determine the effects of multiple variables. To evaluate the effects of experimental settings, various combinations, or alternate solutions, one must understand statistical testing methods. Having developed a solution to the process problem, sustaining the gain becomes a challenge. Many times, we know the changes required in the process; however, keeping up with the desired changes is difficult. We must create a system to consistently practice the new methods of treating patients, doing surgery, or practicing medicine. The Improve and Control phases have been designed to develop a desired solution and sustain the improvement as long as needed.

The problem-solving portion of DMAIC requires tools for creativity, evaluation, verification, implementation, monitoring, change management, and communication. The following tools can be applied to solve a problem and realize benefits over time:

• Improve phase

• Systems thinking

• Testing of hypothesis

• Comparative experiments

• Design of experiments

• Control phase

• Control charts

• Documentation

• Change management

• Communication

• Reward and recognition

Improve Phase

The Improve phase has been designed to identify actions that remedy the root cause of waste or inefficiency in a process or department. People tend to jump to action without taking time to understand the problem and the process. The Six Sigma methodology, however, promotes a systematic approach to problem solving, avoiding our tendency to jump to conclusions and take rash actions that must be reversed later.

Systems Thinking

Six Sigma is perceived by many as a data-driven, expensive methodology that can only be learned or used after six months of intensive training at a cost of more than $10,000 per person. People think Six Sigma implies a lot of statistics and that just by applying rigorous statistical techniques, one can fix all problems. If that were true, most problems could have been solved with statistical software alone. I saw this at a company where a statistical consultant ran a design of experiments and, based upon his statistical analysis, concluded that a certain process was infeasible. I have found that learning everything factual about the process helps in formulating the right solution quickly. As someone has said, “Let the product and the process do the talking.” We must listen to our products and processes and make decisions based on facts and intuition together, rather than on either alone. When applying systems thinking to problem solving, one should solve the problem at least one level above the visible symptoms. For example, if patients are improperly identified during a routine examination, similar mistakes could be made in surgery too. Thus, we must first investigate where else a similar problem may exist, gather facts, and understand the nature and scope of the problem.

One cannot apply the Six Sigma methodology to an organization that has a leadership crisis or lacks a clear vision or purpose. Achieving excellence will require more than Six Sigma in specific areas; one must look at the entire organization in order to make substantial improvement. Synergy created at this level has a chance to produce dramatic results.

In applying systems thinking, one must first clearly understand the expected outcome. Then, the details can be worked out to get there. We must clearly visualize the results; in other words, if an organization is implementing Six Sigma, leadership must be able to see clearly what the organization would look like at the end of the Six Sigma journey. Would it be a one-time measure of success, or would it be continual improvement at an aggressive rate? We should be able to anticipate specific needs for achieving the future state. Thus, we can commit to solving the problem at the system or process level, instead of at the symptom level. That’s where root cause analysis becomes more effective.

Testing of Hypothesis

Testing of hypothesis is an inferential statistical technique used to make a statement about an activity or process based on its sample output. A hypothesis is a statement about a potential change event (normally an improvement, a new treatment, or a drug based on an experiment). Validating the statement is called “testing of hypothesis.” The technique involves setting up two hypotheses: one about the expected change, and the other about the remaining possibilities. The statement about the expected experimental outcome is called the alternate hypothesis, and the one about the remaining possibilities of no interest is called the null hypothesis. For example, when evaluating a new cancer medicine or chemotherapy dose during clinical trials, one can hypothesize that the new treatment will require a shorter time than the current treatment. This is the alternate hypothesis. If the new treatment causes no improvement, or takes longer, then the treatment is no good; thus it is null. In other words, we are not interested in the null outcome. The hypotheses can be written as

Null Hypothesis: H0 = New Treatment Time is no different from Current Treatment Time.

Alternate Hypothesis: Ha = New Treatment Time is less than Current Treatment Time.

Together, the null and alternate hypotheses cover all possibilities and do not overlap. The outcome of any hypothesis test leads to one of the following conclusions:

• “Rejecting the null hypothesis,” or “Not rejecting the alternate hypothesis”

• “Not rejecting the null hypothesis,” or “Rejecting the alternate hypothesis”

This conclusion is based upon the statistical significance of the evidence. Statistical significance implies that two process means or average values are so far apart that the difference is unlikely to have occurred by chance. Thus, it can be concluded that the improvement is due to the planned change rather than random variation. Figure 7.1 shows three cases of improvement.

Image

Figure 7.1. Testing of hypothesis

In case 1, Δ (delta) represents the difference between the current process and the new process. The difference is about one standard deviation. One can see that there is a significant overlap between the current and new processes. It implies that the outcome could be the same even with the new process; in other words, there is a good chance that the new process may not produce improved results. Thus, we conclude that there is no difference, and we do not reject the null hypothesis. In case 2, the difference between the current process and the new process is a little more than two standard deviations. This implies that the probability of overlap does not exceed 5 percent. In other words, with almost 95 percent confidence, one can say that the new process is better than the current process. In case 3, the difference exceeds nine standard deviations; thus there is virtually no likelihood of any overlap. One can say that the new process is clearly different from the current process. In such a case, one does not even need statistical validation. So, case 1 implies rejecting the alternate hypothesis; case 2 rejects the null hypothesis; and case 3 rejects the null without even a test. Of course, in order to create evidence of testing for regulatory requirements, one may still conduct a formal test of the hypothesis.

When testing a hypothesis, mistakes can be made. If we conclude that the new treatment is better than the current process when in reality there is no difference, a Type I error is made. The risk associated with a Type I error is called alpha (α) risk; it is the risk associated with wrongly rejecting the null hypothesis. If we conclude that there is no improvement when in reality there is some, we have missed the improvement and committed a Type II error. The risk associated with a Type II error is called beta (β) risk. Typical risk values associated with Type I and Type II errors are 5 percent and 10 percent, respectively. For patient-safety-related tests, the α-risk could be set at 1 percent or much less. For routine fever-type tests, one can set the α-risk similar to industrial risks at 5 percent. For healthcare professionals who use statistical software to run hypothesis tests, the reported p-value is compared against the chosen α-risk.

Conducting Testing of Hypothesis

To validate a claim or a new treatment, follow these steps:

1. Make a statement about the expected effect of the new treatment or trial (Ha).

2. With remaining options, make a statement of no change (H0).

3. State the alternate hypothesis, which implies the type of test. Whether less is better (<), more is better (>), or different is better (≠) establishes the type of test. Based on the sign of inequality, the test is set up as a left-tail, right-tail, or two-tail test.

Image

4. Establish the risk associated with the test, typically 5 percent or less. In the case of a two-tail test (≠), the risk is divided between both sides, giving 2.5 percent risk on each side.

5. Collect data from trials or experiments.

6. Compute test statistics to evaluate the outcome.

7. Evaluate significance by comparing the test statistic with the theoretical value.

8. Draw a conclusion about rejecting the null or the alternate hypotheses.
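The steps above can be sketched in Python with SciPy. The treatment-time data, sample sizes, and 5 percent α below are purely illustrative, and a two-sample t-test stands in for whichever test Table 7.1 would prescribe:

```python
# Steps 1-8 sketched for the treatment-time example; all data are hypothetical.
from scipy import stats

current = [5.1, 5.4, 4.9, 5.3, 5.6, 5.0, 5.2, 5.5]   # current treatment times (hours)
new     = [4.3, 4.6, 4.1, 4.4, 4.7, 4.2, 4.5, 4.0]   # new treatment times (hours)

# Steps 1-3: Ha: new mean < current mean (left-tail test); H0: no difference
alpha = 0.05                                          # step 4: risk of the test

# Step 6: compute the test statistic (steps 5 and 7 are the data and lookup)
t_stat, p_two_tail = stats.ttest_ind(new, current)
p_left = p_two_tail / 2 if t_stat < 0 else 1 - p_two_tail / 2  # one-tail p-value

reject_null = p_left < alpha                          # step 8: draw a conclusion
```

With these illustrative numbers the new treatment averages about 0.9 hours less, far more than the sample variation, so the null hypothesis is rejected.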

Calculating Test Statistic

To determine whether the new treatment is better than the current treatment, one looks at both the average performance and the consistency of the treatment. Therefore, tests have been designed for various situations. Table 7.1 summarizes types of tests, their applicability, and evaluation criteria.

Table 7.1. Parametric Tests

Image

Tables 7.1 and 7.2 show that there are parametric tests for variable data and nonparametric tests for attribute data. The details of each test become a statistician’s job; however, statistical software packages, such as Minitab, SPSS, JMP, Statgraphics, SAS, and many more, can quickly analyze the data and draw conclusions. Even when using software, a conceptual understanding of testing of hypothesis is important.

Table 7.2. Nonparametric Tests

Image

Statistical tests evaluate the difference between process means, or the ratio of inconsistencies (variances), of two treatments. Accordingly, the test statistics for means and variances are computed as follows:

Test statistic for means = (estimator – value under the null hypothesis)/(standard deviation of the estimator)

= (New Process Mean – Current Process Mean)/(standard error of the difference between means)

Test statistic for variances = Larger Variance/Smaller Variance

Once we calculate the test statistic, we determine the expected, or theoretical, test statistic. To do so, we select a test, look up the corresponding table, and determine the theoretical value, which is described as a probability, a coefficient, or a number of standard deviations. Then we compare the test statistic with the theoretical value. If the test statistic exceeds the theoretical value, we conclude that the change is significant, or the new treatment is significantly better; thus, we reject the null hypothesis. At this point, a business decision can be made about whether to conduct more trials or accept the findings for developing new treatments.
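As a minimal sketch of this comparison, assuming hypothetical summary values and a large-sample Z-test at 5 percent risk:

```python
# Compare a calculated test statistic with its theoretical (critical) value.
# The summary numbers are hypothetical; a right-tail Z-test is assumed.
from scipy import stats

new_mean, cur_mean = 4.35, 5.25      # sample means (hours); improvement = reduction
se_diff = 0.12                       # assumed standard error of the difference

z = (cur_mean - new_mean) / se_diff              # calculated test statistic
z_critical = stats.norm.ppf(1 - 0.05)            # theoretical value (about 1.645)

significant = z > z_critical         # exceeds the theoretical value: reject H0
```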

Figure 7.2 shows regions of significance.

Image

Figure 7.2. Regions of significance

The region between ±two standard deviations represents about 95 percent of outcomes and is called the common region. The region at the tail end that represents the probabilities associated with the α-risk is called the critical region, or the region of significance. When we test a hypothesis or run an experiment, we evaluate whether the new process mean falls in the region of significance. Normally, when we evaluate a difference, we simply judge the extent of the difference and, based on our feeling, draw a conclusion about whether it is enough. When determining statistical significance, however, we always evaluate the difference with respect to the variation in the process. Thus, if the difference between the means of the current and new treatments exceeds two standard deviations, it is considered significant. This significance is more accurately calculated using the statistical tests in Table 7.1 and Table 7.2.

Figure 7.2 depicts probability regions for testing of hypothesis. The common region accounts for the probability associated with variation from random or uncontrolled causes, and the region of significance is the complement of the common region, for example, the area under the tail ends beyond two standard deviations. In the healthcare industry, nonclinical or routine operations carry risk similar to industrial applications; thus, the critical area can be designated beyond two standard deviations, as shown in Figure 7.3. However, when the applications are critical, or a treatment has severe effects, the region of significance can start beyond three standard deviations, as shown in Figure 7.4. In other words, one must see an improvement of at least two standard deviations with the new treatment for a routine process, and of three standard deviations for a critical process. For extremely life-threatening applications, one must consider actual probabilities using statistical software, along with help from an expert.

Image

Figure 7.3. Testing of Means

Image

Figure 7.4. Testing of Means

Similar to validating improvement in means, processes or treatments are sometimes optimized to improve their consistency and the confidence in the treatments. In such cases, the objective is not so much to improve the mean performance as to reduce the standard deviation. For evaluating improvement in variances, there are tests such as the F-test and the χ2 test. The method of testing for improved consistency is similar to hypothesis testing for means, except a different test statistic is used. Figure 7.5 shows two processes where the current process has a larger variance than the new process. One can see that the new process is more predictable, as its distribution is tighter than that of the current process. Similarly, Figure 7.6 shows a case where the new process ends up having the larger variance.

Image

Figure 7.5. Testing of Variance

Image

Figure 7.6. Testing of Variance

F-Test

The standard procedure to compare variances is the F-test. To perform an F-test, we set up the following hypotheses:

H0: σm² = σp²

Ha: σm² ≠ σp²

where

• σm² is the variance of the samples obtained from the modified process conditions, and

• σp² is the variance of the samples obtained from the present process conditions.

The test statistic that is used to compare the variances is called F-test statistic and is defined as follows:

F = S1²/S2²

where

• S1² is the larger variance, and

• S2² is the smaller variance.

The calculated F-test statistic is compared with the critical value, Fcritical, from the F-table or through software. The recommended approach is to use statistical software to evaluate the two variances.
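The F-test can be sketched in a few lines; the samples below are hypothetical, with the modified process deliberately tighter than the present one:

```python
# F-test sketch: ratio of larger to smaller sample variance, compared with
# the critical F value at alpha = 0.05. All data are hypothetical.
import statistics
from scipy import stats

present  = [5.2, 4.6, 5.9, 4.1, 5.5, 4.3, 5.8, 4.8]   # wider spread
modified = [5.0, 5.1, 4.9, 5.2, 5.0, 4.8, 5.1, 5.0]   # tighter spread

s2_large = statistics.variance(present)                # larger sample variance
s2_small = statistics.variance(modified)               # smaller sample variance
F = s2_large / s2_small                                # F-test statistic

df1, df2 = len(present) - 1, len(modified) - 1         # degrees of freedom
F_critical = stats.f.ppf(1 - 0.05, df1, df2)           # theoretical value

improved_consistency = F > F_critical                  # reject H0 if True
```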

Evaluating Multiple Treatments Using ANOVA

Similar to the testing of hypothesis, the Analysis of Variance, or ANOVA, test, developed by Sir Ronald Fisher in the 1920s, allows evaluation of multiple treatments. Again, the process of using ANOVA to evaluate multiple treatments is similar to testing a hypothesis. Here the objective is to evaluate whether the treatments differ. In ANOVA, we test whether the difference among the means of multiple treatments is more than the variation within the treatments themselves. In the case of two treatments, we evaluate the difference with respect to the standard deviation. In the case of ANOVA, we evaluate the variance among means with respect to the variance within treatments using the F-test. Thus, we analyze variance to evaluate differences among multiple means.

Hypotheses for ANOVA can be written as follows:

H0: μ1 = μ2 = μ3 = ... = μk

Ha: At least one of the μ’s is different from the others.

ANOVA can be used for testing multiple drugs for the same disease, multiple brands of the same medicine, or multiple suppliers of various items.

In performing ANOVA, we calculate sums of squares (measures of variance). The total sum of squares (TSS) can be partitioned into two components: the sum of squares between treatments (SSB) and the sum of squares within treatments (SSW):

TSS = SSB + SSW

The ANOVA test depends on the following three parameters:

• Size of the difference between group means

• Sample size in each group. Generally speaking, larger samples will tend to give more reliable data.

• The variance of the dependent variable.

Conditions for applicability of ANOVA include the following:

• We must ensure that observations in each group are randomly taken.

• The population variances are equal.

• The treatments must be independent of each other.

• The observations within each treatment must be independent of each other.

• The population within each treatment must be (approximately) normally distributed with roughly equal standard deviations.

When conducting ANOVA analysis, you should follow these steps:

1. Use appropriate software for performing the calculation.

2. Enter data appropriately and select the appropriate data set.

3. Perform calculation for the F-value.

4. Compare the F-value with the critical F-value.

The F-ratio is calculated as follows:

Ftest = (observed variation between the group averages)/(variation within the groups)

The calculated F-value is compared with the theoretical value of F to determine whether the variances are significantly different. F-tables are published in statistics books and available on the Internet; however, it is recommended that statistical software be used to perform ANOVA, saving practitioners a lot of time. When the calculated F-value exceeds the theoretical value, the difference among the means is considered significant, and the null hypothesis is rejected. Otherwise, the variance among treatment means is considered statistically insignificant; we conclude that all means are roughly equal and thus reject the alternate hypothesis.
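A one-way ANOVA following these steps can be sketched with SciPy; the three treatment groups and their response values below are hypothetical:

```python
# One-way ANOVA sketch for three hypothetical treatments.
from scipy import stats

drug_a = [12.1, 11.8, 12.4, 12.0, 11.9]
drug_b = [12.2, 12.0, 11.7, 12.3, 12.1]
drug_c = [14.0, 13.8, 14.3, 13.9, 14.1]   # noticeably different mean

# f_oneway computes the F-value (between-group vs. within-group variation)
F_value, p_value = stats.f_oneway(drug_a, drug_b, drug_c)

# p below alpha = 0.05 -> at least one treatment mean differs; reject H0
at_least_one_differs = p_value < 0.05
```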

Comparative Experiments: Present Versus Modified

Once we understand how to evaluate a hypothesis and draw conclusions based on statistical evidence, we can use the method for evaluating the outcome of a clinical trial, the testing of new drugs, or a new treatment. To conduct a comparative experiment, you run a two-sample experiment. One sample is a control group representing the present process, and the other is an experimental group representing the modified process. We collect new data for both the control group and the experimental group. The sample size depends upon several factors, such as cost, availability, and feasibility. Depending upon the sample size, one can choose a t-test for smaller samples (fewer than 30) or a Z-test for larger samples (30 or more).

The comparative t-test statistic is calculated as follows:

t = (X̄m – X̄p)/(Spl × √(1/nm + 1/np))

where

• X̄m is the mean of the modified process;

• X̄p is the mean of the present process; and

• Spl is the pooled standard deviation calculated as

Spl = √[((nm – 1)Sm² + (np – 1)Sp²)/(nm + np – 2)]

where

• Sm is the standard deviation of the samples from the modified process;

• Sp is the standard deviation of the samples from the present process; and

• nm and np are the numbers of samples from the modified and present processes, respectively.

Nonparametric Output

There may be situations in which we have very few samples to run a trial, and the nature of the distribution is unknown. Under these circumstances, we assume that the populations of the two sets of samples (present and modified conditions) have similar probability distributions.

In such experiments, we rank the output and evaluate the overlap among the data using Tukey’s No Overlap End Count technique. The two possible outcomes are either no overlap or some overlap. When there is no overlap, the modified process is significantly better than the present process. When there is an overlap, the decision of significance is based on the specified number of nonoverlapping end counts, as shown in Table 7.3. The alpha risk (0.05, 0.01, or 0.001) is chosen based on the seriousness of the implications of an incorrect decision.

Table 7.3. No Overlap End Counts

Image

Let us say that one hospital decides to conduct a monthly review of its performance with the doctors and staff and wants to evaluate its effect on the reduction of the re-admission rate. We record observations for six months with and without monthly reviews. The data are tabulated in Table 7.4. The data are then sorted from the most desirable to the least desirable condition. Figure 7.7 shows the sorted data with the associated process condition (no monthly reviews versus monthly reviews). We find a total nonoverlapping end count of 7 (4 in the most desirable region and 3 in the least desirable region), which exceeds the value specified in Table 7.3. Hence, we conclude with 95 percent confidence that the monthly review of performance has produced an improvement.

Image

Figure 7.7. Ranking of re-admission

Table 7.4. Mean Re-admission Rates for Six Months

Image
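The end-count calculation is simple to sketch. The re-admission rates below are hypothetical and, unlike the chapter’s example, show complete separation, so the end count equals the total number of observations; the 95 percent threshold of 7 is assumed from Table 7.3:

```python
# End-count sketch: pool and rank the observations, then count how many
# values at each extreme come from a single group. Data are hypothetical.
without_reviews = [12.4, 11.8, 12.9, 12.1, 11.9, 12.6]   # present condition
with_reviews    = [10.2, 10.8, 10.5, 11.0, 10.1, 10.6]   # modified condition

pooled = sorted(
    [(v, "with") for v in with_reviews] + [(v, "without") for v in without_reviews]
)

def end_count(seq):
    """Count leading items from the same group at one end of the ranking."""
    first_group, count = seq[0][1], 0
    for _, group in seq:
        if group != first_group:
            break
        count += 1
    return count

# count nonoverlapping values at the best end plus the worst end
total_end_count = end_count(pooled) + end_count(pooled[::-1])
significant = total_end_count >= 7   # assumed threshold from Table 7.3
```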

Full Factorials Experiments

Moving beyond the Analyze phase into the Improve phase implies further reduction of variables, or evaluation of alternate solutions. Evaluating the effects of alternate solutions and their variables can be a complicated process that requires planned experimentation, called Design of Experiments (DOE). DOE is a statistical method introduced by R. A. Fisher in England in the 1920s to study the effects of multiple variables simultaneously. In his early applications, Fisher wanted to find out how much rain, water, fertilizer, sunshine, and so on were needed to produce the best crop. Similarly, we may want to find a certain dose of a certain medicine at a specific frequency for an “x” number of days, and so on. Pharmaceutical companies, when developing new medicines, conduct such experiments for best results in drug manufacturing as well as in clinical trials.

In the absence of such a technique, we would conduct experiments by changing one “thing” at a time, which would take forever to complete an evaluation of the new treatment. DOE methods allow us to accelerate the evaluation of a treatment or drug with many components concurrently. There are several methods for designing experiments, such as full factorials, fractional factorials, Taguchi methods, and special-purpose DOE methods. Fractional factorial and Taguchi methods, which are subsets of full factorials, are used when the list of components is very long and we want to screen some out at the beginning. Once the number of components of a treatment is reduced to two or three, the full factorial method can be applied. Full factorial implies that all combinations of the specified components, or variables, of a treatment are tried in order to draw a conclusion. If we have reduced the variables down to a small number, and the cost of experimentation is not very high, this is an excellent way to quickly come up with the final recipe with a lot of confidence. With fractional factorial experiments, we sacrifice some confidence for economy.

To run a full factorial experiment, the number of trials equals the number of levels raised to the power of the number of variables (Levels^Variables), where the level is the number of values of each variable one wants to try. Normally, the number of levels is kept at two to contain the number of trials. The two levels represent the “present” and “modified” values of the variables.

To conduct a full factorial experiment, follow these steps:

1. Define the experiment objective. Here, the objective is to determine how the length of stay in the hospital and the patient’s age affect patient satisfaction. Thus, patient satisfaction is our response variable, and length of stay and age are our two independent variables.

2. Identify the key variables and their levels. The two variables are Stay and Age, each run at two levels. Stay will be tried at 3 and 5 days; Age will be represented as 50 years for patients between 30 and 50 years of age, and 70 years for patients between 50 and 70 years of age.

3. Design the experiment. To determine the various combinations, a matrix is drawn as shown in Figure 7.8. This shows four combinations (Levels^Variables = 2² = 4). To determine the sample size for α = 5 percent and β = 10 percent, use the relationship n = (8φ/Δ)², where φ is the standard deviation of the process and Δ is the required change to declare the new process better than the current one. Let the improvement be specified at Δ = 2φ; thus, the sample size for evaluation will be (8φ/2φ)² = 16. Because we are able to combine two cells per treatment, the sample size per cell could be 16/2 = 8. In most cases, sample sizes are much larger in the healthcare industry.

Image

Figure 7.8. Experiment Design (Minitab® Statistical Software)

While running these trials, we randomize their order. In other words, we pick a combination at random and gather data; then we pick another combination, gather data, and so on. By randomizing, we minimize the effect of unplanned variables such as a nurse’s personality, staff behavior, food quality, room condition, and so on. Table 7.5 represents our experiment.

Table 7.5. Randomized Order of Trials

Image

4. Analyze the data for main effects and interactions. The experimental data are analyzed to determine the effects of the Stay and Age variables. The effect of each variable is calculated as follows:

Main Effect of Age = Average (Trial 3, Trial 4) – Average (Trial 1, Trial 2)

Main Effect of Stay = Average (Trial 2, Trial 4) – Average (Trial 1, Trial 3)

Interaction Effect = Average (Trial 1, Trial 4) – Average (Trial 2, Trial 3)
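The effect calculations above are simple averages and differences. In the sketch below, the four trial-average satisfaction scores are hypothetical, chosen to mirror the chapter’s findings (small main effects, strong interaction):

```python
# Main and interaction effects for the 2x2 design, from hypothetical
# trial-average satisfaction scores.
t1 = 80   # Trial 1: low Age, short Stay
t2 = 60   # Trial 2: low Age, long Stay
t3 = 64   # Trial 3: high Age, short Stay
t4 = 82   # Trial 4: high Age, long Stay

age_effect  = (t3 + t4) / 2 - (t1 + t2) / 2   # main effect of Age (small, positive)
stay_effect = (t2 + t4) / 2 - (t1 + t3) / 2   # main effect of Stay (small, negative)
interaction = (t1 + t4) / 2 - (t2 + t3) / 2   # interaction effect (large)
```

With these numbers the interaction dwarfs both main effects, which is exactly the situation where a single Stay policy for all age groups would mislead.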

Figures 7.9 and 7.10 display the main and interaction effects of Age and Stay, respectively. One can see that, according to this experiment, Age has a small positive correlation and Stay a small negative correlation with patient satisfaction. However, the two have a strong interaction with each other. Note that older patients are more satisfied with the longer stay, but younger patients feel otherwise. Based on this analysis, one may establish two separate policies for the two age groups in order to maximize patient satisfaction, instead of having one Stay policy for all age groups. The actions for implementing the improvement arising out of the experiment can be captured in the Improvement Action Plan, as illustrated in the sample forms at the end of this chapter.

Image

Figure 7.9. Main effects of Age and Stay (Minitab® Statistical Software)

Image

Figure 7.10. Interaction effects of Age and Stay (Minitab® Statistical Software)

A benefit of using statistical software is that it reduces the time spent analyzing the experiment, leaving more time to develop a solution.

Control Phase

One of the main challenges in implementing Six Sigma is realizing benefits by maintaining the improved process. Many times, we find a solution but have difficulty implementing the change. The impulse to move on to the next project without seeing successful implementation and controls in operations must be checked. Closing a project using the Control phase helps achieve a return on the investment in the project and provides an opportunity to celebrate. The control chart is the most commonly used statistical tool in the Control phase, alongside documentation of changes, communication, recognition, and reward.

Control Charts

Walter Shewhart of Bell Telephone Laboratories developed control charts in 1924 while studying variation in manufacturing. Shewhart identified variation as the enemy of quality. He recognized two types of variation: random and assignable, or natural and unnatural. In order to maintain the natural behavior of the process, he identified potential disruptions based on patterns in the process data and developed charts to display such occurrences. Thus, the control chart is a tool that identifies situations in which a process loses its statistical control, or natural behavior.

The control chart compares variation in a process with properly established control limits, based on the inherent variation in the process. If the variation is excessive or unexpected based upon the laws of probability, it raises a flag for investigation and necessitates remedial actions. The remedial actions may include shutting down the process, or quarantining, sorting, or inspecting the input or output, such as supplies. The intent of remedial actions is to restore the natural, or statistical, behavior of the process or activity. Not every process requires control charts; they should be used when it makes economic sense to do so.

Normal Distribution and Control Charts

A control chart compares current performance with the probability of the outcome based on the normal distribution. It is expected that about two-thirds of the time the output will fall within one standard deviation of the target performance; approximately 95 percent of the output should fall within two standard deviations of the target; and almost 100 percent should fall within three standard deviations of the target. In other words, the intent of the control chart is to preserve the bell-shaped distribution of the process output.
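The percentages quoted above follow directly from the normal distribution; a short sketch using the error function confirms them:

```python
# Probability that a normally distributed value falls within k standard
# deviations of the mean, via the error function.
import math

def within_k_sigma(k):
    """P(|X - mean| <= k * sigma) for a normal distribution."""
    return math.erf(k / math.sqrt(2))

# within_k_sigma(1), (2), (3) give roughly 68, 95, and 99.7 percent
```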

To use control charts, data are collected from samples at a predetermined frequency, plotted on a chart, and evaluated for statistical behavior. To evaluate the sample data, a set of rules has been developed. By applying the rules, one determines whether the statistical behavior of a process continues or has been disrupted. If a process is not in statistical control, it is said to be in an out-of-control condition, and an investigation is initiated to understand the sources of assignable variation.

Types of Control Charts

Depending upon the type of data, attribute or variable, control charts are classified into two categories: attribute control charts and variable control charts. Table 7.6 and Table 7.7 show commonly used control charts.

Table 7.6. Attribute Control Charts

Image

Table 7.7. Variable Control Charts

Image

All control charts are based on the same principle of statistical variation; thus, they all compute control limits using similar formulas. It is recommended that statistical software be used for constructing control charts, as it highlights out-of-control conditions automatically without one having to remember the rules.

Implementing a control chart requires consideration of sample size, sampling frequency, typical value, standard deviation, ownership, and response to out-of-control conditions. The weakest link in the successful implementation of control charts is responding appropriately to out-of-control conditions. The response may include shutting down the process, which operations personnel normally do not like. Another challenge is determining where and when to implement control charts. One should preferably select input process conditions to ensure good output performance. Control charts should be applied to processes that are critical, must be maintained at acceptable performance, and are more prone to variation and thus likely to break down. In addition, processes whose causation is well understood and that have been stabilized are good candidates to be monitored with control charts.

Constructing Control Charts

When constructing a control chart, remember the following:

• Select a process parameter at its input, in-process, or output.

• Select a suitable control chart.

• Develop the check sheet to collect data.

• Collect samples and record the data.

• Enter the data into the statistical software, which performs the necessary computations.

• Display mean and range charts, and evaluate for out-of-control conditions.

• Review and determine necessary actions to adjust the process.
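The computational steps above can be sketched for an X-bar and R chart, assuming subgroups of size 5 and the standard table constants A2 = 0.577, D3 = 0, and D4 = 2.114 for that subgroup size. The body-temperature readings below are hypothetical:

```python
# Constants for subgroups of size 5, from the standard control chart tables.
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    xbars = [sum(s) / len(s) for s in subgroups]    # subgroup means
    ranges = [max(s) - min(s) for s in subgroups]   # subgroup ranges
    xbarbar = sum(xbars) / len(xbars)               # grand mean (centerline)
    rbar = sum(ranges) / len(ranges)                # average range
    # X-bar chart limits: grand mean +/- A2 * R-bar.
    xbar_limits = (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar)
    # R chart limits: D3 * R-bar and D4 * R-bar.
    r_limits = (D3 * rbar, rbar, D4 * rbar)
    return xbar_limits, r_limits

# Hypothetical subgroups of five temperature readings each (degrees F).
subgroups = [
    [98.6, 98.8, 98.4, 98.7, 98.5],
    [98.9, 98.6, 98.7, 98.5, 98.8],
    [98.4, 98.6, 98.5, 98.7, 98.6],
]
(xlcl, xcl, xucl), (rlcl, rcl, rucl) = xbar_r_limits(subgroups)
```

Statistical software automates these computations and plots both charts, but the arithmetic underneath is no more than this.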

Interpreting Control Charts

Interpreting control charts correctly is necessary for adjusting the process effectively. Eight rules, applied after each data point is plotted, test whether the process data remain random and normally distributed. The out-of-control conditions are listed here:

• Any point is beyond control limits.

• Two out of three points in a row beyond two sigma on the same side of the centerline.

• Four out of five points in a row beyond one sigma on the same side of the centerline.

• Fifteen points in a row within one sigma.

• Eight points in a row with none within one sigma of the centerline, falling on both sides of the centerline.

• Nine points in a row on one side of centerline.

• Six points in a row increasing or decreasing.

• Fourteen points in a row alternating up and down.
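Three of these rules can be sketched in code as follows, assuming the centerline and sigma are already known; the function name is an illustrative assumption, and a complete implementation would cover all eight rules:

```python
# Checks a plotted series against three of the eight out-of-control rules.
def out_of_control(points, center, sigma):
    signals = []
    # Rule: any point beyond the three-sigma control limits.
    if any(abs(p - center) > 3 * sigma for p in points):
        signals.append("point beyond control limits")
    # Rule: nine points in a row on one side of the centerline.
    for i in range(len(points) - 8):
        w = points[i:i + 9]
        if all(p > center for p in w) or all(p < center for p in w):
            signals.append("nine in a row on one side")
            break
    # Rule: six points in a row steadily increasing or decreasing.
    for i in range(len(points) - 5):
        w = points[i:i + 6]
        if all(w[j] < w[j + 1] for j in range(5)) or \
           all(w[j] > w[j + 1] for j in range(5)):
            signals.append("six in a row trending")
            break
    return signals
```

For example, six steadily rising points that stay inside the control limits would trigger only the trend signal, flagging possible component wear-out before any single point falls outside the limits.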

One can see that these rules are designed to test the randomness of the data and thus a bell-shaped distribution. An out-of-control incident must be investigated to determine its cause so that appropriate action can be taken. If a point is beyond the three-sigma control limits, it could be due to a data entry error, a measurement system error, or a sudden change in process performance. The control chart rules look for sources of nonrandomness, including trends, shifts, patterns, and changes in distribution. Thus, if there is a shift, the process needs to be adjusted; if there is a trend, component wear-out may be the cause; if there is a pattern, the cause may be the operator or some breakdown in the system; and if the distribution is disrupted, some unintentional change has probably occurred in the inputs to the process.

Control charts have been the most misused tool in industry due to a lack of willingness to adjust the process to ensure continuing performance. For the sake of short-term production, long-term productivity is sacrificed. Such short-sightedness can be avoided through proper understanding of the concepts and correct interpretation of the rules.

Figure 7.11 shows an example control chart for a patient’s body temperature.

Image

Figure 7.11. An example of a control chart for a patient’s body temperature (Minitab® Statistical Software)

Documentation

In order to maintain statistical control of the new and improved process, one must ensure consistency in practice. Consistency is achieved through effective documentation, which identifies purpose, needs, critical checkpoints, target conditions, and the handling of nonconformities. Documenting practices minimizes opportunities for errors through reviews and instills a sense of structure for compliance and consistency. A well-documented process highlights the “right” things to do for effectiveness and requires us to do our activities efficiently.

In order to document our practices effectively for driving virtual perfection in healthcare, we must understand the 4P model of process management.2 The basic premise of ISO 9001-like management systems has been to promote process thinking. The combination of Six Sigma and such management systems needs to pay attention to quality inputs and preparation, consistency of critical activities, performance targets, and the ability to detect inconsistencies for continual improvement. Figure 7.12 shows the 4P model, which enhances the commonly known PDCA (Plan, Do, Check, Act) model of process management:

Prepare represents ensuring good input to activities according to Ishikawa’s 4Ms (material, machines, methods, and manpower or people).

Perform implies the process is well understood, and its activities are well defined for error-free and streamlined execution.

Perfect, normally misunderstood as mere acceptability, relates to the target performance to be achieved. If an activity is not on target, it must be changed.

Progress allows us to strive toward the ideal performance by reducing inconsistency in the execution.

Image

Figure 7.12. An example of process thinking for ER

Table 7.8 shows examples of critical parameters to control the process.

Table 7.8. Examples of Critical Process Parameters for Control of ER Processes

Image

Training

Continual emphasis on learning better practices and training for improvement tends to provide the best return on investment. When a new drug or treatment is to be introduced in the marketplace, we must plan to learn its impact through proper training and education. Training could be delivered through self-study, classroom instruction, or computer-based media. Training topics may involve caring for patients, caring for customers, improving listening and interpersonal skills, improvement tools and techniques, or even new research in the healthcare field.

A good training program can benefit an organization in many ways, for example:

• Better understanding of the commitment to excellence, not just the throughput

• Better knowledge of the process and expected performance

• Continual cost reduction through simplification and consistency

• Knowledge to handle unusual situations

• Knowing how to get help when needed

• Reduced resistance to changes

Leadership training is just as important as the training of staff or physicians. Leadership is a learned trait, and leadership skills can be improved through continuing training. Leadership training can cover communicating with employees and stakeholders, new technology and treatments, new business models or ways of thinking, reengineering, Six Sigma, teamwork, time management, goal setting, performance review, recognition, and leading through personal commitment. Of course, some training in golf would not hurt, because an improvement in one’s game improves personal satisfaction.

Communication

The need for open communication between leadership and employees cannot be overstated. Employees need to hear from their leaders for direction, for a pat on the back, and simply for interaction. Too many organizations exist where leadership avoids talking to employees; an open-communication policy practiced through closed doors does not work. We must be genuinely interested in listening to employees’ ideas. Employees normally deliver what is asked of them; if they have not delivered the expected outcome, it is likely that communication broke down between leadership and employees. Employees need to hear the expectations of leadership, and leadership needs to build credibility with employees by working toward common goals. Lack of communication is usually an indication of other problems, quality issues being just one of them.

Employee participation can be increased through formal and informal communication aimed at achieving virtual perfection using Six Sigma. Establishing clear objectives and rewards for achieving excellence can lead to extraordinary performance. Working communication builds trust and demonstrates leadership’s respect for, and dependence on, employees, which is rewarding to employees and hence to the organization.

Business Review

The most significant part of sustaining the Six Sigma initiative is conducting periodic business reviews of activities and results, and celebrating successes. If successes are not being celebrated regularly, business objectives are most likely being missed. The Six Sigma initiative can be reviewed for progress and its impact on performance along with the operations and financial reviews. The review must follow a standard process to ensure consistency of expectations, participation, and adequacy of the review process. Results of the review must be shared with all employees at the earliest possible time. In the absence of active participation by the executives, Six Sigma will most likely be de-prioritized in the interest of fighting short-term fires. The review challenges the status quo, recognizes successes, and generates action items to drive progress.

Having implemented Six Sigma successfully, executive management must become its best spokespeople for publicizing successes. It is critical that leadership makes the successes visible inside as well as outside the company. Sharing successes in internal and external forums breeds more success. Companies can use newsletters, websites, conferences, articles, or forums for publicizing their successes and learning from peers’ experience.

Endnotes

1. Gupta, Praveen (2004), The Six Sigma Performance Handbook: A Statistical Guide to Optimized Results, McGraw Hill, NY.

2. Gupta, Praveen (2005), From PDCA to PPPP, Quality Digest, http://accelper.com/pdfs/From%20PDCA%20to%20PPPP.pdf.

Sample Healthcare Excellence Project Forms: Solving the Problem

Image
Image
Image
Image
Image
Image
Image
Image