6 A Just Culture in Your Organization

How do you build a just culture in your organization?

A large number of organizations I have met believe (or perhaps just hope) that it is a matter of applying a simple formula. That all they need to do, for instance, is distinguish between human error, at-risk behavior, and recklessness—categories that imply increasing degrees of willfulness and disregard.1 I can understand that the idea of a formula, and such a three-step model of badness, is vaguely reassuring. It is like matching symptoms to a disease.

But, as we have seen throughout the book, categorizing other people's behavior, particularly on the basis of assumptions about their intentions, consciousness, and choices, is only apparently simple. Attempts to map situated and often invisible aspects of human expertise such as decision making, insight, awareness, or consciousness onto discrete categories will always be a leap of faith.

As soon as we draw lines between different categories of behavior, we create a problem of boundaries. When does an error become at-risk behavior? Or, for that matter, reckless? Where exactly does an act cross that line? The examples in this book should have made that clear by now. These are matters of judgment and degree, not data and dichotomy. If the same act can fall into both categories at the same time, then having those categories is not very useful. The whole point of categorization is that acts are uniquely one or the other, because it is that assignment that lays out what are legitimate responses or countermeasures.2

What is particularly primitive is the belief that "human error" or "at-risk behavior" or "recklessness" are stable categories of human performance. That the recklessness, risks, or erroneousness of those acts form an inherent property of other people's performance, and that our own interpretations have nothing to do with us seeing that performance as such. But saying that others' behavior is erroneous or risky or reckless is our judgment of what other people do, not a description of the essence of their behavior.3 In other words, if we categorize behavior, we do nothing more than categorize our own judgments.

Remember from the preface the example in which the dominant plot became that of individual wrongdoing. It involved a British surgeon who had moved to New Zealand where seven patients died during or immediately after his operations.4 The complex case could have been decomposed in any number of ways, including that of procedural shortcomings in cross-national transition of physicians, or perpetual staffing and budgetary challenges which forced the surgeon to operate with the assistance of medical students instead of qualified residents.5 Each of these stories would have its own imperative repertoire of countermeasures appended to it—organizational, financial, educational6—and could, incidentally, be more constructive to future patient safety than punishing an individual surgeon.7 The punitive countermeasures that were chosen, however, not only legitimated but (re-)produced an account of individual wrongdoing.

Thus, just responses to bad events are not a matter of matching the inherent properties of undesirable behavior with appropriate pigeonholing and a fitting punitive level. It involves the hard work of deciding what story to tell and whether to see something as reckless, as at-risk, or as erroneous. Merely supplying the categories leaves the issue unresolved. And it also creates an ethical issue: Assigning an act to a category will forever be a judgment. It will always remain somebody's attribution. Thus it will forever be contestable by those who judge things differently.

This is not a matter of relativism, it is a matter of complexity. Different descriptions decompose a complex story in different ways, and the knowledge gained (or acted upon) by any description is always relative to the perspective from which the description was made.

This means, as has been said in many places in the book, that assigning acts to categories becomes a matter of power. Who has the power to tell the story, to say that behavior is one thing and not the other? And who has the power to decide on the response? It is a power that can finesse and fudge a whole range of organizational, emotional, and personal issues. A conclusion of wrongdoing could for instance be underwritten by a hospital risk manager's greatest fears (of liability, loss of reputation, or political clout) or by how a manager is held accountable in turn for evidence of trouble in the managed unit.

In his study of disasters, Gephart explicitly sought out these subtle forces that come from a social distribution of interests, motives, and knowledge. Making sense of an adverse event (and deciding whether selected acts inside of it represent at-risk or reckless behavior) involves issues of power, because different views of reality and their vested interests generate struggles for dominance in determining organizational action. The conflicts and acts of domination that pare diverse opinions down to a decision outcome were Gephart's focus, and hence he saw the resulting story (of an adverse event) as a political accomplishment.8

As soon as power gets involved, categorizations and responses may quickly become seen as unjust, as unfair.

Suppose nurses take to scanning the barcode label that one of their colleagues pasted on the wall behind the patient because it actually scans reliably, and is always easy to find, unlike other labels. This may have become all but normal practice—everybody does it because everybody always has the next patient, and the next, and the medication barcode scanners are of such poor quality that they can't read anything except flat and high-contrast labels (indeed, labels pasted on a wall).

Managers may want to call such behavior "at-risk" and mete out supposedly appropriate countermeasures. But nurses may no longer see their behavior as at-risk, if they ever did. In fact, it may be a sure way to get a good scan, and not doing that could create more risks. And, of course, barcode scanning is not their main job—taking care of patients is. This means that nurses may see any punitive responses to scanning a label stuck on a wall as pretty unjust. After all, such responses may show that the manager is not aware of the unrelenting pressures and ebbing and flowing demands of nursing work, nor of the shortcomings of putatively labor-saving and safety-enhancing technology. Justice is a matter of perception.

But managers are under different pressures. Managers appropriate the power to call something "at-risk" not because of their privileged insight into the risks and realities of nursing work, but because they can and because they have to relative to the pressures and expectations of their own position in the hospital. From a manager's point of view, operational behaviors that bypass instructions or protocol, for example, could end up eroding productivity and reputation, and eventually impair the financial performance of their part of the organization. Or, for that matter, having to make structural or equipment changes (e.g. procuring new or better barcode scanners) involves sacrifices that are much larger than reminding people to be more careful and follow the rules.

A Process for Achieving Organizational Justice

That different versions of adverse events compete for primacy, and that one may even win, is neither problematic in itself nor easily avoidable. What matters for organizational justice is not so much whose version gets to rule the roost—but to what extent this is made explicit and how it is decided upon. An approach to organizational justice in the wake of adverse events would thus follow these steps (even if the categories of erroneous, at-risk, and reckless behavior are still being applied):

  1. Design the process to deal with adverse events or apparently risky acts. Where does it start, what are the following steps, where and when is a judgment made on the employee's behavior, and what are the opportunities for appeal? Recognize that an adverse event review is not a performance review. Also recognize that consistency of this process across professional groups and hospital departments is difficult, but also important in achieving a perception of fairness and justice.
  2. Decide who is involved in this process. Two important issues come up here. Recognize that if the employee's manager is in charge of the process, then the potential of career or reputational jeopardy will predictably retard the employee's honest disclosure.9 A set-up in which impartial staff take in the story and then funnel it to the manager for appropriate action has been demonstrated to generate more opportunities for learning and less fear of retribution.10,11 The other issue is the extent to which domain expertise is involved in the process. Understanding the messy details of practice (including the many gaps between rules or guidelines and actual daily work) is crucial for both credibility and a sense of justice.
  3. Decide who is involved in deciding who is involved. There is something recursive about this, of course. If decisions about steps 1 and 2 are made top-down, then any process will lack the buy-in, ownership, and constituency for employees to feel that it is something of their own creation—something to stand for and participate in to the benefit of the organization.

Caring for the second victim

There is something perhaps even more important that your organization must do for its culture to become just. And that is to look after the second victim. For most professionals, an error that leads to an incident or death is antithetical to their identities. They themselves can see it as a devastating failure to live up to their deontological commitment.12 The memory of error stays with professionals for many years.13 All of these effects are visible, and can be strongly present even before your organization does anything, or before any prosecutor might do anything. Your people might be in bad shape long before that.

In fact, it could be argued that people punish themselves quite harshly in the wake of failure, and that you or your organization or society can hardly make such punishment any worse. The research is pretty clear on this: Having made an error in the execution of a job that involves error management and prevention is something that causes excessive stress, depression, anxiety, and other psychological ill-health.14

Particularly when the work involves considerable autonomy and presumptions of control over outcomes on the part of the actor (like doctors, pilots, air traffic controllers), guilt and self-blame are very common, with professionals often denying the role of the system or organization in the spawning of their error altogether and blaming themselves entirely.15,16 This sometimes includes hiding the error or its consequences from family and friends, the professionals distancing themselves from any possible support, or attempting to make atonement themselves with those who were harmed by the error.17

Criminalization, the topic of a later chapter, makes things a lot worse for the second victim. Criminalization affirms feelings of guilt and self-blame and exacerbates their effects, which are linked to poor clinical outcomes in other settings.18 It can lead to people departing on sick leave, divorcing, exiting the profession permanently, or even committing suicide.19,20 Another response, though much more rare, is an expression of anger and counter-attack, for example through the filing of a defamation lawsuit.21,22 Criminalization can also have consequences for a person's livelihood (and his or her family), as licenses to practice may be revoked. This in turn can generate a whole new layer of anxiety and stress. One pharmacist, whose medication error ended in the death of two patients, suffered from depression and anxiety to such an extent that he eventually stabbed his wife to death and injured his daughter with a knife.23

In the best case, professionals seek to process and learn from the mistake, discussing details of their error with their employer, contributing to its systematic investigation and helping with putting safety checks in place.24 The role of the organization in facilitating such coping (e.g. through peer and managerial support and appropriate structures and processes for learning from failure) is hugely important. Research on employee assistance programs has suggested that it is crucial that employees do not get constructed as if they are the source of the problem and treated as somehow "troubled" as opposed to "normal" employees.25,26 If this condition is met, employee support, and particularly peer support, appears to be one of the most important mediating variables in managing stress, anxiety, and depression in the aftermath of error, and one of the strongest predictors of coming out psychologically healthy. Guidance on setting up effective peer support and stress management programs in the wake of incidents is available in separate work.27

A Just Culture Has More Advantages

The main argument for building a just culture is that not having one is bad for both justice and safety. But there is more. Recent research28,29 has shown that not having a just culture can be bad for people's:

  • morale;
  • commitment to the organization;
  • job satisfaction;
  • willingness to do that little extra, to step outside their role.

Indeed, the idea of justice seems basic to any social relation; basic to what it means to be human, and humans among each other. We tend to endow a just culture with benefits that extend beyond making an organization safer. Look at the hope expressed by a policy document from aviation, where a "just culture operates by design to encourage compliance with the appropriate regulations and procedures, foster safe operating practices, and promote the development of internal evaluation programs."30 It illustrates the great expectations that people attach to just cultures: openness, compliance, fostering safer practices, critical self-evaluation. How all of this is supposed to happen is of course a more difficult question.

Now it may seem obvious why employees may want a just culture. They may want to feel protected from capricious management actions, or from the (as they see it) malicious intentions of a prosecutor. But this oversimplifies and politicizes things. A just culture, in the long run, benefits everyone:

  • For those who run or regulate organizations, the incentive to have a just culture is very simple. Without it, you won't know what's going on. A just culture is necessary if you want to monitor the safety of an operation. A just culture is necessary if you want to have any idea about the capability of your people, or regulated organization, to effectively meet the problems that will come their way.
  • For those who work inside an organization, the incentive of having a just culture is not "to get off the hook," but to feel free to concentrate on doing a quality job rather than on limiting personal liability; to feel involved and empowered to contribute to safety improvements by flagging weak spots, errors, and failures.
  • For those in society who consume the organization's product or service, just cultures are in their own long-term interest. Without them, organizations and the people in them will focus on better documenting, hiding, or defending decisions—rather than on making better decisions. They will prioritize short-term measures to limit legal or media exposure over long-term investments in safety.

A colleague recently received a phone call from a hospital vice president. A child had died a few days before from a tenfold chemotherapy overdose in their pediatric oncology unit. The vice president had led the investigation of this tragedy and found a number of issues in processes, admixture formulation practices, and problematic new pharmacy technology that aligned to bring about the death of this child. The family was devastated. Everyone involved in the care of the child was devastated. Doing what they usually did to create and administer chemotherapy admixtures had, suddenly and lethally, not worked as intended. The introduction of the new pharmacy device was deemed a substantial factor—it had replaced familiar technology "on the fly" and this was one of the first uses.

The vice president said that he did not believe any of the personnel involved should be punished. Yet, despite his organization's publicly announced plan to develop a just culture, the CEO, CMO, and HR Director insisted on firing the two pharmacists involved in formulation of the admixture and the nurse who had administered the medication. There was absolutely no way the nurse could have known that the contents of the IV bag were not as labeled. The impetus for dismissal actually came from their consulting ethicist, who also happened to be a lawyer. He identified the child's death as evidence of a breach of "duty ethic" and hence a breach of legal duty—he deemed these three people unequivocally negligent.

How to respond to failure is, at its heart, an ethical question. We can wonder, then, whether it is smart to combine the function of ethicist and lawyer into one person, as was done in the example above. In fact, such a mix could be testimony to the confusion and difficulty of building a just culture:

  • Is "just" something that meets legal criteria (for which you need a lawyer)?
  • Or is "just" something that takes different perspectives, interests, duties, and alternative consequences into evaluation (for which you might need an ethicist)?

Not Individuals or Systems, but Individuals in Systems

In an earlier book,31 I laid out a choice between the old view and the new view of human error:

  • The old view sees human error as a cause of incidents. To do something about incidents, then, we need to do something about the particular human involved: suspend, retrain, admonish, charge him or her. Or we have to do something about humans in general: Marginalize them by putting in more automation, or rigidify their work by creating more rules and procedures.
  • The new, or systems view, sees human error as a symptom, not a cause. Human error is an effect of trouble deeper inside the system. To do something about a human error problem, then, we must turn to the system in which people work: the design of equipment, the usefulness of procedures, the existence of goal conflicts, and production pressure.

This choice between old and new view is founded in decades of research on safety and risk in complex domains. The two alternatives can serve as useful bookends in debates about the causes of mishaps, and what countermeasures you should deploy.

But it leaves an important question unattended: Can people in your organization simply blame the system when things go wrong? To many, this logical extension of the new view seems like a cop-out, like an excuse to get defective or responsible practitioners off the hook. The new view would seem almost incompatible with holding people accountable.

Indeed, says Pellegrino, systems are not enough.32 Of course we should look at the system in which people work, and improve it to the best of our ability. But safety-critical work is ultimately channeled through relationships between human beings (such as in medicine), or through direct contact of some people with the risky technology. At this sharp end, there is almost always a discretionary space into which no system improvement can completely reach. Rather than individuals versus systems, we should begin to understand the relationships and roles of individuals in systems.33

The Discretionary Space for Personal Accountability

A system creates all kinds of opportunities for action. And it also constrains people in many ways. Beyond these opportunities and constraints, we could argue that there remains a discretionary space, a space that can be filled only by an individual care-giving or technology-operating human. This is a final space in which a system really does leave people freedom of choice (to launch or not, to go to open surgery or not, to fire or not, to continue an approach or not). It is a space filled with ambiguity, uncertainty, and moral choices.

Systems cannot substitute for the responsibility borne by individuals within that space. Individuals who work in those systems would not even want their responsibility to be taken away by the system entirely. The freedom (and the concomitant responsibility) that is left for them is what makes them and their work human, meaningful, a source of pride. But systems can do two things:

  • One is to be very clear about where that discretionary space begins and ends. Not giving practitioners sufficient authority to decide on courses of action (such as in many managed care systems), but demanding that they be held accountable for the consequences anyway, creates impossible and unfair double binds. Such double binds effectively shrink the discretionary space before action, but open it wide after any bad consequences of action become apparent (then it was suddenly the physician's responsibility after all). The same goes when asking for dispensation for an unqualified crewmember to proceed with an instrument approach. The system is clear in its routine expectation that a commander will ask for such dispensation. And if all goes well, no questions will be raised. But if problems occur on the approach, the request for dispensation suddenly becomes the commander's full responsibility. Such vagueness about where the borders of the discretionary space lie is typical, but it is unfair and unreasonable.
  • The other thing a system can do is decide how it will motivate people to conscientiously carry out their responsibilities inside of that discretionary space. Is the source for that motivation going to be fear or empowerment? Anxiety or involvement? "There has to be some fear that not doing one's job correctly could lead to prosecution," said an influential commentator in 2000. Indeed, prosecution presumes that the conscientious discharge of personal responsibility comes from fear of the consequences of not doing so. But neither civil litigation nor criminal prosecution works as a deterrent against human error. Instead, the anxiety created by such accountability leads, for example, to defensive medicine, not high-quality care, and even to a greater likelihood of subsequent incidents.34 The anxiety and stress generated by such accountability add attentional burdens and distract from conscientious discharge of the main safety-critical task.35 Rather than making people afraid, systems should make people participants in change and improvement. There is evidence that empowering people to affect their work conditions, to involve them in the outlines and content of that discretionary space, most actively promotes their willingness to shoulder their responsibilities inside of it.36

Haavi Morreim reports a case in which an anesthesiologist, during surgery, reached into a drawer that contained two vials, sitting side by side.37 Both vials had yellow labels and yellow caps. One, however, contained a paralytic agent, and the other a reversal agent to be used later, when paralysis was no longer needed. At the beginning of the procedure, the anesthesiologist administered the paralyzing agent, as intended. But toward the end, he grabbed the wrong vial, administering additional paralytic instead of the reversal agent. There was no bad outcome in this case. But when he discussed the event with his colleagues, it turned out that this had happened to them too, and that they were all quite aware of the enormous potential for confusion. All knew about the hazard, but none had spoken out about it.

The question is of course why no anesthesiologist had flagged this problem before. Anxiety about the consequences of talking about possible failures cannot be excluded: It has squelched safety information before.

Even more intriguing is the possibility that there is no climate in which practitioners feel they can meaningfully contribute to the context in which they work. Those who work on the safety-critical sharp end every day, in other words, did not feel they had a channel through which to push their ideas for improvement. I was reminded of one worker who told me that she was really happy with her hospital management's open-door policy. But whenever she went through that open door, the boss was never there.

The example does raise the choice again. Do you really think you can prevent anesthesiologists from grabbing a wrong vial by making them afraid of the consequences if they do? Or do you want to prevent them from grabbing a wrong vial by inviting them to come forward with information about that vulnerability, and giving you the opportunity to help do something about the problem?

This example also confirms that holding people accountable and blaming people are two quite different things. Blaming people may in fact make them less accountable: They will tell fewer accounts, they may feel less compelled to have their voice heard, to participate in improvement efforts. This also means that blame-free or no-fault systems are not accountability-free systems. On the contrary: Such systems want to open up the ability for people to hold their account, so that everybody can respond and take responsibility for doing something about the problem.

Blame-free is not accountability-free

Equating blame-free systems with an absence of personal accountability is short-sighted and not very constructive. Blame-free means blame-free, not accountability-free. The question is not whether we want practitioners to skirt personal accountability. Few practitioners do. The question is whether we want to fool ourselves that we can meaningfully wring such accountability out of practitioners by blaming them, suing them, or putting them on trial. No single piece of evidence so far seems to demonstrate that we can.

We should convince ourselves that we can create such accountability not by blaming people, but by getting people actively involved in the creation of a better system to work in. Most practitioners will relish such responsibility. Just as most practitioners often despair at the lack of opportunity to really influence their workplace and its preconditions for the better.

Forward-looking Accountability

"He or she has taken responsibility, and resigned."

We often say this in the same sentence. We may have come to believe that quitting and taking responsibility are the same thing. Sure, they can be. But they don't have to be. In fact, holding people accountable may be exactly what we are not doing when we allow them to step down and leave a mess behind.

Accountability is often only backward-looking. This is the kind of accountability in trials or lawsuits, in dismissals, demotions, or suspensions. Such accountability tries to find a bad apple, somebody to blame for the mess. It is the kind of accountability that feeds the press (or politicians, or perhaps even a company's board), who may eagerly be awaiting signs that "you are doing something about the problem." But for you and your organization, such backward-looking accountability is pretty useless beyond getting somebody's hot breath off your neck.

Instead, you could see accountability as looking ahead. Stories of failure that both respond to calls for accountability and allow people and organizations to learn and move forward are essentially about looking ahead. In those stories, accountability is something that brings information about needed improvements to people or groups that can do something about it. There, accountability is something that allows people and their organization to invest resources in improvements that have a safety dividend, rather than deflecting resources into legal protection and limiting liability. This is captured in what Virginia Sharpe calls "forward-looking accountability." Accountability should lay out the opportunities (and responsibilities!) for making changes so that the probability of such harm happening again goes down.

An explosion occurred at a Texas oil refinery in March 2005, as a result of an octane-boosting unit overflowing when it was being restarted. Gasoline vapors seeped into an inadequate vent system and ignited in a blast that was felt five miles away. The explosion killed 15 people. An internal company study into the accident found that four of the company's US executives should be fired for failing to prevent the explosion, and that even the company's global refinery chief had failed to heed serious warning signals. The company's "management was ultimately responsible for assuring the appropriate priorities were in place, adequate resources were provided, and clear accountabilities were established for the safe operation of the refinery," said the lead company investigator.

Corporate budget cuts had compromised worker safety at the plant, an earlier report had found, and the refinery had had to pay a record fine for worker safety violations at its site. A safety culture that "seemed to ignore risk, tolerated non-compliance and accepted incompetence" was determined as a root cause of the accident. The global refinery chief should have faced and communicated "the brutal facts that fundamentally, the refinery was unsafe and it was a major risk to continue operating it as such."38

Calls for accountability are important. And responding adequately to them is too. Sending responsible people away is of course one response. But remember from the first chapter that calls for accountability are in essence about trust. About people, regulators, the public, employees, trusting that professionals will take problems inside their practice or organization seriously and do something about them.

This means that just getting rid of a few people (even if they are in positions of greater responsibility) may not be seen as an adequate response. Nor is it necessarily the most fruitful way for an organization to incorporate lessons about failure into what it knows about itself, into how it should deal with such vulnerabilities in the future.

Notes

1 Marx D. Patient Safety and the "Just Culture": A Primer for Health Care Executives. New York: Columbia University; 2001.

2 Dekker SWA. Doctors are more dangerous than gun owners: A rejoinder to error counting. Human Factors 2007;49:177–84.

3 Becker HS. Outsiders: Studies in the Sociology of Deviance. London: Free Press of Glencoe; 1963.

4 Skegg PDG. Criminal prosecutions of negligent health professionals: The New Zealand experience. Medical Law Review 1998;6:220–46.

5 Dekker SWA. Criminalization of medical error: Who draws the line? ANZ Journal of Surgery 2007;77:831–7.

6 Dekker SWA. Just Culture: Balancing Safety and Accountability. Aldershot, England: Ashgate Publishing Co.; 2007.

7 Merry AF, McCall Smith A. Errors, Medicine and the Law. Cambridge: Cambridge University Press; 2001.

8 Gephart RP. Making sense of organizationally based environmental disasters. Journal of Management 1984;10:205–25.

9 Anderson RE. Medical Malpractice: A Physician's Sourcebook. New York: Humana Press; 2005.

10 Berlinger N. After Harm: Medical Error and the Ethics of Forgiveness. Baltimore, MD: Johns Hopkins University Press; 2005.

11 Dekker SWA, Laursen T. From punitive action to confidential reporting: A longitudinal study of organizational learning. Patient Safety & Quality Healthcare 2007;5:50–6.

12 Wolf ZR. Medication Errors: The Nursing experience. Albany, NY: Delmar; 1994.

13 Serembus JF, Wolf ZR, Youngblood N. Consequences of fatal medication errors for healthcare providers: A secondary analysis study. MedSurg Nursing 2001;10:193–201.

14 Berlinger 2005, op. cit.

15 Meurier CE, Vincent CA, Parmar DG. Nurses' responses to severity dependent errors: A study of the causal attributions made by nurses following an error. Journal of Advanced Nursing 1998;27:349–54.

16 Snook SA. Friendly Fire: The Accidental Shootdown of US Black Hawks Over Northern Iraq. Princeton, NJ: Princeton University Press; 2000.

17 Christensen JF, Levinson W, Dunn PM. The heart of darkness: The impact of perceived mistakes on physicians. Journal of General Internal Medicine 1992;7:424–31.

18 Friel A, White T, Alistair H. Posttraumatic stress disorder and criminal responsibility. Journal of Forensic Psychiatry and Psychology 2008;19:64–85.

19 Meszaros K, Fischer-Danzinger D. Extended suicide attempt: Psychopathology, personality and risk factors. Psychopathology 2000;33:5–10.

20 Tyler K. Helping employees cope with grief. HR Magazine 2003;48:54–8.

21 Sharpe VA. Accountability: Patient Safety and Policy Reform. Washington, DC: Georgetown University Press; 2004.

22 Anderson 2005, op. cit.

23 Serembus 2001, op. cit.

24 Christensen 1992, op. cit.

25 Dekker 2007, op. cit.

26 Cooper CL, Payne R. Causes, Coping, and Consequences of Stress at Work. Chichester; New York: Wiley; 1988.

27 Leonhardt J, Vogt J. Critical Incident Stress Management in Aviation. Aldershot, UK: Ashgate Publishing Co.; 2006.

28 Cohen-Charash Y, Spector PE. The role of justice in organizations: A metaanalysis. Organizational Behavior and Human Decision Processes 2001;86:278–321.

29 Colquitt JA, Conlon DE, Wesson MJ, Porter COLH, Ng KY. Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. Journal of Applied Psychology 2001;86:425–45.

30 GAIN. Roadmap to a just culture: Enhancing the safety environment. Global Aviation Information Network (Group E: Flight Ops/ATC Ops Safety Information Sharing Working Group); 2004.

31 Dekker SWA. The Field Guide to Understanding Human Error. Aldershot, UK: Ashgate Publishing Co; 2006.

32 Pellegrino ED. Prevention of medical error: Where professional and organizational ethics meet. In: Sharpe VA, ed. Accountability: Patient Safety and Policy Reform. Washington DC: Georgetown University Press; 2004:83–98.

33 Berlinger 2005, op. cit.

34 Dauer EA. Ethical misfits: Mediation and medical malpractice litigation. In: Sharpe VA, ed. Accountability: Patient Safety and Policy Reform. Washington DC: Georgetown University Press; 2004:185–202.

35 Lerner JS, Tetlock PE. Accounting for the effects of accountability. Psychological Bulletin 1999;125:255–75.

36 Dekker 2007, op. cit.

37 Morreim EH. Medical errors: Pinning the blame versus blaming the system. In: Sharpe VA, ed. Accountability: Patient Safety and Policy Reform. Washington DC: Georgetown University Press; 2004:213–32.

38 Anon. Texas executives faulted in BP explosion. International Herald Tribune 2007 May 4;Sect. 10.
