2 “You Have Nothing to Fear if You’ve Done Nothing Wrong”

"You have nothing to fear if you have done nothing wrong." This statement was made by a prosecutor in a European country. He was responding to concerns from the aviation sector that human errors—normal, honest mistakes—were being converted into criminal behavior by his office. Some pilots and air traffic controllers were being fined or charged for rule infractions that were part and parcel of getting the job done. And they and their colleagues were getting anxious. Was data supplied in good faith, for example through incident reports, going to be used against them? Were there enough protections against the prying of a prosecutorial office?

Don't worry, said the prosecutor. Trust me. There is nothing to fear if you have done nothing wrong. I can judge right from wrong. I know a willful violation, or negligence, or a destructive act when I see it.

But does he? Does anybody?

Just Culture and the Concern with a Line

All existing definitions of just culture draw a line between acceptable and unacceptable behavior. A willful violation is not acceptable. An honest mistake is. And if what you have done is acceptable—if you have done nothing wrong—you have nothing to fear.

For example, says a proposal for air traffic control, a just culture is one in which "front line operators or others are not punished for actions, omissions or decisions taken by them that are commensurate with their experience and training, but where gross negligence, willful violations, and destructive acts are not tolerated."

Why the idea of a line is appealing

The idea of a line makes sense. If just cultures are to protect people against being persecuted for honest mistakes (when they've done nothing wrong), then some space must be reserved for mistakes that are not "honest" (in case they have done something wrong). Consequently, all proposals for a just culture emphasize the establishment of, and consensus around, some kind of line between legitimate and illegitimate behavior: "In a just culture, staff can differentiate between acceptable and unacceptable acts."1 An environment of impunity, the argument holds, would neither move people to act prudently nor compel them to report errors or deviations. After all, if there is no line, "anything goes." So why report anything? This is not good for people's morale, for the credibility of management, or for learning from failure.

So calls for some kind of border that separates tolerable from culpable behavior make intuitive sense, and ideas on just culture often center on its embrace and clarity: "A 'no-blame' culture is neither feasible nor desirable. Most people desire some level of accountability when a mishap occurs. In a just culture environment the culpability line is more clearly drawn."2 Another argument for the line is that the public must be protected against intentional misbehavior or criminal acts, and that the application of justice is a prime vehicle for such protection.

A recent directive from the European Union (2003/42/EC) governs occurrence reporting in civil aviation. This directive has a qualification: A State must not institute legal proceedings against those who send in incident reports, apart from cases of gross negligence. But guess who decides what counts as "gross negligence"? The State, of course. Via its prosecutors or investigating magistrates.

The directive, like all guidance on just culture today, assumes that cases of "gross negligence" jump out by themselves. That "willful violations" represent a non-problematic category, distinct from violations that are somehow not "willful." It assumes that a prosecutor or other authority can recognize—objectively, unarguably—willful violations, negligence, or destructive acts when they show up. There is not a single proposal for just cultures—indeed, not a single appeal to the need to learn from failure in aviation—that does not carve out some kind of escape clause for essentially negligent, unwanted, illegitimate behavior.

Why the idea of a line is an illusion

If we want to draw a line, we have to be clear about what falls on either side of it. Otherwise there is no point in having a line—the distinction between acceptable and unacceptable behavior would be one big blur. Willful violations, say many people, clearly fall on the "unacceptable" side of the line. Negligence does too. But what, then, is negligence? Consider this definition:3

Negligence is conduct that falls below the standard required as normal in the community. It applies to a person who fails to use the reasonable level of skill expected of a person engaged in that particular activity, whether by omitting to do something that a prudent and reasonable person would do in the circumstances or by doing something that no prudent or reasonable person would have done in the circumstances. To raise a question of negligence, there needs to be a duty of care on the person, and harm must be caused by the negligent action. In other words, where there is a duty to exercise care, reasonable care must be taken to avoid acts or omissions which can reasonably be foreseen to be likely to cause harm to persons or property. If, as a result of a failure to act in this reasonably skillful way, harm/injury/damage is caused to a person or property, the person whose action caused the harm is negligent.

First, the definition is long, very long. Second, it does not capture the essential properties of "negligence" in a way that would let you grab negligent behavior and put it on the unacceptable side of the line. Instead, the definition lays out a whole array of questions and judgments that we should make. Rather than solving the problem of what "negligence" is for you, it hands you a larger number of equally intractable problems instead:

  • What is the "normal standard"?
  • How far is "below"?
  • What is "reasonably skillful"?
  • What is "reasonable care"?
  • What is "prudent"?
  • Was harm indeed "caused by the negligent action"?

So instead of clarifying which operational behavior is "negligent," such a characterization shows just how complex the issue is. And how much of a judgment call. In fact, there is an amazing array of judgment calls to be made. Just see if you, for your own work, can (objectively, unarguably) define things like "normal in the community," "a reasonable level of skill," "a prudent person," "a foresight that harm may likely result."

What, really, is normal (objectively, unarguably)? Or prudent, or reasonable (objectively, unarguably)? And don't we all want to improve safety precisely because the activity we are engaged in can result in harm?

Of course, it is not that making such judgments is impossible. In fact, we probably do this quite a lot every day. It is, however, important to remember that judgments are exactly that: judgments. They are not objective and not unarguable. To think that there comes a clear, uncontested point at which everybody says, "yes, now the line has been crossed, this is negligence," is probably an illusion. What counts as "normal" versus "negligence" in a community, or as "a reasonable level of skill" versus "recklessness," is infinitely negotiable. You can never really close the debate on this.

What is interesting is not whether some acts are so essentially negligent as to warrant serious consequences. What matters is which processes and authorities we in society (or you in your organization) rely on to decide whether acts should be seen as negligent or not.

All of these judgments can get significantly clouded by the effects of hindsight. With knowledge of outcome, it becomes almost impossible for us to go back and understand the world as it looked to somebody who did not yet have that knowledge. The so-called substitution test (which the definition of negligence above already applies) can be of some help. But even here, whether another reasonably prudent person would have done the same thing in the same circumstances becomes a whole different matter once you have a dead body as the outcome. Or multiple dead bodies. All research on the hindsight bias shows that it is very difficult for us not to take the gravity of the outcome into account, somehow, when we apply the substitution test.

The Social Construction of an Offense

A few months ago, my wife and I went for dinner in a neighboring city. We parked the car along the street, amid a line of other cars. On the other side of the street, I saw a ticket machine, so I duly went over, put some cash in the machine, got my ticket and displayed it in the car windshield. When we returned from dinner, we were aghast to find a parking ticket the size of a half manila envelope proudly protruding from under one of the wipers. I yanked it away and ripped it open. Together we pored over the fine print to figure out what on earth we had violated. Wasn't there a ticket in our car windshield? It had not expired yet, so what was going on? It took another day of decoding arcane ciphers buried in the fine to find the one pointing to the exact category of violation. It turned out that it had somehow ceased to be legal to park on that side of that particular piece of that street sometime during our dinner on that particular evening. I called a friend who lives in this city to get some type of explanation (the parking police only allowed us to listen to a taped recording, of course). My friend must have shaken his head in a blend of disgust and amusement.

"Oh, they do this all the time in this town," he acknowledged. "If it hasn't been vandalized yet, you may find a sign the size of a pillowcase suspended somewhere in the neighborhood, announcing that parking on the left or right side of the street is not permitted from like six o'clock until midnight on the third Tuesday of every month except the second month of the fifth year after the rule went into effect. Or something."

I felt genuinely defeated (and yes, we paid our fine). A few weeks later, I was in this city again (no, I did not park; I no longer dared to), and indeed found one of the infamous exception statements, black letters on a yellow background, hovering over the parking bays along a sidewalk. "No parking 14–17 every third of the second month," or some such abstruse decree.

This city, I decided, was a profile in the construction of offense. Parking somewhere was perfectly legal one moment, and absolutely illegal the next. The very same behavior, which had appeared so entirely legitimate at the beginning of the evening (there was a whole line of cars on that side of the street, after all, and I did buy my ticket), had morphed into a violation, a transgression, an offense inside the space of a dinner. The legitimacy, or culpability, of an act, then, is not inherent in the act. It merely depends on where we draw the line. In this city, on one day (or one minute), the line is here. The next day or minute, it is there. Such capriciousness must be highly profitable, evidently. We were not the only car left on the wrong side of the road when the rules changed that evening. The whole line that had made our selection of a parking spot so legitimate was still there—all of them bedecked with happily fluttering tickets. The only ones who could really decrypt the pillowcase-size signs, I thought, were the ones who created them. And they probably planned their ambushes in close synchronicity with whatever the signs declared.

I did not argue with the city. I allowed them to declare victory. They had made the rules and had evolved a finely tuned game of phasing them in and out as per their intentions announced on those abstruse traffic signs. They had spent more resources on figuring out how to make money off this than I was willing to invest in learning how to beat them at their own game and avoid being fined (I will take public transport next time). Their construction of an offense got to reign supreme. Not because my parking suddenly had become "offensive" to any citizen of that city during that evening (the square and surrounding streets were darker and emptier than ever before), but because the city decided that it should be so. An offense does not exist by itself "out there," as some objective reality. We (or prosecutors, or city officials) are the ones who construct the offense—the willful violation, the negligence, the recklessness.

What we see as a crime, then, and how much retribution we believe it deserves, is hardly a function of the behavior. It is a function of our interpretation of that behavior. And that interpretation can differ not only from day to day or minute to minute; it can also slide over time, and differ per culture, per country.

Of course, we have evolved as civilization in large part by outlawing certain practices. Acts such as rape are assumed to be beyond any discussion: We see them as universally reprehensible. If you were to argue that rape is a crime only because we label it one, you would be deemed so ridiculous that nobody would take you seriously any longer. But then, what about rape inside of marriage? Whether that should be criminalized is immediately trickier.

So beyond this idea of a basic and sustained bedrock of what a civilization considers criminal, it is easy to show that the goal posts for what counts as a crime shift with time and with culture. The "crimes" I deal with in this book are a good example. They are acts in the course of doing normal work, "committed" by professionals—nurses, doctors, pilots, air traffic controllers, policemen, managers. These people have no motive to kill or maim or otherwise hurt, though we as society have given them the means and their work offers them plenty of opportunities. Somehow, we, as society, or as employing organization, manage to convert acts with bad outcomes into culpable or even criminal behavior that we believe should be punished as such. This, though, hinges not on the essence of the acts, but on our interpretation of them.

Decision Trees for Determining Culpability

In making those judgments, however, there is some help to be had. A number of tools for determining culpability, most in the form of decision trees, are currently in circulation. They are another form in which to package huge definitions such as the one above. And they are a great start. But that is exactly what they are: a start. They still leave the analytic heavy lifting to you. The problem of whether an action falls on this or that side of the line remains yours to solve. All they will help you do is break down the problem. But are the resulting, smaller components more manageable?

One popular decision tree is the one that appears in Reason's Managing the Risks of Organizational Accidents.4 Here are some of the questions that it presents, and some of the problems that they create:

  • Were the actions and consequences as intended? This seems a simple enough question, but what, exactly, is intent? Philosophers and judicial experts alike still cannot really agree, so why would this suddenly be simple for you to decide? Asking the person whose actions they are may not be of much help either. Yes, the nurse would say, I intended to mix 20 mg/ml Xylocard. And as far as I know and can recount, that is exactly what I did. And no, I did not intend to poison a little baby girl. That the intended actions and consequences did not match up in this case did little to protect the nurse from prosecution or conviction. Other factors than "I did not mean to" play a role in judgments of culpability.
  • Did the person knowingly violate safe operating procedures? People in all kinds of operational worlds knowingly violate safe operating procedures all the time. Even procedures that can be shown to have been available, workable, and correct (though here, of course, the question once again pops up: workable according to whom, objectively and unarguably?). Following all applicable procedures means not getting the job done in most cases. Hindsight is great for laying out exactly which procedures were relevant (and available, workable, and correct) for a particular task, even if the person doing the task would be the last in the world to think so.
  • Were there deficiencies in training or selection? "Deficiencies" seems like an unproblematic word, but what exactly does it mean? Again, what looks like a deficiency to one seems perfectly normal or even above standard to another. The question here is: Who gets to decide? Most people maintain that doctors who intentionally murder their patients are criminals, even if somebody could argue that this clearly shows problems with physician selection and proficiency checking (which, according to Reason's decision tree, would be a mitigating factor), or could raise issues of cultural standards related to end-of-life care and euthanasia in that particular country.

These are, once again, good questions to start with. But they do not solve the problem of determining culpability. They only redefine the problem. Another question in such a decision tree is whether there was a matter of inexperience. This is a great question too: It recalls the difference between technical and normative errors that is discussed elsewhere in this book. But again, whether something is judged a technical error (due to a lack of experience) or a normative error (due to a lack of discharging professional responsibility) is the outcome of the processes of interpretation and attribution that follow the error, and of who is involved in making those interpretations. The judgment is much less determined by the behavior that led up to the error than we might think.

Psychological Research: Culpability and Control

Decision trees are not just born out of practice. Psychological research shows that even if we are not prompted, we will evaluate actions and their consequences along various criteria. It points out that the criminal culpability of an act is likely to be constructed as a function of three things:5

  • the amount of volitional behavior control the person had (was the act freely chosen or compelled?);
  • volitional outcome control (did the actor know what was going to happen?);
  • and the actor's causal control (his or her unique impact on the outcome).

In this triad, factors that establish personal control intensify our attributions of blame and blameworthiness. If we, on the other hand, identify constraints on personal control, we allow them to mitigate blame or blameworthiness.

When we apply these criteria to any of the cases in this book, we quickly see how answers to these questions are difficult. They are not really answers; they are judgments. When it comes to volitional behavior control, did the nurse from the prologue (Mara) act on purpose or by accident? (This is like asking whether consequences and actions were intended and whether they matched.) We would read more control into her act if she had behaved purposely and knowingly. Still struggling to understand her own performance, Mara had told a lower court that she may have misread the package labeling. By the time she got to the Supreme Court, however, she indicated that this was probably not the case: She mistakenly believed that 200 mg/ml was what she needed to have. This would certainly have made sense, given the prominence of the figure 200 in the medication log, and the reminder to end up with a volume of 10 ml Xylocard in total. As a result, the Supreme Court (Verdict B 2328–05, 19 April 2006, pp. 3–4) observed how:

during the court proceedings, the ICU nurse described multiple ways how it could be that she mixed the IV drop with the wrong concentration of Xylocard. What she offered cannot therefore express what she really remembers. Rather, her accounts can be seen as attempts to find an explanation afterward. They are almost hypothetical and provide no certain conclusion as to why she did what she did.

In other words, the inability to know or remember how an "error" occurred (which is quite normal, even when it comes to our own immediately past behavior), was converted into an inability to disprove volitional behavior control. To the court, whether the nurse acted knowingly or purposely could be ruled neither out nor in.

Additionally, volitional behavior control was amplified by the absence of what are called capacity and situational constraints. The Supreme Court emphasized how Mara had 25 years of experience and ample time to prepare the mixture. There was no lack of knowledge or experience (though Mara had never prepared this particular drug for an infant). She had just come on shift, and there was no stress or manpower shortage that morning.

These conditions would also have helped the nurse foresee the consequences of her actions: "Whether the nurse's negligence stemmed from misreading, miscalculating or taking the wrong package, it is obvious that she could have read the medication log more carefully, calculated more carefully or done any other double-check that would have revealed her error and its potentially fateful consequences."6 In other words, volitional outcome control could also be established: The nurse was experienced enough and had time enough to find out what could, or would, be the consequences of her actions.

Then to causal control. With the various truths swirling around the case, there would be ample opportunity to find other contributors to the outcome that would reduce Mara's unique impact. Yet Mara's initial mea culpa undercut later appeals to the role of additional, and necessary, actors. Recall how the pediatrician who ended up giving the infant its overdose, for example, successfully asserted that his administrations would not have had the fatal effect they did if the drug solution had been correct—which he could only believe it was. Mitigating circumstances related to long-eroded practices in drug management in the hospital were dismissed as playing no serious role in exerting causal force on the outcome: The court admitted that although "there were serious shortcomings in routines and procedures at the regional hospital," this did not remove the nurse's own responsibility for checking that her mixture was correct.7 But however lousy the workplace, its organization, or its traditions, that still did not relieve an individual actor of the responsibility to not err. At least not in how the Supreme Court drew the line.

It is Not Where to Draw the Line, But Who Draws it

What matters in creating a just culture is not to come up with a definition that leaves a number of supposedly self-evident labels ("willful violation," "negligence," or people that are not "prudent," or "normal," or "reasonably skilled") on the wrong side of the law and the rest on the right side. For those labels are far from self-evident. Almost any act can be constructed into willful disregard or negligence, if only that construction comes in the right rhetoric, from the legitimated authority. Drawing a line does not solve any problem, it simply displaces it. What matters instead is to consider very carefully, and preferably make structural arrangements about, who gets to draw the line. This gets us to the next chapter:

  • Who has the authority to draw the line?
  • Who in your organization, and in your society, has the right language, and the official legitimacy to say that the line has been crossed?
  • Do these people rely on a claim to a "view from nowhere," an objective, unarguable, neutral point of view from which they can separate right from wrong?

Rather than a prosecutor saying, "you have nothing to fear if you have done nothing wrong," a more accurate portrayal would be "if I decide you have done nothing wrong, you have nothing to fear." Which in itself might mean there could be something to fear.

This is why a just culture should not give anybody the illusion that it is simply about drawing a line. Instead, it should give people clarity about who draws the line, and what rules, values, traditions, language, and legitimacy this person uses. Whether this person is a prosecutor or a manager, or even a committee of peers, is not really the point (though all have different stakes and biases in drawing their lines). The point of a just culture is to get clarity and agreement about that.

Notes

1 Ferguson J, Fakelmann R. The culture factor. Frontiers of Health Services Management 2005;22:33–40.

2 GAIN. Roadmap to a just culture: Enhancing the safety environment. Global Aviation Information Network (Group E: Flight Ops/ATC Ops Safety Information Sharing Working Group); 2004.

3 Ibid.

4 Reason JT. Managing The Risks of Organizational Accidents. Aldershot, UK: Ashgate Publishing Co.; 1997.

5 Alicke MD. Culpable control and the psychology of blame. Psychological Bulletin 2000;126:556–574.

6 Högsta Domstolen (Swedish Supreme Court). Verdict B 2328–05. Stockholm; 2006:1–6.

7 Ibid.
