2

Why Do Your People Break the Rules?

Why do your people break the rules? This is of course a normative question. It assumes that the rules are right to begin with. So perhaps it is not even the right question to ask! Be that as it may, you may actually have your own answers to this question. Is it because your people can’t be bothered following them? Or have your people learned that those particular rules don’t matter, or that a shortcut gets them to the goal faster without—supposedly—any harmful consequences? Or do they believe that they are above the rules; that the rules are really made for other people who are not as good at the job as they are?

The way you think about this matters. It determines how you respond to the people involved in an incident. It determines how you believe you should hold them accountable for their behavior. So before you set your mind to any particular explanation, have a look at the research that has been done into this. This research offers a number of possible reasons why your people break the rules. Some of these might overlap with your experiences and intuitions. The advantage of theories based on research is that the investigators have been forced to make their ideas and assumptions explicit. And, of course, they have had to test them. Does the theory actually hold up in practice? Does it correctly predict or explain why people do what they do? Well, ask yourself what your experience and intuition tell you. This chapter takes you through the following:

•  Labeling theory: Your people break the rules because you label their actions that way.

•  Control theory: Your people break the rules because they get away with it.

•  Learning theory: Your people break the rules because they have learned that a shortcut doesn’t harm anything or anyone.

•  Subculture theory: Your people break the rules because they see the rules as stupid or only there for lesser mortals.

•  The bad apple theory: Some of your people break the rules because they are bad, evil, or incompetent practitioners.

•  Resilience theory: Your people break the rules because the world is more complex and dynamic than the rules. The way to get work done is to keep learning and adapting.

These different explanations, as you will see, interact and overlap in various ways. Goal conflicts, getting the job done, and learning how not to get in trouble when “breaking rules,” for example, seem to play an important role in almost all of them. This also means that you can apply multiple explanations at the same time, and see them as different ways of mapping the same landscape.

LABELING THEORY

Labeling theory says your people break the rules because you label their behavior that way; you call it that. This is as simple as it is counterintuitive. Because isn’t rule breaking a real thing, something that you can establish objectively? Not quite, it turns out.

Consider an air traffic controller who gives an aircraft a clearance to descend earlier than the agreement between two neighboring air traffic sectors allows. In her world, she is not breaking a rule: she is taking into consideration the fact that in this aircraft type it is really hard to slow down and go down at the same time, that her neighboring sector is very busy given the time of day and peak traffic, that they might put the aircraft into a holding pattern if it shows up too soon. She is, in other words, anticipating, adapting, or doing her job, and expertly so. She is doing what all users (and managers) of the system want her to do.

Or consider the nurse who works in a hospital where his ward has adopted a so-called red rule that tells him to double-check certain medication administrations with a colleague before administering the drug to a patient. Suppose he has given this very medication, in this very dose, to the very same patient that morning already, after indeed double-checking it. Now it is evening, his colleague is on the other side of the ward attending to a patient with immediate needs, and because of manpower reductions in the hospital, no other nurse is to be found. The patient requires the prescribed drug at a certain interval, and cannot really wait any longer. Whether the nurse breaks a rule or is doing what needs doing to get the job done (the job everyone wants him to do) is not so objectively clear. It is often only after things go wrong that we turn such an action into a big issue, into a broken rule for which we believe we need to hold someone accountable.

The formal name for this is labeling, or labeling theory. It says that rule breaking doesn’t come from the action we call by that name, but from us calling the action by that name. Rule breaking arises out of our ways of seeing and putting things. The theory maintains that what ends up being labeled as rule breaking is not inherent in the act or the person. It is constructed (or “constituted,” as Scandinavian criminological researcher Nils Christie put it) by those who call the act by that label:

The world comes to us as we constitute it. Crime is thus a product of cultural, social and mental processes. For all acts, including those seen as unwanted, there are dozens of possible alternatives to their understanding: bad, mad, evil, misplaced honour, youth bravado, political heroism—or crime. The same acts can thus be met within several parallel systems as judicial, psychiatric, pedagogical, theological.45

VIOLATIONS SEEN FROM THIS BENCH ARE JUST YOUR IMAGINATION

I used to fly out of a little airport in the US Midwest. Between two hangars there was a wooden bench, where old geezers used to sit, watching airplanes and shooting the breeze. When nobody was sitting on the bench, you could see that it said, painted across the back rest: “Federal aviation regulation violations seen from this bench are just your imagination.”

Whereas cases of criminalizing human error show that we sometimes drag rules and laws over an area where they have no business whatsoever, the bench at this little airfield showed a more hopeful side of the negotiation, or construction, of deviance. Even if it was meant tongue-in-cheek (though when I, on occasion, sat on that bench, I did see interesting maneuvers in the sky and on the ground), we, as humans, have the capacity to see certain things in certain ways. We have the capacity to distance ourselves from what we see and label things one way or another, and even know that we do so. That also means we can choose to label it something different, to not see it as deviance—when we are sitting on that bench, for example.

But, you might protest, doesn’t rule breaking make up some essence behind a number of possible descriptions of an action, especially if that action has a bad outcome? This is, you might believe, what an investigation, a hearing, or a trial would be good for: it will expose Christie’s “psychiatric, pedagogical, theological” explanations (I had failure anxiety! I wasn’t trained enough! It was the Lord’s will!) and show that they are all patently false. You might believe that if only you look at things objectively, and use reason and rationality, you can strip away the noise, the decoys, and the excuses and arrive at the essential story: whether a rule was broken or not. And if it was, then there might, or should, be negative consequences.

Keith Ramstead was a British cardiothoracic surgeon who moved to New Zealand. There, several patients died during or immediately after his operations, and he was charged with manslaughter in three of the cases. Not long before, a professional college had pointed to serious deficiencies in the surgeon’s work and found that seven of his cases had been managed incompetently. The report found its way to the police, which subsequently investigated the cases. This in turn led to the criminal prosecution against Ramstead.

To charge professionals like Keith Ramstead with a crime is just one possible response to failure, one way of labeling the act and thus dealing with it. It is one possible interpretation of what went wrong and what should be done about it. Other labels are possible, too, and not necessarily less valid:

•  For example, one could see the three patients dying as an issue of cross-national transition: Are procedures for doctors moving to Australia or New Zealand and integrating them in local practice adequate?

•  How are any cultural implications of practicing there systematically managed or monitored, if at all?

•  We could see these deaths as a problem of access control to the profession: Do different countries have different standards for who they would want as a surgeon, and who controls access, and how?

•  It could also be seen as a problem of training or proficiency-checking: Do surgeons submit to regular and systematic follow-up of critical skills, such as professional pilots do in a proficiency check every six months?

•  We could also see it as an organizational problem: There was a lack of quality control procedures at the hospital, and Ramstead testified that he had no regular junior staff to help with operations but was made to work with medical students instead.

•  Finally, we could interpret the problem as sociopolitical: What forces are behind the assignment of resources and oversight in care facilities outside the capital?

It may well be possible to write a compelling argument for each of these explanations of failure, for each of these different labels. Each has a different repertoire of interpretations and countermeasures following from it. A crime gets punished away. Access and proficiency issues get controlled away. Training problems get educated away. Organizational issues get managed away. Political problems get elected or lobbied away.

The point is not that one interpretation is right and all the others wrong. The point is that multiple overlapping interpretations of the same act are always possible (and may even be necessary to capture its full complexity!). And all interpretations have different ramifications for what people and organizations think they should do to prevent recurrence.

Some interpretations, however, also have significant negative consequences for safety. They can eclipse or overshadow all other possible interpretations. The criminalization of human error seems to be doing exactly that. It creates many negative side effects, while blotting out other possible ways forward. This is unfortunate and ultimately unnecessary.

But as the preceding examples show, the same action can be several things at the same time. It depends on what questions you asked to begin with. Ask theological questions—as we have done for a long time when it comes to rule breaking—and you may see in it the manifestation of evil, or the weakness of the flesh. Ask psychological questions and you might see smart human adaptations, expertise, and the stuff that is all in a day’s work. Ask pedagogical questions and you may see in it somebody’s underdeveloped skills. Ask judicial or normative questions and you may see a broken rule. Actions, says labeling theory, do not contain rule breaking as their essence. We make it so, through the perspective we take, the questions we ask. As Becker, an important thinker in labeling theory, put it:

…deviance is created by society…social groups create deviance by making the rules whose infraction constitutes deviance and by applying those rules to particular persons and labeling them as outsiders. From this point of view, deviance is not a quality of the act the person commits, but rather a consequence of the application by others of rules and sanctions to an ‘offender’. The deviant is the one to whom the label has successfully been applied; deviant behaviour is behaviour that people so label.46

Labeling theory, then, says that what counts as deviant or rule breaking is the result of processes of social construction. If a manager or an organization decides that a certain act constituted “negligence” or otherwise fell on the wrong side of some line, then this is the result of using a particular language and of looking at some things rather than others. Together, this turns the act into rule breaking behavior and the involved practitioner into a rule breaker.

A few years ago, my wife and I went for dinner in a neighboring city. We parked the car along the street, amid a line of other cars. On the other side of the street, I saw a ticket machine, so I duly went over, put some cash in the machine, got my ticket, and displayed it in the car windshield. When we returned from dinner, we were aghast to find a parking ticket the size of a half manila envelope proudly protruding from under one of the wipers. I yanked it away and ripped it open. Together we pored over the fine print to figure out what on earth we had violated. Wasn’t there a ticket in our car windshield? It had not expired yet, so what was going on? It took another day of decoding arcane ciphers buried in the fine to find the one pointing to the exact category of violation. It turned out that it had somehow ceased to be legal to park on that side of that particular stretch of that street sometime during our dinner on that particular evening. I called a friend who lives in this city to get some type of explanation (the parking police answered the phone only with a taped recording, of course).

My friend must have shaken his head in a blend of disgust and amusement. “Oh, they do this all the time in this town,” he acknowledged. “If it hasn’t been vandalized yet, you may find a sign the size of a pillow case suspended somewhere in the neighborhood, announcing that parking on the left or right side of the street is not permitted from like six o’clock until midnight on the third Tuesday of every month except the second month of the fifth year after the rule went into effect. Or something.”

I felt genuinely defeated (and yes, we paid our fine). A few weeks later, I was in this city again (no, I did not park; I no longer dared to), and indeed found one of the infamous exception statements, black letters on a yellow background, hovering over the parking bays in a sidewalk. “No parking 14–17 every third of the second month,” or some such vague decree.

This city, I decided, was a profile in the construction of offense. Parking somewhere was perfectly legal one moment and absolutely illegal the next. The very same behavior, which had appeared so entirely legitimate at the beginning of the evening (there was a whole line of cars on that side of the street, after all, and I did buy my ticket), had morphed into a violation, a transgression, an offense inside the space of a dinner. The legitimacy, or culpability of an act, then, is not inherent in the act. It merely depends on where we draw the line. In this city, on one day (or one minute), the line is here. The next day or minute, it is there. Such capriciousness must be highly profitable, evidently. We were not the only car left on the wrong side of the road when the rules changed that evening. The whole line that had made our selection of a parking spot so legitimate was still there—all of them bedecked with happily fluttering tickets. The only ones who could really decrypt the pillow-case size signs, I thought, were the ones who created them. And they probably planned their ambushes in close synchronicity with whatever the signs declared.

I did not argue with the city. I allowed them to declare victory. They had made the rules and had evolved a finely tuned game of phasing them in and out as per their intentions announced on those abstruse traffic signs. They had spent more resources in figuring out how to make money off of this than I was willing to invest in learning how to beat them at their own game and avoid being fined (I will take public transport next time). Their construction of an offense got to reign supreme. Not because my parking suddenly had become “offensive” to any citizen of that city during that evening (the square and surrounding streets were darker and emptier than ever before), but because the city decided that it should be so. An offense does not exist by itself “out there,” as some objective reality. We (or prosecutors, or city officials) are the ones who construct the offense—the willful violation, the negligence, the recklessness.

What you see as rule breaking action, then, and how much retribution you believe it deserves, is hardly a function of the action. It is a function of your interpretation of that action. And that can differ from day to day or minute to minute. It can change over time, and differ per culture, per country. It is easy to show that the goal posts for what counts as rule breaking shift with time, with culture, and also with the outcome of the action. The “crimes” I deal with in this book are a good example. They are acts in the course of doing normal work, “committed” by professionals—nurses, doctors, pilots, air traffic controllers, policemen, managers. These people have no motive to kill or maim or otherwise hurt, even though their job gives them both the means and opportunities to do so. Somehow, we manage to convert normal acts that (potentially) have bad outcomes into rule breaking or even criminal behavior. The kind of behavior that we believe should be punished. This, though, says labeling theory, hinges not on the essence of the action. It hinges on our interpretation of it, on what we call the action.

CONTROL THEORY

There are several ways that control theory can help you understand why people break rules. Let’s first look at people’s control of their own actions. Control, or rational choice theory, suggests that people make a decision to break a rule based on a systematic and conscious weighing of the advantages and disadvantages of doing so. An advantage might be getting to the goal faster, or being able to take a shortcut. One important disadvantage is getting caught and facing the consequences. If the probability of getting caught is low, the theory predicts that the likelihood of breaking the rule goes up. In other words, if people get away with it, they will break the rule.
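
Control theory’s calculus can be made concrete with a minimal sketch. This is my own illustration, not a model from the research discussed here: the function name and every number below are invented purely to show the trade-off the theory assumes.

```python
# Hypothetical rational-choice calculus assumed by control theory:
# compare the benefit of a shortcut with the expected cost of being caught.

def expected_payoff(benefit, penalty, p_caught):
    """Expected value of breaking the rule: the gain minus the expected sanction."""
    return benefit - p_caught * penalty

# Invented, illustrative numbers: the shortcut is worth 30, the sanction costs 300.
benefit, penalty = 30.0, 300.0

for p_caught in (0.05, 0.20, 0.50):
    ev = expected_payoff(benefit, penalty, p_caught)
    decision = "break the rule" if ev > 0 else "comply"
    print(f"p(caught)={p_caught:.2f}  expected payoff={ev:+.1f}  -> {decision}")
```

In this toy calculation nothing changes except the probability of detection, and that alone flips the predicted choice from breaking the rule to complying. That is precisely the lever control theory reaches for.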

Control theory suggests that stronger external control of behavior is a way to make people follow the rules. As long as such external control is lax, or nonexistent, it will be no surprise if people don’t comply. One thing that control theory often needs in order to work is opportunity. This is another important part of the theory: people will not break the rules unless they are given the opportunity to do so.

Cameras have expanded into an area where a mistake can mean the difference between life and death: operating rooms. A medical center has installed cameras in all 24 of its operating rooms (ORs), which performed nearly 20,000 surgeries in 2013. With wrong-site surgeries averaging approximately 40 per week nationally, and about a dozen surgical objects retained in patients every week, the new pilot program at the hospital strengthens patient safety by providing real-time feedback in its ORs. The hospital calls it remote video auditing (RVA) in a surgical setting.

RVA ensures that surgical teams take a “timeout” before they begin a procedure. The team then goes through a patient safety checklist aimed at avoiding mistakes. Each OR is monitored remotely once every two minutes to determine the live status of the procedure and ensure that surgical teams identify and evaluate key safety measures designed to prevent “never events,” such as wrong-site surgeries and medical items inadvertently left in patients. The cameras also are used to alert hospital cleaning crews when a surgery is nearing completion, which helps to reduce the time it takes to prepare the OR for the next case. To reduce the risk of infections, the monitoring system also confirms whether ORs have been cleaned thoroughly and properly overnight. In addition, all staff can see real-time OR status updates and performance feedback metrics on plasma screens throughout the OR and on smart-phone devices. In a matter of weeks, patient safety measures improved to nearly perfect scores.

The executive director of the hospital reported that within weeks of the cameras’ introduction into the ORs, the patient safety measures, sign-ins, time-outs, sign-outs, as well as terminal cleanings all improved to nearly 100 percent, and that a culture of safety and trust was now palpable among the surgical team.47

If you believe that better monitoring and surveillance give you control over what people do, and determine whether they break rules or not, then that says a number of things about both you and the people you believe need such control.

•  People are not internally motivated to follow the rules. They need prodding and checking to do so.

•  You believe, as the East Germans liked to say, that “trust is good; control is better.”

•  The reason your people don’t follow the rules is that they have made a rational calculation that the advantages of breaking the rules outweigh the disadvantages. You can change that calculus by increasing the probability of getting caught.

•  Having someone (or more likely, something) watch over them will get them to follow the rules, whether that someone (or something) is actually looking at them specifically or not.

Research has established that people who are used to being watched change their behavior as if they are monitored, even when they are not. This was the principle behind the so-called panopticum, the circular prison designed by Jeremy Bentham in the late eighteenth century. In this prison, inmates were placed in cells that circled a central observation post. From that post, the staff could watch all the inmates (hence the name panopticum, or see all). Bentham thought that this basic plan was equally applicable to hospitals, schools, sanatoriums, daycare centers, and asylums. It has, however, mostly influenced prisons around the world.

Cameras (like those used in RVA in the preceding example), recording devices (cockpit voice recorders, for example), or other monitoring systems (intelligent vehicle monitoring systems, for example) all represent panoptica as well. The idea is that they can (potentially) observe everything, anytime. Of course, they don’t. Recall from the preceding example, for instance, that the OR gets sampled by the RVA every two minutes. What if the “time-out” procedure takes 119 seconds and falls between the two samples? Those who believe in a panopticum will say that it doesn’t matter: as long as people believe that they are being watched, or that they can be watched, they will change their behavior.
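
How much really gets seen by such periodic sampling can be estimated with a small sketch. This is my own illustration, assuming snapshots at fixed two-minute intervals and activities that start at random moments within a cycle; the function name and trial count are mine, not part of any RVA system.

```python
# Monte Carlo estimate of how often an activity of a given duration is
# captured by at least one snapshot taken every SAMPLE_INTERVAL seconds.
import random

SAMPLE_INTERVAL = 120.0   # seconds between snapshots, as in the RVA example
TRIALS = 100_000

def chance_observed(duration, interval=SAMPLE_INTERVAL, trials=TRIALS):
    hits = 0
    for _ in range(trials):
        start = random.uniform(0.0, interval)   # random offset within one cycle
        # Snapshots fall at t = 0, interval, 2*interval, ...; the first one
        # that can catch this activity is the one at t = interval.
        hits += (start + duration >= interval)
    return hits / trials

for duration in (119.0, 60.0, 30.0):
    print(f"{duration:>5.0f}s activity: seen in ~{chance_observed(duration):.0%} of cases")
```

Under these assumptions, an activity lasting nearly the whole interval is almost always caught, yet can still occasionally fall entirely between two snapshots, and shorter activities are missed most of the time. The deterrent, in other words, rests less on actual observation than on the belief that one might be observed.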

But do people change their behavior in the way that you want?

•  By putting in a camera, you cannot make people more committed to doing the work the way you want them to. You might just make them more afraid of the consequences if they don’t.

•  You also make very clear that you do not trust your people, and that the only thing they can trust you to do is take action if they exceed the norms you find important.

•  Perhaps that is good enough for you. But people are good at finding ways to make it look as if they are doing what you want them to.

With this sort of intervention, your people might not change their behavior or motives so much as change the way their work looks to the outside. And it certainly does not contribute to a mature relationship of trust between you and the people you are watching.

LEARNING THEORY

People do not come to work to break rules; they typically come to work to get things done. But in doing this, they need to meet different goals—at the same time. This creates goal conflicts. For example, they need to achieve a particular production target or a deadline. Or save resources. But they also need to do so safely. Or within a budget. And within regulations and rules that apply. In many cases, it is hard to do all at the same time. In the pursuit of multiple incompatible goals, and under the pressure of limited resources (time, money), something will have to give. That something can be the rules that apply to the job. Gradually people come to see rule breaking behavior or system performance as normal or acceptable. Or expected even. Here is a great example of this.

The Columbia space shuttle accident focused attention on the maintenance work that was done on the shuttle’s external fuel tank. Maintenance workers were expected to be safe, follow the rules for reporting defects, and get the job done. A mechanic working for the contractor, whose task it was to apply the insulating foam to the external fuel tank, testified that it took just a couple of weeks to learn how to get the job done, thereby pleasing upper management and meeting production schedules. An older worker soon showed him how he could mix the base chemicals of the foam in a cup and brush it over scratches and gouges in the insulation, without reporting the repair.

The mechanic found himself doing this hundreds of times, each time without filling out the required paperwork. This way, the maintenance work did not hold up the production schedule for the external fuel tanks. Inspectors often did not check. A company program that once had paid workers hundreds of dollars for finding defects had been watered down, virtually inverted by incentives for getting the job done now.

Learning theory suggests that people break a rule because they have learned that there are no negative consequences, and that there are, in fact, positive consequences. Some have called this fine tuning—fine tuning until something might break:

Experience with a technology may enable its users to make fewer mistakes and to employ the technology more safely, and experience may lead to changes in hardware, personnel, or procedures that raise the probability of success. Studies of industrial learning curves show that people do perform better with experience. Better, however, may mean either more safely or less so, depending on the goals and values that guide efforts to learn. If better means more cheaply, or quicker, or closer to schedule, then experience may not raise the probability of safe operations.48

Success makes subsequent success appear more probable, and failure less likely. This stabilizes people’s behavior around what you might call rule breaking. But consider it from their angle. If a mechanic in the preceding example suddenly started filling out all the required paperwork, what would happen? Colleagues would look strangely at her or him, production would be held up, and managers might get upset. And the mechanic might quickly lose her or his job. Fine tuning, or such a normalization of deviance, turns rule breaking into the new norm. If you don’t do it, then you are actually the deviant. And your “deviance” (which may in fact be “malicious compliance” with the formal rules) is seen as noncompliance with the informal, unwritten, peer group rules by which the system actually runs. Nuclear power plant operators have in fact been cited for “malicious compliance.” They stuck to the rules religiously, and everything in the operation pretty much came to a standstill.

Nick McDonald, a leading human factors researcher at Trinity College Dublin, and his colleagues investigated the “normal functioning” of civil aircraft maintenance and found much of this confirmed. “Violations of the formal procedures of work are admitted to occur in a large proportion (one third) of maintenance tasks. While it is possible to show that violations of procedures are involved in many safety events, many violations of procedures are not, and indeed some violations (strictly interpreted) appear to represent more effective ways of working.

Illegal, unofficial documentation is possessed and used by virtually all operational staff. Official documentation is not made available in a way which facilitates and optimizes use by operational staff.

The planning and organizing of operations lack the flexibility to address the fluctuating pressures and requirements of production. Although initiatives to address the problems of coordination of production are common, their success is often only partial.

A wide range of human factors problems is common in operational situations. However, quality systems, whose job it is to assure that work is carried out to the required standard, and to ensure any deficiencies are corrected, fail to carry out these functions in relation to nontechnical aspects of operations. Thus, operational performance is not directly monitored in any systematic way; and feedback systems, for identifying problems which require correction, manifestly fail to demonstrate the achievement of successful resolution to these problems.

Feedback systems to manufacturers do not deal systematically with human factor issues. Formal mechanisms for addressing human needs of users in design are not well developed.

Civil aviation, including the maintenance side of the business, is a safe and reliable industry. To this extent, the system is achieving its goals. How is this compatible with the statement given above? What we have to explain is a little paradoxical: on the one hand the system appears to malfunction in several important respects, yet this malfunctioning, being often a normal and routine fact of life in maintenance organizations, does not in itself imply that the system is not safe. On the other hand, it does indicate important vulnerabilities of the system which have been demonstrated in several well investigated incidents. Can the same set of principles explain at the same time how safety is normally preserved within a system and how it is compromised?”49

Learning theory also covers the other kind of learning identified in McDonald’s research. This is the kind that is the result of interaction with others, possibly veterans of the practice or profession. They engage in informal teaching (like in the space shuttle external tank chemicals-in-a-cup example, but also as communicated in illegal documentation commonly used in maintenance, often called black books). Informal teaching, or the teaching of a “hidden curriculum,” touches on those things that a profession would rather not talk about too openly, or that managers don’t want to hear about. It might have a “teacher” open a task with something like: Here, let me show you how it’s done. That not only hints at the difficulty of getting the job done within the formal rules, but also at goal conflicts, at the need to do multiple things at the same time. Informal teaching offers a way of managing that goal conflict, of getting around it, and getting the job done. As McDonald has shown, such skills are often hugely valued, with many in the organization not even knowing about them. A hidden curriculum shows that the world in which your people work is too complex to be adequately covered by rules. There are too many subtleties, nuances, variations, and smaller and larger difficulties. And there are too many other expectations. Following the rules is just yet another thing your people have to try to accomplish.

THE BAD APPLE THEORY

But can this help breed evil or bad practitioners? Many people believe that a just culture should pursue and sanction practitioners who come to work to do a good job yet make the occasional inadvertent error. But what about the practitioners who consistently get complaints or get things wrong? Don’t you then have a responsibility to deal with these “bad apples” decisively and effectively? Some seem to have experiences that tell them so. Let’s revisit the questions raised in the Preface—about the successes and limits of the systems approach. It was very important to convince policymakers, hospital chiefs, and even patients that the problem was not bad doctors, but systems that needed to be made safer:

Individual blame was deemed the wrong solution to the problem of patient safety; as long as specific individuals were deemed culpable, the significance of other hazards would go unnoticed. The systems approach sought to make better diagnosis and treatment of where the real causes of patient safety problems lay: in the “latent conditions” of healthcare organisations that predisposed to error. In order to promote the learning and commitment needed to secure safety, a “no-blame” culture was advocated. With the spotlight switched off individuals, the thinking went that healthcare systems could draw on human factors and other approaches to improve safety.

Almost certainly, the focus on systems has been an important countervailing force in correcting the long-standing tendency to mistake design flaws for individual pathologies. There can be no doubting the ongoing need to tackle the multiple deficits in how healthcare systems are designed and organised. Encouraging examples of just how much safety and other aspects of quality can be improved by addressing these problems continue to appear. Yet, recent years have seen increasing disquiet at how the importance of individual conduct, performance and responsibility was written out of the patient safety story… We need to take seriously the performance and behaviours of individual clinicians if we are to make healthcare safer for patients.

One study found that a small number of doctors account for a very large number of complaints from patients: 3% of doctors generated 49% of complaints, and 1% of doctors accounted for 25% of all complaints. Moreover, clinician characteristics and past complaints predicted future complaints. These findings are consistent with other recent research, including work showing that some doctors are repeat offenders in surgical never-events, and a broader literature that has explored the phenomenon of “disruptive physicians” with behaviour problems as well as those facing health or other challenges that impact on patient care.

These studies show that a very small number of doctors may contribute repeatedly not just to patient dissatisfaction, but also to harm and to difficult working environments for other healthcare professionals. Identifying and dealing with doctors likely to incur multiple complaints may confer greater benefit than any general strategy directed at clinicians in general.7

There is no doubt about the genuine concern for patient safety expressed in these ideas. People’s experiences can indeed lead them to want to get rid of certain hospital staff. One can understand the seduction of sanctioning noncompliant doctors or getting rid of the deficient practitioners—the system’s bad apples—altogether.

In 1925, German and British psychologists were convinced they had cracked the safety problem in exactly this way. Their statistical analysis of five decades had led them to accident-prone workers, misfits whose personal characteristics predisposed them to making errors and having accidents. Their data told the same stories flagged by Levitt: if only a small percentage of people are responsible for a large percentage of accidents, then removing those bad apples will make the system drastically safer.

It didn’t work. The reason was a major statistical flaw in the argument. For the accident-prone thesis (or bad apple theory) to work, the probability of error and accident must be equal across every worker or doctor. Of course it isn’t. Because they engage with vastly different problems and patient groups, not all doctors are equally likely to harm or kill patients, or get complaints. Personal characteristics do not carry as much explanatory load for why things go wrong as context does. Getting rid of Levitt’s 3% bad doctors (as measured by complaints and adverse events) may simply get rid of a group of doctors who are willing to tackle trickier, more difficult cases. The accident-prone thesis lived until World War II, when the complexity of systems we made people work with—together with its fatal statistical flaw—did it in. As concluded in 1951:

…the evidence so far available does not enable one to make categorical statements in regard to accident-proneness, either one way or the other, and as long as we choose to deceive ourselves that they do, just so long will we stagnate in our abysmal ignorance of the real factors involved in the personal liability to accidents.50

Is there any point in reinvoking a failed approach to safety that was debunked in 1951? Errors are not the flaws of morally, technically, or mentally deficient “bad apples,” but the often predictable actions and omissions that are systematically connected to features of people’s tools and tasks.29
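
The statistical flaw described above can be made concrete with a minimal simulation. This is my own sketch, not the complaints data cited earlier: every doctor below is given identical competence, a small minority is simply assigned the riskier case mix, and all the numbers are invented for illustration.

```python
# Identical doctors, unequal case mix: complaints still concentrate in a few hands.
import random

random.seed(1)

N_DOCTORS = 100
CASES_PER_DOCTOR = 200
N_RISKY = 3   # the few doctors who take on the hardest, most complaint-prone cases

complaint_counts = []
for i in range(N_DOCTORS):
    # Same skill for everyone; only the probability of a complaint per case
    # differs, because the case mix differs.
    p_complaint = 0.08 if i < N_RISKY else 0.002
    complaints = sum(random.random() < p_complaint for _ in range(CASES_PER_DOCTOR))
    complaint_counts.append(complaints)

complaint_counts.sort(reverse=True)
total = sum(complaint_counts)
top_share = sum(complaint_counts[:N_RISKY]) / total
print(f"Top 3% of doctors account for {top_share:.0%} of {total} complaints")
```

With these invented numbers, the top 3% of doctors collect roughly half of all complaints without being any less competent than their colleagues. Skewed complaint counts alone, in other words, cannot establish accident-proneness; they may simply reflect who takes the hard cases.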

Bad apples might still be a concern. Even the European Union has recently created a “black list”51 of putatively deficient medical practitioners. Of course some practitioners should not be allowed to treat patients or control airplanes.

•  But who let them in?

•  Who recruited them, trained them?

•  Who mentored them, promoted them, employed them, supervised them?

•  Who gave them students to work with, residents to educate?

•  Who allowed them to stay?

•  What are they doing that is actually good for other organizational goals (getting the job done where others wouldn’t, even if it means playing fast and loose with the rules)?

If we first start to worry about incompetent practice once such practitioners are comfortably ensconced and have been doing things for years or decades, we are way behind the curve. The question is not how we get rid of bad apples, but in what way we are responsible for creating them in the first place. Yet that is not the question asked by those concerned with “bad apples” in their midst:

Many countries, including the USA and the UK, have introduced periodic recertification or “revalidation” of doctors in an attempt to take a systematic, preventative, risk-based approach. In theory, relicencing should pick up doctors whose practice is unsafe, and ensure they are enabled to improve or that their licences are restricted or removed. But relicencing systems are remarkably difficult to design and operate, not least because it is hard to ensure that bad apples are detected (and appropriate action taken) while also encouraging good apples to thrive. Any regulatory system of “prior approval” involves a high regulatory overhead, typically imposing high burdens both on the regulatees (individuals and organisations) as well as on the regulators. The “good apples” may have to divert their time and resources in demonstrating that they are good, while the bad apples may find ways of evading detection. Therefore, it is perhaps unsurprising that relicencing regimes for doctors typically attract a high level of regulatee complaint, both about burden and lack of efficacy.7

For sure, there are individual differences in how competent or “fit” people are for certain kinds of work. And in some domains, structures to oversee and regulate competence may currently not be as effective as in some other worlds. But is it the “bad apple’s” own responsibility that he or she is mismatched with the work? Or is that a system responsibility, a managerial responsibility? Perhaps these are systems that need to be improved, and such oversight is a system responsibility. Other worlds have had less trouble asserting such oversight—from the beginning of a career to the end—and have had no trouble getting practitioners in the domain to accept it (see Table 2.1).

What appears on the surface as a simple competence problem often hides a much deeper world of organizational constraints, peer and patient expectations and assumptions, as well as cultural and professional dispositions. A simple comparison between two different safety-critical fields, aviation and medicine, shows the most outward signs of vastly different assumptions about competence and its continuity, assurance, and maintenance.

For example, training on a new piece of technology, and checking somebody’s competency before she or he is allowed to use it in practice, is required in aviation. It makes no assumptions about the automatic transference of basic flying or operating skills. Similarly, skill maintenance is ensured and checked in twice-yearly simulator sessions, necessary to retain the license to fly the airplane. Failing to perform in these sessions is possible and entails additional training and rechecks.

TABLE 2.1
Different Assumptions about Competency and How to Ensure and Maintain It over Time in Aviation and Medicine

Ideas about Competence | Aviation | Medicine
Type training and check before use of new technology | Always | No
Recurrent training in simulator for skill maintenance | Multiple times a year | Still haphazard
Competency checks in simulator | Twice a year |
Emergency training (both equipment and procedures) | Every year | Not often
Direct observation and checking of actual practice | Every year or even more regular | Not so often, not regular
Crew resource management training | Every year | Not standard
Standard-format briefing before any operational phase | Always | Hardly
Standardized communication/phraseology | Yes | No
Standardized procedures for accomplishing tasks | Yes | No
Standardized divisions of labor across team members | Yes | No
Extensive use of checklists | Yes | No
Duty time limitations and fatigue management | Yes | Grudgingly
Formalized and regulated risk assessment before novel operation | Yes | Hardly ever

Note:  The table is a generalization. Different specialties in medicine make different assumptions and investments in competence, and there is a slow but gradual move toward more proficiency checking, checklist use, and teamwork and communication training in medicine too.52

Other aspects of competence, such as the ability to collaborate on a complex task in a team, are tackled by the use of standard communication and phraseology, standard-format briefings before each new operational phase, standard procedures and divisions of labor for accomplishing operational tasks, and the extensive use of checklists to ensure that the work has been done. Finally, there are limits on duty time that take into account the inevitable erosion of skills and competence under conditions of fatigue. Novel operations (e.g., a new aircraft fleet or a new destination) are almost always preceded by a risk assessment before their approval and launch, by airline and regulator alike. Skills or competence or safety levels demonstrated in one situation are not simply believed to be automatically transferable to another.

Competence alone is not trusted to sustain itself or to be sufficient for satisfactory execution of safety-critical tasks. Competence needs help. Competence is not seen as an individual virtue that people either possess or do not possess on entry into the profession. It is seen as a systems issue, something for which the organization, the regulator, and the individual all bear responsibility for the entire lifetime of the operator.52 Indeed, it would seem that getting rid of the “bad apples,” or putatively incompetent practitioners at the back end, is treating a symptom. It does not begin to delve into, let alone fix, a complex set of deep causes. This, of course, is one of the main issues with a retributive just culture as a response to rule violations. It targets the symptom, not the problem. And by targeting the symptom (i.e., the rule violator), it sends a clear message to others in the organization: Don’t get caught violating our rules; otherwise you will be in trouble too. But work still needs to get done.

STUPID RULES AND SUBCULTURE THEORY

Some rules are actually stupid. That is, they do not fit the work well or at all. Or they get in the way of doing the work. Stupid rules cost time and money and don’t help control risk. This makes it unsurprising that responsible practitioners will find ways to get their work done without complying with such rules.

An aviation business had strong internal controls to manage access to its expensive spare parts inventory. This seemed to make sense. Just like hospital pharmacies have such access controls. The controls appeared reasonable, because they were protecting a valuable asset. But it did mean that maintenance and engineering staff were walking more than they were working. It would take them up to four hours of walking each day to cover the almost ten miles to collect the paperwork, the signatures, and finally the parts vital to their maintenance work. Similarly, a large construction company implemented significant new controls, systems, and processes to reduce the risk of project overrun. In isolation, all the steps looked reasonable. But together, they created more than 200 new steps and handoffs in the approval process. The result was that it took up to 270 days to obtain “approval for approval.” Information would get dated, and the process would often have to be revisited if it was to stay at all accurate. The overruns that the organization had tried to rein in had been considerably cheaper than the productivity losses and bureaucracy costs created by the new rules, systems, and processes. The medicine was worse than the disease.

Then there is the company that makes its people await approval for small taxi fares from a weekly executive team meeting; the firm that rejects application forms from potential customers if they are not filled in using black ink; the business that demands that its receptionists record every cup of coffee poured for a visitor, yet allows managers to order as much alcohol as they like; and the organization that insists that staff complete an ergonomic checklist and declaration when they move desks, and then introduces hot-desking so that everyone spends 20 minutes a day filling out forms. So yes, there actually are stupid rules.53

Groups of people might even develop subcultures in your organization that find ways of getting work done without letting such rules get in their way. From the outside, you might see this as a subculture that considers itself above those rules. Sometimes you might even see this as a “can-do culture.” Can-do cultures are often associated with people’s ability to manage large technical or otherwise complex systems (e.g., NASA space shuttles, financial operations, military missions, commercial aircraft maintenance) despite obvious risks and vulnerabilities. What appears to set a can-do culture apart is the professional pride that people inside the subculture derive from being able to manage this complexity despite the lack of organizational understanding, trust, and real support. “Can-do” is shorthand for “Give us a challenge and don’t give us the necessary resources; give us rules that are contradictory, silly, or unnecessary, and we can still accomplish it.” The challenge such a subculture makes its own is to exploit the limits of production, budgets, manpower, and capacity. The challenge is to outwit the bureaucracy, to have it by the nose without it noticing.

Goal conflicts are an important factor here. Over the years, people in a subculture become able to prove to themselves and their organization that they can manage such contradictions and pressures despite resource constraints and technical and procedural shortcomings. They also start to derive considerable professional pride from their ability to do so. Subculture theory says that people on the inside identify more strongly with each other than with those in the organization above, below, or around them. They might talk disparagingly of their own managers, or of “B” team members, as people who don’t get it, who don’t understand, who can’t keep up. Subcultures are also powerful in the kind of coaching and informal teaching that shows newer workers how to get work done.

Having worked with a number of air traffic control organizations (or air traffic navigation service providers, as they are formally known), I have learned that there are commonly “A” and “B” teams (or at least that distinction) both inside and across different control towers and centers. Everybody basically knows who is part of which team, and such teams develop strong subcultural identities. These identities express themselves in the style in which people work and collaborate. An “A” team will typically have what is known as a “can-do culture.” One evening, in one such team, the supervisor handed a controller a list of flights. A number of lines were highlighted with a marker pen. On being asked why these flights were so special, the controller explained that the accepted capacity for the coming hour was greater than the “official” allowable number of flights. She also explained that this was typical. The highlighted flights were the ones departing from the main airport that evening: if anything went wrong, the controller explained, those were the ones they would keep on the ground first. That way, any disruptions in a system that was being asked to run over its stated capacity could be dealt with “safely.” For the controller and the supervisor, this was not problematic at all. On the contrary, they were proud to be able to locally solve, or deal with, such a capacity challenge. They were proud of being able to create safety through their practice, despite the pressures, despite the lack of resources. Paradoxically, there may be some resistance to the introduction of tools and technical support (and even controller capacity) that would make people’s jobs easier—tools or extra people that would provide them with more resources to actually meet the challenges posed to them. This resistance is born in part of professional pride: put in the tools, and the main source of professional pride would be threatened.

It might be seductive to tackle the problem of subcultures by trying to disband or repress that entire culture. But you might be attacking the goose that lays your golden eggs. “A” teams, or can-do subcultures, have become able, better than others, to generate more production, more capacity, more results. And they do so with the same or fewer resources than others, and probably with fewer complaints. What is interesting is that people in the subculture start to make the organization’s problems (or your problems) their own problems. They want to solve the production challenge, they want to meet the targets, they want to overcome your organization’s capacity constraints. And if they don’t, subculture members might feel that they have disappointed not just their organization or its management, but also themselves and their immediate peers. They feel this even though these are not their personal targets to meet or problems to overcome. But they feel such a strong identification with that problem, and derive so much personal pride from solving it, within the norms and expectations of their subculture, that they will behave as if it is indeed their own problem.

When a subculture plays fast and loose with your rules, it’s because it has to. You, after all, expect people to achieve goals other than rule compliance too—such as producing, being efficient, fast, and cheap. And the nature of your operation allows them to compare themselves with others, by production numbers or other measurable targets, for instance.

In one nuclear power plant I visited, for example, the daily kilowatts produced by each of its three reactors were shown in bright lights over the entrance to the site. Reactors tend to develop individual “personalities” and thus allow the teams managing them to develop distinct cultures in how they get the most out of their reactor. This despite the fact that the nuclear industry has, since the Three Mile Island incident in 1979, embraced the notion (and cultivated the public image) that it is safe and productive precisely (if not only) because its people are compliant with the rules.

A smarter way to address a subculture problem (if that is what you think you have) is to look at the systems, rules, and processes that have created the conditions for subcultures to emerge in the first place. Take an honest, close look at the front end where the rules enter the system, where people have to do your organization’s work while also complying with all those rules. Take a close look at the rules that you are asking your people to follow while expecting them to live up to your explicit and implicit production or other targets. For example, look for the following:53

•  Overlap in the rules. Rules often look at risks in isolation. There might be rules for safety, but also rules for security, for example. Or rules for process safety and personal safety. In certain cases these can conflict and create a compliance disaster. Subcultural responses to this are ways of working that show compliance with both, or at least make it look that way—based on what people themselves have learned about the nature of the risks they face in their work.

•  Too many cooks. Rules that people need to comply with come from many places and directions, both internal and external to your organization. If these are not coordinated, overlap and contradiction, as well as sheer volume of rules, can all go up.

•  Lack of independence. Rule-making and rule enforcement in most organizations are governed by the same unit. This gives the rule enforcers a serious stake in their compliance, because they themselves made the rule.

•  Micro-management. Rules are often unnecessarily detailed or overspecified. The goal might be to meet all the possible conditions and permutations in which they apply. But of course this is impossible. The world of work is almost always more complex than the rules we can throw at it.

•  Irrelevance. Rules can become obsolete as work progresses or new technologies become available. Decision makers should not “fire and forget” the rules into an organization. Consider putting a “sunset” or expiration date on the rules you create: don’t leave them as rigid markers of long-gone management decisions.

•  Rule creep. Driven in part by those who enforce them and those who believe in how they control risk, rules tend to creep. They seep into areas and activities that were not their original target, asking people to comply with things that may leave them puzzled.

•  Excessive reporting or recording requirements. Some processes, systems, and rules demand excessive information. This becomes especially disruptive when the compliance and reporting demands are not well-coordinated, leading to overlap or duplication.

•  Poor consultation. Many rules get made and enforced by those who don’t do the actual work themselves, or no longer do. This is what consultation with those who do the work is supposed to solve. But consultation often gets done poorly or merely (and ironically) to “comply” with consultation rules and requirements.

•  Overreactions. High-visibility problems or incidents tend to drive managers to decisions: something needs to be done. Or they need to be seen to do something. New rules and procedures are a common way to do so. But it doesn’t mean they help on the ground, at the coal face, at all.

•  Slowness. The granting of approvals (e.g., getting a lock-out/tag-out for doing work on a process plant) can take too long relative to immediate production goals. This can encourage the formation of can-do subcultures that get your work done in spite of you and your rules.

I was visiting a remote construction site not long ago. All workers, spread out over a flat, hot field, were wearing their hard hats. These were the rules, even though there was nothing that could fall from above, other than the sky itself. Any signs of noncompliance might have to be sought in more subtle places. I noticed how workers’ hard hats had all kinds of stickers and decals and codes on them. Some indicated the authorization to use mobile phones in certain restricted areas; others showed that the wearer was trained in first aid. Then I spotted a supervisor who had “GSD” pasted on the back of his hard hat. I asked what it stood for, expecting some arcane permission or authority.

“Oh, that means he ‘Gets Stuff Done,’” I was told. And “stuff” was actually not the word used. Intrigued, I spent some more time with the team. This was a can-do man. A supervisor leading a subcultural “A” team, which G..S..D… They certainly complied with the organization’s often unspoken wish to G..S..D…, if not with everything else.

RESILIENCE THEORY

This brings us to resilience theory. This theory says that work gets done—and safely and productively so—not because people follow rules or prescriptive procedures, but because they have learned how to make trade-offs and adapt their problem solving to the inevitable complexities and contradictions of their real world. They typically do this so smoothly, so effectively, that the underlying work doesn’t even get noticed or appreciated by those in charge. Leaders may persist in their belief that work goes well and safely because their people have listened to them and are following the rules:

When an organization succeeds, its managers usually attribute this success to themselves, or at least to their organization. The organization grow[s] more confident of their managers’ skill, and of existing programmes and procedures.48

The smooth, unintrusive way in which people in the organization adapt and work can give managers the impression that safety is preserved because of the stable application of rules, even though, largely invisibly, the opposite may be true.

In day-to-day operations, these adaptive capacities tend to escape attention as well as appreciation, including for their contribution to safety. As Weick and Sutcliffe (2001)54 put it: “Safety is a dynamic non-event…. When nothing is happening, a lot is happening.” Adaptive practices (resilience) are rarely part of an organization’s official plan or deliberate self-presentation, maybe not even part of its (managerial) self-understanding. Rather, they take place “behind the rational facades of the impermanent organization.”55

When nothing is happening, that is because a lot is happening. A lot more than people complying with rules in any case. In trying to understand the creation and nature of safety in organizations, resilience theory asks the question “why does it work?” rather than “why does it fail?” Things go right, resilience theory argues, not because of adequate normative prescription from above, and people complying with rules, but because of the capacity of your people to recognize problems, adapt to situations, interpret procedures so that they match the situation, and contribute to margins and system recovery in degraded operating conditions.56

The question of “why things go right” is strangely intriguing to safety people, as they are mostly interested in cases in which safety was lost, not present. But most work goes well, after all. Only a tiny proportion goes wrong. It would be ignorant to believe that rules are broken only in that small proportion, and that things otherwise go right because people strictly comply with all the rules. And you probably know this already, even if you didn’t realize it. Just consider the following:

•  Virtually all organizations reward seniority. The longer a person has practiced a profession (flying a plane, operating on patients, running a reactor), the more he or she is rewarded—in status, pay, rostering, and the like. This is not only because he or she has simply stuck it out for longer. Organizations reward seniority because with seniority comes experience. And with experience comes judgment, insight, and expertise—the kind of discretion that allows people to safely make trade-offs and adaptations so as to meet multiple organizational and task goals simultaneously, even if these conflict.

•  The work-to-rule strike (one option for industrial action) shows that if people religiously follow all the rules, the system comes to a grinding halt. In air traffic control, for instance, this is easy to demonstrate. Nuclear power has a different phrase for this: malicious compliance. But how can compliance be malicious, if compliance is exactly what the organization demands? And how can following all the rules bring work to a halt? It can, says resilience theory, because real work is done at the dynamic and negotiable interface between rules that need following and complex problems and nuanced situations that need solving and managing. In this negotiation, something needs to give. Rules get situated, localized, interpreted. And work gets done.

When it asks, “why does it work?” resilience theory sees people not as a problem to control, but as a resource to harness. Things work not because people are constrained, limited, controlled, and monitored, but because they are given space to do their work with a measure of expertise, discretion—innovation even. There is an interesting link between the trust that should be part of your just culture and your organization’s understanding of how work actually gets done:

Firms with a lack of trust in their employees limit their use of judgment or discretion. Large organizations, in particular, tend to frown on discretion, meaning that many employees—when faced with a divergent choice between doing the right thing and doing what the rule says—will opt for the latter. They prefer the quiet life of “going by the book,” even if they doubt the book’s wisdom.53

Cultures of compliance seem to have become more popular than cultures of trust, learning, and accountability. A culture of compliance is a culture that puts its faith in fixed rules, bureaucratic governance, paper trails, and adherence to protocol and hierarchy rather than local expertise, innovation, and common sense. A 30-year veteran signal engineer at a railway company told me not long ago that he was about to resign from his job. He could no longer put up with the bureaucratic “crap” that made him fill out a full 40 minutes’ worth of paperwork before he could get onto the tracks to fix a signaling issue. The irony riled him: he was complying with checklists and paper-based liability management in the name of safety, while hundreds of innocent passengers were kept at risk during that very time, because of an unfixed signal! He scoffed at the paperwork, its duplication and stupidity, as it was all written by those who’d never been on the track themselves. Would it be “just” to sanction this signal engineer—who has been on and off the tracks for three decades—for entering a track without all the paperwork done, so that he might at least assure the safety of the company’s customers?

The signal engineer is not alone. Similar cultures of compliance have risen almost everywhere, with consequences for productivity and safety. In Australia, for example, the time required for employees to comply with self-imposed (as opposed to government-imposed) rules has become a big burden: staff members such as the signal engineer spend an average of 6.4 hours on compliance each week, and middle managers and executives spend an average of 8.9 hours per week. Whereas the compliance bill for government-imposed rules came to $94 billion in 2014, the cost of administering and complying with self-imposed rules and regulations internal to an industry or organization came to $155 billion. Measured against gross domestic product, each working Australian spends the equivalent of eight weeks each year just covering the costs of compliance. Some do benefit, however. The proportion of compliance workers (those whose job it is to write, administer, record, and check on compliance with rules and regulations) has grown from 5.6% of the total workforce in 1996 to 9.2% in 2014. One in every eleven employed Australians now works in the compliance sector.53

If you don’t trust your employees, then how can you ever build a just culture? Remember, a just culture is a culture of trust, learning, and accountability. Organizations such as those described in the preceding text might have only one of those three: accountability. They hold their people accountable for following the rules, for going by the book (or making it convincingly look as if they do). But they don’t really trust their people, and they are not learning much of value from them. After all, the problems are all solved already. That is what the book is for. Apply the book; solve the problem in the way someone else has already figured out for you. But what if there is a creeping, yet persistent drift in how the operation starts to mismatch the rules that were once designed for it? This has happened in a number of cases that became spectacular accidents57 and is probably happening in many places right now. In a culture without trust, without the sort of empowerment that recognizes your people as resources to harness, your people might not come forward with better or safer ideas for how to work. You don’t learn, and you might just end up paying for it:

Top executives can usually expect both respect and obedience from their employees and managers. After all, the executives have the power to fire them. But whether or not the leaders earn their trust is a different issue, and executives ignore the difference at their peril. Without trust, the corporate community is reduced to a group of resentful wage slaves and defensive, if not ambitious, managers. People will do their jobs, but they will not offer their ideas, or their enthusiasm, or their souls. Without trust the corporation becomes not a community but a brutish state…58

Fortunately, many leaders intuitively recognize that top-down, reductionist safety management through rule compliance is not enough. And that it may get in the way of doing work safely or doing work at all. They understand that there is something more that they need to recognize and nurture in their people that goes beyond their acknowledged or handed-down safety capabilities. They understand that sometimes imagination is a more important human faculty than reason. They understand that trust is perhaps less something they have, and more something they do. They understand that trust is an option, a choice for them to pursue or make. It is something they themselves help create, build, maintain—or undermine and break as part of the relationships they have with other people. They understand that it is not so much people who are trustworthy or not. It is, rather, their relationship with people that makes it so.58 Here are some ways to begin rebuilding that trust, to invest in your people as a resource, rather than as a problem53:

•  First get your people to cleanse their operations of stupid rules. Ask them what the absolute dumbest thing is that you asked them to do in their work today. If you really want to know and show no prejudice, they will tell you (indeed, they will give you their account). This shows your trust by allowing them to judge what is dumb and what is useful. And you will learn some valuable things as well: about what may be keeping you from a productive and just culture.

•  Then change the way you see people and hold them accountable. Hold them accountable not for compliance, because then you still see your people as a problem to control. Rather, hold them accountable for their performance, for their imagination, for their ideas. Hold them accountable by creating all kinds of ways for them to contribute their accounts, their stories, of how to make things work, of how to make things go right. That way, you see your people as essential resources to harness.

•  Then challenge your people throughout the organization (including staff departments, frontline supervision, line management) to ask “what must go right?” instead of “what could go wrong?” That allows you to focus any remaining rules on what really matters. Make sure you ask whether you really need the rule or whether there is a better way to achieve the desired outcome. There probably is, but the challenge for you is to trust and allow your people to come up with it.

You would be in good company if you can pull this off. As Thomas Edison, inventor and founder of General Electric, said: “Hell, there are no rules here… we’re trying to accomplish something.”53

CASE STUDY

HINDSIGHT AND SHOOTING DOWN AN AIRLINER

There is a very important aspect to your judgment of your people’s “rule breaking.” And that is knowing the outcome of their actions. This knowledge has a huge influence on how you will (but of course shouldn’t) judge the quality of their actions.

Years ago, a student of mine was rushing to the scene of a terrible accident: an aircraft of the airline he worked for had crashed right after takeoff on a dark, early morning. The crash killed dozens of people, including his pilot colleague. My student told my class that as he was sitting in his car, gripping the steering wheel in tense anticipation of what he was going to find, he repeated to himself: “Remember, there was no crash. An accident hasn’t happened. These guys had just come to work and were going to spend a day flying. Remember…” He was bracing himself against the visual onslaught of the logical endpoint of the narrative—the debris of an airplane and human remains scattered through a field and surrounding trees. It would immediately trigger questions such as, How could this happen? What actions or omissions led up to this? What did my colleague miss? Who is to blame?

My student was priming himself to bracket out the outcome because otherwise all of his questions and assessments were going to be driven solely by the rubble strewn before his feet. He wanted to avoid becoming the prisoner of a narrative truth whose starting point was the story ending. The story ending was going to be the only thing he would factually see: a smoking hole in the ground. What preceded it, he would never see; he would never be able to know it directly from his own experience. And he knew that when starting from the outcome, his own experiential blanks would be woven back into time all too easily, to form the coherent, causal fabric of a narrative. He would end up with a narrative truth, not a historical one, a narrative that would be biased by knowing its outcome before it was even told. His colleague didn’t know the outcome either, so to be fair to him, my student wanted to understand his colleague’s actions and assessments not in the light of the outcome, but in the light of the normal day they had ahead of them.

We assume that if an outcome is good, then the process leading up to it must have been good too—that people did a good job. The inverse is true too: we often conclude that people may not have done a good job when the outcome is bad.

This is reflected, for example, in the compensation that patients get when they sue their doctors. The severity of their injury is the most powerful predictor of the amount that will be awarded. The more severe the injury, the greater is the compensation. As a result, physicians believe that liability correlates not with the quality of the care they provide, but with outcomes over which they have little control.59

Here is the common reflex: the worse the outcome, the more we feel there is to account for. This is strange: the process that led up to the outcome may not have been very different from one in which things turned out right. In fact, we often have to be reminded of the idea that an outcome does not, or should not, matter in how we judge somebody’s performance. Consider the following quote: “If catnapping while administering anesthesia is negligent and wrongful, it is behavior that is negligent and wrongful whether harm results or not.”60 Also, quality processes can still lead to bad outcomes because of the complexity, uncertainty, and dynamic nature of work.

THE HINDSIGHT BIAS

If we know that an outcome is bad, then this influences how we see the behavior that led up to it. We will be more likely to look for mistakes. Or even negligence. We will be less inclined to see the behavior as “forgivable.” The worse the outcome, the more likely we are to see mistakes, and the more things we discover that people have to account for. Here is why.

•  After an incident, and especially after an accident (with a dead patient, or wreckage on a runway), it is easy to see where people went wrong, what they should have done or avoided.

•  With hindsight, it is easy to judge people for missing a piece of data that turned out to be critical.

•  With hindsight, it is easy to see exactly the kind of harm that people should have foreseen and prevented. That harm, after all, has already occurred. This makes it easier for behavior to reach the standard of “negligence.”

The reflex is counterproductive: like physicians, other professionals and entire organizations may invest in ways that enable them to account for a bad outcome (more bureaucracy, stricter bookkeeping, practicing defensive medicine). These investments may have little to do with actually providing a safe process.

The hindsight bias is one of the most consistent and well-demonstrated biases in psychology. Yet incident reporting systems and legal proceedings—systems that somehow have to deal with accountability—have essentially no protections against it.

Lord Anthony Hidden, the chairman of the investigation into the devastating Clapham Junction railway accident in Britain, wrote, “There is almost no human action or decision that cannot be made to look flawed and less sensible in the misleading light of hindsight. It is essential that the critic should keep himself constantly aware of that fact.”61

If we don’t heed Anthony Hidden’s warning, the hindsight bias can have a profound influence on how we judge past events. Hindsight makes us

•  Oversimplify causality (this led to that) because we can start from the outcome and reason backwards to presumed or plausible “causes”

•  Overestimate the likelihood of the outcome (and people’s ability to foresee it), because we already have the outcome in our hands

•  Overrate the role of rule or procedure “violations.” Although there is always a gap between written guidance and actual practice (and this almost never leads to trouble), that gap takes on causal significance once we have a bad outcome to look at and reason back from

•  Misjudge the prominence or relevance of data presented to people at the time

•  Match outcome with the actions that went before it. If the outcome was bad, then the actions leading up to it must have been bad too—missed opportunities, bad assessments, wrong decisions, and misperceptions

If the outcome of a mistake is really bad, we are likely to see that mistake as more culpable than if the outcome had been less bad. If the outcome is bad, then there is more to account for. This can be strange, because the same mistake can be looked at in a completely different light (e.g., without knowledge of outcome) and then it does not look as bad or culpable at all. There is not much to account for. So hindsight and knowledge of outcome play a huge role in how we handle the aftermath of a mistake. Let us look here at one such case: it looks normal, professional, plausible, and reasonable from one angle. And culpable from another. The hinge between the two is hindsight: knowing how things turned out.

The case study shows the difference between foresight and hindsight nicely—and how our judgment of people’s actions depends on it.

Zvi Lanir, a researcher of decision making, tells the story of a 1973 encounter between Israeli fighter jets and a Libyan airliner. He tells it from two angles: first that of the Israeli Air Force, and then that of the Libyan airliner.62 This works so well that I do the same here. Without knowledge of the real outcome, the Israeli actions make good sense, and there would be little to account for. The decision-making process has few, if any, flaws.

•  The incident occurred during daylight, unfolded during 15 minutes, and happened less than 300 kilometers away from Israeli headquarters.

•  The people involved knew each other personally, were well acquainted with the terrain, and had a long history of working together through crises that demanded quick decisions.

•  There was no evidence of discontinuities or gaps in communication or the chain of command.

•  The Israeli Air Force commander happened to be in the Central Air Operations Center, getting a first-hand impression as events advanced.

•  The Israeli chief of staff was also on hand, by telephone, for the entire duration of the incident.

The process that led up to the outcome, in other words, reveals few problems. It is even outstanding: the chief of staff was on hand to help with the difficult strategic implications; the Air Force commander was in the Operations Center where decisions were taken; and no gaps in communication or the chain of command occurred. Had the outcome been what the Israelis suspected, there would have been little or nothing to account for. Things went as planned, trained for, and expected, and a good outcome resulted.

But then, once we find out the real outcome (the real nature of the Libyan plane), we may suddenly find reason to question all kinds of aspects of that very same process. Was it right or smart to have the chief of staff involved? What about the presence of the Air Force commander? Did the lack of discontinuities in communication and the chain of command actually contribute to nobody saying “wait a minute, what are we doing here?” The same process—once we learned about its real outcome—gets a different hue. A different accountability. As with the doctors who are sued: the worse the outcome, the more there is to account for. Forget the process that led up to it.

A NORMAL, TECHNICAL PROFESSIONAL ERROR

At the beginning of 1973, Israeli intelligence received reports of a possible suicide mission by Arab terrorists. The suggestion was that they would commandeer a civilian aircraft and try to penetrate Israeli airspace over the Sinai Desert with it, to crash into the Israeli nuclear installation at Dimona or other targets in Beer Sheva. On February 21, that scenario seemed to be set in motion. A sandstorm covered much of Egypt and the Sinai Desert that day.

At 13:54 hours (1:54 P.M.), Israeli radar picked up an aircraft flying at 20,000 feet in a northeasterly direction from the Suez bay. Its route seemed to match that used by Egyptian fighters for their intrusions into Israeli airspace, known to the Israelis as a “hostile route.” None of the Egyptian war machinery on the ground below, supposedly on full alert and known to the Israelis as highly sensitive, came into action to do anything about the aircraft. It suggested collusion or active collaboration.

Two minutes later, the Israelis sent two F-4 Phantom fighter jets to identify the intruder and intercept it if necessary. After only a minute, they had found the jet. It turned out to be a Libyan airliner. The Israeli pilots radioed down that they could see the Libyan crew in the cockpit, and that they were certain that the Libyans could see and identify them (the Shield of King David being prominently displayed on all Israeli fighter jets).

At the time, Libya was known to the Israelis for abetting Arab terrorism, so the Phantoms were instructed to order the intruding airliner to descend and land at the nearby Refidim Airbase in the south of Israel. There are international rules for interception, meant to prevent confusion in tense moments where opportunities for communication may be minimal and opportunities for misunderstanding huge. The intercepting plane is supposed to signal by radio and wing-rocking, while the intercepted aircraft must respond with similar signals, call the air traffic control unit it is in contact with, and try to establish radio communication with the interceptor.

The Libyan airliner did none of that. It continued to fly straight ahead, toward the northeast, at the same altitude. One of the Israeli pilots then pulled up alongside the jet, flying only a few meters from its right cockpit window. The copilot was looking right at him. The copilot then appeared to signal back, indicating that the Libyan crew had understood what was going on and that they were going to comply with the interceptors. But the airliner did not change course, nor did it descend.

At 14:01, the Israelis decided to fire highly luminescent tracer shells in front of the airliner’s nose, to force it to respond. It did. The airliner descended and turned toward the Refidim Airbase. But then, when it had reached 5000 feet and lowered its landing gear, the airliner’s crew seemed to change its mind. Suddenly the airliner broke off the approach, retracted its landing gear, started climbing again, and turned west. It looked like an escape.

The Israelis were bewildered. A captain’s main priority is the safety of his or her passengers: doing what this Libyan crew was doing showed none of that concern. So maybe the aircraft had been commandeered and the passengers (and crew) were along for the ride, or perhaps there were no passengers onboard at all. Still, these were only assumptions. It would be professional, the right thing to do, to double-check. The Israeli Air Force commander decided that the Phantoms should take a closer look, again.

At 14:05, one of the Phantoms flew by the airliner within a few meters and reported that all the window blinds were drawn. The Air Force commander became more and more convinced that this might be an attempted, but foiled, terrorist attack. Letting the aircraft get away now would only let it have another go later.

At 14:08, he gave the order for the Israeli pilots to fire at the edges of the wings of the airliner, so as to force it to land. The order was executed. But even with the tip of its right wing hit, the airliner still did not obey the orders and continued to fly westward. The Israelis opened all international radio channels, but could not identify any communication related to this airliner. Two minutes later, the Israeli jets were ordered to fire at the base of the wings. This made the airliner descend and aim, as best it could, for a flat sandy area to land on. The landing was not successful. At 14:11, the airliner crashed and burned.

A NORMATIVE, CULPABLE MISTAKE

Had the wreckage on the ground revealed no passengers, and a crew intent on doing damage to Israeli targets, the decisions of the relevant people within the Israeli Air Force would have proven just and reasonable. There would be no basis for asserting negligence. As it turned out, however, the airliner was carrying passengers. Of 116 passengers and crew, 110 were killed in the crash.

The cockpit voice recorder revealed a completely different reality, a different “truth.” There had been three crew members in the cockpit: a French captain, a Libyan copilot, and a French flight engineer (sitting behind the two pilots). The captain and the flight engineer had been having a conversation in French, while enjoying a glass of wine. The copilot evidently had no idea what they were talking about, lacking sufficient proficiency in French. It was clear that the crew had no idea that they were deviating more than 70 miles from the planned route, first flying over Egyptian and later Israeli war zones.

At 13:44, the captain first became uncertain of his position. Instead of consulting with his copilot, he checked with his flight engineer (whose station had no navigational instruments), but did not report his doubts to Cairo approach. At 13:52, he got Cairo’s permission to start a descent toward Cairo International Airport. At 13:56, still uncertain about his position, the captain tried to receive Cairo’s radio navigation beacon, but got directions that were contrary to those he had expected on the basis of his flight plan (as the airport was now receding further and further behind him).

Wanting to sort things out further, and hearing nothing else from Cairo approach, the crew continued on their present course. Then, at 13:59, Cairo came on the radio to tell the crew that they were deviating from the airway. They should “stick to beacon and report position.” The Libyan copilot now reported for the first time that they were having difficulties receiving the beacon.

At 14:00, Cairo approach asked the crew to switch to Cairo control: a sign that they believed the airliner was not yet close enough to the airport to land. Two minutes later the crew told Cairo control that they were having difficulties receiving another beacon (the Cairo NDB, or non-directional beacon, with a certified range of about 50 kilometers), but did not say they were uncertain of their position. Cairo control asked the aircraft to descend to 4000 feet.

Not much later, the copilot reported that they had four “MiGs” (Mikoyan-Gurevich, Russian-built fighter airplanes) behind them, mistaking the Israeli Phantoms for Soviet-built Egyptian fighter jets. The captain added that he guessed they were having some problems with their heading and that they now had four MiGs behind them. He asked Cairo for help in getting a fix on his position. Cairo responded that their ground-based beacons were working normally, and that they would help find the airliner by radar.

Around that same time, one of the Phantoms had pulled up next to the copilot’s window. The copilot had signaled back, and turned to his fellow crewmembers to tell them. The captain and flight engineer once again conferred in French about what was going on, with the captain angrily complaining about the Phantom’s signals, saying that this was not the way to talk to him. The copilot did not understand.

At 14:06, Cairo control advised the airliner to climb to 10,000 feet again, as they were not successful in getting a radar fix on the airplane (it was way out of their area and probably not anywhere near where they expected it to be). Cairo had two airfields: an international airport on the west side and a military airbase on the east. The crew likely interpreted the signals from the “MiGs” to mean that they had overshot the Cairo international airport and that the fighter jets had come to guide them back. This would explain why they suddenly climbed back up after approaching the Refidim airbase. Suspecting that they had lined up for Cairo East (the military field), now with fighters on their tail, the crew decided to turn west and find the international airport instead.

At 14:09, the captain snapped at Cairo control that they were “now shot by your fighter,” upon which Cairo said they would tell the military that there was an unreported aircraft somewhere out there, though they did not know where it was. When they were shot at again, the crew panicked, speaking faster and faster in French. Were these Egyptians crazy? Then, suddenly, the copilot identified the fighters as Israeli warplanes. It was too late, with devastating consequences.

HINDSIGHT AND CULPABILITY

The same actions and assessments that represent a conscientious discharge of professional responsibility can, with knowledge of outcome, come to be seen as a culpable, normative mistake.

With knowledge of outcome, we know what the commanders or pilots should have checked better (because we now know what they missed, for example, that there were passengers on board and that the jet was not hijacked). After the fact, there are always opportunities to remind professionals what they could have done better (Could you not have checked with the airline? Could your fighters not have made another few passes on either side to see the faces of passengers?). Again, had the airliner not contained passengers, nobody would have asked those questions. The professional discharge of duty would have been sufficient if that had been the outcome. And, conversely, had the Israelis known that the airliner contained passengers, and was not hijacked but simply lost, they would never have shot it down.

Few of those in a position to judge the culpability of a professional mistake have much (or any) awareness of the debilitating effects of hindsight. Judicial proceedings, for example, will stress how somebody’s behavior did not make sense, how it violated narrow standards of practice, rules, or laws.

Jens Rasmussen once pointed out that if we (or a prosecutor) find ourselves asking, “How could they have been so negligent, so reckless, so irresponsible?” it is not because the people in question were behaving bizarrely. It is because we have chosen the wrong frame of reference for understanding their behavior. The frame of reference for understanding people’s behavior, and judging whether it made sense, is their own normal work context, the context they were embedded in. This is the point of view from which decisions and assessments are sensible, normal, daily, unremarkable, expected. The challenge, if we really want to know whether people anticipated risks correctly, is to see the world through their eyes, without knowledge of outcome, without knowing exactly which piece of data will turn out critical afterward.

THE WORSE THE OUTCOME, THE MORE TO ACCOUNT FOR

If an outcome is worse, then we may well believe that there is more to account for. That is probably fundamental to the social nature of accountability. We may easily believe that the consequences should be proportional to the outcome of somebody’s actions. Again, this may not be seen as fair: recall the example of the physicians earlier in this chapter. Physicians believe that liability is connected to outcomes that they have little control over, not to the quality of care they provided. To avoid liability, in other words, you don’t need to invest in greater quality of care. Instead, you invest in defensive medicine: more tests, covering your back at every turn.

The main question for a just culture is not about matching consequences with outcome. It is this: Did the assessments and actions of the professionals at the time make sense, given their knowledge, their goals, their attentional demands, their organizational context? Satisfying calls for accountability here would not be a matching of bad outcome with bad consequences for the professionals involved. Instead, accountability could come in the form of reporting or disclosing how an assessment or action made sense at the time, and how changes can be implemented so that the likelihood of it turning into a mistake declines.
