3

Safety Reporting and Honest Disclosure

A basic premise of a just culture is that it helps people report safety issues without fear of the consequences. It is almost an article of faith in the safety community that reporting helps learning, and that such learning helps improve safety. This seems borne out by the many incidents and accidents that, certainly in hindsight, appear to have been preceded by sentinel events. There is an implicit understanding that reporting is critical for learning. And learning is critical for improving safety (or, if nothing else, for staying just ahead of the changing nature of risk). The safety literature, however, has been modest in providing systematic evidence for the link among reporting, learning, and safety—if only because we might never know the accidents that a safety report and subsequent improvements prevented from happening. For the purposes of this book, however, let us work off the premise that both honest disclosure and nonpunitive reporting have important roles to play in the creation of a safe organization. And in the creation of a just culture.

This chapter looks at safety reporting in more detail, and examines its links to a just culture inside of your organization. It also considers the regulatory and legal environment that surrounds your organization: How does that influence what you can and need to do inside? It also discusses disclosure, the obligations for it as well as the risks it creates for practitioners, and some possible protections that organizations might put in place. Chapter 4 deals in more detail with the surrounding environment, particularly the criminalization of “human error.”

I once overheard a conversation between two air traffic controllers. They were talking about an incident in their control center. They discussed what they thought had happened, and who had been involved. What should they do about it?

“Remember,” said one controller to the other, “Omertà!”

The other nodded, and smiled with a frown.

I said nothing but wondered silently: “Omertà”?

Surely this had something to do with the Mafia. Not with professional air traffic controllers.

Or any other professionals.

Indeed, a common definition of omertà is “code of silence.” It seals people’s lips. It also refers to a categorical prohibition to collaborate with authorities. These controllers were not going to talk about this incident. Not to anybody else, or anybody in a position of authority in any case. Nor were they going to collaborate voluntarily with supervisors, managers, investigators, and regulators.

I live my professional life in occasional close contact with professional groups—firefighters, pilots, nurses, physicians, police, nuclear power plant operators, inspectors, air traffic controllers. A “code of silence” is enforced and reproduced in various ways.

A senior captain with a large, respectable airline, who flies long-distance routes, told me that he does not easily volunteer information about incidents that happen on his watch. If only he and his crew know about the event, then they typically decide that that knowledge stays there. No reports are written; no “authorities” are informed.

“Why not?” I wanted to know.

“Because you get into trouble too easily,” he replied. “The airline can give me no assurance that information will be safe from the prosecutor or anybody else. So I simply don’t trust them with it. Just ask my colleagues. They will tell you the same thing.”

I did. And they did.

Professionals under these circumstances seem to face two bad alternatives:

•  Either they report a mistake and get into some kind of trouble for it (they are stigmatized, reprimanded, fired, or even prosecuted).

•  Or they do not report the mistake and keep their fingers crossed that nobody else will do so either (“Remember: omertà!”).

The professionals I talked to know that they can get into even worse trouble if they don’t report and things come out anyway. But to not talk, and hope nobody else does either, often seems the safest bet. From the two bad alternatives, it is the less bad one.

I once spoke at a meeting at a large teaching hospital, attended by hundreds of healthcare workers. The title of the meeting was “I got reported.” The rules of the country where the meeting was held say that it is the nurse’s or doctor’s boss who determines whether an incident should be reported to the authorities. And the boss then does the reporting. “I got reported” suggests that the doctor or nurse is at the receiving end of the decision to report: a passive nonparticipant. A casualty, perhaps, of forces greater than themselves, and interests other than their own. The nurse or doctor may have to go to his or her boss to report a mistake. But what motives do they have to do so? The formal account of what happened, and what to do about it, ultimately rests in the hands of the boss.

A FEW BAD APPLES?

We could think that professionals who rely on “omertà” are simply a few bad apples. They are uncooperative, unprofessional exceptions. Most professions, after all, carry an obligation to report mistakes and problems. Otherwise their system cannot learn and improve. So if people do not want to create safety together, there must be something wrong with them.

This, of course, would be a convenient explanation. And many rely on it. They will say that all that people need to do is report their mistakes. They have nothing to fear. Report more! Then the system will learn and get better. And you will have a part in it. Indeed, almost every profession I have worked with complains about a lack of reporting.

Yet I am not surprised that people sometimes don’t want to report. The consequences can be negative. Organizations can respond to incidents in many ways. In the aftermath of an incident or accident, pressures on the organization can be severe. The media wants to know what went wrong. Politicians may too. They all want to know what the organization is going to do about it. Who made a mistake? Who should be held responsible? Even a prosecutor may become interested. National laws (especially those related to freedom of information) mean that data that people voluntarily submit about mistakes and safety problems can easily fall into the wrong hands. Also, I can understand that people sometimes don’t want to report because they have lost trust in the system, or in their manager, or in their organization, to do anything with their reports and the concerns in them.

Not reporting is hardly about a few bad apples. It is about structural arrangements and relationships between parties that either lay or deny the basis for trust. Trust is necessary if you want people to share their mistakes and problems with others. Trust is critical. But trust is hard to build and easy to break.

GETTING PEOPLE TO REPORT

Getting people to report can be difficult. Keeping up the reporting rate once the system is running can be equally difficult, though often for different reasons. Getting people to report is about two major things:

•  Maximizing accessibility

•  Minimizing anxiety

The means for reporting must be accessible. If you have reporting forms, they need to be easily and ubiquitously available, and should not be cumbersome to fill in or send up. Computer-based systems, of course, can be made as user-friendly or unfriendly as the imagination of their developers allows. But what about anxiety? Initially, people will ask questions like:

•  What will happen to the report?

•  Who else will see it?

•  Do I jeopardize myself, my career, my colleagues?

•  Does this make legal action against me easier?

You should ask yourself whether there is a written policy that explains to everybody in the organization what the reporting process looks like; what the consequences of reporting could be; and what rights, privileges, protections, and obligations people may expect. Without a written policy, ambiguity can persist. And ambiguity means that people will be less inclined to share safety-critical or risk-related information with you.

Getting people to report is about building trust: trust that the information provided in good faith will not be used against those who reported it. Such trust must be built in various ways. An important way is by structural (legal) arrangement. Making sure people know about the organizational and legal arrangements surrounding reporting is very important: disinclination to report is often related more to uncertainty about what can happen with a report than to any real fear about what will happen. One organization, for example, has handed out credit-card-sized cards to its employees to inform them about their rights and duties around an incident.

Another way to build trust is by historical precedent: making sure there is a good record for people to lean on when considering whether to report an event or not. But as mentioned previously, trust is hard to build and easy to break: one organizational or legal response to a reported event that shows that divulged information can somehow be used against the reporter can destroy months or years of building goodwill.

WHAT TO REPORT?

The belief is that reporting contributes to organizational learning. It is to help prevent recurrence by making systemic changes that aim to redress some of the basic circumstances that went awry. This means that any event that has the potential to shed light on (and help improve) the conditions for safe practice is, in principle, worth reporting and investigating. But that still does not create very meaningful guidance.

Many professions have codified the obligation to report. The Eurocontrol Safety and Regulatory Requirement (ESARR 2), for example, told air traffic controllers and their organizations that “all safety occurrences need to be reported and assessed, all relevant data collected and lessons disseminated.” Saying that all safety occurrences need to be reported is easy. But what counts as a “safety occurrence”? This can be open to interpretation: the missed approach of the 747 in the case study about honest mistakes was, according to the pilot, not a safety occurrence. It was not worth reporting. But according to his bosses and regulators, it was. And the fact that he did not report it made it all the more so.

Professional codes about reporting, then, should ideally be more specific than saying that “all safety occurrences” should be reported. What counts as a clear opportunity for organizational learning to one person may be a dull, unreportworthy event to another. Something that could have gone terribly wrong, but did not, is not necessarily a clear indication of reportworthiness either. After all, in many professions things can go terribly wrong all the time (“I endanger my passengers every day I fly!”). But that does not make reporting everything particularly meaningful.

Which event is worthy of reporting and investigating is, at its heart, a judgment. First, it is a judgment by those who perform safety-critical work at the sharp end. Their judgment about whether to report something is shaped foremost by experience—the ability to deploy years of practice into gauging the reasons and seriousness behind a mistake or adverse event.

To be sure, those years of experience can also have a way of blunting the judgment of what to report. If all has been seen before, why still report? What individuals and groups define as “normal” can glide, incorporating more and more nonconformity as time goes by and as experience mounts. In addition, the rhetoric used to talk about mistakes can serve to “normalize” (or at least deflect) an event away from the professionals involved. A “complication” or a “noncompliant patient” is not so compelling to report (though perhaps worth sharing with peers in some other forum) as the same event denoted as, for example, a diagnostic error.

Whether an event is worth reporting, in other words, can depend on what language is used to describe that event in the first instance. This has another interesting implication: In some cases a lack of experience (either because of a lack of seniority or inexperience with a particular case, or in that particular department) can be immensely refreshing in questioning what is “normal” (and thus what should be reported or not).

Investing in a meeting where different stakeholders share their examples of what is worth reporting could be useful. It could result in a list of examples that can be handed to people as partial guidance on what to report. But in the end, given the uncertainties about how things can be seen as valuable by other people, and how they could have developed, the ethical obligation might well be: “If in doubt, report.”

But then, what delimits an “event”? The reporter needs to decide where the reported event begins and ends. She or he needs to decide how to describe the roles and actions of other participants who contributed to the event (and to what extent to identify other participants, if at all). Finally, the reporter needs to settle on a description that offers the organization a chance to understand the event and find leverage for change. Many of these things can be structured beforehand, for example, by offering a reporting form that gives guidance and asks particular questions (“need-to-know” for the organization to make any sense of the event) as well as ample space for free-text description.

KEEPING THE REPORTS COMING IN

Keeping up the reporting rate is also about trust. But it is even more about involvement, participation, and empowerment. Building enough trust so that people do not feel put off from sending in reports in the first place is one thing. Building a relationship of participation and involvement that will actually get people to send in reports, and keep them coming, is quite another.

Many people come to work with a genuine concern for the safety and quality of their professional practice. If, through reporting, they have an opportunity to actually contribute to visible improvements, then few other motivations or exhortations to report are necessary. Making a reporter part of the change process can be a good way forward, but this implies that the reporter wants (or dares) to be identified as such, and that managers have no problems with integrating employees in their work for improved safety and quality.

Sending feedback into the department about any changes that result from reporting can also be a good strategy. But it should not become the stand-in for doing anything else with the reports. Many organizations get captured by the belief that reporting is a virtue in itself: if only people report mistakes, and their self-confessions are distributed back to the operational community, then things will automatically improve and people will feel motivated to keep reporting. This does not work for long. Active engagement with that which is reported, and perhaps even with those who report, is necessary. Active, demonstrable intervention that acts on the reported information is too.

REPORTING TO MANAGERS OR TO SAFETY STAFF?

In many organizations, the line manager is the recipient of reports. This makes (some) sense: the line manager probably has responsibility for safety and quality in the primary processes, and should have the latest information on what is or is not going well. But this practice has some side effects.

•  One is that it hardly renders reporters anonymous (given the typical size of a department), even if no name is attached to the report.

•  The other is that reporting can have immediate line consequences (an unhappy manager, consequences for one’s own chances to progress in career).

•  Especially in cases where the line manager her- or himself is part of the problem the reporter wishes to identify, such reporting arrangements all but stop the flow of useful information.

I remember studying one organization that had shifted from a reporting system run by line managers to one run by safety-quality staff. Before the transition, employees actually turned out very ready to confess an “error” or “violation” to their line manager. It was almost seen as an act of honor. Reporting it to a line organization—which would see an admission of error as a satisfactory conclusion to its incident investigation—produced rapid closure for all involved. Management would not have to probe deeper, as the operator had seen the error of his or her ways and had been reprimanded and told or trained to watch out better next time.

For the operators, simply and quickly admitting an error avoided even more or deeper questions from their line managers. Moreover, it could help avert career consequences, in part by preventing information from being passed to other agencies (e.g., the industry’s regulator). Fear of retribution, in other words, did not necessarily discourage reporting. In fact, it encouraged a particular kind of reporting: a mea culpa with minimal disclosure that would get it over with quickly for everybody. “Human error” as cause seemed to benefit everyone—except organizational learning.

As one employee told us: “I didn’t tell the truth about what took place, and this was encouraged by the line manager. He had made an assumption that the incident was due to one factor, which was not the case. This helped me construct and maintain a version of the story that was more favorable for us (the frontline employees).”

Perhaps the most important reason to consider a reporting system that is not run just by line management is that it can radically improve organizational learning. Here is what one line manager commented after having been given a report by an operator.

The incident has been discussed with the operator concerned, pointing out that priorities have to be set according to their urgency. The operator should not be distracted by a single problem and neglect the rest of his working environment. He has been reminded of applicable rules and allowable exceptions to them. The investigation report has been made available to other operators by posting it on the internal safety board.

Such countermeasures really do not represent the best in organizational learning. In fact, they sound like easy feel-good fixes that are ultimately illusory. Or simply very short-lived. Opening up a parallel (or alternative) system can help. The reports in this system should go to a staff officer who has no stake in running the department (e.g., a safety or quality official), not to a line manager. The difference between what gets reported to a line manager and what is written in confidential reports can be significant. So can the difference in understanding, involvement, and empowerment that the reporter feels.

It is very good that a colleague, who understands the job, performs the interviews. They asked me really useful questions and pointed me in directions that I hadn’t noticed. It was very positive compared to before. Earlier you never had the chance to understand what went wrong. You only got a conclusion to the incident. Now it is very good that the report is not published before we have had the chance to give our feedback. You are very involved in the process now and you have time to go through the occurrence. Before, you were placed in the hot seat and you felt guilty. Now, during interviews with the safety staff, I never had the feeling that I was accused of anything.

Of course, keeping a line-reporting mechanism in place can be productive for continuous improvement work, especially if things need to be brought to the attention of relevant managers immediately. But you should perhaps consider a separate, parallel confidential reporting system if you don’t already have one. Line-based and staff-based (or formal and confidential) reporting mechanisms offer different kinds of leverage for change. Not mining both data sources for information could mean your organization is losing out on improvement opportunities.30

THE SUCCESSFUL REPORTING SYSTEM: VOLUNTARY, NONPUNITIVE, AND PROTECTED

In summary, near-miss reporting systems that work in practice do a few things really well.63 They are

•  Voluntary

•  Nonpunitive

•  Protected

VOLUNTARY

The language in the Eurocontrol rules suggests that reporting should be compulsory. All safety occurrences need to be reported. Many organizations have taken it to mean just that, and have told their employees that they are obligated to report safety occurrences. But this might actually make little sense. Why does research show that a voluntary system is better, leaving it to people’s own judgment to report or not? Mandatory reporting implies that everybody has the same definition of what is risky, what is worthy of reporting. This, of course, is seldom the case. The airline captain who maintained omertà on his own flight deck could—in principle—be compelled to report all incidents and safety occurrences, as Eurocontrol would suggest. But the captain probably has different ideas about what is risky and worth reporting than his organization does.

So who gets to say what is worthy of reporting? If the organization claims that right, then it needs to specify what it expects its professionals to report. This turns out to be a pretty hopeless affair. Such guidance is either too specific or too general. All safety occurrences, as Eurocontrol suggests? Sure, then of course practitioners will decide that what they were involved in wasn’t really a safety occurrence. Well, the organization may come back and put all kinds of numeric borders in place. If it is lower than 1000 feet, or closer than 5 miles, or longer than 10 minutes, then it is a safety occurrence. But a line means a division, a dichotomy. Things fall on either side. What if a really interesting story unfolds just on the other side, the safe side, of that border—at 9 minutes and 50 seconds? Or 1100 feet? Or 5.1 miles? Of course, you might say, we have to leave that to the judgment of the professional. Ah! That means that the organization does not decide what should be reported! It is the professional again. That is where the judgment resides. Which means the system is voluntary (at least on the safe side of those numbers), even though you think it is compulsory.

The other option is to generate a very specific list of events that will need to be reported. The problem there is that practitioners can then decide that on some little nuance or deviation their event does not match any of the events in the list. The guidance will have become too specific to be of any value. Again, the judgment is left up to the practitioner, and the system once again becomes voluntary rather than compulsory.

Also, making reporting mandatory implies some kind of sanction if something is not reported, which destroys the second ingredient for success: having a nonpunitive system. It may well lead to a situation in which practitioners will get smarter at making evidence of safety-critical events go away (so they will not get punished for not reporting them). As said previously, practitioners can engage in various kinds of rhetoric or interpretive work to decide that the event they were involved in was not a near miss, not an event worthy of reporting or taking any further action on.64 Paradoxically then, a mandatory system can increase underreporting—simply because the gap between what the organization expects to be reported and what gets reported gets stretched to its maximum.

NONPUNITIVE

Nonpunitive means that the reporter is not punished for revealing his or her own violations or other breaches or problems of conduct that might be construed as culpable. In other words, if people report their honest mistakes in a just culture, they will not be blamed for them. The reason is that an organization can benefit much more by learning from the mistakes that were made than from blaming the people who made them. So people should feel free to report their honest mistakes.

The problem is, often they don’t.

Often they don’t feel free, and they don’t report.

This is because reporting can be risky. Many things can be unclear:

•  How exactly will the supervisor, the manager, or the organization respond?

•  What are the rights and obligations of the reporter?

•  Will the reported information stay inside of the organization? Or will other parties (media, prosecutor) have access to it as well?

The reason why people fail to report is not because they want to be dishonest. Nor because they are dishonest. The reason is that they fear the consequences. And often with good reason.

•  Either people simply don’t know the consequences of reporting, so they fear the unknown, the uncertainty

•  Or the consequences of reporting really can be bad, and people fear invoking such consequences when they report information themselves

Although the first reason may be more common, either reason means that there is serious work to do for your organization. In the first case, that work entails clarification. Make clear what the procedures and rules for reporting are, what people’s rights and obligations are, and what they can expect in terms of protection when they report. In the second case it means trying to make different structural arrangements, for example, with regulators or prosecutors, with supervisors or managers, about how to treat those who report. This is much more difficult, as it involves the meshing of several different interests.

Not punishing that which gets reported makes great sense, simply because otherwise it won’t get reported. This of course creates a dilemma for those receiving the report (even if via some other party, e.g., quality and safety staff): they want to hear everything that goes on but cannot accept that it goes on. Punish one reporter, and the willingness of others to report anything would take a severe beating. The superficially attractive option is to tell practitioners (as much guidance material around reporting suggests) that their reports are safe in the hands of their organization unless there is evidence of bad things (such as negligence or deliberate violations). Again, such guidance is based on the illusion of a clear line between what is acceptable and what is not—as if such things can be specified generically, beforehand. They can’t.

From the position of a manager or administrator, one way to manage this balance is to involve the practitioners who would potentially report (not necessarily the one who did report, because if it’s a good reporting system, that might not be known to the manager; see later). What is their assessment of the event that was reported? How would they want to be dealt with if it were their incident, not their colleague’s? Remember that perceived justice lies less in the eventual decision than in who and what is involved in making that decision. For a manager, keeping the dialogue open with her or his practitioner constituency must be the most important aim. If dialogue is killed by rapid punitive action, then a version of the dialogue will surely continue elsewhere (behind the back of the manager). That leaves the organization none the wiser about what goes on and what should be learned.

PROTECTED

Finally, successful reporting systems are protected. That means that reports are confidential rather than anonymous. What is the difference? “Anonymity” typically means that the reporter is never known, not to anybody. No name or affiliation has to be filled in anywhere. “Confidentiality” means that the reporter fills in a name and affiliation and is thus known to whoever gets the report. But from there on, the identity is protected under any of a variety of industrial, organizational, or legal arrangements. If reporting is anonymous, two things might happen quickly. The first is that the reporting system becomes the trash can for any kind of vitriol that practitioners may accumulate about their job, their colleagues, or their hospital during a workday, workweek, or career. The risk of this, of course, is larger when there are few meaningful or effective line management structures in place that could take care of such concerns and complaints. Such senseless and useless bickering could clog the pipeline of safety-critical information: signals of potential danger would get lost in the noise of grumbling. That is why confidentiality makes more sense. The reporter may feel some visibility, some accountability even, for reporting things that can help the organization learn and grow.

A second problem with an anonymous reporting system is that the reporter cannot be contacted if the need arises for any clarifications. The reporter is also out of reach for any direct feedback about actions taken in response to the report. The NASA Aviation Safety Reporting System (ASRS) recognized this quickly after finding reports that were incomplete or could have been much more potent in revealing possible danger if only this or that detail could be cleared up. As soon as a report is received, the narrative is separated from any identifying information (about the reporter and the place and time of the incident) so that the story can start to live its own life without the liability of recognition and sanction appended to it. This recipe has been hugely successful. ASRS receives more than 1000 reports each week.65 Of course, gathering data is not the same as analyzing it, let alone learning from it. Indeed, such reporting systems can become the victims of their own success: the more data you get, the more difficult it can be to make sense of it all, certainly if the gathering of data outpaces the analytic resources available to you.

WHAT IF REPORTED INFORMATION FALLS INTO THE WRONG HANDS?

Most democracies have strong freedom-of-information legislation. This gives all citizens, in principle, access from the outside to nonconfidential information. Such transparency is critical to democracy, and in some countries freedom of information is even enshrined in the constitution. But the citizen requesting information can easily be an investigative journalist, a police investigator, or a prosecutor. Freedom of information is a particular issue when the organization itself is government-owned (as many hospitals, air traffic service providers, and even some airlines are). Moreover, safety investigating bodies are also government organizations and thus subject to such legislation. This can make people unwilling to collaborate with safety investigators.

The potential for such exposure can create enormous uncertainty. And uncertainty typically dampens people’s willingness to report. People become anxious about leaving information in files with their organization. In fact, the organization itself can become anxious about even having such files. Having them creates the risk that the names of professionals end up in the public domain. This in turn can subject safety information to oversimplification, distortion, and misuse by those who do not understand the subtleties and nuances of the profession.

THE DIFFERENCE BETWEEN DISCLOSURE AND REPORTING

For the purposes of just culture, it is useful to make a distinction between reporting and disclosure (Table 3.1)1:

•  Reporting is the provision of information to supervisors, oversight bodies, or other agencies. Reporting means giving a spoken or written account of something that you have observed, participated in, or done to an appointed party (supervisor, safety manager). Reporting is thought necessary because it contributes to organizational learning. Reporting is not primarily about helping customers or patients, but about helping the organization (e.g., colleagues) understand what went wrong and how to prevent recurrence.

•  Disclosure is the provision of information to customers, clients, patients, and families. The ethical obligation to disclose your role in adverse events comes from a unique, trust-based relationship with the ones who rely on you for a product or service. Disclosure can be seen as a marker of professionalism. Disclosure means making information known, especially information that was secret or that could be kept secret. Information about incidents that only one or a few people were involved in, or that only professionals with inside knowledge can really understand, could qualify as such.

TABLE 3.1
The Difference between Disclosure and Reporting for Individuals and Organizations

•  Individual. Reporting: providing a written or spoken account about an observation or action to supervisors, managers, or safety/quality staff. Disclosure: making information known to customers, clients, or patients.

•  Organization. Reporting: providing information about employees’ actions to regulatory or other (e.g., judiciary) authorities when required. Disclosure: providing information to customers, clients, patients, or others affected by the organization’s or its employees’ actions.

As with any complex problem, neither reporting nor disclosure is a guarantee of a just culture. Reporting, for example, can be used (as in the preceding examples) as a way to deny responsibility and engagement with the problem. When certain protections are in place, reporting can perhaps even be used to insulate oneself from accountability. Disclosure, too, can sometimes lead to injustice. Disclosure beforehand to clients, patients, or customers can be used to immunize an organization (or individuals in it) from legal or other recourse after something has gone wrong. The spread of subprime mortgages in the 2000s, for example, led to complex transactions with clients to whom everything the law asked for was disclosed, though in inscrutable fine print. After the bubble burst, and people started losing homes, banks fended off lawsuits by arguing that they had done and disclosed everything the law required, and that people had confirmed with their signatures that they understood it all. That can be the harm and injustice of disclosure.

•  Practitioners typically have an obligation to report to their organization when something went wrong. As part of the profession, they have a duty to flag problems and mistakes. After all, they represent the leading edge, the sharp end of the system: they are in daily contact with the risky technology or business. Their experiences are critical to learning and continuous improvement of the organization and its work.

•  Many practitioners also have an obligation to disclose information about things that went wrong to their customers, clients, and patients. This obligation stems from the relationship of trust that professionals have with those who make use of their services.

•  Organizations also have an obligation to disclose information about things that went wrong. This obligation stems from the (perhaps implicit) agreement that companies have with those who make use of their services or are otherwise affected by their actions.

•  Organizations (employers), the judiciary, and regulators have an obligation to be honest about the possible consequences of failure, so that professionals are not left in the dark about what can happen to them when they do report or disclose.

•  One could also propose that organizations have a (legal) obligation to report certain things to other authorities (judiciary, regulatory).

Disclosure and reporting can clash. And different kinds of reporting can also clash. This can create serious ethical dilemmas that both individual professionals and their employing organizations need to think about.

•  If an organization wants to encourage reporting, it may actually have to curtail disclosure. Reporters will step forward with information about honest mistakes only when they feel they have adequate protection against that information being misused or used against them. This can mean that reported information must somehow remain confidential, which rules out disclosure (at least of that exact information).

•  Conversely, disclosure by individuals may lead to legal or other adverse actions (even against the organization), which in turn can dampen people’s or the organization’s willingness to either report or disclose.

•  If organizations report about individual actions to regulatory or judicial authorities, this too can lower the willingness to report (and perhaps even disclose) by individuals, as they feel exposed to unjust or unwelcome responses to events they have been involved in.

A representative of the regulator had been sent out for a field visit as a customer of an organization I once worked with. She had observed things in the performance of one practitioner that, according to the rules and regulations, weren’t right. Afterwards, she contacted the relevant managers in the organization and let them know what she had seen. The managers, in turn, sent a severe reprimand to the professional.

It really strained trust and the relationship between practitioners and management: by their reaction, it was clear that reporting was not encouraged. The regulator would not have been happy either to find out that her visit was being used as a trigger to admonish an individual practitioner instead of resolving more systemic problems. It was as if the managers were offloading their responsibility for the problems observed onto the individual practitioner.

OVERLAPPING OBLIGATIONS

The difference between disclosure and reporting is not as obvious or problematic in all professions.

•  Where individual professional contact with clients is very close, such as in medicine, reporting and disclosure are two very different things.

•  Where the relationship is more distant, such as in air traffic control, the distinction blurs because for individual air traffic controllers there is not immediately a party to disclose to. The air traffic control organization, however, can be said to have an obligation to disclose.

If organizational disclosure or reporting does not occur, then the mistakes made by people inside that organization may no longer be seen as honest, and the organization can get in trouble as a result. This goes for individuals, too. It may have played a role in the case in Chapter 2, as it plays a role in many other cases. Not providing an account of what happened may give other people the impression that there is something to hide. And if there is something to hide, then what happened may not be seen as an “honest” mistake.

The killing of a British soldier in Iraq by a US pilot was a “criminal, unlawful act,” tantamount to manslaughter, a British coroner ruled. The family of Lance Corporal Hull, who died in March 2003, was told at the inquest in Oxford, England, that it was “an entirely avoidable tragedy.” His widow, Susan, welcomed the verdict, saying it was what the family had been waiting four years for. Hull said she did not want to see the pilot prosecuted, but felt she had been “badly let down” by the US government, which had consistently refused to cooperate.

Susan Hull had also been told by the UK Ministry of Defense that no cockpit tape of the incident existed. This was proven untrue when a newspaper published the tape’s contents and when it was later posted on the Internet. It showed how Hull was killed when a convoy of British Household Cavalry vehicles was strafed by two US A10 jets. The British Ministry of Defense issued an apology over its handling of the cockpit video, while the US Department of Defense denied there had been a cover-up and remained adamant that the killing was an accident.

The coroner, Andrew Walker, was damning in his appraisal of the way the Hull family had been treated. “They, despite request after request, have been, as this court has been, denied access to evidence that would provide the fullest explanation to help understand the sequence of events that led to and caused the tragic loss of LCorp Hull’s life,” he said. “I have no doubt of how much pain and suffering they have been put through during this inquisition process and to my mind that is inexcusable,” he said.66

Nondisclosure in the wake of an incident often means that a mistake will no longer be seen as honest. And once a mistake is considered dishonest, people may no longer care as much about what happens to the person who made that mistake, or to the party (e.g., the organization) responsible for withholding the information. This is where a mistake can get really costly—both financially and in terms of unfavorable media exposure, loss of trust and credibility, regulatory scrutiny, or even legal action.

I recall one adverse event in which the family was very upset, not only about what had happened, but also about the organization not being seen as forthcoming. The family had been invited by the organization to stay in a nice hotel for some of the legal proceedings. Feeling injured and let down, they ordered as much expensive room service as possible and then threw it all away. Having been hurt by the organization, they wanted to hurt the organization as much as possible in return.

Nondisclosure is often counterproductive and expensive. Silence can be interpreted as stonewalling, as evidence of “guilty knowledge.” It is well known that lawsuits in healthcare are often more a tool for discovery than a mechanism for making money. People don’t generally sue (in fact, very few actually do). But when they do, it is almost always because all other options to find out what happened have been exhausted.17 Paradoxically, however, lawsuits still do not guarantee that people will ever get to know the events surrounding a mishap. In fact, once a case goes to court, “truth” will likely be the first to suffer. The various parties will likely retreat into defensive positions from which they offer only those accounts that give them the greatest possible protection against the legal fallout.

THE RISKS OF REPORTING AND DISCLOSURE

John Goglia, a former member of the National Transportation Safety Board, recently wrote about how Southwest Airlines settled a whistleblower lawsuit.67 It was filed by a mechanic who said he was disciplined for finding and reporting two cracks in the fuselage of a Boeing 737-700 while performing a routine maintenance check.

Southwest Airlines agreed to remove the disciplinary action from the mechanic’s file and to pay him $35,000 in legal fees. The lawsuit was filed under the whistleblower protection statute, which provides an appeal process for airline workers who are fired or otherwise disciplined for reporting safety information. The settlement was reached after a Department of Labor administrative judge, on January 8, 2015, dismissed Southwest’s motion for summary judgment and granted in part the mechanic’s motion for summary judgment.

The judge’s decision summarizes the allegations as follows: “On the evening of July 2, 2014, the [mechanic] was assigned by [Southwest] to perform a [maintenance] check on a Southwest Boeing 737-700 aircraft, N208WN. This maintenance check is part of Southwest’s Maintenance Procedural Manual (MPM). This check requires a mechanic to follow a task card which details the tasks to be accomplished. The task card requires the mechanic to walk around the aircraft to visually inspect the fuselage. During his inspection, the [mechanic] discovered two cracks on the aircraft’s fuselage and documented them. Discovery of these cracks resulted in the aircraft being removed from service to be repaired. Thereafter, the mechanic was called into a meeting with his supervisors to discuss the issue of working outside the scope of his assigned task. He was then issued a ‘Letter of Instruction’ advising the mechanic that he had acted outside the scope of work in the task card and warning him that further violations could result in further disciplinary actions. The mechanic alleged in his whistleblower complaint that the letter from Southwest was calculated to, or had the effect of, intimidating [him] and dissuading him and other Southwest [mechanics] from reporting the discovery of cracks, abnormalities or defects out of fear of being disciplined.”

Southwest responded to the mechanic’s allegations claiming that the mechanic went outside the scope of his duties when he observed the cracks and reported them. The airline further claimed that its Letter of Instruction was issued because the mechanic worked “outside the scope of his task” and not because he reported a safety problem. It further claimed that the letter was not a disciplinary action and the mechanic was not entitled to whistleblower protection. The administrative judge sided with the mechanic in dismissing Southwest’s claims and finding that the mechanic engaged in activities protected by the whistleblowing statute and that Southwest was aware of it. Although no final decision was reached on the merits of the mechanic’s case, the settlement followed close on the heels of the judge’s decision.

THE ETHICAL OBLIGATION TO REPORT OR DISCLOSE

Being honest in reporting a safety issue, as in the preceding case, can lead to problems in the relationship with the employer. But what about not being honest? What about not acknowledging or apologizing for a mistake? This often causes relationships to break down, too—more so than the mistake or mishap itself. This makes honesty all the more important in cases where there is a prior professional relationship, such as that between a patient and a doctor. In a specific example, the Code of Medical Ethics (Ethical Opinions E-8.12) of the American Medical Association has said since 1981 that

It is a fundamental requirement that a physician should at all times deal honestly and openly with patients… Situations occasionally occur in which a patient suffers significant medical complications that may have resulted from the physician’s mistake or judgment. In these situations, the physician is ethically required to inform the patient of all the facts necessary to ensure understanding of what has occurred… Concern regarding legal liability which might result following truthful disclosure should not affect the physician’s honesty with a patient.

This is unique, because few codes exist that specifically tell professionals to be honest. It also spells out in greater detail which situations (“that may have resulted from a physician’s mistake or judgment”) are likely to fall under the provisions of the Code. So this is a good start. But it still leaves a large problem. What does “honesty” mean? Being honest means telling the truth. But telling the truth can be reduced to “not lying.” If it is, then there is still a long distance to full disclosure. What to say, how much to say, or how to say it often hinges more on the risks that people see in disclosure than on what a code or policy tells them to do.

THE RISK WITH DISCLOSURE

If structural arrangements and relationships inside an industry, or a profession, are such that all bets are off when you tell your story, then people will find ways to not disclose, or to disclose only in ways that protect them against the vagaries and vicissitudes of the system. Another reason may be that practitioners are simply not well prepared to disclose. There might be no meaningful training of practitioners in how to disclose and punch through the pain, shame, guilt, and embarrassment of an incident.

Paralyzed by shame or lacking their own understanding of why the error occurred, physicians may find a bedside conversation too awkward. They may also be unwilling or unable to talk to anyone about the event, inhibiting both their learning and the likelihood of achieving resolution. Such avoidance and silence compound the harm.33

Programs in various countries are fortunately teaching practitioners about open disclosure, often through role play. These are generally thought to have good results.68 Such training is a good way to complement or reduce the influence of a profession’s “hidden curriculum.” This hidden curriculum can be seen as a sort of spontaneous system of informal mentoring and apprenticeship that springs up in parallel to any formal program. It teaches students and residents, mainly through example, how to think and talk about their own mistakes and those of their colleagues. They learn, for example, how to describe mistakes so that they are no longer mistakes. Instead, they can become

•  “Complications”

•  The result of a “noncompliant” patient

•  A “significantly avoidable accident”

•  An “inevitable occasional untoward event”

•  An “unfortunate complication of a usually benign procedure”

Medical professionals, in the hidden curriculum, also learn how to talk about the mistakes among themselves, while reserving another version for the patient and family and others outside their immediate circle of professionals. A successful story about a mistake is one that not only (sort of) satisfies the patient and family, but also one that protects against disciplinary measures and litigation.

Many, if not all, professions have a hidden curriculum. Perhaps it teaches professionals the rhetoric to make a mistake into something that is no longer a mistake. Perhaps it teaches them that there is a code of silence, an omertà, that proscribes collaborating truthfully with authorities or other outside parties.

THE PROTECTION OF DISCLOSURE

The protection of disclosure should first and foremost come from structural arrangements made by the organization or profession. One form of protecting disclosure is that of “I’m sorry laws.” According to these laws (now implemented in, for example, the US states of Oregon and Colorado), doctors can say to patients that they are sorry for the mistake(s) they committed. This does not offer them immunity from lawsuits or prosecution, but it does protect the apology statement from being used as evidence in court. It also does not prevent negative consequences, but at least that which was disclosed cannot be used directly against the professional. Such protection is not uncontroversial, of course. If you make a mistake, you should not only own up to it but also face the consequences, some would say. Which other professions have such cozy protections? This is where ethical principles can start to diverge.

WHAT IS BEING HONEST?

The question that comes up is this: Is honesty a goal in itself? Perhaps honesty should fulfill the larger goals of

•  Learning from a mistake to improve safety

•  Achieving justice in its aftermath

These are two goals that serve the common good. Supposedly pure honesty can sometimes weaken that common good. Should we always pursue honesty, or truth-telling, because it is the “right” thing to do, no matter what the consequences could be?

Dietrich Bonhoeffer, writing from his cell in the Tegel prison in Berlin in 1943 Nazi Germany, drafted a powerful essay on this question. He was being held in part on suspicion of a plot to overthrow Hitler, a plot in which he and his family were actually deeply involved.17 If he were to tell the truth, he would have let murderers into his family. If he were to tell the truth, he would have to disclose where other conspirators were hidden. So would not telling this make him a liar? Did it make him, in the face of Nazi demands and extortions, immoral, unethical? Bonhoeffer engaged in nondisclosure, and outright deception.

The circumstances surrounding truth-telling in professions today are not likely as desperate and grave as Bonhoeffer’s (he was executed in a concentration camp just before the end of the war, in April 1945). But his thoughts find a vague reflection in the fears of those who consider disclosing or reporting today. What if they tell the whole truth, rather than a version that keeps the system happy and them protected? Bonhoeffer makes a distinction between the morality and epistemology of truth-telling that may offer some help here.

•  The epistemology of truth-telling refers to the validity or scope of the knowledge offered. In that sense, Bonhoeffer did not tell the truth (but perhaps the nurse in the Xylocard case that follows did).

•  The morality of truth-telling refers to the correct appreciation of the real situation in which that truth is demanded. The more complex that situation, the more troublesome the issue of truth-telling becomes (the nurse in the Xylocard case may not have done this, but perhaps should have).

Bonhoeffer’s goal in not disclosing was not self-preservation, but the protection of the conspiracy’s efforts to jam the Nazi death machine, thereby honoring the perspective of the most vulnerable. That his tormentors wanted to know the truth was unethical, much more so than Bonhoeffer’s concealment of it.

Translate this into some of the case studies that you find throughout this book. It may be less ethical for prosecutors or judges in positions of power to demand the full truth than it would have been for these professionals to offer only a version of that truth.

As we will see in Chapter 4, going to court, and demanding honesty there, becomes a different issue altogether. Once adversarial positions are lined up against each other in a trial, where one party has the capacity to wreak devastating consequences onto another, the ethics of honesty get a whole new dynamic. Demanding honesty in these cases can end up serving only very narrow interests, such as the preservation of a company’s or hospital’s reputation, or their protection from judicial pressure. Or it deflects responsibility from the regulator (particularly in countries where they employ the prosecutor) after allowing a company to routinely hand out dispensations from existing rules. Wringing honesty out of people in vulnerable positions is neither just nor safe. It does not bring out a story that serves the dual goal of satisfying calls for accountability and helping with learning. It really cannot contribute to just culture. Let’s look at a case study that brings this out quite wrenchingly.

CASE STUDY

A NURSE’S ERROR BECAME A CRIME

Let me call the nurse Mara.

It was on a Friday in March that I first met her. I had no idea what she would look like—an ICU nurse in her late 40s, out of uniform. This could be anybody.

As I bounded up the stairs, away from the train platform, and swept around the corner of the overpass, there she was. It had to be her. Late 40s, an intensive care nurse of 25 years, a wife, a mother of three.

But now a criminal convict. An outcast. A black sheep. On sick leave, perversely with her license to practice still in her pocket.

We exchanged a glance and then embraced.

What else was I to do, to say? The telephone conversation from the night before fresh in my mind, here she was for real. Convicted twice of manslaughter in the medication death of a three-month-old girl. Walking free now, as her case was pending before the Supreme Court.

I stepped back and offered: “This sucks, doesn’t it?”

She nodded, eyes glistening.

It was her all right. There hadn’t been many other people around in any case.

“I recognized you from a video of a lecture you held,” she explained as we turned to go down the stairs to meet her lawyer.

“And how kind of you to travel up all this way.”

“Well, it’s the least I could do,” I said.

Snow was everywhere. Unyielding, huge piles, blanketing the little town. The lawyer’s address was distinguished. The most prominent address in the town, in fact. An imposing building in stately surroundings, spacious offices, high ceilings, the quiet reverence and smell of an old library, archaic dress, archaic language.

The lawyer prattled on, clearly proud that he had, once again, scored the Big One: a hearing in the Supreme Court. I don’t know whether proud lawyers make good lawyers. What I wanted to know was the planned substance of the defense, as assisting with that was my only card. A lot of the banter was inconsequential to me, much of it incomprehensible to Mara—a foreign language.

As I looked over to where Mara sat, I could not help but find her so out of place. A fish on the shore, gasping, trying to make sense of its surroundings as the burden of a final crawl for survival started sinking in. How on earth could a normal, diligent nurse who had practiced her entire adult life, ever have expected to become the lead character in somebody’s lofty law offices for a prelude to an appearance at the nation’s highest court? It must have felt surreal to her. She certainly looked it.

As it turned out (how naïve I am), there is no substance to speak of in a defense before the Supreme Court, because it’s all form. Mara began to discover this too, haltingly, stumblingly, and increasingly disgusted.

“All I want is the truth to come out,” she repeated.

“It won’t,” the lawyer found himself explaining over and over. “This is not about truth. It’s about procedure and legal interpretation, and whether it has been correctly followed and applied. All we want is to get you off the hook. What we have to show is that the course of justice so far has been improper—the truth is secondary.”

“But what about all the other people involved?” Mara appeared to become anguished. “The pediatrician, the prescription that magically disappeared days after the death, the nurse who administered the medication, the doctors who didn’t really diagnose, the lousy routines at the hospital, what about them? The truth is that they are all part of this too!”

The lawyer turned to ice. “They are not on trial now, are they? This is about you. You are the only one. As soon as we bring them up in the Supreme Court, they will ask me, ‘So where are those co-defendants then, counselor? We thought this case was about the nurse, not all these others.’ So don’t bring it up, I plead with you; don’t bring it up.”

Mara seemed exasperated. If justice was like this, disinterested in truth, directed through dogmatic decisions by outsiders that limited what was relevant from the events that got her here, with people putatively helping her by telling her not to argue for her case, then why bother at all? Was it worth it? Justice was supposed to be about getting out the real story, what really happened. That would be just. Justice would be about righting what was wrong, and about preventing it from happening again. That would be just too. Yes, she made a mistake; yes, a baby died. She knew that. But she also knew that the entire system in which she worked was rotten, porous, and ready to kill again.

But it was plain to me that Mara knew why she was here. It wasn’t just because of her, because of her role, because of her fate, or because everybody was suddenly gathering around invigorated efforts to make healthcare safer.

She knew who was paying her lawyer, and it wasn’t she. Fewer than 1% of cases presented actually get heard by the Supreme Court in my adopted country, and hers was among them. It must have mattered, somehow. The country had taken interest. The union certainly had too. Should medical practitioners involved in a patient’s death be subject to the criminal justice system? Or should they be dealt with through the established professional channels: the medical disciplinary board? A great deal was at stake; that much was obvious to Mara. Realizing that she may have used the stronger solution of the medicine, she had volunteered her possible contribution to the baby’s death to her boss a few days after it had happened. Her boss duly reported the event to the relevant agency, but somebody also leaked it to the local press. Mara never found out who. Her role was played up, and a prosecutor happened to read it in the morning paper. After months of uncertainty—Mara even called up the prosecutor herself one day to get clarity about her intentions—charges were brought. A local court found her guilty of manslaughter. The conviction was upheld by a higher court, which toughened the punishment. Now the case was headed for the Supreme Court. Would people in healthcare ever volunteer information about incidents again? Was this the death knell for nascent medical event reporting systems? Was patient safety going to be dealt a serious setback?

We wandered back into town, in search of a cup of coffee.

When we had slipped into the warmth of a bakery, shaken the snow off our shoulders, and sat down near a window, I cocked my head, glanced at her, and sighed, wonderingly. She must feel like a vehicle, sent out to test-drive the law, I mused. If the country and its healthcare system would get their day in court, if they were going to create clarity on the rules for dealing with medical error, then this was not going to help Mara. The black sheep would be herded through one more splendid spectacle of public judgment, but it was no longer about the sheep, if it ever was. It was about the principle. And she was merely its embodiment. When it was all over, whatever the outcome, she would have been used up, her purpose to larger interests played out, expired. A mere piece of detritus mangled through a criminal justice system in its quest for new turf, disposed once the flag had been planted. She would be remembered only in faint echoes of schadenfreude (thank God it wasn’t me) and a waning trail of half-hearted collegial compassion (“We’re so sorry for you, Mara”). The disillusionment with her work setting, her colleagues, her union, the justice system—the world—was etched on her face. Vindication would remain elusive. The truth would not come out.

But is there truth in the aftermath of a little girl’s medication death? Or are there only versions?

AT THE SUPREME COURT

A few weeks after the meeting with the lawyer, I saw Mara again, this time in the ornate halls of the Supreme Court. High ceilings soared up, away from two tables, one for the defense and one for the prosecution. They were arranged in front of a regal podium decked out with a row of seats. When the justices had filed in and sat down facing both teams, the prosecutor reached for his version of the truth. I remember his craftiness, his cultural conformity to the conflict-avoidance of my adopted country. He was sitting down, not standing up. He was reading from a prepared script, not ad-libbing or grandstanding in front of his audience. His tone was measured, quiet, reverential. This is, I suppose, what court proceedings are supposed to do: separate emotion from substance, sublimate conflict into negotiation, turn revenge into ritual.

Mara sat over at the other table, flanked by her lawyer, with hands in her lap, eyes cast downwards. As she sat there, the prosecutor’s opening statement started rolling over her, his gentle voice reverberating around the hall unamplified.

Except it wasn’t a statement. It was a story.

“The baby was born on the 24th of February in the regional hospital,” he intoned. He recalled the happiness of the child’s parents and mentioned details to paint a picture of family bliss, soon to be disrupted by a treatment gone awry. “She had a normal birthweight, but showed some signs of seizures in her arm after delivery. Three days later, the seizures had become worse. She was given Fenemal, a cramp reducer. After stabilizing on the 5th of March, she was discharged. But less than a month later, the seizures came back. The infant was rushed to the emergency room and taken in for observation. Her Fenemal dose got increased to 5 milligrams per milliliter and she was discharged again two days later. The day after, her mother called the hospital. After consultation, the baby’s Fenemal dose was increased again—over the phone—to a twice daily 2-milliliter portion of the 5 milligram per milliliter mixture. On the 22nd of April the baby was brought in as part of a routine checkup. Everything was normal.”

He paused.

To recount his version of the truth, the prosecutor had created a narrative. Narratives are strong. He must have picked that up in class once, or in one of his many street fights. Or perhaps a story, or liking a story, understanding a story, is simply what makes us all human. Mara must have heard versions of the story hundreds of times now, I thought. She must have turned it over and over in her mind infinitely, picking away at her role, plaguing herself by retrospectively finding opportunities to not make a mistake, to not become the centerpiece of this imbroglio.

Act One was over. The justices looked at the prosecutor, silently. Spellbound or bored silly? It was difficult to tell. Time to set the stage for a plot twist. Act Two. A different ward: the intensive care unit (ICU). A new medication. And, of course, the introduction of the villain: the nurse.

“On the 12th of May, the baby was admitted with a new bout of seizures, and sent to the ICU. Her Fenemal was increased to 2.5 milliliters twice daily, and she even received a bolus dose of Fenemal. But the seizures continued. The baby was then given Xylocard, a lidocaine-based medication, at 2 milligrams per milliliter. The seizures subsided. She was discharged again on the 16th of May, off Xylocard, and back on the previous dose of Fenemal. But on the 18th of May, her mother called the hospital to say that her baby was suffering a new onset of seizures, now lasting about five minutes each. In the evening, the child was taken to the hospital by ambulance and admitted to the pediatric ward. New seizures meant that she was transferred to the ICU later that evening.”

With the baby back on the scene of the crime-to-come, everything was ready for Mara to make her entry. The lines of the two lead roles could now converge.

“Early in the morning of Sunday, 19th of May, Mara showed up for work. There were not many patients in the ICU; things were quiet. The baby was doing better now. In preparation for her transfer back to the pediatric ward, Mara went to the medication room to mix the Xylocard solution.”

He paused and picked up the two little cartons in front of him on the table. Then he waved them around.

“There, in the cabinet, were two packages: one containing an injection dose of 20 mg/ml Xylocard, and one with a 200 mg/ml Xylocard solution intended for intravenous, or IV, drop. Misreading the packages, Mara took the 200 mg/ml to prepare the baby’s drop, instead of the 20 mg/ml, as was prescribed.”

The chief justice motioned that she wished to see the packages. They were handed over. Passed from justice to justice, they were handled for what they were in that context: pieces of evidence in a manslaughter trial. The justices studied the packages with what looked like mild interest, but could just as well have been muffled puzzlement. What kind of evidence was this anyway? This was not just a common criminal instrument—a knife, a handgun, a fraudulent contract—these were pieces of highly specialized medication, excised from their normal surroundings of thousands of normal, similar-looking packages that form the backdrop of a nurse’s daily life. Now these two boxes looked quite out of place, floating along the court’s elevated regal bench, examined by people with little idea of what it all meant. Questions must have mounted, one on top of the other. What was it with these peculiar Greek neologisms, why were all these boxes white with green or light-blue lettering, and what were these befuddling volume–weight fusions?

The prosecutor continued. Not much longer now. Act Three. A rapid climax.

“That afternoon, back in the pediatric ward, the baby was hooked up to the new Xylocard drop, the one that Mara had mixed. But instead of subsiding, the infant’s seizures quickly got worse. A pediatrician was called and tried to intervene. But nothing helped. Not long after, the baby was declared dead. Postmortem examination showed that she had died from lidocaine poisoning.”

A story that makes sense, that is plausible, that has a powerful narrative arc and casts characters in recognizable roles of hero, victim, villain, and bystander can present a rather believable truth. And the prosecutor’s story did. His plot painted a normal hospital, a normal, innocent little patient, attended to by normal physicians, suddenly all confronted by the sinister and totally unnecessary turn of events on a Sunday morning in May—the fatal denouement of Mara’s mix-up. Quite impeccable. Quite logical.

A CALCULATION GONE AWRY

But does that make it true? Consider another truth, the sort of “truth” that Mara had hoped in vain to bring out in the open on this day. After clocking in on the morning of May 19, she received a little briefing from the night ICU nurse. The original prescription had been unclear and not signed by the doctor who wrote it. The hospital (even the ICU) was equipped with a computerized prescription system, but the physician had been sitting at a terminal that happened to not be connected to the printer. Rather than move to another terminal and print out a prescription, he had written one by hand instead. Earlier that night the nurse had mixed a Xylocard solution with another physician’s help, trying to divine the prescription. Now, in the morning, the doctor himself was asleep somewhere in the hospital, and, given that it was a quiet Sunday, nurses would not become popular by waking him up to ask for a simple clarification. The night nurse showed the unsigned prescription and her medication log entry to Mara:

“40 ml + Xylocard 200 mg = 10 ml = 4 mg/ml, total of 50 ml”

“Remember, 10 milliliters Xylocard,” the doctor had said to the night nurse, who now relayed this to Mara. The infant, in other words, had received a total of 200 milligrams of Xylocard by mixing two 100 mg/5 ml syringes (the standard injection package: 100 milligrams of Xylocard dissolved in 5 milliliters of fluid) into a 40-ml glucose solution. But in the ICU, syringes were never used for IV drops, because they contain a weak solution. Syringes were for direct injection only. The ICU used vials, with a stronger solution, for IV drops. But Pediatrics did not even have vials. They dealt with children, with little bodies that needed no strong solutions. Pediatrics routinely discharged the prepackaged syringes into an IV drop instead. The ICU seldom had little infants, though, and no tradition of using syringes for preparation of IV drops.

Later that day, when the night nurse had long gone home, Mara noticed that the infant’s drop was running low and decided to prepare a new one. The baby would be transferred back to Pediatrics, but the move had gotten delayed. Remembering the “10 milliliters” reference from the doctor, and reading 200 mg off the medication log (as the prescription was unclear), she took two boxes that each contained a 5-ml vial with 200 mg/ml Xylocard. 10 milliliters total, and the figure of 200 mg—this was what the medication log said. She prepared the solution and wrote in the log

“Xylocard 200 mg/ml = 10 ml = 4 mg/ml”

Mara showed her calculations to another nurse and also to the Pediatrics personnel who came to collect the infant. The Pediatrics staff did raise a question, but it focused only on the dose of 4 mg/ml, not on the solution from which it supposedly would come. Five days earlier, when the infant had been in Pediatrics too, she had been on 2 mg/ml, not 4 mg/ml. The ICU confirmed to Pediatrics that 4 mg/ml was now the prescribed dose. The baby was to receive 10 milliliters of the solution that was supposed to contain 4 milligrams of Xylocard for each milliliter.

But did it?

That night, Mara tossed in her bed. Her youngest son awoke a few times, rendering his mother restless. In the darkened bedroom, the events of the day came back to her. As far as she knew, the baby had lived; she had gone off shift before anything went awry. But something did not quite add up. Why had the night nurse, normally so assiduous, accepted such a messy and unsigned prescription? She had even had to call for help from a physician to mix the thing. And what about that log entry of hers? It had read “Xylocard 200 mg,” but did that make sense? Xylocard 200 mg was meaningless by itself. 200 mg per what? Per…?

Mara sat up with a start.

Could it be true that she had taken two vials, instead of two syringes? They both contained 5 ml of fluid each, so any combination of two would amount to the 10 milliliters the doctor had wanted. The two packages were side by side in the cabinet, which was so neatly organized alphabetically. But two vials meant…

She quickly ran the numbers in her head, peering into the darkness. Two 5-ml vials both containing 200 mg/ml Xylocard would have amounted to 2000 mg Xylocard, or 40 mg/ml, not 4! This would add up to a lot for a little infant. Too much maybe. In that case her medication log entry didn’t make sense either. Take 10 milliliters with each milliliter containing 200 mg, and you would not get 4 mg/ml. You’d get an order of magnitude more. Ten times more. Forty.
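Mara’s late-night arithmetic reduces to a few lines. A minimal sketch of both mixtures, using the volumes and concentrations from the account above (the function and its name are illustrative, not anything from the hospital’s systems):

```python
def drip_concentration(n_containers, container_ml, mg_per_ml, diluent_ml):
    """Concentration (mg/ml) after mixing containers into a glucose diluent."""
    total_mg = n_containers * container_ml * mg_per_ml
    total_ml = n_containers * container_ml + diluent_ml
    return total_mg / total_ml

# Two 5-ml syringes of 20 mg/ml into 40 ml glucose: the intended 4 mg/ml.
intended = drip_concentration(2, 5, 20, 40)

# Two 5-ml vials of 200 mg/ml into 40 ml glucose: 40 mg/ml, ten times too strong.
actual = drip_concentration(2, 5, 200, 40)
```

Either combination yields 10 milliliters of concentrate in a 50-milliliter drop, which is why the doctor’s “10 milliliters Xylocard” instruction alone could not distinguish the two packages.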

Why had nobody caught it? She had had people double-check! Pediatrics had checked! Also, an entry about the solution would have had to be made on the IV drop before it went into the child—another double-check. What had happened? She would try to figure this out as soon as she was at work again.

On her next shift, Mara asked about the little girl. “She has died” was the answer. Her heart must have sunk. But determined to figure out what had gone wrong, and whether she may have had any role in it, Mara went to the binder with prescriptions and flipped back to Saturday night. Where was it? Where was the prescription, that messy, unsigned prescription, that the night nurse before her had interpreted as “200 mg” Xylocard, setting her, Mara, up for a possible mistake?

The prescription was gone. It wasn’t there. It had disappeared and would never be found again.

Years later, only a few weeks before the hearing at the Supreme Court, Mara would plead with her lawyer to bring up the missing prescription. He yielded not an inch.

“How can you bring up something that doesn’t exist?” he asked.

“But,” Mara countered, “we are not allowed to prepare medications without a prescription, there has to be a prescription, and in this case there was too. Somebody took it out!”

The lawyer sighed and was silent.

“Look,” he said after a while. “This is not the time to introduce new evidence. And even if it was, you can’t produce as evidence something that you don’t have. It’s that simple.”

Mara’s world must have spun around her. She was locked up inside a Kafkaesque entanglement that had erased any resemblance to the real world. Her mind must have cast around for anything stable, anything recognizable, anything sensible. Instead it was finding nothing to grab onto, no lifeline, no help. And no “truth.”

MEA CULPA

What there was, and what had been introduced as evidence, of course, was her own medication log entry. The one that said that 10 milliliters of fluid, with each milliliter containing 200 milligrams of stuff, would amount to a measly 4 milligrams of the stuff per milliliter in a 50-milliliter IV drop. It wouldn’t. It would yield ten times as much. She had recorded her own miscalculation—putting the truth out there, for all to see.

Not long after learning of the baby’s death, complying with reporting procedures in the hospital, she wrote to her superior:

When I was going to mix Xylocard at around 10:45 that morning, I looked at the prescription and got Xylocard 20 mg/ml. I read both the package and the vial and recall that it said 20 mg/ml. I looked at what was prescribed and what I should prepare. So I got 20 mg/ml which I mixed with glucose 5% 40 ml.

I asked another nurse to double-check but did not show her the empty vials. Then Pediatrics came to get the infant, … and they took my prepared solution with them to hook it up in their ward. When the infant left us at 11:07, there was still about 3 ml in the previous drop, which had run through the night of the 18–19th of May.

The following night, I awoke and suddenly realized that a vial normally contains 1000 mg/5 ml. And I had thought that I drew a solution of 20 mg/ml. When I was working the following Wednesday, I got to hear that the infant had died. I then understood that it could have been my mistake in making the solution, as there are no vials of 20 mg/ml.

Stories of mistake can be so simple. “My mistake,” Mara had recorded. Mea culpa. To many others in the hospital, such an unprovoked admission must have been a godsend. Not that they would ever say, of course. They would not have to. The legal aftermath itself would prove them right. Mara was in the dock. Again and again. Nobody else.

Not that this would necessarily feel natural to anyone involved in the saga as it unfolded. Take a story as experienced from another nurse’s point of view. When the infant started to show an increase in seizures and other problems after being hooked up to Mara’s IV preparation in Pediatrics, nurses called the attending physician.

He responded by phone: “Up the flow, give her more.”

They did. The problems got worse.

They called again. “Give her more, give her a bolus dose” was the instruction again. They did.

But this did not seem to help at all—in fact, things were going from bad to worse very quickly now. The attending anesthetist was now called by phone, but nobody answered. Another was found by calling through the intercom, but nobody showed. Only minutes later did the attending pediatrician show up in person. He ordered another bolus dose of Xylocard, but this had no effect either. The baby now needed 100% oxygen but she started vomiting into the mask, exacerbating her respiratory problems. The pediatrician ordered another bolus dose of Xylocard, thinking that this would finally stop the seizures. Then, during one attack, the girl presented with respiratory failure. The pediatrician responded by intubating the baby, and cleaned the airways by suction. Then the anesthetist arrived. The baby was ventilated but the suction tube proved too narrow for her passages to be cleared. Another bolus dose of Xylocard got pumped into the IV port. Finally, a thicker tube was found and inserted, clearing her airway. It was all too late. The infant went into circulatory shock. Adrenaline, Atropine, and Tribonat were given; heart massage administered; even the defibrillator was pulled out. To no avail. The baby was declared dead not long after. A postmortem showed that the girl had ended up with 43 micrograms of lidocaine per gram of her blood. The therapeutic dose is less than 6 micrograms per gram of blood.

Even if Mara had mixed from the 20 mg/ml syringes and not the 200 mg/ml vials, the infant would still have ended up with twice the therapeutic dose due to the volley of bolus shots during her final moments. Yet that is but one “truth” too. See the world from the pediatrician’s perspective and another sensible story swims into view. The initial symptoms of lidocaine poisoning can include (yes) seizures. So the symptoms of too much Xylocard and too little Xylocard would have been similar, setting the physician onto a compelling plan to continue. Strong initial cues suggested his response was the right one. They had been confirmed before: this baby had responded well to the treatment of her seizures with lidocaine. The dose had been upped before, with good therapeutic consequences. He knew all this. And, for that matter, he never knew that the IV drop was administering the drug at 10 times the ordered rate. The quality of his assessments and decisions could not possibly be rated against that knowledge—knowledge he did not possess at the time. That much would be “true.”

But what did the doctors actually know? I remember Mara countering this even before the final trial. Did they ever diagnose the source of the spasms? Mara would ask. No, they didn’t. Did they have any idea why the child responded better to Xylocard than to Fenemal, even if Xylocard is not mainly intended to deal with seizures? Did anybody ever think to call in a neurologist? No. Did they ever ask themselves why the baby would suddenly develop such intense symptoms after getting back to Pediatrics on Sunday afternoon? Not that Mara knew. Did they ever recognize their own role in the slippage of prescription routines? In taking a nap at work on a quiet Sunday morning and being really grumpy when awoken for no apparent good reason? In not bothering to get up and mosey 10 feet to another computer to print out a prescription for Xylocard, rather settling for a bunch of handwritten squiggles instead? In not showing up for many, many critical minutes when a little baby was suffocating in her own vomit, wasting away on some drip? And then giving order after order after order of poisonous lidocaine? No, not that Mara would be aware. And who took that prescription away after the baby died? Where was it? And whose idea was it to start swapping a baby between Pediatrics and the ICU, a ward designed in every way for taking care of big people, not little ones? Were any of those “truths” ever going to be brought out?

CRIMINAL LAW AND ACCIDENTAL DEATH

A legal system holds people accountable. But it does not allow people to hold their account. Mara had become a hostage of legal procedure and protocol, and she decried the shackles on what she was granted to say and when. At every turn in the legal plot, she went in to battle the limits, to break through the constraints. She wanted permission to give her account. She just wanted the “truth” to come out. But at every end, she came out broken herself. Her account would still be inside of her—biting, festering. And increasingly bitter and partisan.

A legal system constructs an account from its own pick of the evidence. It makes its own story. It is interesting that societies turn increasingly to their legal systems to hand out that story, to provide accountability after a terrible outcome. There must be something in that account that we find terribly attractive; more enticing than what the people who were actually there have to say. Mara, for example.

Of course, we could dismiss their accounts as exculpatory, as subjective, biased, ulterior. Still struggling to understand her own performance, Mara had told a lower court that she may have misread the package labeling. By the time she got to the Supreme Court, however, she indicated that this was probably not the case: she mistakenly believed that 200 mg/ml was what she needed to have. This would certainly have made sense, given the prominence of the figure 200 in the medication log, and the reminder to end up with a volume of 10 ml Xylocard in total. But look at how the Supreme Court chose to interpret the various accounts that Mara had tried to provide. Dismissing them as a last-gasp attempt to exonerate herself, to “find an explanation afterward,” the Supreme Court painted Mara as ditzy when it came to giving an account of what had happened that Sunday in May:

During the court proceedings, the ICU nurse described multiple ways how it could be that she mixed the IV drop with the wrong concentration of Xylocard. What she offered cannot therefore express what she really remembers. Rather, her accounts can be seen as attempts to find an explanation afterward. They are almost hypothetical and provide no certain conclusion as to why she did what she did.69

Whatever Mara offered, the sheer variety of her accounts had disqualified her as a purveyor of truth. In her stead, the Supreme Court was happy to provide the “certain conclusion” so sorely lacking from Mara’s story. They speculated why Mara did what she did: she “misread, miscalculated, or took the wrong package” from the shelf—all because of “negligence.” Mara did what she did (whatever it was), because she was careless. “She could have read the medication log more carefully, calculated more carefully or done any other double-check that would have revealed her error and its potentially fateful consequences.”69 But she did not. She was negligent. In the absence of a story from Mara that made sense, people turned to the legal system to serve them a story with a cause and a culprit. The cause was misreading, miscalculating, or grasping wrong due to negligence, and the culprit was Mara. Instead of listening to the protagonist, people legitimated a particular institution to get at the “truth” and mete out supposedly appropriate consequences. They may have thought, as many increasingly do, that this legitimated authority could deliver the veridical account—what really happened. For the one who was there could not be trusted to deliver an account that “expressed what she really remembered.” She, after all, could “provide no certain conclusion as to why she did what she did.”

Of course, judicial proceedings do rely on the insider account as part of their evidence base. Mara was given a voice—here and there. But she never called the shots. She spoke when spoken to: merely proffering a handful of answers to often inane questions gurgling from a tightly scripted ritual:

“So what did you read on this package, did you read anything at all, or did you take fluid directly from the vial?” the prosecutor in the Higher Court had insisted.

“I looked at both the package and the vial,” Mara had replied.

“What did you see?”

“I don’t know. I wrote 200 mg per ml, but I don’t know.”

“You don’t know.”

“No.”

It sounded exasperated—feigned or real: “You don’t know.” If Mara did not know, then who would? She had been there, after all. Again, the inability to give that final account, that deeper insight into the workings of her own mind that day, was taken as reticence, as foot-dragging. “You don’t know” was taken, as it often is by the time adversarial positions are lined up in a criminal trial, not as “you really don’t know,” but as “you don’t want to tell us.”

I recall sitting in the lawyer’s office with Mara when she offered the explanation in which she really believed: she had taken the right package, the one she was supposed to take (as that was always the one she prepared IV drops from). There was no misreading; that had been a wrong explanation. But the Supreme Court justices would have none of that. They would not see the latest account as a genuine attempt of the insider to articulate what had happened, but as a dodge, as a ducking of responsibility.

RATIONAL SYSTEMS THAT PRODUCE IRRATIONAL OUTCOMES

And so we turn to our legal system to furnish us with the truth. Deploying rational techniques like those of a trial, rather than institutional authority (such as that of the Church), putatively allows us to arrive at true accounts and appropriate moral rules. But intense attempts at deploying rationality, sociologist Max Weber warned over a century ago, quickly deliver the opposite. The output of supposedly rational institutions is often—quite naturally, necessarily—irrational. There were many, both inside and outside the healthcare system, who thought just that about Mara’s verdict. When a nurse herself reported a mistake, in an honest effort to abide by the rules and perhaps help prevent recurrence, it made no sense at all to have her end up convicted of manslaughter for the very mistake she voluntarily divulged. This was irrational. Even more poignantly, why she? Singling out Mara for this adverse outcome of a discontinuous, wandering process of care delivery that counted many contributions from many contributors made no sense whatsoever. And then, this was not the first or only adverse medication event ever, not a uniquely egregious occurrence. In the same year that Mara was first charged, more than 300 severe medication errors were reported to the country’s health authority. Adverse medication events are “normal.” They are the rule, or at least part of it, baked into the very fabric of delivering assorted compositions of volumes and weights and rates of substances through various means. This, moreover, is accomplished through a thoroughly discontinuous process, where gaps in the delivery of healthcare open up because of changes of medium (e.g., from oral to written to oral prescriptions or dosage orders), handovers from one caregiver to another between shifts, movement of patients between wards, transferal of the caretaking physician, or other interruptions in workflow.
Patients, prescriptions, orders, medications, and healthcare workers all cross departments, shift responsibilities, flow through hierarchies, and traverse levels of care as a matter of routine. It would be easy, then, and quite rational, to show that Mara’s adverse event was part of a systemic feature of healthcare delivery. So how a supposedly rational judicial process could come to the exact opposite conclusion is something that Weber would not have found surprising. The accounts of human error that a legal system produces can be so bizarre precisely because of its application of reason: judicial proceedings rationalize the search for and consideration of evidence, tightly script turn-taking in speech and form of expression, limit what counts as “relevant,” defer little to domain expertise, and necessarily exclude the notion of an “accident,” for which there is no legal concept.

When you come up close, close enough to grasp how case content becomes subjugated by judicial form, close enough to hear the doubts of the victims about the wisdom of having a trial in the first place, close enough to taste the torment of the accused, to feel the clap of manacles around the expression of their own account, to experience the world from the dock and sense the unforgiving glare it attracts, a more disturbing reality becomes discernible. In the view from below, there is a deep helplessness: an account is created by nonexperts who select bits and pieces, in a process that runs its own course and over which there is very little—if any—external control. To those present when the controversial event happened, and who may now be in the dock (as well as to many of their co-practitioners), the resulting account may well be bizarre, irrational, absurd. And profoundly unfair.

THE SHORTEST STRAW

Mara had hoped that the process in the Supreme Court would end up bringing out the real version after all the acrimony in the lower courts. It did not. Instead of truth, she got an upheld conviction. Instead of vindication, she got something that she could not possibly consider “true” anymore.

Sitting in the twilight of her living room on a rainy day late in August, months after the hearing, I began to believe that her psychological devastation was due not just to the Supreme Court upholding the guilty verdict, including its heavier penalty. This may not even have been the chief source of her anguish. With her license to practice still intact, and the sentence suspended, the verdict had few overt practical consequences (not that she could, or wanted to, practice in the ICU ever again, by the way). No, I started to sense rather a resignation, a disillusion, a dizzying realization that progress toward truth is not a movement from a less to a more objectively accurate description of the world. She might have hoped that we all could learn the truth behind the death of the little girl. But there is no such truth to find, to arrive at, to dig out. No final account, no last word—only versions, jostling for supremacy, media limelight, popular appeal, legal sustainability. And her version had consistently drawn the shortest straw. Again and again.
