5 Report, Disclose, Protect, Learn

I overheard a conversation of two air traffic controllers recently. They were talking about an incident in their control center. They discussed what they thought had happened, and who had been involved. What should they do about it?

"Remember," said one controller to the other, "Omertà!"

The other nodded, and smiled with a frown.

I said nothing but wondered silently: "Omertà"?

Surely this had something to do with the mafia. Not with professional air traffic controllers.

Or any other professionals.

Indeed, a common definition of omertà is "code of silence." It seals people's lips. It also refers to a categorical prohibition to collaborate with authorities. These controllers were not going to talk about this incident. Not to anybody else, or anybody in a position of authority in any case. Nor were they going to voluntarily collaborate with supervisors, managers, investigators, regulators.

I live my professional life in occasional close contact with professional groups—firefighters, pilots, nurses, physicians, police, nuclear power plant operators, inspectors, air traffic controllers. I see a "code of silence" enforced and reproduced in various ways.

A senior captain with a large, respectable airline, who flies long-distance routes, told me that he does not easily volunteer information about incidents that happen on his watch. If only he and his crew know about the event, then they typically decide that that knowledge stays there. No reports are written, no "authorities" are informed.

"Why not?" I wanted to know.

"Because you get into trouble too easily," he replied. "The airline can give me no assurance that information will be safe from the prosecutor or anybody else. So I simply don't trust them with it. Just ask my colleagues. They will tell you the same thing."

I did. And they did.

Professionals under these circumstances seem to face two bad alternatives:

  • either they report a mistake and get in some kind of trouble for it (they get stigmatized, they get a reprimand, or they get fired or even prosecuted);
  • or they do not report the mistake and keep their fingers crossed that nobody else will do so either ("remember: omertà!").

The professionals I talked to know that they can get into even worse trouble if they don't report and things come out anyway. But to not talk, and hope nobody else does either, often seems the safest bet. From the two bad alternatives, it is the least bad.

I once spoke at a meeting at a large teaching hospital, attended by hundreds of healthcare workers. The title of the meeting was "I got reported." The rules of the country where the meeting was held say that it is the nurse's or doctor's boss who determines whether an incident should be reported to the authorities. And the boss then does the reporting. "I got reported" suggests that the doctor or nurse is at the receiving end of the decision to report: a passive non-participant. A casualty, perhaps, of forces greater than themselves, and interests other than their own. The nurse or doctor may have to go to their boss to report a mistake. But what motives do they have to do so! The formal account of what happened, and what to do about it, ultimately rests in the hands of the boss.

A few bad apples?

We could think that professionals who rely on "omertà" are simply a few bad apples. They are uncooperative, unprofessional exceptions. Most professions, after all, carry an obligation to report mistakes and problems. Otherwise their system cannot learn and improve. So if people do not want to create safety together, there must be something wrong with them.

This, of course, would be a convenient explanation. And many rely on it. They will say that all that people need to do is report their mistakes. They have nothing to fear. Report more! Report more! Then the system will learn and get better. And you will have a part in it. Indeed, every profession I have worked with complains about a lack of reporting.

"If only people would report more," supervisors or regulators sigh.

"Our biggest problem is under-reporting," a healthcare specialist commented. "And we don't even know how big that problem is," she added in twisted tautology.

Yet often I am not surprised that people do not want to report. The consequences of disclosing mistakes can be quite dreadful. Organizations can respond to mistakes in many ways. In the aftermath of an incident or accident, pressures on the organization can be severe. The media wants to know what went wrong. Politicians may too. They all want to know what the organization is going to do about it. Who made a mistake? Who should be held responsible? Even a prosecutor may get interested. National laws (especially those related to freedom of information) mean that data that people voluntarily submit about mistakes and safety problems can easily fall into the wrong hands. Reporting and disclosure can be dangerous.

Not reporting is hardly about a few bad apples. It is about structural arrangements and relationships between parties that either lay or deny the basis for trust. Trust is necessary if you want people to share their mistakes and problems with others. Trust is critical. But trust is hard to build, and easy to break.

The Obligation to Report

Many professions have codified the obligation to report. The Eurocontrol Safety and Regulatory Requirement (ESARR 2), for example, tells air traffic controllers and their organizations that "all safety occurrences need to be reported and assessed, all relevant data collected and lessons disseminated." There is an implicit understanding that reporting is critical for learning. And learning is critical for constantly improving safety (or, if anything, for staying just ahead of the constantly changing nature of risk).

Saying that all safety occurrences need to be reported is easy. But what counts as a "safety occurrence"? This can be open for interpretation: The missed approach of the 747 was, according to the pilot, not a safety occurrence. It was not worth reporting. But according to his bosses and regulators, it was. And the fact that he did not report it made it all the more so.

Professional codes about reporting, then, should ideally be more specific than saying that "all safety occurrences" should be reported. What counts as a clear opportunity for organizational learning for one person may constitute a dull, unreportworthy event for somebody else. Something that could have gone terribly wrong, but did not, is not necessarily a clear indication of reportworthiness either. After all, in many professions things can go terribly wrong the whole time ("I endanger my passengers every day I fly!"). But that does not make reporting everything particularly meaningful.

Reporting is important. But what to report?

The point of reporting is to contribute to organizational learning. It is to help prevent recurrence by making systemic changes that aim to redress some of the basic circumstances in which work went awry. This means that any event that has the potential to shed some light on (and help improve) the conditions for safe practice is, in principle, worth reporting and investigating. But that still does not create very meaningful guidance.

Which event is worthy of reporting and investigating is, at its heart, a judgment. First, it is a judgment by those who perform safety-critical work at the sharp end. Their judgment about whether to report something is shaped foremost by experience—the ability to deploy years of practice into gauging the reasons and seriousness behind a mistake or adverse event.

To be sure, those years of experience can also have a way of blunting the judgment of what to report. If all has been seen before, why still report? What individuals and groups define as "normal" can glide, incorporating more and more non-conformity as time goes by and as experience mounts. In addition, the rhetoric used to talk about mistakes can serve to "normalize" (or at least deflect) an event away from the professionals at that moment. A "complication" or "non-compliant patient" is not as compelling to report (though perhaps worth sharing with peers in some other forum) as the same event would be if it were denoted as, for example, a diagnostic error.

Whether an event is worth reporting, in other words, can depend on what language is used to describe that event in the first instance. This has another interesting implication: In some cases a lack of experience (either because of a lack of seniority, or because of inexperience with a particular case, or in that particular department) can be immensely refreshing in questioning what is "normal" (and thus what should be reported or not).

Investing in a meeting where different stakeholders share their examples of what is worth reporting could be useful. It could result in a list of examples that can be handed to people as partial guidance on what to report. But in the end, given the uncertainties about how things can be seen as valuable by other people, and how they could have developed, the ethical obligation might well be: "If in doubt, report."

But then, what delimits an "event"? The reporter needs to decide where the reported event begins and ends. She or he needs to decide how to describe the roles and actions of other participants who contributed to the event (and to what extent to identify other participants, if at all). Finally, the reporter needs to settle on a level of descriptive resolution that offers the organization a chance to understand the event and find leverage for change. Many of these things can be structured beforehand, for example by offering a reporting form that gives guidance and asks particular questions ("need-to-know" for the organization to make any sense of the event) as well as ample space for free-text description.

Getting People to Report

Getting people to report is difficult. Keeping up the reporting rate once the system is running can be equally difficult, though often for different reasons. Getting people to report is about two major things:

  • maximizing accessibility;
  • minimizing anxiety.

The means for reporting must be accessible. If you have reporting forms, they need to be easily and ubiquitously available, and should not be cumbersome to fill in or send up.

Anxiety can initially be significant:

  • What will happen to the report?
  • Who else will see it?
  • Do I jeopardize myself, my career, my colleagues?
  • Does this make legal action against me easier?

As an organization, you should ask yourself whether there is a written policy that explains to everybody in the organization what the reporting process looks like, what the consequences of reporting could be, and what rights, privileges, protections, and obligations people may expect. Without a written policy, ambiguity can persist. And ambiguity means that people will disclose less.

Getting people to report is about building trust: Trust that the information provided in good faith will not be used against those who reported it. Such trust must be built in various ways. An important way is by structural (legal) arrangement. Making sure people have knowledge about the organizational and legal arrangements surrounding reporting is very important: Disinclination to report is often related more to uncertainty about what can happen with a report than to any real fear about what will happen. One organization, for example, has handed out little credit-card-sized cards to its employees to inform them about their rights and duties around an incident.

Another way to build trust is by historical precedent: Making sure there is a good record for people to lean on when considering whether to report an event or not. But trust is hard to build and easy to break: One organizational or legal response to a reported event that shows that divulged information can somehow be used against the reporter can destroy months or years of building goodwill.

Keeping the Reports Coming in

Keeping up the reporting rate is also about trust. But it is even more about involvement, participation, and empowerment. Building enough trust so that people do not feel put off from sending in reports in the first place is one thing. Building a relationship of participation and involvement that will actually get people to send in reports, and keep them coming, is quite another.

Many people come to work with a genuine concern for the safety and quality of their professional practice. If, through reporting, they have an opportunity to actually contribute to visible improvements, then few other motivations or exhortations to report are necessary. Making a reporter part of the change process can be a good way forward, but this implies that the reporter wants (or dares) to be identified as such, and that managers have no problems with integrating employees in their work for improved safety and quality.

Sending feedback into the department about any changes that result from reporting is also a good strategy. But it should not become the stand-in for doing anything else with the reports. Many organizations get captured by the belief that reporting is a virtue in itself: If only people report mistakes, and their self-confessions are distributed back to the operational community, then things will automatically improve and people will feel motivated to keep reporting. This does not work for long. Active engagement with that which is reported, and with those who report, is necessary. Active, demonstrable intervention that acts on reported information is too.

Reporting to managers or to safety staff?

In many organizations, the line manager is the recipient of reports. This makes (some) sense: The line manager probably has responsibility for safety and quality in the primary processes, and should have the latest information on what is or is not going well. But this practice has some side effects:

  • One is that it hardly renders reporters anonymous (given the typical size of a department), even if no name is attached to the report.
  • Another is that reporting can have immediate line consequences (an unhappy manager, consequences for one's own chances of career progression).
  • And especially in cases where the line manager herself or himself is part of the problem the reporter wishes to identify, such reporting arrangements all but stop the flow of useful information.

I remember studying one organization that had shifted from a reporting system run by line managers to one run by safety-quality staff.1,2 Before the transition, employees actually turned out very ready to confess an "error" or "violation" to their line manager. It was almost seen as an act of honor. Reporting it to a line organization—which would see an admission of error as a satisfactory conclusion to its incident investigation—produced rapid closure for all involved. Management would not have to probe deeper, as the operator had seen the error of his or her ways and had been reprimanded and told or trained to watch out better next time.

For the operator, simply and quickly admitting an error avoided even more or deeper questions from their line managers. Moreover, it could help avert career consequences, in part by preventing information from being passed to other agencies (e.g. the industry's regulator). Fear of retribution, in other words, did not necessarily discourage reporting. In fact, it encouraged a particular kind of reporting: A mea culpa with minimal disclosure that would get it over with quickly for everybody. "Human error" as cause seemed to benefit everyone—except organizational learning.

As one employee told us: "I didn't tell the truth about what took place, and this was encouraged by the line manager. He had made an assumption that the incident was due to one factor, which was not the case. This helped me construct and maintain a version of the story which was more favorable for us (the frontline employees)."

Perhaps the most important reason to consider a reporting system that is not just run by line management is that it can drastically improve organizational learning. Here is what one line manager commented after having been given a report by an operator:

The incident has been discussed with the concerned operator, pointing out that priorities have to be set according to their urgency. The operator should not be distracted by a single problem and neglect the rest of his working environment. He has been reminded of applicable rules and allowable exceptions to them. The investigation report has been made available to other operators by posting it on the internal safety board.

Such countermeasures really do not represent the best in organizational learning. In fact, they sound like easy feel-good fixes that are ultimately illusory. Or simply very short-lived.

Opening up a parallel system (or an alternative one) can really help. The reports in this system should go to a staff officer (e.g. a safety or quality official) who has no stake in running the department, not to a line manager. The difference between what gets reported to a line manager and what is written in confidential reports can be significant. The difference that the reporter feels in understanding, involvement, and empowerment can also be significant:

It is very good that a colleague, who understands the job, performs the interviews. They asked me really useful questions and pointed me in directions that I hadn't noticed. It was very positive compared to before. Earlier you never had the chance to understand what went wrong. You only got a conclusion to the incident. Now it is very good that the report is not published before we have had the chance to give our feedback. You are very involved in the process now and you have time to go through the occurrence. Before you were placed in the hot chair and you felt guilty. Now, during interviews with the safety staff, I never had the feeling that I was accused of anything.

Of course, keeping a line-reporting mechanism in place can be very productive for a department's continuous improvement work. Especially if things need to be brought to the attention of relevant managers immediately. But you should perhaps consider a separate, parallel confidential reporting system if you don't already have one. Both line-based and staff-based (or formal and confidential) reporting mechanisms offer several kinds of leverage for change. Not mining both data sources for improvement information could be a waste for your organization.3

The Successful Reporting System: Voluntary, Non-punitive, and Protected

In summary, near-miss reporting systems that work in practice do a couple of things really well.4 They are:

  • voluntary
  • non-punitive
  • protected.

Voluntary

The language in the Eurocontrol rules suggests that reporting should be compulsory. All safety occurrences need to be reported. A lot of organizations have taken it to mean just that, and have told their employees that they are obligated to report safety occurrences. But this might actually make little sense. Why does research show that a voluntary system is better, leaving it to people's own judgment to report or not? Mandatory reporting implies that everybody has the same definition of what is risky, what is worthy of reporting. This, of course, is seldom the case. The airline captain who maintained omertà on his own flight deck could—in principle—be compelled to report all incidents and safety occurrences, as Eurocontrol would suggest. But the captain probably has different ideas about what is risky and worth reporting than his organization.

So who gets to say what is worthy of reporting? If the organization claims that right, then they need to specify what they expect their professionals to report. This turns out to be a pretty hopeless affair. Such guidance is either too specific or too general. All safety occurrences, like Eurocontrol suggests? Sure, then of course practitioners will decide that what they were involved in wasn't really a safety occurrence. Well, the organization may come back and put all kinds of numeric borders in place. If it is lower than 1,000 feet, or closer than five miles or longer than ten minutes, then it is a safety occurrence. But a line means a division, a dichotomy. Things fall on either side. What if a really interesting story unfolds just on the other side, the safe side of that border—at nine minutes and 50 seconds? Or 1,100 feet? Or 5.1 miles? Of course, you might say, we have to leave that to the judgment of the professional. Ah! That means that the organization does not decide what should be reported! It is the professional again. That is where the judgment resides. Which means it is voluntary (at least on the safe side of those numbers), even though you think it is compulsory.

The other option is to generate a very specific list of events that will need to be reported. The problem there is that practitioners can then decide that on some little nuance or deviation, their event does not match any of the events in the list. The guidance will have become too specific to be of any value. Again, the judgment is left up to the practitioner, and the system once again becomes voluntary rather than compulsory.

Also, making reporting mandatory implies some kind of sanction if something is not reported. Which destroys the second ingredient for success: Having a non-punitive system. It may well lead to a situation where practitioners will get smarter at making evidence of safety-critical events go away (so they will not get punished for not reporting them). As said above, practitioners can engage in various kinds of rhetoric or interpretive work to decide that the event they were involved in was not a near miss, not an event worthy of reporting or taking any further action on.5 Paradoxically then, a mandatory system can increase underreporting. Simply because the gap between what the organization expects to be reported and what gets reported gets stretched to its maximum.

Non-punitive

Non-punitive means that the reporter is not punished for revealing their own violations or other breaches or problems of conduct that might be construed as culpable. In other words, if people report their honest mistakes in a just culture, they will not be blamed for them. The reason is that an organization can benefit much more by learning from the mistakes that were made, than from blaming the people who made them. So people should feel free to report their honest mistakes.

The problem is, often they don't.

Often they don't feel free, and they don't report.

This is because reporting can be risky. Many things can be unclear:

  • How exactly will the supervisor, the manager, the organization respond?
  • What are the rights and obligations of the reporter?
  • Will the reported information stay inside of the organization? Or will other parties (media, prosecutor) have access to it as well?

The reason why people fail to report is not because they want to be dishonest. Nor because they are dishonest. The reason is that they fear the consequences. And often with good reason:

  • either people simply don't know the consequences of reporting, so they fear the unknown, the uncertainty;
  • or the consequences of reporting really can be bad, and people fear invoking such consequences when they report information themselves.

While the first reason may be more common, either reason means that there is serious work to do for your organization. In the first case, that work entails clarification. Make clear what the procedures and rules for reporting are, what people's rights and obligations are, and what they can expect in terms of protection when they report. In the second case it means trying to make different structural arrangements, for example with regulators or prosecutors, with supervisors or managers, about how to treat those who report. This is much more difficult, as it involves the meshing of a lot of different interests.

Not punishing that which gets reported makes great sense, simply because otherwise it won't get reported. This of course creates a dilemma for those receiving the report (even if via some other party, e.g. quality and safety staff): They want to hear everything that goes on, but cannot accept everything that goes on. And if a reporter were punished for what she or he divulged, the willingness of any other practitioner to report anything would take a severe beating. The superficially attractive option is to tell practitioners (as much guidance material around reporting suggests) that their reports are safe in the hands of their organization unless there is evidence of bad things (like negligence or deliberate violations). Again, such guidance is based on the illusion of a clear line between what is acceptable and what is not—as if such things can be specified generically, beforehand. They can't.

From the position of a manager or administrator, one way to manage this balance is to involve the practitioners who would potentially report (not necessarily the one who did report, because if it's a good reporting system, that might not be known to the manager—see below). What is their assessment of the event that was reported? How would they want to be dealt with if it were their incident, not their colleague's? Remember that perceived justice lies less in the eventual decision than in who and what is involved in making that decision. For a manager, keeping the dialogue open with her or his practitioner constituency must be the most important aim. If dialogue is killed by rapid punitive action, then a version of the dialogue will surely continue elsewhere (behind the back of the manager). That leaves the organization none the wiser about what goes on and what should be learned.

Protected

Then, finally, successful reporting systems are protected. That means that reports are confidential rather than anonymous. What is the difference? "Anonymity" typically means that the reporter is never known, not to anybody. No name or affiliation has to be filled in anywhere. "Confidentiality" means that the reporter fills in name and affiliation and is thus known to whoever gets the report. But from there on, the identity is protected, under any variety of industrial, organizational, or legal arrangements.

If reporting is anonymous, two things might happen quickly. The first is that the reporting system becomes the garbage can for any kind of vitriol that practitioners may accumulate about their job, their colleagues, or their hospital during a workday, workweek, or career. The risk of this, of course, is larger when there are few meaningful or effective line management structures in place that could take care of such concerns and complaints. Senseless and useless bickering could then clog the pipeline of safety-critical information. Signals of potential danger would get lost in the noise of grumbling. That is why confidentiality makes more sense: The reporter may feel some visibility, some accountability even, for reporting things that can help the organization learn and grow.

A second problem with an anonymous reporting system is that the reporter cannot be contacted if the need arises for any clarifications. The reporter is also out of reach for any direct feedback about actions taken in response to the report. The NASA Aviation Safety Reporting System (ASRS) recognized this quickly after finding reports that were incomplete or could have been much more potent in revealing possible danger if only this or that detail could be cleared up. As soon as a report is received, the narrative is separated from any identifying information (about the reporter and the place and time of the incident) so that the story can start to live its own life without the liability of recognition and sanction appended to it. This recipe has been hugely successful. ASRS receives more than 1,000 reports a week.6 Of course, gathering data is not the same as analyzing it, let alone learning from it. Indeed, such reporting systems can become the victims of their own success: The more data you get, the more difficult it can be to make sense of it all, certainly if the gathering of data outpaces the analytic resources available to you.

What if reported information falls into the wrong hands?

The nurse in the prologue honestly reported her contribution to the death of the infant to her supervisor. As a result, she was convicted as a criminal and is today without a job, or much of a life. Information about the incident was leaked to the media, and thereby into the hands of a prosecutor who happened to read about it in the local newspaper.

In many countries, it does not even have to go so haphazardly. Most democracies have strong freedom-of-information legislation. This gives all citizens, in principle, access to all non-confidential information held by government bodies. Such transparency is critical to democracy, and in some countries freedom of information is even enshrined in the constitution. But the citizen requesting information can easily be an investigating journalist, a policeman, or a prosecutor. Freedom of information becomes a real issue when the organization itself is government-owned (and hospitals or air traffic control centers in many countries still are). Moreover, safety investigating bodies are also government organizations, and thus subject to such legislation. This can make people unwilling to collaborate with safety investigators.

The potential for such exposure can create enormous uncertainty. And uncertainty typically dampens the willingness of people to report. People become anxious to leave information in files with their organization. In fact, the organization itself can become anxious to even have such files. Having them creates the risk that names of professionals end up in the public domain. This in turn can subject safety information to oversimplification and distortion and misuse by those who do not understand the subtleties and nuances of the profession.

Some countries have succeeded in exempting safety data in very narrow cases from freedom-of-information legislation. The Air Law in Norway, for example, states in Article 12–24 about the "Prohibition on use as evidence in criminal proceedings" that "Information received by the investigating authority may not be used as evidence in any subsequent criminal proceedings brought against the persons who provided the evidence." Of course, this does not keep a prosecutor or judge from actually reading a final accident report (as that is accessible to all citizens), but it does prevent statements provided in good faith from being used as evidence. Similar legislation exists, though in other forms, in various countries. Many States in the US, for example, protect safety data collected through incident reporting against access by potential claimants. Most require a subpoena or court order for release of the information.7

One problem with this, of course, is that it locks information up even for those who can rightfully claim access, and who have no vindictive intentions. Imagine a patient, for example, or a victim of a transportation accident (or the family), whose main aim is to find out something specific about what happened to their relative. The protection of reporting, in other words, can make such disclosure (see the next chapter) more difficult. So when you contemplate formally protecting reported safety information, you should carefully consider these potential consequences.

The Difference between Disclosure and Reporting

Disclosure is different from reporting:8

  • Reporting is the provision of information to supervisors, oversight bodies, or other agencies. Reporting means giving a spoken or written account of something that you have observed, participated in, or done to an appointed party (supervisor, safety manager). Reporting is thought necessary because it contributes to organizational learning. Reporting is not primarily about helping customers or patients, but about helping the organization (e.g. colleagues) understand what went wrong and how to prevent recurrence.
  • Disclosure is the provision of information to customers, clients, patients, and families. The ethical obligation to disclose your role in adverse events comes from a unique, trust-based relationship with the ones who rely on you for a product or service. Disclosure is seen as a marker of professionalism. Disclosure means making information known, especially information that was secret, or information that could be kept secret. Information about incidents that only one or a few people were involved in, or that only professionals with inside knowledge can really understand, could qualify as such.
Table 5.1 The difference between disclosure and reporting for individuals and organizations

  • Individual, reporting: providing a written or spoken account about an observation or action to supervisors, managers, or safety/quality staff.
  • Individual, disclosure: making information known to customers, clients, patients.
  • Organization, reporting: providing information about employees' actions to regulatory or other (e.g. judicial) authorities when required.
  • Organization, disclosure: providing information to customers, clients, patients, or others affected by the organization's or employee's actions.

  • Practitioners typically have an obligation to report to their organization when something went wrong. As part of the profession, they have a duty to flag problems and mistakes. After all, they represent the leading edge, the sharp end of the system: They are in daily contact with the risky technology or business. Their experiences are critical to learning and continuous improvement of the organization and its work.
  • Many practitioners also have an obligation to disclose information about things that went wrong to their customers, clients, patients. This obligation stems from the relationship of trust that professionals have with those who make use of their services.
  • Organizations also have an obligation to disclose information about things that went wrong. This obligation stems from the (perhaps implicit) agreement that companies have with those who make use of their services or are otherwise affected by their actions.
  • Organizations (employers), the judiciary, and regulators have an obligation to be honest about the possible consequences of failure, so that professionals are not left in the dark about what can happen to them when they do report or disclose.
  • One could also propose that organizations have a (legal) obligation to report certain things to other authorities (judiciary, regulatory).

Disclosure and reporting can clash. And different kinds of reporting can also clash. This can create serious ethical dilemmas that both individual professionals and their employing organizations need to think about:

  • If an organization wants to encourage reporting, it may actually have to curtail disclosure. Reporters will step forward with information about honest mistakes only when they feel they have adequate protection against that information being misused or used against them. This can mean that reported information must somehow remain confidential, which rules out disclosure (at least of that exact information).
  • Conversely, disclosure by individuals may lead to legal or other adverse actions (even against the organization), which in turn can dampen people's or the organization's willingness to either report or disclose.
  • If organizations report about individual actions to regulatory or judicial authorities, this too can bring down the willingness to report (and perhaps even disclose) by individuals, as they feel exposed to unjust or unwelcome responses to events they have been involved in.

A representative of the regulator had been sent out for a field visit as a customer of an organization I once worked with. She had observed things in the performance of one practitioner that, according to the rules and regulations, weren't right. Afterwards, she contacted the relevant managers in the organization and let them know what she had seen. The managers, in turn, sent a severe reprimand to the professional.

It really strained trust and the relationship between practitioners and management: Reporting was not encouraged by their reaction. The regulator would not have been happy either to find out that their visit was being used as a trigger to admonish an individual practitioner instead of resolving more systemic problems. It was as if the managers were offloading their responsibility for the problems observed onto the individual practitioner.

The difference between disclosure and reporting is not as obvious or problematic in all professions:

  • Where individual professional contact with clients is very close, such as in medicine, reporting and disclosure are two very different things.
  • Where the relationship is more distant, such as in air traffic control, the distinction blurs because for individual air traffic controllers there is not immediately a party to disclose to. The air traffic control organization, however, can be said to have an obligation to disclose.

If organizational disclosure or reporting does not occur, then the mistakes made by people inside that organization may no longer be seen as honest, and the organization can get in trouble as a result. This goes for individuals too. It may have played a role in the case of the previous chapter, as it plays a role in many other cases.

The Importance, Risk, and Protection of Disclosure

Not providing an account of what happened may mean there is something to hide. And if there is something to hide, then what happened is probably not an "honest" mistake.

The killing of a British soldier in Iraq by a US pilot was a "criminal, unlawful act," tantamount to manslaughter, a British coroner ruled. The family of Lance Corporal of Horse Matty Hull, who died in March 2003, were told at the inquest in Oxford, England, that it was "an entirely avoidable tragedy." His widow, Susan, welcomed the verdict, saying it was what the family had been waiting four years for. Hull said she did not want to see the pilot prosecuted, but felt she had been "badly let down" by the US government, which consistently refused to cooperate.

Susan Hull had also been told by the UK Ministry of Defence that any cockpit tape of the incident did not exist. This was proven untrue when a newspaper published the tape's contents and when it was later posted on the internet. It showed how Hull was killed when a convoy of British Household Cavalry vehicles got strafed by two US A10 jets. The British Ministry of Defence issued an apology over its handling of the cockpit video, while the US Department of Defense denied there had been a cover-up and remained adamant that the killing was an accident.

The coroner, Andrew Walker, was damning in his appraisal of the way the Hull family had been treated. "They, despite request after request, have been, as this court has been, denied access to evidence that would provide the fullest explanation to help understand the sequence of events that led to and caused the tragic loss of L Corp Hull's life," he said. "I have no doubt of how much pain and suffering they have been put through during this inquisition process and to my mind that is inexcusable," he said.9

Not disclosing often means that a mistake will no longer be seen as honest. Once a mistake is seen as dishonest, people may no longer care as much about what happens to the person who made that mistake, or to the party (e.g. the organization) responsible for withholding the information. This is where a mistake can get really costly—financially and in terms of unfavorable media exposure, loss of trust and credibility, regulatory scrutiny or even legal action.

I recall one adverse event where the family was very upset, not only about what had happened, but about the organization not being seen as forthcoming. The family had been invited by the organization to stay in a nice hotel for some of the legal proceedings. Feeling injured and let down, they ordered as much expensive room service as possible, and then threw it all away. Having been hurt by the organization, they wanted to hurt the organization as much as possible in return.

Non-disclosure is often counterproductive and expensive. Silence can get interpreted as stone-walling, as evidence of "guilty knowledge." It is well-known that lawsuits in healthcare are often more a tool for discovery than a mechanism for making money. People don't generally sue (in fact, very few actually do). But when they do, it is almost always because all other options to find out what happened have been exhausted.10 Paradoxically, however, lawsuits still do not guarantee that people will ever get to know the events surrounding a mishap. In fact, once a case goes to court, "truth" will likely be the first to suffer. The various parties may likely retreat into defensive positions from which they will offer only those accounts that offer them the greatest possible protection against the legal fallout.

The ethical obligation to disclose

Not being honest, or not apologizing for a mistake, is what often causes relationships to break down, rather than the mistake or mishap itself. This makes honesty all the more important in cases where there is a prior professional relationship,11 such as patient–doctor.

In a specific example, the Code of Medical Ethics (Ethical Opinions E-8.12) of the American Medical Association has said since 1981 that:

It is a fundamental requirement that a physician should at all times deal honestly and openly with patients... Situations occasionally occur in which a patient suffers significant medical complications that may have resulted from the physician's mistake or judgment. In these situations, the physician is ethically required to inform the patient of all the facts necessary to ensure understanding of what has occurred... Concern regarding legal liability which might result following truthful disclosure should not affect the physician's honesty with a patient.

This is unique, because few codes exist that specifically tell professionals to be honest. It also spells out in greater detail which situations ("that may have resulted from a physician's mistake or judgment") are likely to fall under the provisions of the Code. So this is a good start. But it still leaves a large problem. What does "honesty" mean? Being honest means telling the truth. But telling the truth can be reduced to "not lying." If it is, then there is still a long distance to full disclosure. What to say, how much to say, or how to say it often hinges more on the risks that people see with disclosure than on what a code or policy tells them to do.

The risk with disclosure

If structural arrangements and relationships inside an industry, or a profession, are such that all bets are off when you tell your story, then people will find ways to not disclose, or to only disclose in ways that protect them against the vagaries and vicissitudes of the system.

Nancy Berlinger describes how medical education has a "hidden curriculum."12 This hidden curriculum can be seen as the sort of spontaneous system of informal mentoring and apprenticeship that springs up in parallel to any formal program. It teaches, mainly through example, students and residents how to think and talk about their own mistakes and those of their colleagues. They learn, for example, how to describe mistakes so that they no longer are mistakes. Instead, they can become:

  • "complications;"
  • the result of a "non-compliant" patient;
  • a "significantly avoidable accident;"
  • an "inevitable occasional untoward event;"
  • an "unfortunate complication of a usually benign procedure."

Medical professionals, in the hidden curriculum, also learn how to talk about the mistake among themselves, while reserving another version for the patient and family and others outside their immediate circle of professionals. A successful story about a mistake is one that not only (sort of) satisfies the patient and family, but one that also protects against disciplinary measures and litigation.

Many, if not all, professions have a hidden curriculum. Perhaps it teaches professionals the rhetoric to make a mistake into something that no longer is a mistake. Perhaps it teaches them that there is a code of silence, an omertà, that proscribes collaborating truthfully with authorities or other outside parties.

The protection of disclosure

The protection of disclosure should first and foremost come from structural arrangements made by the organization or profession. One form of protecting disclosure is that of "I'm sorry laws." According to these laws (now implemented in, for example, the US states of Oregon and Colorado), doctors can say to patients that they are sorry for the mistake(s) they committed. This does not offer them immunity from lawsuits or prosecution, but it does protect the apology statement from being used as evidence in court. It also does not prevent negative consequences, but at least that which was disclosed cannot be used directly against the professional (as it was with the ICU nurse in the prologue). Such protection is not uncontroversial, of course. If you make a mistake, you should not only own up to it but also face the consequences, some would say. Which other professions have such cozy protections? This is where ethical principles can start to diverge.

What is Being Honest?

So what really is honesty, or telling the truth in reporting? We have seen a lot of different signals in the past two chapters and before:

  • Giving an honest report of her role in the death of an infant got the nurse in the Xylocard case into trouble.
  • When managers are in charge of a reporting system, practitioners likely give them one story of what happened. Whether that is the "truth" or not is almost irrelevant: It is about making the aftermath of a mistake or incident as painless for everybody as possible.
  • When taken to court, practitioners may tell yet another story. Again, whether that is the honest truth or not (something that legal systems often quite erroneously claim they can get out of people) is not the point: It is rather about minimizing the spiraling negative consequences of being put on trial.
  • Honestly disclosing to a family what happened to a patient is often under pressure from what a caregiver learned in the hidden curriculum.

The question that comes up is this: Is honesty a goal in itself? Perhaps honesty should fulfill the larger goals of:

  • learning from a mistake to improve safety; and
  • achieving justice in its aftermath.

These are two goals that serve the common good. Supposedly pure honesty can sometimes weaken that common good. For example, both justice and safety were hurt when the nurse from the prologue was put on trial as a result of her honest reporting. Honesty, or truth-telling—should we always pursue it because it is the "right" thing to do, no matter what the consequences could be?

Dietrich Bonhoeffer, writing from his cell in the Tegel prison in Berlin in 1943 Nazi Germany, drafted a powerful essay on this question. He was being held on suspicion of involvement in a plot to overthrow Hitler, a plot in which he and his family were actually deeply involved.13 If he were to tell the truth, he would have let murderers into his family. If he were to tell the truth, he would have to disclose where other conspirators were hidden. So would withholding this make him a liar? Did it make him, in the face of Nazi demands and extortions, immoral, unethical? Bonhoeffer engaged in nondisclosure, and outright deception.

The circumstances surrounding truth-telling in professions today are not likely as desperate and grave as Bonhoeffer's (he was executed in a concentration camp just before the end of the war, in April 1945). But his thoughts have a vague reflection in the fears of those who consider disclosing or reporting today. What if they tell the whole truth—rather than a version that keeps the system happy and them protected? Bonhoeffer makes a distinction between the morality and epistemology of truth-telling that may offer some help here:

  • The epistemology of truth-telling refers to the validity or scope of the knowledge offered. In that sense, Bonhoeffer did not tell the truth (but perhaps the nurse in the Xylocard case did).
  • The morality of truth-telling refers to the correct appreciation of the real situation in which that truth is demanded. The more complex that situation, the more troublesome the issue of truth-telling becomes (the nurse in the Xylocard case may not have done this, but perhaps should have).

Bonhoeffer's goal in not disclosing was not self-preservation, but the protection of the conspiracy's efforts to jam the Nazi death machine, thereby honoring the perspective of the most vulnerable. That his tormentors wanted to know the truth was unethical, much more so than Bonhoeffer's concealment of it.

Translate this into the situations faced by the nurse in the Xylocard case (or the pilot in the story of Chapter 2, or the nurse Julie in Chapter 1). Here it may be less ethical for prosecutors or judges in positions of power to demand the full truth, than it would have been for these professionals to offer only a version of that truth.

Asking for honesty initially, as the airline did, and as the hospital's procedures prescribed, is reasonable. It is here that honesty can contribute to the larger goals of accountability and learning. Responding to this, as the nurse did, was reasonable too, and an honest attempt to give account and perhaps help the hospital learn. Not responding to it, as the captain did, was perhaps foolhardy and unreasonable (but we do not know what history the airline had, or what signals it had sent out earlier about its tolerance for reporters and their mistakes).

But going to court, and demanding honesty there, became a different issue altogether in both these cases. Once adversarial positions were lined up against each other in a trial, where one party had the capacity to wreak devastating consequences onto another, the ethics of honesty got a whole new dynamic. Demanding honesty in these cases ended up serving only very narrow interests, such as the preservation of a company's or hospital's reputation, or their protection from judicial pressure. Or it deflected responsibility from the regulator (who employed the prosecutor) after allowing the airline to routinely hand out dispensations from existing rules—something that played a role in the incident in Chapter 2. Quite unlike for Bonhoeffer, who must have been under tremendous pressure, self-preservation did become the overriding aim of the parties in these trials.

Wringing honesty out of people in vulnerable positions is neither just nor safe. It does not bring out a story that serves the dual goal of satisfying calls for accountability and helping with learning. It really cannot contribute to just culture.

Notes

1 Dekker SWA, Laursen T. From punitive action to confidential reporting: A longitudinal study of organizational learning. Patient Safety & Quality Healthcare 2007;5:50–6.

2 Dekker SWA, Hugh TB. Balancing "no blame" with accountability in patient safety. New England Journal of Medicine 2010;362:275.

3 Dekker 2007, op. cit.

4 Barach P, Small SD. Reporting and preventing medical mishaps: Lessons from non-medical near miss reporting systems. British Medical Journal 2000;320:759–63.

5 Vaughan D. The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. Chicago, IL: University of Chicago Press; 1996.

6 Billings CE. Aviation Automation: The Search for a Human-centered Approach. Mahwah, NJ: Lawrence Erlbaum Associates; 1997.

7 Sharpe VA. Promoting patient safety: An ethical basis for policy deliberation. Hastings Center Report 2003;33:S2–19.

8 Ibid.

9 The Guardian, 18 March 2007, Sect. 1.

10 Berlinger N. After Harm: Medical Error and the Ethics of Forgiveness. Baltimore, MD: Johns Hopkins University Press; 2005.

11 Cohen JR. Legislating apology: The pros and cons. University of Cincinnati Law Review 2002;70:819–72.

12 Berlinger 2005, op. cit.

13 Ibid.
