1 What is the Right Thing to Do?

The dead girl was wrapped in a shower curtain.

When the police found the package in the trunk of her stepfather's car, they noticed the little girl had a rag stuffed down her throat, secured in place with a bandage around her head.

It turned out that the rag had been put there by her mother, who, days before, had shoved the girl under a bed and left her.

Under that bed, she had died. Alone.

Her body was now on its final journey, to be dumped in a wood.

The rag would have kept her from crying—crying from abandonment, fear, hunger. At three years of age, her body weighed in at 10 kg, or about 22 lb.

How can a society accomplish justice in the aftermath of something like that? What is the right thing to do?

The mother was charged and convicted and got six years in jail. That would be just, to many people. To them, it would be the right thing to do. Some might say that the sentence was too short. The mother was also forced into treatment. Not right, some would say, not deserved. Very appropriate, and very smart and just, others would say.

But the prosecution was not satisfied. Not yet. They found another contributing party and produced a charge of manslaughter in the second degree.

The stepfather, you'd think. Aiding, abetting, driving a car with a body in the trunk? That would be him.

Except it wasn't.

The charge was leveled against the social worker who replaced the original worker tasked with looking after this family. The alleged failure of the replacement social worker was that she did not pick up on signals of the child's neglect. Here, in short, was the case. The family had been troubled from the very start. When the girl was one year old, the State had taken her out of the mother's care because of signs of abuse. She was returned to the family not much later. The paperwork that would have testified as to the family fulfilling the conditions for the child's return, however, was lost, or never produced. The child protection council was not notified either.

The social worker visited the family three times, and found little to report. After a while, she went on sick leave. It took months before a replacement was found. The replacement worker drew up a plan for the mother. It specified, among other things, what to give to a toddler to eat, when, how often, and other basic things related to child care and hygiene. The mother never really managed. The girl started falling behind in language, and started to look a little blue.

And then, one day, she was dead.

Was charging the replacement social worker with manslaughter the right thing to do? Setting aside any visceral responses—"no, of course it wasn't!" or "yes, it definitely was!"—this is not an easy question to answer. It is an ethical question. It is a question about our values, about what we consider to be right or wrong.

That finding an answer might be hard, however, does not mean that the question resists systematic reflection. Ethics, as a branch of philosophy, offers that sort of systematic reflection. Now it is not an easily penetrable field, and any short treatment of what it might offer is likely unfair. But some such guidance is available. Consider three different ethical schools here:

  • utilitarianism;
  • deontology;
  • consequentialism.

These are big Greco–Latin labels. Here is another way of putting them (again, oversimplifying things):

  • getting the greatest good for the greatest number;
  • expecting people to live up to professional duty;
  • considering the consequences of the judgment.

Utility

Let us start with the first, where the ethical or right thing to do is that which produces the greatest good for the greatest number. Getting rid of an unsafe person (removing a social worker who does not pick up signals of neglect) could then qualify as ethical. The benefit to families, to children, to coworkers, and to the organization is greater than any cost. In fact, the cost is borne mostly or exclusively by the individual who is removed and charged. All the possible benefits go to a lot of people; the cost goes to one. It could be argued that an even greater good goes to an even greater number here—the society surrounding this family and their State caretakers. They receive the good of seeing those complicit in the death of the little girl punished. Getting rid of a bad apple, a deficient worker, harms virtually no one and benefits a lot of people. In fact, any harm is inflicted only on the person or party who might deserve it anyway. So utilitarianism could perhaps argue that this would be the right thing to do.

Duty

But would we want to be a utilitarian with regard to this question? Let us take another position, that of deontology. Deontology studies the nature of duty, or obligation, and one specific way of doing that is to look at professional duty. The duty of care, for example, that comes with professions where (potentially risky) decisions about the lives of other people get taken. This is often called a fiduciary relationship. It is a relationship of trust between professional and client (patient, family, passenger, child), where the client has comparatively limited knowledge and power to influence what the professional might do or decide. The relationship, and people's willingness to engage in it, is founded on the trust that the professional knows what she or he is doing, and does the best for the person in her or his care. This is where deontology might suggest that going after the replacement social worker is ethical, is the right thing to do. She did not live up to her duty of care. She violated the fiduciary relationship. She knew what the child and mother needed, or should have known. And she should have ensured that this was leading to a safe situation for the child, not a lethal one.

But, of course, things are not as simple as that. The fiduciary relationship is also founded on the belief that the professional will do everything in the best interest of the client in front of her or him. When meeting with a client—a family, a patient—nothing in the world should be more important than the client seen there and then. The financial bottom line is not more important, nor is the clock, nor the next client waiting to be seen. The duty to do the best for the current client overrules them all.

But that works only in an ideal world. Giving all the time and resources to one family (living up maximally to the duty ethic relative to that client) takes away time and resources from others. This militates against the ability to live up to the duty ethic with those other clients. It creates a classic goal conflict, or ethical conflict even, for social workers (as it does for many physicians). And most families or patients could be argued to deserve or require more time than is accorded them. This is, in most Western countries, a structural constraint for services like social work, State family support, child protection, or healthcare. They are always under pressure of limited resources—not enough money, not enough people (remember it took months to find a replacement in the little girl's case), not enough time. And always more families or patients to be seen, waiting for help, attention.

So part of being a good professional, of living up to the duty ethic, is making sure that all families get the best care you can give them. That, of course, sounds like utilitarianism: The best to the most, the greatest good to the greatest number. A good duty ethic under limited resources and goal conflicts, then, means being a good utilitarian. It means juggling time and resources in a way that gets the most to the most families. But of course this militates against a purer reading of the duty ethic—that nothing is more important than the family seen there and then. There is no hope that such an ethical conflict can ever be resolved. It is felt by most social workers, and most healthcare workers, every day, all over the world. Organizations that employ or deploy such professionals often do little to encourage serious reflection on moral conflict, nor do they help their people manage it. The conflict simply gets pushed down into the workday, to be sorted out at the sharp end, on the go, as a supervisor draws up the schedules, as a social worker hurries from one family to the next.

This complicates any judgment about whether somebody lived up to professional duty. If we want to come to a fair judgment of whether pursuing the replacement social worker is the right thing to do, then there is a lot more we need to look at. Just considering the dead girl and connecting that, in hindsight, to the (now so obvious) signals of neglect that the social worker should (now so obviously) have picked up and acted on is not going to be enough. What was the case load for this worker? What were the handover procedures when getting cases from the previous worker? How did signals of neglect come in over time, and how did they compare to the perceived criticality of the signals coming from other families in the care of this worker? Who made the schedules, and on what rationale were they based? And we could go on. How was social work funded and staffed and organized in this State? Whether prosecuting the social worker is the right thing to do would depend on a careful collage of answers to all of those questions, and probably more.

Consequence

What are the consequences of charging the replacement social worker with manslaughter? Of course, there are all kinds of consequences. Not least for the social worker herself. Chapter 6 considers the consequences for the "second victim" in more detail. But what matters here are the consequences for the profession, and for the people in its care: Children like the one who died. One predictable consequence is this: Prosecution of the social worker is likely to tell her colleagues that they should stare harder and intervene more aggressively—or else.

And so they did. The very next year, the number of children taken out of their families' care in this State doubled. Only very weak signals, or mere hints of trouble, would now be enough for a social worker to decide to intervene. The cost of missing signals is simply too large. But that sort of response has consequences too: The cost gets displaced. It gets moved around the system and part of it may well end up on the heads of the most vulnerable ones. Because where do those children go? While in the care of the State, many would go to foster families or other temporary solutions. In many countries appropriate foster families are difficult to find, even with normal case loads. Doubling the number of children from one year to the next can lead to a lowering of standards for admitting foster families. This can have consequences for the safety and security of the children in question.

And there are more consequences. Doubling the number of cases from one year to the next will lead to a doubling, or at least a sharp increase, in the paperwork and in the supervisory and organizational attention devoted to them. It is unlikely that resources will quickly be made available to have the organization grow accordingly. So other work probably gets left undone. And there is a multiplier effect here. When noticing that a colleague suffers such consequences for having been involved in a failure, professionals typically start being more cautious with what they document. The paper trails of their actions get larger, more pre-emptive, more cautious. It is one of the defensive measures that professionals often take. And, as research has shown, paying a lot of attention to the possibility of being held accountable like this diverts attention and cognitive resources from the actual task.1 In other words, social workers may be looking harder at paperwork and protocol and procedure than at children.

With all those consequences, is charging the replacement social worker the right thing to do? Consequentialism would suggest not. The things that get changed when a failure is met with an "unjust" response (the prosecution of an individual caregiver in the example above) are not typically the things that make the organization safer. It does not typically lead to improvements in primary processes. It can lead to "improvement" of all the stuff that swirls around those primary processes: bureaucracy, involvement of the organization's legal department, bookkeeping, micro-management. Paradoxically, many such measures can make the work of those at the sharp end, those whose main concern is the primary process, more difficult, lower in quality, more cumbersome, and perhaps even less safe.

Responding to Failure: The Organization

Considering whether punishing the social worker is the right thing to do is one thing. But if you run the organization, or a part of it, you are left with a tragic failure that might make you look pretty bad. What do you do? Responding appropriately to the death of a little girl in the care of your organization is incredibly hard. All kinds of interests are at stake—almost independent of whether a prosecutor decides to go after one of your social workers or not. These are among the questions:

  • What serves the organization best?
  • What should you do with the professional involved?
  • What about your public image (in the eyes of the consumers of your services or goods)?
  • What about the regulator who is watching over you?
  • What about your own position or survival as organizational manager?

A question that runs through all of these is this. How can you justly deal with the individual who was involved, while also ensuring that your organization learns as much as it can from the event? How will you balance accountability and learning? Other employees will likely be watching carefully, to see what you are going to do. Here are the immediate options, and neither of them is entirely good.

If you come down hard on the social worker, because you feel that you somehow need to match the severity of your response to the gravity of the event, then you will create a lot of the same effects as the prosecutor would have done. Other social workers will be more careful to leave a paper trail, will be hesitant to tell you about near misses, and will clog your system with more false alarms than you can deal with. You might, in the eyes of some, have held somebody accountable. But your basis for organizational learning has gone out the window.

The alternative is that you do not sanction at all. You might believe that the social worker is not uniquely deficient, that she will indeed be even safer in the future. You might offer her critical incident stress management, to ensure that she does not succumb to her own guilt, remorse, trauma. You might invite her to share her story, her account, with the other workers. But how do you sell that message to the other workers in your organization? They may well have expected that some kind of sanction was forthcoming because of this violation of professional duty. So if you don't sanction, other workers may even be upset that apparently anything goes. Somebody lets a kid die on her watch, and nothing happens?

And how do you sell it to the regulator or to others in society who might be waiting for some kind of strong countermeasure? It is not that regulators are necessarily short-sighted and have no imagination. Very often, the rules around which their work is configured leave them very few options other than to sanction, to recertify, to revoke, to write a letter telling you to watch out better next time. It can be very difficult to persuade a regulator that "you are doing something about the problem" when, empirically, you seem to be doing nothing of the sort. You leave the worker in place, after all. I have seen that it takes a strong stand to convince the regulator, and indeed other parties in society, that you are in fact doing an immense lot—particularly to help the organization learn and, hopefully, prevent recurrence. You ensure that the story, the account, is preserved and distributed. You take on board the recommendations it implies. You create a culture where your people will feel free to share safety-critical information with you. You balance accountability with learning.

An operations manager of a mine recently asked me whether I thought it would be just to fire employees who show up for work while drunk or drugged. This is precisely what his mine does. The employee is fired, no questions asked, no appeal offered.

In a raw, narrow sense, that could be seen as just, I told him. After all, if this person is going to operate heavy equipment that can harm other workers, you want the person to be sober.

But, I said, perceptions of injustice may start to seep in if that is all that you do. Consider the location of your mine, I said. It is in the absolute middle of nowhere, a hot, dusty place. You have fly-in, fly-out workers, like on an oil rig. When out there, the guys (mostly guys) are isolated. They have nothing much for entertainment, very little in the way of social support except each other.

I am not making excuses for somebody who shows up drunk or drugged, I said, but what you want to consider is opening up a parallel inquiry into the conditions that make it more likely for people to do so. What is it in the situation in which you configure them that makes them vulnerable? If you don't do that, and just fire your people, then other workers might begin to wonder about your understanding of their conditions, your interest in them, and indeed whether what you are doing is just after all. And you might well harm your own organization, I said. You are going to need those people. They are hard to replace. And you do not want your mine to get a reputation that keeps people from wanting to work there.

Finally, I said, if you want to keep your instant-firing policy in place, you want to make sure that the decisions about firings have a good foundation in your workforce. That it is not just you who decides. Make sure there is a constituency for the policy, involve your workers in its language, its application, and the judgments that arise from it. They are the ones, after all, who are going to have to work, or not, with that person. If they feel that judgment is going to be passed over their heads, without any involvement, workers may start covering for each other—helping colleagues hide the evidence of drunkenness or drugs. As a manager, you won't know about it, but the safety of your operation is going to be hollowed out from the inside out.

A Just Culture: Balancing Safety and Accountability

Calls for accountability are important. And responding adequately to them is too. Calls for accountability themselves are, in essence, about trust. About people, regulators, the public, employees, trusting that you will take problems inside your organization seriously. That you will do something about them, and hold the people responsible for those problems to account. Accountability is fundamental to human relationships. If we cannot be asked to explain why we did what we did, then we somehow break the pact that all people are locked into. Being able to offer an account for our actions is the basis for a decent, open, functioning society.

But only responding to calls for accountability can quickly create injustice. And mess up safety.

I recall how one safety-critical industry was under intense media scrutiny in a country where I once lived. The newly elected government had pledged to the public that it would let the industry continue to function if it were safe. Then reports started to leak out about operators drinking on the job, about an internal erosion in safety culture, about a lack of trust between management and employees. The regulator was under exceptional pressure to do something. To show that it, and the government, could be trusted.

So the regulator sent parts of the cases it had discovered to the prosecutor. The media loved it: Now something was happening! Maybe crimes had been committed by people to whom the public had entrusted the running of this safety-critical technology! Now somebody was finally going to be held accountable.

The regulator saw how some of the media spotlight on it dimmed. It could breathe a little easier now. But it was a bittersweet lull. The relationship with the industry was dramatically disturbed. Regulators have to rely on open disclosure by people in the industry they regulate; otherwise they have no accurate or truthful information to go and regulate on. Such disclosure was now going to be very unlikely. It would be, for years to come.

In addition, safety improvements, at least for the media (and thereby public opinion, and, by extension, the government's stance on the issue) could now be largely collapsed into the pursuit of a few bad apples in the industry's management. Now that these people would be held accountable, any other safety improvements could simply be assumed to be less important, or to follow automatically. Of course they would not. Publicly or legally reminding people of their responsibilities may have some effect in getting them or others to behave differently (though never for a long time). And the negative consequences of such accountability easily outweigh these effects.

Only responding to calls for accountability is not likely to lead you to justice or to improved safety. People will feel unfairly singled out, and disclosure of safety problems will suffer. A just culture, then, also pays attention to safety, so that people feel free to:

  • bring out information about what should be improved to levels or groups that can do something about it;
  • allow the organization to invest resources in improvements that have a safety dividend, rather than deflecting resources into legal protection and limiting liability.

A just culture, then, means getting to an account of failure that can do two things at the same time:

  • satisfy demands for accountability;
  • contribute to learning and improvement.

Virginia Sharpe, a philosopher and clinical ethicist who has studied the problem of medical harm for many years, has captured these dual demands in what she calls "forward-looking accountability."2 Accountability that is backward-looking (often the kind in trials or lawsuits) tries to find a scapegoat, to blame and shame an individual for messing up. But accountability can also be about looking ahead. Not only should accountability acknowledge the mistake and the harm resulting from it, it should lay out the opportunities (and responsibilities!) for making changes so that the probability of such harm happening again goes down. I will go into this more deeply in the final chapter.

For now, it may seem impossible to convert your profession or organization to forward-looking accountability. It may seem impossible to both hold people accountable and learn and improve at the same time. Which is why real just cultures seem so elusive. If that is how you feel, you are not alone. The two seem impossible to reconcile:

  • Set up ways that people can tell stories that contribute to learning and improvement (e.g. confidential incident reporting) and some people will cry foul: Your operators or managers should own up! They should take responsibility! I demand to know who messes up!
  • But tell stories that satisfy such demands for accountability and you may find that there is very little learning or improvement leverage in them. In fact, you may find that the very act of forcing out such stories (e.g. through a trial) makes learning very difficult.

Creating, and getting consensus around, an explanation of failure that both satisfies demands for accountability and contributes to learning and improvement is a wonderful challenge. It is the challenge at the heart of a just culture.

Wanting Everything in the Open, But Not Tolerating Everything

What is it that can make a just organization a safe organization, and an unjust one an unsafe one? People who write or think about just culture agree: It has to do with being open, with a willingness to share information about safety problems without the fear of being nailed for them. Most people also believe that the openness of a just culture is not the same as uncritical tolerance or generosity. If everything "goes," then in the end no problem may be seen as safety-critical anymore—and people will stop talking about problems for that reason. It is precisely this tension between:

  • wanting everything in the open;
  • while not tolerating everything.

I will deal with both in this book. It will cover how the obligations to disclose are about wanting everything relevant in the open—and how a perceived lack of justice can mess that up really quickly. It will cover the problems with not tolerating everything—because the "everything" in there is not about a clear line or definition, but about who gets to decide. It will cover how a just culture is about the always uneasy, but exciting, melding of the two. It is exactly the friction between wanting everything in the open so that you can learn, but not tolerating everything so that you can be "just," that makes building a just culture such an interesting venture.

Involving the Legal System

A final step into the muddled morass of responding to failure is to involve the legal system. Once this step is taken (or once the legal system starts involving itself), all bets about achieving "justice" are off. In fact, in all the cases I have seen up close, the outcome of a trial in the wake of failure was never "just." (Nor did it improve safety.)

  • Victims would typically feel undercompensated and often started wondering about the wisdom of having a trial in the first place.
  • The practitioner or professional on trial would definitely feel singled out as scapegoat, unfairly bearing the legal and moral load of the mishap.
  • Proceedings would be less about the content of the case and more about arcane legal protocol and procedure (and when they were about content, they typically, and unjustly, ran roughshod over all kinds of operational subtleties and nuances).
  • The organization would feel that it got unjust attention in the media, attention it would gladly do without.
  • There would always be a losing side, even if the practitioner got off the hook.

So with a legal system, justice is hard to achieve in the wake of failure. But there is more. When a professional mistake is put on trial, safety almost always suffers. Rather than investing in safety improvements, people in the organization or profession invest in defensive posturing, so that they themselves are better protected against prosecutorial attention. Rather than increasing the flow of safety-related information, legal action has a way of cutting off that flow. Safety reporting often takes a harsh blow when things go to court.

In 2006, Julie, a nurse from Wisconsin, was charged with criminal "neglect of a patient causing great bodily harm" in the medication death of a 16-year-old girl during labor. Instead of giving the intended penicillin intravenously, Julie accidentally administered a bag of epidural analgesia. Julie lost her job, faced action on her nursing license, and faced the threat of six years in jail as well as a 25,000 US dollar fine. Julie's predicament resembled that of three nurses in Denver in 1998, who administered benzathine penicillin intravenously, causing the death of a neonate. The nurses were charged with criminally negligent homicide and faced five years in jail. One pleaded guilty to a reduced charge; another fought the charge and was eventually exonerated.

In other, similar, cases where healthcare workers and other professionals were to stand trial on criminal charges, incident reporting rates dropped. Sure, somebody may have been held "accountable" but the system did not get any wiser for it. Only dumber—literally. In the long run, it seems as if nobody benefits from this type of response to failure. Also, the things that get changed in response to legal action are not necessarily the things that make the operation or organization any safer.

Judicial proceedings can rudely interfere with an organization's priorities and policies. They can redirect resources into projects or protective measures that have little to do with the organization's original mandate, or with safety. What probably did get improved in the example above were all kinds of bureaucratic processes. Never again would this organization be "caught" by a prosecutor without an elaborate, auditable, and defensible paperwork footprint of its actions and decisions. The other thing that likely happened was that the organization adjusted its criterion for intervention downward: It would now be satisfied with less evidence to step in, rather than get caught again by a prosecutor after (in hindsight) stepping in too late. This represents a dilemma, at many levels and in many ways, for various organizations today.

Ask What is Responsible, Not Who is Responsible

The question that drives safety work in a just culture is not who is responsible for failure; rather, it is what is responsible for things going wrong. What is the set of engineered and organized circumstances that is responsible for putting people in a position where they end up doing things that go wrong?

Shortly after midnight on the 21st of June, 1964, James Chaney, Michael Schwerner, and Andrew Goodman were murdered by a group of White Citizens' Council and Ku Klux Klan members in Mississippi. The three young civil rights activists had been in the State to help black Americans register to vote. A Neshoba County deputy sheriff, Cecil Price, stopped the three men on a tip from other white activists in Meridian, Mississippi, jailed them, and instructed his secretary to keep quiet about their incarceration. Meanwhile, he notified his Klan associates, who assembled and planned how to kill the three civil rights workers.

With a fine of 20 US dollars the three men were ordered to leave the county. Price followed them to the edge of town, but pulled them over again and held them until the Klan arrived. They were taken to an isolated spot where Chaney, a black man, was mutilated and all three were shot dead. A local minister was part of the Klan group that attacked them.

The bodies were not located until weeks later, and the outrage over their killings helped bring about the passage of the 1964 Civil Rights Act. Commenting on the crimes not long after, Martin Luther King urged people to ask not who was responsible, but what was responsible for the deaths. What was the mix of hatred, of discrimination, of bigotry and intolerance, of fear, ratified in how counties were run, in how politics was done, in how laws were written and selectively applied? It was that mix that drove men to see their acts as legitimate, as necessary for their survival and continued supremacy. King's was a system-level appeal avant-la-lettre, a quest to go up and out in seeking an understanding of why such evil could happen, rather than a down-and-in hunt for a few bad Klan apples.

In the search for the three young men (two of them white), at least seven bodies of Blacks turned up. Many of them had been missing for months, without much action or even attention from authorities. Missing, murdered Blacks were the norm. Similar norms or fixed ideas prevailed. An earlier trial had ended in a hung jury because one tormented Mississippi juror could not stomach declaring the minister guilty.

It took decades to convict Edgar Ray Killen, one of the Klan members involved. Appealing his three consecutive 20-year sentences, Killen argued in 2005 that no jury of his peers at the time would have found him guilty. He was probably right. The operation of the institution of justice might not have led to justice. In fact, it took extrajudicial action to achieve justice. A mafia member (of the Colombo crime family) was allegedly recruited by the FBI to help find the bodies. He threatened a Klansman by putting a gun in his mouth, forcing him to reveal the location. Illegal, but quite just, according to many.

Organizations concerned with building a just culture do not normally struggle with forces as deep, pervasive, and dark as those that killed Chaney, Schwerner, and Goodman. They will not be asked to make sense of the behaviors of Klansmen, nor take a position on whether it invites sanction or not. Yet the question that King raised—ask not who is responsible, but what is responsible—rings as relevant for us now as it did then. The aim of safety work is not to judge people for not doing things safely, but to try to understand why it made sense for people to do what they did—against the background of their engineered and psychological work environment. If it made sense to them, it will for others too. Merely judging them for doing something undesirable is going to pass over that much broader lesson, over the thing that your organization needs to do to learn and improve.

Offloading a failure onto a few individuals is not usually going to get you very far. The conclusion drawn from most incidents and accidents in aviation is that everybody and everything contributes in a small way and that these small events and contributions can combine to create unfortunate and unintended outcomes.3,4 People do not come to work to do a bad job. Like King, people concerned with safety must try to understand not who is responsible for an error, but what is responsible. What set of circumstances, events, and equipment puts people in a position where an error becomes more likely, and where its discovery and recovery are less likely? The aim is to try to explain why well-intended people can act mistakenly, without necessarily bad intentions, and without purposefully disregarding their duties or safety.

Notes

1 Lerner JS, Tetlock PE. Accounting for the effects of accountability. Psychological Bulletin 1999;125:255–75.

2 Sharpe VA. Promoting patient safety: An ethical basis for policy deliberation. Hastings Center Report 2003;33:S2–19.

3 Dekker SWA. The Field Guide To Understanding Human Error. Aldershot, UK: Ashgate Publishing Co.; 2006.

4 Dekker SWA. Drift Into Failure: From Hunting Broken Components To Understanding Complex Systems. Farnham, UK: Ashgate Publishing Co.; 2011.
