CHAPTER 6

Eight Ways to Reduce Failures

Imagine that a group’s leader believes that a lot of group members have valuable information and perspectives, and she wants to elicit their ideas. What can she do?

Most managers are exceedingly busy, confronting a daunting number of tasks. It is tempting for them to prefer employees who offer upbeat projections and whose essential message is that there is no need to worry. Employees are well aware of that temptation, and many of them are reluctant to provide their bosses with bad news. No one likes to be anxious or spread anxiety, especially to those who have power over them.

We have seen that in the Obama administration, Nancy-Ann DeParle and Jeff Zients solved that problem by making it clear that they wanted to hear the truth, even if it consisted of bad news. If they didn’t get it, they would ask questions until they did. All over the world, wise groups benefit from the same practice.

By the way, we do not want to be viewed as opposed to optimism. Optimistic emotions and positive energy are enormously valuable, in their place. Optimistic people might well be more likely to succeed. But we see no role for unrealistic optimism in the early stages of an endeavor, when options are being evaluated and plans are being made. The time for optimism is after the decision has been made, when unified commitment to the course of action becomes essential and when even somewhat unrealistic aspirations can motivate the hard work and perseverance that are necessary for the highest levels of achievement.

One of our central themes is the immense importance of diversity, not necessarily along demographic lines, but in terms of ideas and perspectives. We are speaking above all of cognitive diversity. One of the virtues of anxiety is that it tends to reflect and to produce such diversity. In addition, leaders can do a great deal to increase diversity, both by creating the right kind of culture and by hiring the right kind of people.

One of the particular advantages of diversity and dissent is that they promote two things that institutions need: creativity and innovation.1 When minority voices are heard, well-functioning groups are likely to be jolted out of their routines, and fresh solutions, even a high degree of innovation, can follow. When dissent and diversity are present and levels of participation are high, groups are likely to do a lot better.2

It is important to understand just how to harness the power of diversity. Many groups mistakenly merge two distinct functions into a single process: divergent thinking, which requires diverse views, and consensus-seeking critical evaluation. It is important to anticipate when each objective is needed and to plan the group process to pursue each objective separately, even to the point of insulating the two processes from one another through a two-stage strategy. Interestingly, this insight is supported by conceptual arguments from Darwinian evolutionary theory and from the modern computer science of machine learning (more on this important distinction in chapter 7). For now, keep in mind that both divergent and consensus processes are needed, but they work best when implemented in separate stages of the larger problem-solving strategy.

In line with our pleas for anxious leadership and for diversity, we suggest eight potential approaches: (1) inquisitive and self-silencing leaders; (2) “priming” critical thinking; (3) rewarding group success; (4) role assignment; (5) perspective changing; (6) devil’s advocates; (7) red teams; and (8) the Delphi method. We begin with the simplest and least formal and move toward approaches that have a higher degree of complexity and structure.

Inquisitive and Self-Silencing Leaders

Even if they have important information, some people are more likely to silence themselves than others. We have noted that group members are more likely to speak out if they have high social status or are confident about their own views.3 We have also referred to the complementary finding that members of low-status groups—less educated people, African Americans, and sometimes women—may have less influence within deliberating groups.4 On juries, lower-status members, as measured by their occupations and sex, have been found to be less active and less influential in deliberation.5 Not surprisingly, one of the greatest studies of the jury process, The American Jury, surveyed judges and juries in the early 1960s and found that men out-talked women two-to-one in jury deliberations.6 But we were surprised when, forty years later, analyzing our own jury deliberation data, we found that men still out-talked women two-to-one. In many companies, a similar situation undoubtedly holds.

Wise groups take careful account of these findings and take corrective steps. Revealingly, the risk that unshared information will have too little influence is much reduced when that information is held by a leader within a group. Not surprisingly, leaders are eager to share the information that they have. And the leaders’ words usually count, because people listen to what leaders have to say.7

Consider a revealing experiment. A medical team consisting of a resident physician, an intern, and a third-year medical student was asked to diagnose an illness. The team displayed a strong tendency to emphasize unshared items stressed by the resident, who was the highest-status member of the group. In this particular respect, the team did not fall prey to the problem of hidden profiles—because the resident’s information, even though uniquely held, was successfully transmitted to all group members.8 More generally, people who are experienced in the task at hand are more likely to mention and to repeat unshared information.

One reason for these findings is that those with higher status or competence are less subject to the social pressures that lead people to silence themselves. Another reason is that leaders and experts are more likely to think that their own information is most important and worth disclosing to the group—notwithstanding the fact that the information held by other group members cuts in the other direction.

The simplest lesson is that leaders and high-status members can do the group a big service by indicating their willingness and their desire to hear uniquely held information. If they are inquisitive, they are more likely to learn. Of course, time is scarce, but quick meetings in which leaders ask for information that has not yet emerged can be a big help. (In government, Sunstein created a “fifteen-minute rule,” meaning that staff meetings could not go over fifteen minutes unless there was special reason—and the short time ensured that a lot of information flooded the group from the start.) Leaders can also refuse to state a firm view at the outset and in that way allow space for more information to emerge.

“Priming” Critical Thinking

We have seen that when people silence themselves in deliberating groups, it is often because of social norms—because of a sense that their reputations will suffer and that they will be punished, not rewarded, for disclosing information that departs from their group’s inclination. Hidden profiles remain hidden in large part for this reason; those who disclose unique information face a risk of disapproval.

But none of this is inevitable. Social norms are not set in stone. If consensus is prized and known to be prized, then self-silencing will be more likely. But if the group is known to welcome new and competing information, then the reward structure will be fundamentally different and will support much better outcomes.

Social scientists have done a lot of work on the importance of priming—that is, triggering some association or thought in such a way as to affect people’s choices and behavior. If male teachers and female students are primed to think of gender, they act differently, more in conformity with gender stereotypes. Words like Wall Street increase people’s propensity to compete and reduce the chances they will cooperate. In one study, participants who thought that the name of the game they were asked to play was the Community Game were far more likely to work together than those who thought the name was the Wall Street Game.9

Self-silencing can be affected by priming as well. Striking evidence for this claim comes from hidden-profile experiments that prime people by asking them to engage in a prior task that involved either “getting along” or, instead, “critical thinking.”10 When the prime involved getting along, people sensed that their goal was to cooperate and to be friendly with one another. When the prime involved critical thinking, people felt that their goal was to arrive at the right solution. Once people were primed by a task that called for critical thinking, they were far more likely to disclose what they knew—and there was a substantial reduction of hidden profiles. Good team players think critically, and they do not always get along. Wise groups can prime that particular action.

This point applies to many organizations, including corporate boards. In the United States, the highest-performing companies tend to have “extremely contentious boards that regard dissent as an obligation” and that “have a good fight now and then.”11 Recall that unsuccessful investment clubs have little dissent and lose a great deal of money when the members are united by close social ties. By contrast, the best-performing investment clubs lack such ties and benefit from dissent and diversity.12

For both private and public groups, the general lesson is clear. If the group encourages disclosure of information—even if the information opposes the group’s inclination—then self-silencing will be reduced significantly. Deliberation will benefit as a result. Good social norms and a good culture can go a long way toward reducing the potentially bad effects of social pressures. Although there is no simple or automatic way to encourage disclosure, leadership makes a big difference and leaders can inculcate norms that help. Subtle pressures toward conformity can be avoided. Genuine (rather than merely symbolic) openness to different or disagreeable views, sometimes in the form of material rewards, matters a lot. Happy talk can be rewarded—or it can be countered with some version of, “OK, now tell me something I need to know.”

Rewarding Group Success

We have seen that people often silence themselves because they receive only a fraction of the benefits of disclosure. This point raises an obvious question: How would groups perform if individuals knew that they would be rewarded not when their own answers were correct, but when the majority of the group turned out to be correct?

In a situation of this kind, hidden profiles, cascades, and group polarization will be reduced. If people are rewarded when their group is right, they are far more likely to reveal what they actually know. In such a situation, incentives are changed, because people internalize the benefits of disclosure. Good managers are entirely aware of that fact.

Careful experiments show that it is possible to restructure incentives in just this way—and hence to reduce the likelihood of cascades.13 Cascades are far less likely when each individual knows that he has nothing to gain from a correct individual decision and everything to gain from a correct group decision. Groups produce much better outcomes when it is in the individual’s interest to report exactly what he sees or knows. It is the candid and fully informative announcement, from each person, that is most likely to promote an accurate group decision.
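
The underlying logic can be illustrated with a stylized simulation. The Python sketch below uses invented parameters and a deliberately crude follow-the-crowd rule; it is not a reconstruction of the cited experiments, only an illustration of why rewarding the group’s accuracy elicits private signals that a cascade would otherwise bury. The function `run_group` and its parameters are our own assumptions.

```python
import random

def run_group(n=11, p=0.7, reward="individual", seed=None):
    """Stylized model: n members guess a binary state (the truth is 1).
    Each receives a private signal that is correct with probability p
    and announces a guess in sequence."""
    rng = random.Random(seed)
    announcements = []
    for _ in range(n):
        signal = 1 if rng.random() < p else 0
        if reward == "group":
            # Rewarded only for a correct group decision: report the
            # private signal and let the majority decide.
            announcements.append(signal)
        else:
            # Rewarded for individual correctness: once earlier
            # announcements outweigh a single private signal, follow
            # the crowd, which is how an informational cascade starts.
            lead = 2 * sum(announcements) - len(announcements)
            if lead > 1:
                announcements.append(1)
            elif lead < -1:
                announcements.append(0)
            else:
                announcements.append(signal)
    return 2 * sum(announcements) > n  # did the majority get it right?

for scheme in ("individual", "group"):
    hits = sum(run_group(reward=scheme, seed=i) for i in range(10_000))
    print(f"{scheme:>10} rewards: majority right in {hits / 10_000:.0%} of runs")
```

Under these assumptions, the group-reward scheme reliably outperforms the cascade-prone one, for the simple reason that every private signal reaches the tally.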

An emphasis on the importance of group success can improve decisions in many real-world contexts, simply because the emphasis provides better access to more minds. Consider the case of whistle-blowing—a practice understood not merely as disclosing inappropriate or unlawful conduct, but more broadly as drawing attention to possible flaws in the group’s plans and actions. Whistle-blowing is often a product not of the whistle-blower’s narrow self-interest, but of the whistle-blower’s desire to ensure that the organization or group succeeds.

True, some whistle-blowers are malcontents, but many of them are sincerely trying to promote their group’s success rather than their own. In poorly functioning groups, whistle-blowers act at their own risk, which makes it less likely that people will become whistle-blowers at all. Wise groups welcome some whistle-blowing. Loyalty is important, but when people reveal information aimed at making a group work better, they are hardly being disloyal.

The general lesson is that identification with the group’s success is more likely to ensure that people will say what they know, regardless of whether it fits the party line. And if group members focus on their own personal prospects rather than those of the group, the group is more likely to err. Both social norms and material incentives can play crucial roles in establishing the priorities of group members. The challenge is to introduce one or the other, or both, into deliberating groups.

The Role of Roles

Experimental Evidence

Imagine a deliberating group consisting of people with specific roles that are appreciated and known by all group members. One person might have medical expertise; another might be a lawyer; a third might know about public relations; a fourth might be a statistician. In such a group, sensible information aggregation should be far more likely, simply because each member knows, in advance, that each of the others has something to contribute. Hidden profiles will be less likely to remain hidden if there is a division of labor, in which each person is known to be knowledgeable about something in particular.14 Specialists can be helpful, not only because of their specialized knowledge, but also because they will feel empowered to speak up.

Several experiments strongly support the hypothesis.15 In one such experiment, each member of a three-person group was given a good deal of independent information about one of three murder suspects.16 In half of these groups, the specific “expertise” of each member was publicly identified to everyone before discussion began. In the other half, there was no such public identification of the specialists. The bias in favor of shared information was much reduced in the groups with the publicly identified experts; the public identification operated as an effective device to solve the information-sharing problem. By contrast, the reduction of the bias was a lot smaller when experts were not identified publicly and when each group member was only privately told, by the experimenter, that he or she was an expert on a particular suspect.

The lesson is clear. If a group wants to obtain the information that its members hold, all group members should be told, before deliberation begins, that different members have different, and relevant, information to contribute. The effect of role assignment in reducing hidden profiles can be significant in solving the challenges of eliciting distributed information.

Diverse “Equities”

Let’s turn from behavioral lab experiments to the real world. As Sunstein observed up close, a strong form of role assignment occurs in the federal government. When the process works well, it makes government groups very wise. The EPA has many employees who know a lot about clean air and clean water. Those at the US Department of Agriculture are experts on farming and agriculture. The Department of Transportation knows about highways and railroads. The Department of State and the Office of the US Trade Representative are concerned with international relationships, and they know about the effects of American policies on other nations and on US relationships with these nations.

In his role as administrator of OIRA, Sunstein frequently saw good decisions emerge from processes in which every agency’s role, with its distinctive expertise and perspective, was respected. In fact, part of Sunstein’s job was to make sure that those involved in making decisions listened to different people, with their different roles. When the system worked well, there were no hidden profiles and people did not become polarized or join cascades.

In government, people commonly speak of the equities of various participants in discussions. It’s an ugly term, to be sure, but a daily part of government-speak. (While we are at it, other awful terms include do-out, for tasks that follow meetings; deliverables, for products of work; loop in, meaning to include someone; and circle back, for getting back to someone. Ugh, to be sure.) The word equities is a shorthand way to signal that diverse people’s views and roles matter and require independent attention, concern, and respect.

For those who work in government, talk of equities can be frustrating and even infuriating. Those who engage in such talk can be obstacles to the effort to obtain consensus, and a consensus is often necessary to get anything done. What’s worse, the term suggests that participants ought to focus not on what is right, but on the arguments and concerns that grow out of their distinctive positions. But when the process is working well, the idea of equities is indispensable. It licenses people to provide information and perspectives that reflect their own role—and thus ensures that important information doesn’t get lost. Even if a leader isn’t especially wise about information pooling, participation by people with diverse equities can ensure that the group learns what it needs to know.

One reason that government sometimes decides badly is that diverse people do not end up in the most important rooms. The simple lesson: leaders in both private and public sectors do well when they ensure that people are assigned different tasks and roles.

Perspective Changing

Here’s a useful tactic, one that applies a version of the idea of role assignment. Suppose that you are a leader of an organization and that it is not doing well, perhaps because it is stuck in old ways of thinking. Recall that even more than individuals, groups stick to courses of action that are failing. What can you do?

Intel Corporation, a large American company, faced exactly this problem in the 1980s. After fourteen years of profits, it was losing a lot of business in the memory chip market, which it had pioneered. In a dramatic move, the company decided to abandon the entire market.17 Andrew Grove, then Intel’s president (and later its CEO), explained the decision:

I remember a time in the middle of 1985, after this aimless wandering had been going on for almost a year. I was in my office with Intel’s chairman and CEO, Gordon Moore, and we were discussing our quandary. Our mood was downbeat. I looked out the window at the Ferris wheel of the Great America amusement park revolving in the distance, then I turned back to Gordon and I asked, “If we got kicked out and the board brought in a new CEO, what do you think he would do?” Gordon answered without hesitation, “He would get us out of memories.” I stared at him, numb, then said, “Why shouldn’t you and I walk out the door, come back and do it ourselves?”18

This is a profound story, because Grove was able to shock himself out of his routine thought processes by a purely hypothetical role assignment, in which he asked Moore, and himself, what a new CEO would do. That question changed his perspective by creating a critical distance. For Intel, it initiated a spectacularly successful new strategy. The story suggests that when a group is aimlessly wandering or on a path that does not seem so good, it is an excellent idea to ask, “If we brought in new leadership, what would it do?” Asking that simple question can break through a host of conceptual traps.

Speak of the Devil

If hidden profiles and self-silencing are the source of group failure, then an obvious response is to ask some group members to act as devil’s advocates, deliberately advocating a position that is contrary to the group’s inclination. Throughout recent history, both public and private sectors have shown a lot of interest in devil’s advocacy. The basic idea was a central suggestion of both the 2005 Senate committee reporting on intelligence failures in connection with the Iraq War and the 2007 review board that investigated a series of blunders at the National Aeronautics and Space Administration (NASA).

Some successful leaders show an implicit awareness of the importance of listening to devils. Such leaders try to elicit diverse views not through any kind of formal process, but by indicating their own agreement with different (and incompatible) positions. Consider the distinctive practice of President Franklin Delano Roosevelt, who sometimes took the ingenious approach of indicating his agreement with people having inconsistent positions. All the advisers thought that the president agreed with them, so they were emboldened to develop and elaborate their diverging opinions.

Roosevelt’s approach gave people the confidence and license to produce the best arguments on behalf of their positions. When Roosevelt made up his mind, it was only after he heard a sincere and rigorous statement of conflicting positions, offered by people who believed, firmly if erroneously, that the president shared their views. No doubt, his advisers sometimes were upset when they finally learned what he really thought.

Roosevelt’s practice was a good one, because in any White House, presidential advisers are greatly tempted to silence their doubts and to provide a unified and supportive front. To say the least, the president is busy, and he has the weight of the world on his shoulders; many advisers do not want to trouble him or to create internal disagreement. They also want the president to like and approve of them, and those who challenge the president’s view, or the views of other advisers, risk incurring disapproval. For this reason, some White House advisers have a tendency to offer happy talk to the president. This practice can avoid conflict and unpleasantness, but is not in the interest of the president, the nation, or the world. Of course, the president is unique, but something analogous can be said about the boss at many organizations.

The idea of the devil’s advocate is meant to formalize the commitment to the expression of dissenting viewpoints. In at least one well-known case, the approach appeared to work. As Irving Janis described, “during the Cuban missile crisis, President Kennedy gave his brother, the Attorney General, the unambiguous mission of playing devil’s advocate, with seemingly excellent results in breaking up a premature consensus”—a consensus that might well have led to war.19 It is worthwhile to wonder whether and when similar assignments might have proved helpful with the presidents who followed Kennedy.

By their very nature, those assuming the role of devil’s advocate are able to avoid the social pressure that comes from rejecting the dominant position within the group. After all, they are charged with doing precisely that. And because they are specifically asked to take a contrary position, they are freed from the informational influences that can lead to self-silencing.

Hidden profiles are a lot less likely to remain hidden if a devil’s advocate is directed to disclose the information she has, even if that information runs contrary to the apparent consensus within the group. In groups that are at risk of hidden profiles, a devil’s advocate should help a great deal. For groups that seek to get wiser, it would seem sensible to appoint devil’s advocates.

So much for theory. Unfortunately, we cannot give a strong endorsement of this approach, because research on devil’s advocacy in small groups provides mixed support. True, there is some evidence for the view that devil’s advocates can be helpful.20 Many experiments do find that genuine dissenting views can enhance group performance.21

But there is a difference between authentic dissent and a formal requirement of devil’s advocacy; an assigned devil’s advocate does far less to improve group performance. One reason is that any such requirement is artificial—a kind of exercise or game—and group members are aware of that fact. The devil’s advocate can seem to be just going through the motions. And indeed, when an advocate’s challenges to a group consensus are insincere, group members discount the arguments accordingly. At best, the devil’s advocate facilitates a more sophisticated inquiry into the problem at hand.22

Because arbitrarily selected devil’s advocates are acting out a role and have no real incentive to sway the group’s members to their side, they succeed in their assigned role even if they allow the consensus view to refute their unpopular arguments. Unlike a genuine dissenter, the devil’s advocate has little to gain by zealously challenging the dominant view—and as a result such advocates often fail to vigorously challenge the consensus.23

The lesson is that if devil’s advocacy is to work, it will be because the dissenter actually means what she is saying. If so, better decisions should be expected. If not, the exercise will turn out to be a mere formality, with little corrective power. Considering the existing evidence, we suggest that it is a lot better for groups to encourage real dissent—by, for example, assigning roles to experts representing different knowledge sets or perspectives—than to appoint formal, artificial dissenters. Designating groups of dissenters composed of members who, before deliberation, sincerely favor different solutions can solve some hidden-profile problems.24

Contrarian Teams

Another method, related to the appointment of a devil’s advocate but more effective according to existing research, is called red teaming. This method has been extensively applied to military teamwork, but it can be applied in a lot of domains, including business and government.25 Any implementation plan, if it has a high level of ambition, might benefit from red teaming.

Red teaming involves the creation of a team that is given the task of criticizing or defeating a primary team’s plans to execute a mission. There are two basic forms of red teams: those that play an adversary role and attempt to defeat the primary team in a simulated mission, and those given the same instructions as a devil’s advocate, namely to construct the strongest case against a proposal or plan. Even an artificial role assignment can be powerful enough to produce substantial improvements, if the assignment is to more than one dissenter. It is as if having more than one dissenter provides social proof of the validity, or at least the significance, of the divergent views. Anxious people can also operate as the functional equivalent of red teams, and sometimes they enlist red teams to test worst-case scenarios.

Versions of this method are used in all branches of the military and in many government offices. (Important government regulations are sometimes evaluated with the help of informal red teams.) In industry, some firms offer red-teaming services to other companies, as in the case of “white-hat hackers” who are paid to attempt to subvert software security systems and to penetrate corporate firewalls.

Within law firms, there has been a long tradition of pretrials, or the testing of arguments with the equivalent of red teams. In important cases, such efforts can reach the level of hiring attorneys from a second firm to develop and present a case against advocates from the primary firm. Often these adversarial tests are conducted before mock juries so that the success of the primary and red-team arguments can be evaluated through the eyes of citizens similar to those who will render the ultimate courtroom verdict.

One size does not fit all, and the cost and feasibility of red teams will vary from one organization to another. But in many contexts, red teams are an excellent idea, especially if they are sincerely motivated to find mistakes and to exploit vulnerabilities and are given clear incentives to do exactly that.

The Delphi Method

Is there a way to obtain the benefits of individually held information while also supporting the kinds of learning that deliberation is designed to promote? To overcome social influences, leaders might simply ask people to state their opinions anonymously and independently, maybe before deliberation, so that their private statements are available to others. The secret ballot can be understood as an effort to insulate people from social pressures and to permit them to say what they actually believe. Some groups should consider more routine use of the preliminary secret ballot, simply to elicit more information. The challenge is then to answer this question: What should groups do with the votes that emerge from secret ballots?

As an ambitious effort to answer that question, consider the Delphi method, a formal process for aggregating the views of group members. The Delphi method, which can be undertaken via computer networks or in a traditional meeting, has several key features.26 In many of its applications, the method is simply a version of averaging that enlists social learning.

Here’s how it works. Individuals offer first-round estimates (or votes) in complete anonymity. Then a cycle of reestimations (or repeated voting) occurs, with a statistical requirement that forces convergence: the second-round individual votes have to fall within the middle quartiles (the 25th to 75th percentiles) of the first round’s estimates. This process is repeated until group members converge on a single point estimate. Although this is essentially a social averaging process, people need not converge on the mathematical average or median, because individuals can use the distributions of prior estimates from their colleagues to inform their new votes. The process allows, for example, a very confident (or stubborn) voter to exert more weight on the final outcome than a purely mechanical average would give her.
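
To make the mechanics concrete, here is a minimal sketch in Python of one way to simulate the cycle. The `revise` updater, its `stubbornness` parameter, and the panel’s numbers are our own illustrative assumptions; a real Delphi panel aggregates human judgments, not a formula.

```python
import statistics

def delphi(first_round, revise, max_rounds=50, tol=0.01):
    """Run anonymous estimation rounds until the panel converges.

    first_round : initial anonymous numeric estimates, one per panelist
    revise(own, low, high, everyone) -> a panelist's next-round estimate
    """
    estimates = list(first_round)
    for _ in range(max_rounds):
        # Convergence-forcing constraint: next-round votes must fall
        # within the middle quartiles of the current distribution.
        q1, _, q3 = statistics.quantiles(estimates, n=4)
        estimates = [min(max(revise(e, q1, q3, estimates), q1), q3)
                     for e in estimates]
        if max(estimates) - min(estimates) < tol:
            break
    return statistics.median(estimates)

def partial_adjust(own, low, high, everyone, stubbornness=0.5):
    # A panelist moves only partway toward the group median; the more
    # stubborn she is, the more the final point is pulled her way.
    return own + (1 - stubbornness) * (statistics.median(everyone) - own)

panel = [100.0, 120.0, 90.0, 200.0, 110.0]  # hypothetical first-round estimates
print(delphi(panel, partial_adjust))
```

The clamp to the interquartile range is what forces convergence; loosen it, and the procedure becomes ordinary iterated polling.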

The Delphi method has a quirky history, having been invented at the RAND Corporation to assist military and diplomatic analysis of cold war scenarios. Originally it was applied to forecast military-technological developments and to anticipate the actions of an opponent in a strategic diplomatic negotiation or war. Like many RAND inventions, its applications have shifted to peacetime questions, usually concerned with forecasting events in politics (e.g., regime shifts, election outcomes) and industry (e.g., sales volumes, unemployment rates). In some recent applications, the method has been used to promote information sharing, rather than forcing consensus on estimates or solutions. In one application, more than a thousand contributions were solicited for a proposal to develop international web-based communication systems in Central and South America.27

The Delphi method has some significant virtues. First, it ensures the initial anonymity of all members through a purely private statement of views. The purpose of anonymity is precisely “to diminish the effects of social pressures, as from vocally dominant or dogmatic individuals, or from a majority.”28 Second, people are given an opportunity to offer feedback on one another’s views. Group members are permitted to communicate—sometimes fully, but sometimes only their ultimate conclusions. The conclusions, given anonymously, are always provided to others by a facilitator, sometimes in the form of a statistical summary or a tabulation (histogram) of all the responses. Thus “the feedback comprises the opinions and judgments of all group members and not just the most vocal.”29 Finally, after the relevant communication, the judgments of group members are elicited again and subjected to another statistical aggregation.

In several contexts, the Delphi method has worked well.30 For general-knowledge questions, the method yielded better answers than did individual estimates—though open discussion did still better, apparently because it served to correct errors.31 Note here that the Delphi method is more successful when group members are provided not only with the mean or median estimate, but also with the reasons that group members have for their views.32 An account of reasons is most likely to move people in the direction of the correct answer.33 New technologies can easily be enlisted in the use of the Delphi method.

In one experiment, people were asked to consider specified heights and weights and to say whether people with those heights and weights are more likely to be male or female.34 Deliberating groups did no better than statistical groups on that task; often, deliberating groups did worse. (Recall from chapter 1 that a statistical group is one whose members do not confer with one another before they make a decision; instead, the members’ solutions are combined objectively, usually with a mathematical equation.) But the authors tried a third method of aggregating opinions, one close to the Delphi method. Under that method, people were asked to make private estimates initially. After a period of discussion, people were then asked to make final estimates. These were the most successful groups. A simple estimate-talk-estimate approach radically reduced errors.
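
For concreteness, here is how the two aggregation schemes differ in mechanics, in a minimal Python sketch; all the numbers, including the post-discussion revisions, are purely hypothetical.

```python
import statistics

# Hypothetical private estimates (probabilities that a described person is male).
first_estimates = [0.80, 0.55, 0.70, 0.40, 0.65]

# Statistical group: no discussion; answers are combined mechanically.
statistical_answer = statistics.mean(first_estimates)

# Estimate-talk-estimate: members revise privately after discussion,
# and the revised answers are combined in the same mechanical way.
revised_estimates = [0.74, 0.62, 0.69, 0.55, 0.66]  # assumed revisions
ete_answer = statistics.mean(revised_estimates)

print(f"statistical group: {statistical_answer:.2f}; "
      f"estimate-talk-estimate: {ete_answer:.2f}")
```

Both schemes end in a mechanical combination; what estimate-talk-estimate adds is the chance for discussion to improve the inputs before they are combined.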

A natural alternative to the Delphi method would be a system in which ultimate judgments are stated anonymously, but only after deliberation. Anonymity, both in advance and at the conclusion, would insulate group members from reputational pressure and, to that extent, could reduce the problem of self-silencing. Wise groups should be experimenting with the Delphi method or at least with less formal variations. Such groups need not worry over close adherence to the particulars of the Delphi method; they can act more informally to combine a degree of initial anonymity with some room for discussion.

We have covered a number of approaches here, from the relatively simple to the more formalized. The evidence suggests that all of them can make groups wiser. Managers should not be pests, but they can jerk people out of complacency and encourage the group to spend some time on worst-case scenarios. All of our approaches can be seen as efforts to make groups a bit less cheerful. In our own experience, role assignment has particular promise and should be used far more widely than it now is, certainly if the assignment is designed not as part of a game, but as a serious endeavor.

Playing Moneyball

We have referred to Michael Lewis’s Moneyball, which demonstrated that statistical analysis outperforms judgments made on the basis of intuition, anecdotes, and experience. Our emphasis in this chapter has been on how to improve group performance, but we offer a final note, which involves the importance of relying on data and evidence whenever you can. Within the federal government, cost-benefit analysis has become the coin of the realm and is an important corrective to individual and group errors. If you can obtain and analyze data, you can often overcome most of the problems we have identified, because many of those problems stem from losing contact with reality. Nothing injects reality into a discussion, and banishes wishful thinking and biased speculation, as effectively as empirical evidence, especially in the form of data and numbers.
