CHAPTER 1

From High Hopes to Fiascos

When managers and other leaders are deciding how to proceed, they usually talk the problem through. But why, exactly, is it helpful to talk the problem through? Why and when is deliberation important or even desirable?

A big part of the answer must be that if people talk to one another, they will end up with wiser judgments and better outcomes. But does deliberation actually have this effect? This is a crucial question, and an empirical one, which cannot be answered by intuition or anecdotes. By imposing pressure on one another, group members may reach a consensus on falsehood rather than truth. A group of like-minded people, prone to error and with similar inclinations, is particularly vulnerable to this problem. If a bunch of people think that a complex government program is going to work immediately or that an untested, new product will be a big hit, we might have a bad case of happy talk.

To explain why groups go wrong when they deliberate, we investigate two types of influences on group members. The first type involves informational signals, which lead people to fail to disclose what they know out of respect for the information publicly announced by others. In the federal government, for example, people might silence themselves because they think that an official who does not share their views and who has his own information must be right. If the secretary of defense has a strong conviction about whether military intervention is a good idea, the people who work for the secretary might shut up, not because they agree, but because they think that the secretary probably knows what he is doing.

In the private and public sectors, leaders often seem to have a halo, which makes them appear unusually sharp and smart. Their jokes are funnier, their wisdom is wiser, their perspective wider, their questions more probing. In government, Sunstein noticed exactly this phenomenon, with civil servants occasionally treating his own tentative and insufficiently informed judgments as if they were far cleverer than they actually were. The halo can be fun to experience, especially if you have an ego, but it is also a real problem. It encourages happy talk and makes the group more likely to err. Anxious employees provide an important corrective, because they are willing to wonder whether the leaders are right. And if the leaders themselves are anxious—if they have a smile and personal warmth but also a troubled little voice in their heads asking, What am I missing here?—they will make their groups better.

The second type of influences involves social pressures, which lead people to silence themselves to avoid various penalties. In many cases, what matters is the mere disapproval of others, but if those others are important, the disapproval could lead to serious personal risks. Within firms, people often stay quiet and decline to disclose what they know, not because what they know is unimportant, but because they do not want to seem foolish or disagreeable. They are especially unlikely to speak up if the leaders or most others in the group seem to have a clear conviction. Is it really worthwhile to make the boss sad or mad?

From the boss’s point of view, the answer should be yes, because the boss might learn something. But some bosses don’t see things that way, and many employees know that it might well be better just to shut up. Here again, the right kind of anxiety can go a long way, because anxious employees will not worry much about social pressures and because anxious bosses welcome a wide range of views.

As a result of these two types of influences, groups run into four independent problems:

  • Groups do not merely fail to correct the errors of their members; they actually amplify those errors.
  • Groups fall victim to cascade effects, as group members follow the statements and actions of those who spoke or acted first, even if those statements and actions lead the group in unfortunate, terrible, or tragic directions.
  • Groups become more polarized, ending up in more extreme positions in line with the predeliberation tendencies of their members—such as when a group of people, inclined toward excessive optimism, becomes still more optimistic as a result of internal discussions.
  • Groups focus on shared information—what everybody knows already—at the expense of unshared information and thus fail to obtain the benefit of critical and perhaps troubling information that one or a few people have.

Because of these problems, groups often fail to achieve their minimal goals of correcting individual mistakes and aggregating the information actually held by their members. A confident, cohesive, but error-prone group is nothing to celebrate. On the contrary, it might be extremely dangerous both to itself and to others. A promising start-up may fail as a result. A government agency might waste taxpayer money; a new federal program might fail. A large business may continue on a badly mistaken course, even though it should have pulled the plug long ago. A law firm might press a doomed litigation strategy. When groups make poor or self-destructive decisions, one of these four problems is usually the explanation.

Positive Prospects

All over the world, deliberating groups start with high hopes. They’re especially likely to be hopeful if each member thinks well of the others—if people are friendly, respectful, and engaged with one another socially as well as professionally. And while we will spend a lot of time showing why optimism about group decisions is often wrong, it is sometimes right. If deliberating groups do well, we can imagine three principal reasons.

1. Groups are equivalent to their best members. One or more group members might know the right answer, and other members might become convinced that this answer is right and therefore accept it. If group members are listening, the group will perform at the level of its best members. If many or at least some members suffer from ignorance or from a form of bias that leads to error, others should correct them. Deliberation can fix individual errors rather than propagate them, allowing the group to converge on the judgment of its wisest or most accurate member.

Imagine, for example, that fifteen people are trying to make a prediction about the likely fate of a product, and that one of the fifteen is both an expert and a superb prognosticator. Maybe the other group members will quickly see that they have an expert in their midst, and they will follow this person’s lead. Consider “eureka” problems, in which the right answer, once announced, is clear to all. A trivial example: Why are manhole covers round? Answer: Because if they were almost any other shape, a loose cover could shift orientation and fall through the hole, potentially causing damage and injuries. (Of course!)

There are less trivial cases, requiring clever solutions to seemingly intractable problems, where the solution, once announced, is immediately clear to all. For such problems, groups should be expected to agree on the answer as announced by the member who actually knows it. Someone may know, for example, that a new tablet or cell phone has a fatal (but subtle) flaw, or that a website just isn’t ready for prime time, or that a military strike is unlikely to have its intended effect.

2. The whole is the sum of the parts: aggregating information. As Aristotle suggested, deliberation could help people to share existing information in a way that leads the group as a whole to know more than any individual member does. Suppose that no member of the group is an expert on every aspect of a question, but that helpful information is dispersed among the members, so that the group is potentially expert even if its members, considered individually, are not. Well-functioning companies often create cross-functional teams to aggregate information in just this way. If everyone is working together and listening to one another, the organization will be able to aggregate information to get a full picture of, for example, the effects of a proposed rule designed to reduce air pollution from power plants.

Or suppose that the group contains a number of experts, but that each member is puzzled about how to solve a particular problem. Deliberation might elicit the relevant information and allow the group to make a sensible judgment. In this process, the whole is equal to the sum of the parts, and the sum of the parts is exactly what is sought. No member can have all the parts.

When a group is trying to solve a crossword puzzle, something of this kind often occurs, as different group members contribute what they know. Many problems are like crossword puzzles in the sense that small groups do better than individuals, and large groups do better than small ones—simply because each person knows something that others do not, and it is easy to share and combine the dispersed information. As we will see, social media and prediction markets often do well for similar reasons.

3. The whole goes beyond the sum of the parts: synergy. The give-and-take of group discussion might sift information and perspectives in a way that leads the group to discover an innovative solution to a problem—a solution in which the whole is actually more than the sum of its parts. In such cases, deliberation is a powerful form of information aggregation, through which the exchange of views leads to a creative answer or solution. If a group is seeking to improve the design of an automobile, a tennis racquet, a tablet, or a cell phone—or to come up with the right response to a threat to national security—the exchange of ideas can produce creative solutions that go far beyond any simple aggregation of what group members thought before they started to talk. The same is true if people are trying to write a play together or to come up with a sensible policy to deal with the problem of childhood obesity. And in fact, groups can be highly innovative, especially if divergent thinking is nurtured and minority views are welcome.

Confident and Unified

To what extent do these three mechanisms work in practice? Two points are entirely clear.

First, group members tend to become far more confident of their judgments after they speak with one another.1 A major effect of group interactions is a greater sense that the postdeliberation conclusion is correct—whether it actually is or not. One reason is that corroboration by others increases people’s confidence in their judgments.2 If your colleagues or friends tell you that you are right, you are more likely to think that you are right, even if you are pretty confused (and wrong).

Confidence is certainly a good thing, but as we will see, it is not so good if people end up being both confident and wrong. A great risk of group deliberation is that it will simultaneously produce great confidence and grave error. Leaders are particularly likely to get in trouble for this reason. Group members tend to want to please their leaders, and so they will not tell the group what it needs to know.

Second, deliberation usually reduces variance. After talking together, group members tend to come into accord with one another. Before deliberation, group members are often far apart; that is one reason that deliberation seems important or necessary. After deliberation, they tend to come into agreement (especially but not only if they are close-knit or have frequent interactions). It follows that members of deliberating groups will converge on a position on which members have a great deal of confidence. We will soon see some examples of this phenomenon in Colorado, where discussion led both liberals and conservatives to end up unified on major political questions, but with a wider chasm of disagreement between the two groups after discussion.

Convergence on a particular position is fine, of course, if that position is also likely to be correct. But if it is not, then group members will end up sharing a view in which they firmly believe, but which turns out to be wrong. Most of us are familiar with situations in which groups are confident, unified, but mistaken. The Bush administration’s belief that Saddam Hussein had weapons of mass destruction is one example. The Obama administration’s unwarranted confidence in the initially disastrous HealthCare.gov website, prelaunch, is another.

Truth versus Majority Rule

Unfortunately, there is no good evidence that deliberating groups will consistently succeed in aggregating the information held by their members. The basic lesson is that people pay a lot of attention to what other group members say and do—and that they do not end up converging on the truth. In fact, they often ignore their own beliefs and say that they believe what other people believe. There is a clear warning here about the potential effects of not only group deliberation but also social media, which can lead people to accept falsehoods.3

Consider this example from recent research. Suppose that people are provided with the following statement: “Oatmeal contains soluble fiber, which reduces your low-density lipoprotein (LDL), the ‘bad’ cholesterol.” (As it happens, this statement is true.) If people are informed of the true-false rating given by “a majority of others like you” (on a scale of 1 to 6, from “definitely false” to “definitely true”), they are much influenced by the crowd. Not surprisingly, people tend to go along with the crowd if they independently think that the crowd is right. Also not surprisingly, people tend to go along with the crowd if they believe that the answer is unclear and debatable.

But strikingly, people also tend to go along with the crowd if the answer is false, even if they have independent reason to believe that it is false. The authors conclude that people’s answers come close to supporting the hypothesis that “people always follow the collective credibility rating, even when they are sure that the statement is true or false.”4 Here is a major warning for managers who might find that their employees agree on some course of action, not because they have reason to think that it is right, but because they think that most other people think that it is right.

A classic study demonstrates that majority pressures can be powerful even for factual questions to which some people know the right answers.5 The study involved twelve hundred people, forming groups of six, five, and four members. Individuals were asked true-false questions involving art, poetry, public opinion, geography, economics, and politics. They were then asked to assemble into groups that discussed the questions and produced answers. The views of the majority played a dominant role in determining each group’s answers; people tended to go along with what most people thought.

The truth played a role, too, but a lesser one. If a majority of individuals in the group gave the right answer, the group’s decision moved toward the majority in 79 percent of the cases. If a majority of individuals in the group gave the wrong answer, the group’s decision nonetheless moved toward the majority in 56 percent of the cases.

The truth did have an influence—79 percent is higher than 56 percent—but the majority’s judgment was the dominant one. And because the majority was influential even when wrong, the average group decision was right only slightly more often than the average individual decision (66 percent versus 62 percent). What is most important is that groups did not perform nearly as well as they would have if they had properly aggregated the information that group members had.
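To see how majority dominance caps what deliberation can deliver, consider a minimal simulation. This is our illustrative sketch, not the study’s method: each member of a five-person group is independently right 62 percent of the time (the study’s average individual accuracy), and the group simply adopts whatever the majority believes before any discussion.

    import random

    random.seed(0)
    P_CORRECT = 0.62   # average individual accuracy, taken from the study
    GROUP_SIZE = 5     # the study used groups of four, five, and six members
    TRIALS = 100_000

    group_right = 0
    for _ in range(TRIALS):
        # count how many members independently hold the correct answer
        correct_votes = sum(random.random() < P_CORRECT for _ in range(GROUP_SIZE))
        if correct_votes > GROUP_SIZE // 2:   # the majority is right
            group_right += 1

    print(f"majority-rule group accuracy: {group_right / TRIALS:.2f}")

With fully independent votes, simple majority rule would push group accuracy to roughly 72 percent here (the familiar Condorcet logic). The study’s actual groups reached only 66 percent and followed wrong majorities 56 percent of the time, which suggests that deliberation added conformity pressure rather than independent information.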

Other studies find that with respect to questions that have definite answers, deliberating groups tend to do about as well as or slightly better than their average members—but not as well as their best members.6 (So much for Aristotle.) So group members usually do not end up deferring to their internal specialists. Often truth does not win out, even if someone in the group knows what is true.

The most that can be said is that groups will often converge on the truth if (1) the truth begins with at least some initial support within the group and (2) the task has a demonstrably correct answer.7 The second condition is especially hard to fulfill. In business and in government, many groups have to make predictions: Will a new employee perform well? Who should be promoted? Will a product sell? Will an environmental regulation save a lot of lives? Alas, in such cases, no answer may be demonstrably correct. When a group outperforms most of its individual members, it often does so because the issue is one for which a particular answer can be demonstrated, to the satisfaction of all or most, to be right. Even in that situation, which is not so usual, the group might not do well if the demonstrably correct solution lacks much support at the outset.

A good way to predict the judgment of a group is to ask: What did the majority think, before people started to talk? For optimists about group judgments, that’s unfortunate. More specifically, and depending on the task, the central tendency of group members before they start to talk provides a pretty good forecast of how the group as a whole will end up. It follows that if the majority is wrong, the group will usually be wrong as well. With groups of experts, the same conclusion holds.

In fact, group discussion may turn out to be less helpful than a simple effort to elicit people’s independent views. In the words of Scott Armstrong, an expert on forecasting at the Wharton School, a “structured approach for combining independent forecasts is invariably more accurate” than “traditional group meetings,” which do “not use information efficiently.”8 Those are strong words, but there’s a lot of truth in them. Let’s now turn to this claim.

Statistical Groups versus Deliberating Groups

A great deal of evidence suggests that under certain conditions, a promising way to answer factual questions is this: put the question to a large number of people, and take the average answer. As emphasized by James Surowiecki in his engaging and illuminating book, The Wisdom of Crowds, the average answer, which we might describe as the answer of a statistical group, is often accurate, where accuracy is measured by reference to objective facts.9

Many of the studies of statistical groups involve quantitative estimates, and the studies seem to show a kind of magic, potentially used by businesses and governments hoping to work some magic of their own. Consider a few examples:

  • In an early study, Hazel Knight asked college students to estimate the temperature of a classroom.10 Individual judgments ranged from 60 degrees to 85 degrees; the statistical judgment of the group was 72.4 degrees, very close to the actual temperature of 72 degrees. That estimate was better than 80 percent of the individual judgments.
  • When people are judging the number of beans in a jar, the group average is almost always better than the judgments of the vast majority of individual members. In one such experiment, a group of fifty-six students was asked about a jar containing 850 beans; the group estimate was 871, more accurate than that of all but one of the students.11
  • The British scientist Francis Galton sought to draw lessons about collective intelligence by examining a competition in which contestants attempted to judge the weight of a fat ox at a regional fair in England. The ox weighed 1,198 pounds; the average estimate, from the 787 contestants, was 1,197 pounds, more accurate than any individual’s guess.12

In light of these findings, many groups might want to answer questions not through deliberation, but simply by consulting people and selecting the average response. Imagine that a large company is attempting to project its sales for certain products in the following year; maybe the company needs an accurate projection to know how much to spend on labor and promotion. Might it do best to poll its salespeople and trust the average number? Armstrong suggests that the answer is often yes.13 Or suppose that a company is deciding whether to hire a new employee. Should it rely not on deliberation, but on the average view of its relevant personnel?
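A minimal sketch, with purely hypothetical numbers, shows why the arithmetic of averaging is so friendly. Suppose ten salespeople make unbiased but noisy forecasts of next year’s sales; their individual errors tend to cancel in the mean.

    import random

    random.seed(1)
    TRUE_SALES = 10_000  # hypothetical true figure, unknown to the forecasters
    # unbiased forecasts with independent noise
    estimates = [random.gauss(TRUE_SALES, 1_500) for _ in range(10)]

    group_answer = sum(estimates) / len(estimates)
    group_error = abs(group_answer - TRUE_SALES)
    beaten = sum(abs(e - TRUE_SALES) > group_error for e in estimates)

    print(f"group average: {group_answer:,.0f} (error: {group_error:,.0f})")
    print(f"the average beats {beaten} of {len(estimates)} individual forecasters")

When errors are independent and roughly unbiased, the expected error of the mean shrinks with the square root of the group size. When errors share a common bias, such as the excessive optimism noted earlier, averaging cannot remove it.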

In part 2, we will return to these questions and recommend some methods that can improve on simple arithmetic averaging (prediction markets and the Delphi method). For now, let us simply note that statistical groups often do better than deliberating groups, because the former rigorously incorporate all the information that their members have.

Two Sources of Self-Silencing

There are two reasons why exposure to the views of others might lead people not to disclose or act on what they know.

Informational Signals

The first reason people might stay silent involves the information conveyed by what other people say and do. Suppose that most people in your group believe that some proposition is true. If so, you have reason to believe that the proposition is in fact true, and this reason might seem to outweigh your purely private reasons to believe that the proposition is false. Sensibly enough, you might disregard your own information if most group members disagree with where you are headed. The informational signal from the group is strong enough to trump your personal one.

It is in part for this reason that whenever group members are isolated or in the minority, they might not speak out. They might simply defer to the informational signal provided by the statements or actions of others. In a law firm, for example, most group members might be quite optimistic about their likely success in court. (We have noted that human beings have a pervasive tendency toward unrealistic optimism.) Their optimism might lead the firm’s skeptics to silence themselves on the ground that their own judgments must be ill informed or wrong. The problem is that if the skeptics refrain from speaking out, the group could lose important information—and depending on the context, the consequences could be serious or even catastrophic.

A famous and revealing example is the failed American invasion of Cuba at the Bay of Pigs. The invasion, designed to overthrow the revolutionary Cuban government headed by Fidel Castro, failed because President Kennedy’s advisers said nothing, even when they had reason to believe that the mission would not succeed. In the event, two American supply ships were sunk by Cuban planes, two ships fled, and four did not arrive in time. The Cuban army, consisting of twenty thousand well-trained soldiers, killed a number of the invaders and captured most of the remaining twelve hundred. The United States was able to obtain the release of the prisoners, but only in return for $53 million in food and medicine, alongside international opprobrium and a strengthening of relations between Cuba and the Soviet Union.

Soon after the failure, President Kennedy asked, “How could I have been so stupid to let them go ahead?”14 Kennedy’s advisers were an exceptionally experienced and talented group. Notwithstanding their experience and talent, no member of that group opposed the invasion or proposed alternatives. Some of Kennedy’s advisers entertained private doubts, but they “never pressed, partly out of a fear of being labeled ‘soft’ or undaring in the eyes of their colleagues.”15

The failure to press those doubts mattered. According to Arthur Schlesinger Jr., a participant in those meetings, Kennedy’s “senior officials . . . were unanimous for going ahead . . . Had one senior advisor opposed the adventure, I believe that Kennedy would have canceled it. No one spoke against it.”16 Schlesinger suppressed his own doubts but did not object: “In the months after the Bay of Pigs I bitterly reproached myself for having kept so silent during those crucial discussions . . . I can only explain my failure to do more than raise a few timid questions by reporting that one’s impulse to blow the whistle on this nonsense was simply undone by the circumstances of the discussion.”17

As the Bay of Pigs example suggests, the strength of the informational signal will depend on how many people are giving it, and on how admired, intimidating, or powerful they are. If the group contains one or more people who are known to be authorities or who otherwise command a lot of respect, then other group members are likely to silence themselves out of deference to the perceived or real authority. Moreover, people do not like being sole dissenters. If all but one person in a deliberating group have said that some proposition is true, then the remaining member might well agree that that proposition is true, even to the point of ignoring the evidence of his or her own senses.

The psychologist Solomon Asch memorably established this point in his famous experiments involving judgments of the lengths of straight lines drawn on cards, in which he found that most group members were willing, at least once, to defer to the group’s clearly false judgments, at least when members of the group were otherwise unanimous.18 In other words, people were prepared to ignore the evidence of their own senses in order to agree with everyone else. If people will ignore their own visually clear perceptions of simple straight lines and defer to others, then surely they are likely to defer to others on the hard and complicated questions that face businesses and governments.

Here too, the group is at risk of throwing away information that it needs. It has to find a way to get an honest answer to this question: What do you see?

Social Incentives

The second reason for self-silencing involves the consequences of speaking up and dissenting. People might be silent not because they think that they must be wrong (as in the case of informational pressure), but instead to avoid the risk of social punishment of various sorts. In fact, social influences undoubtedly contributed to self-silencing in the Kennedy White House. Even in societies and organizations that are strongly committed to freedom and honesty, people who defy the dominant position within the group risk a form of disapproval that will lead them to be less trusted, less liked, and less respected in the future.

Leaders have a lot of influence here. If your boss thinks that some proposition is true, you will not want to say that it is false. Businesses and governments inevitably have hierarchies, and those who are high in the hierarchy tend to command agreement from those who work for them. The higher-ups send a signal: Disagree with me at your peril. They might send an equally damaging signal: Complicate my life at your peril. Or their body language might suggest: I am very busy. Even if your boss has not yet said what she thinks, it’s not likely that you will want to speak out in defiance of the views of most of your colleagues.

Most of us have seen these dynamics in action, and many of us have been subject to them ourselves. If certain group members who serve as leaders are willing and able to impose social punishments, others will be unlikely to defy them publicly. (This, by the way, is one reason that smart leaders often speak last, after they hear from everyone else. As Sunstein can attest, even presidents tend to let others speak first. More on this later.) Similarly, a large majority will impose more social pressure than a small one.

A Framework: The Benefits—and Costs—of Speaking Up

We can put these points into a more general framework. Suppose that group members are deliberating about some question—say, what will happen in two months if the group continues on its current course. Suppose, too, that each member has some information that bears on the answer to that question. Will the members be willing to disclose what they know?

For each group member, the answer is likely to depend on the individual benefits and the individual costs of disclosure. In some cases, of course, people who disclose information to the group will end up benefiting a lot. They can win respect, admiration, and status. They might end up with a promotion. Wise groups, and the leaders of such groups, let people know that when members help to produce better decisions, they will be rewarded. When groups follow this practice, they change the individual’s calculus by increasing the private benefits of disclosure.

But many groups are not so wise. In some groups, people who disclose information will end up gaining a lot less than the group itself will. If you have information that is jarring or disruptive, and you speak out, your colleagues might look at you funny or like you a bit less. In this sense, participants in group deliberation face a social dilemma, in which each person, following his or her rational self-interest, will not tell the group what it needs to know. At least this is so if each member receives only a small portion of the benefits that come to the group from a good outcome.
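The calculus can be made concrete with a small sketch; every number in it is hypothetical.

    GROUP_GAIN = 100.0     # value to the organization of the disclosed information
    MEMBER_SHARE = 0.02    # the sliver of the group's gain the individual captures
    SOCIAL_COST = 5.0      # disapproval and reputational risk, borne by the speaker alone

    private_benefit = MEMBER_SHARE * GROUP_GAIN    # 2.0
    speaks_up = private_benefit > SOCIAL_COST      # False: silence is privately rational

    print(f"the group would gain {GROUP_GAIN:.0f}; the member would net {private_benefit - SOCIAL_COST:.0f}")
    print(f"speaks up? {speaks_up}")

The group-optimal rule (disclose whenever the group’s gain exceeds the speaker’s cost) and the privately rational rule diverge. Rewarding disclosure, as wise groups do, raises the member’s share of the gain and brings the two rules back into line.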

This unfortunate situation faces many institutions, including corporate boards, religious organizations, and government agencies. Much of the time, the institution, and not the individual, is the big winner from a good decision. Think, for example, of a young lawyer who is prepared to tell senior members of a law firm that their current litigation strategy is way off course. It may well be that the young lawyer will risk her own position inside the firm, and the potential winner—the firm itself—is unwilling to reward that particular malcontent, at least in the short run.

Altruism and social norms can be helpful here. In government, for example, people may well end up disclosing their doubts and the information that supports those doubts, even if they will not gain a thing from the disclosure. The reason? They are patriots. They want to do their job well.

In government, a number of civil servants Sunstein knew within the Office of Management and Budget had no obvious political affiliation, cared about the public interest, and were willing to tell political leaders (including Sunstein) that they were just wrong. One example is Kevin Neyland, the longtime deputy administrator of OIRA—political affiliation unknown, integrity unmatched, a kind and heroic soul with a willingness to ruffle feathers. Because Neyland often saw worst-case scenarios, Sunstein nicknamed him Eeyore, after the gloomy pessimist in Winnie-the-Pooh. For political leaders, the Kevin Neylands of the world are indispensable, even if they aren’t always a lot of fun.

The upshot is that if people are motivated by commitments or character to follow a sometimes self-sacrificing, helpful norm (“Tell the group what you know”), the absence of private benefits may not be a terrible problem. But in many groups, it is hazardous to rely on altruism, and prevailing group norms don’t always favor expressing your doubts.

It gets worse if the statements of other group members suggest that the information held by a would-be dissenter is probably wrong or unhelpful. If so, then the private benefit of disclosure is reduced still further. In that event, the potential dissenter has good reason to believe that disclosure would be just a waste and would not improve the group’s decision at all. Things are even worse if those who speak against the apparent consensus will suffer some kind of reputational injury (or more). If they will lose the confidence of their colleagues or their boss, they have a lot to gain from shutting up.

In that event, the private calculus is straightforward: silence is golden. With a fuller understanding of that calculus, we can now see why group members will often go along with the apparent consensus. Under reasonable assumptions, they will not benefit from speaking out, and they might even be harmed. And in any case, they might just be wrong.

In the discussion thus far, we have been assuming that group members have facts and opinions and that the question is whether they should stay quiet. That’s a useful simplifying assumption, but sometimes group members just aren’t sure what they think. The question, then, is whether they should go out and learn, perhaps by doing a little research and inquiring about the views of others, including those outside their group.

If other group members seem pretty sure about what is true, you might think, What’s the point of trying to learn something new? Wouldn’t that be a waste? And if other group members don’t want alternative views and might punish those who venture them, you might well ask the same questions. We will emphasize how groups can learn what their members know. Our discussion will also explore how groups can encourage their members to learn and to transmit to the group what they find. In part 2, we investigate some adventurous ways that members might do exactly that.

Self-Censorship

Suppose that your own ultimate conclusion goes in one direction, but that you have information suggesting that your conclusion may not be right. On balance, you have reached a certain conclusion, but you have knowledge cutting in both directions. What will you do? Will you inform members of your group of everything that you know or only what supports your conclusion?

The evidence suggests that in such circumstances, people have a strong tendency to self-censor. When people have information that conflicts with their own conclusion, they tend not to share that information with others. In one study of more than five hundred mock jury deliberations, jurors almost never contributed information that contradicted their currently preferred verdicts.19 This form of self-censorship is an independent problem, because it can deprive groups of crucial information. Groups and their leaders need to know not only about people’s conclusions, but also about the assortment of factors that led to those conclusions.

Self-censorship of this kind may occur as part of a deliberate strategy to persuade: it would be counter to your interests to provide information that contradicts your own conclusions. Why on earth would you give ammunition to the other side? But the same result can occur even when the speaker has the best of intentions. Group members do not have a lot of motivation to share information that they themselves have already discounted in reaching their own conclusion. Their reluctance to share stems not from instrumental or strategic factors, but from a sense that it would be counterproductive to share information that they themselves deem false, misleading, or invalid. In a way, that conclusion is pretty reasonable—but here again, groups are being deprived of knowledge, and potentially important knowledge, that their own members have.

Minorities and Low Status

Both informational pressures and social influences help to explain the finding that in a deliberating group, people who are in a minority position often silence themselves or otherwise have disproportionately little weight. There is a more particular finding: members of low-status groups—less educated people, African Americans, and sometimes women—speak less and carry less influence within deliberating groups than do their higher-status peers.20

Both informational influences and social pressures, likely to be especially strong for low-status members, contribute to this result. Once more, the unfortunate consequence can be a loss of information to the group as a whole, thus ensuring that deliberating groups do far worse than they would if only they could aggregate the information held by group members.

Let us now turn to the four major sources of group failures.
