5

COMPUTER SIMULATION
METHODS FOR GROUPS

From Formula Translation to
Agent-based Modeling

James R. Larson, Jr

LOYOLA UNIVERSITY, CHICAGO

One danger in writing a chapter about computer simulation methods is that it will be read only by the cognoscenti: those who already know a great deal about the topic. Book chapters and journal articles describing particular computer simulations too often are either so abstract that it is hard for anyone but an expert to imagine how those simulations are actually implemented, or so technically detailed that nonexperts are quickly overwhelmed by the thick catalog of particulars. As a consequence, both types tend to be ignored by those with little or no background in computer simulation methods.

In this chapter I try to find a middle ground between the extremes of broad abstraction and excruciating detail that I hope will appeal to those who are unfamiliar with computer simulation, but who nevertheless are curious about (a) how computers can be made to simulate group behavior, and (b) how such simulations can benefit theory development. Toward this end, I emphasize just a few central concepts related to the use and implementation of computer simulations. In doing so, I hope to encourage those who have not tried it before to consider using this valuable research tool.

Preliminaries

In the broadest sense, a computer simulation is simply a program that has been written to emulate some aspect of behavior. The target behavior might be that of human individuals or groups, but could also be the behavior of nonhuman physical or imaginary objects, groups of objects, or large systems. Computer simulations are increasingly used for all sorts of purposes, ranging from gaming and education, to industrial design and product testing. We encounter computer simulations and their results on a near-daily basis, for example, in weather forecasts and highway travel-time projections. Computer simulations can entertain, enlighten, and be of great practical value. And more to the point of the present chapter, they can also be very useful as a tool for developing theory about group behavior.

Computer simulation is sometimes called computational modeling. Computational modeling should not be confused with constructing and testing statistical models using behavioral data, such as might be done with computer programs for hierarchical linear or structural equation modeling. The latter are techniques for discerning regularities in the already observed behavior of individuals and groups. Computational modeling (computer simulation), by contrast, is done for the purpose of generating predictions about behavior, given (a) a set of input parameters and processes, and (b) one or more theoretical ideas concerning how those parameters and processes combine to produce the target behavior. Once generated, those predictions can be evaluated for their match to the actual behavior of real people. A close match can increase our confidence that the theory expressed in the simulation is an accurate portrayal of reality, whereas a poor match suggests the need to revise the theory/simulation. Or, the simulation might yield interesting results under conditions (parameter values) for which no empirical data currently exist. In this case, the simulation gives direction to future research with real individuals and groups.

Several points are worth highlighting here. First, the purpose of computer simulation is not to test theory, it is to express theory (Ostrom, 1988; Simon, 1992; Sun, 2009). Computer simulation thus serves the same function as do natural language and mathematics, the other two common modes of theoretical expression. Second, a computer simulation's predictions about behavior are obtained simply by running the program. Thus, it is the computer, not the theorist, that deduces the theory's consequences. In this sense, the computer is a “derivation machine” (Latané, 1996), cranking out the logical implications of the theoretical ideas written into the programming code. Those implications take form as the simulation's output. Third, theories expressed as computer simulations are tested in exactly the same way that natural language and mathematically expressed theories are tested, by comparing their predictions (output) to empirical observations of real individuals and groups. Depending on the results of this testing, the simulation (and so the theory) might be modified and retested until known empirical benchmarks are matched.

These similarities to natural language and mathematical theories notwithstanding, expressing theory via computer simulation offers several advantages (see also, Davis, Eisenhardt, & Bingham, 2007; Hastie & Stasser, 2000; Lewandowsky, 1993; Myung & Pitt, 2002). One is that computer simulations can handle substantially greater complexity than is possible with either natural language or strictly mathematical theories. Increasingly, we seek to understand complex behavior. Complexities arise from the interaction among the manifold intrapersonal and interpersonal processes presumed to drive behavior, and from the repeated (and sometimes recursive) operation of those processes through time. The ability of computers to handle such complexities, and to derive unambiguous predictions in light of them, far exceeds that of humans using theories expressed in natural language. Mathematically expressed theories, too, are better than natural language theories at handling complexity. But it can sometimes be quite difficult, if not impossible, to formulate theory in terms of precise mathematical expressions (cf. Estes, 1975; Krause, 1996). And even when such formulations are possible, the equations themselves can become extremely complex, and so impose their own challenges when it comes to deducing behavioral predictions. Not surprisingly, and as will be described more fully in the next section, complex mathematical formulations are often translated into computer simulations so that their less-obvious implications can be seen.

Computer simulation also encourages the identification and elimination of ambiguities, voids, and other theoretical insufficiencies. At the very least, the vagaries typically found in natural language theories must be given specific definition in a computer simulation. Consider, for example, the term “combined” in the following statement from a theory about group decision making: “The members’ individual decision preferences are determined by the combined valence of the information that the group discusses about each choice alternative.” Although this statement may be adequate as one element of a natural language theory of group decision making, it is too imprecise to be useful in a computer simulation. What exactly does “combined” mean? Computers cannot “combine” in the abstract; they can “combine” only in specific ways, for example, by summing or averaging. So, should the valences of discussed information be summed, averaged, or combined in some other manner when simulating the formation of members’ decision preferences? Questions like this must be answered if a simulation is to be made operational. The answers can sometimes be found in the extant empirical literature (e.g., Anderson, 1991). But even then, several competing answers might turn up, in which case it would be prudent to implement each in a different version of the simulation, then test those versions competitively to determine which one provides the best fit to empirical data (cf. Stasser, 1988). Either way, the computer simulation will end up a more specific, detailed theory. The real behavior of individuals and groups arises from specifics, not generalities. Theories that are stated more specifically would therefore seem to have an advantage over those that are stated only generally, other things being equal.

But how are these advantages of computer simulation achieved? How does one actually go about getting computers to simulate human behavior? In the remainder of this chapter I describe three generic approaches to computer simulation. Although these approaches emerged at different points in the history of computer simulation, it should not be inferred that each one simply supplanted what had gone before. A more accurate description is that each new approach added a layer of capabilities not previously available, and that computer simulations employing one of the later-developed approaches often contain elements of the earlier approaches as well. My use of this tripartite division is thus mainly a pedagogical convenience that helps draw attention to several distinct features of contemporary simulation methods. Below I describe each approach in turn, and illustrate them with examples from my own work.

Formula Translation

The earliest use of computers for simulation purposes was to perform the often-tedious calculations required to solve complex mathematical equations. Many theories in both the natural and behavioral sciences are expressed mathematically – formulae are used to describe how particular sets of variables are presumed to be related to one another. Such theories are tested by solving for one of the variables in the equation given known values for the remaining variables, and then comparing the computed solution to values observed in the real world.

An historically significant example comes from the world of physics, and involves simulating the ballistic trajectory through the air of a launched, batted, or thrown projectile. Such trajectories are understood by theory to be a function of the projectile's initial velocity and launch angle, gravitational acceleration, time, and atmospheric conditions that affect drag (e.g., air temperature and humidity). The relationship among these variables is usually expressed in a system of linked equations. An important military application of this theory is computing (predicting) the launch angle needed to hit a target with an artillery shell when all of the other variables listed above (along with the distance to and elevation of the target, and the direction and velocity of the wind) are known. These computations can all be done by hand, of course. But the task becomes much easier and more useful with the aid of a computer, particularly when many such calculations are required in a short period of time. The first programmable digital electronic computer, ENIAC,1 was developed specifically to perform this task (Goldstine, 1972), though any laptop computer in existence today can be made to do the same thing. To accomplish this, it is necessary only that the mathematical formulae describing the theory of trajectories be translated into the symbolic code understood by these computers.2

Thus, as I use the term, a formula translation approach to computer simulation is merely an extension of how predictions from mathematically expressed theories have always been derived. Computers are employed to perform the same calculations that would otherwise be done by hand, but with greater speed and accuracy. When used in this manner, the computer makes the theory's predictions more accessible, which by itself can be of substantial benefit. Speedier access to a theory's predictions offers improved visualization and understanding, and can lead to insights that would be difficult to obtain if all of the computations had to be done (slowly) by hand.

A contemporary example from the realm of group behavior is a computer simulation that predicts the content of group decision-making discussions. If we assume that group decisions are determined in part by what groups discuss, it makes sense to inquire about factors that affect the content of discussion. One such factor is the number of members who were aware of the decision-relevant information in advance. It is often the case that some of that information will have been known to everyone in the group prior to discussion. This commonly held knowledge is referred to as shared information. However, there may also be certain pieces of decision-relevant information that were known to just one member or another before discussion started. This uniquely held knowledge is called unshared information.3 When members each hold a certain amount of unshared information, and when the best choice alternative can be identified only by taking account of that information, the decision they make as a group has the potential to be far superior to the decision that any one of them would have made acting alone. But this potential can be realized only if members actually mention during discussion the unshared information they hold (cf. Winquist & Larson, 1998).

Interestingly, there is much empirical evidence that groups tend to discuss significantly more of their shared than of their unshared information (for recent reviews, see Brodbeck, Kerschreiter, Mojzisch, & Schulz-Hardt, 2007; Larson, 2010). This difference is quite robust, with groups sometimes discussing twice as much shared as unshared information, and it occurs even when members do not know in advance what information they hold is shared and what is unshared. The latter fact rules out motivation as a necessary cause of this phenomenon: members cannot intentionally choose to discuss one type of information more than the other if they do not know what is shared and what is unshared.

But if motivation does not account for this phenomenon, what does? Stasser and Titus (1987, 2003) offered a simple but elegant explanation in their Collective Information Sampling (CIS) model of group discussion. The CIS model conceptualizes group decision-making discussions as a sampling process, wherein the content of discussion is obtained by members sampling from the pool of decision-relevant information they collectively hold. This sampling is accomplished simply by them recalling and then mentioning the individual items of decision-relevant information. However, because shared information initially is held by more members than unshared information, there are more opportunities for the group as a whole to sample a given item of shared rather than unshared information. As a result, shared information is more apt than unshared information to be mentioned during discussion, even when group members are equally motivated to surface both types, and even when they are equally (though not necessarily perfectly) able to recall both types.

One way to represent these ideas mathematically is as follows:

p(D) = 1 − [1 − p(R)]^n        (5.1)

where p(D) is the probability that a given piece of decision-relevant information will be discussed by the group, p(R) is the probability that any one member who was aware of that information prior to discussion will both recall and mention it during discussion, and n is the number of members who were aware of that information prior to discussion. If we assume that the available decision-relevant information is all equally memorable (i.e., p(R) is invariant across items),4 then the CIS model's prediction about the overall proportions of shared and unshared information that will be discussed given any particular value of p(R) can be obtained by computing p(D) when n is set equal to group size for shared information (because everyone in the group was aware of the shared information prior to discussion), and to 1 for unshared information (because only one group member was aware of the unshared information). One implication of this formulation is that p(Dshared) > p(Dunshared) for all values of p(R) except 0 and 1.
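For concreteness, Equation 5.1 is easy to evaluate directly. The short sketch below (written in Python purely for illustration; the simulations described in this chapter were written in BASIC) computes p(D) for shared and unshared information in a three-person group at a few example values of p(R). The function and variable names are made up for this illustration and are not taken from any published simulation.

```python
# Minimal sketch of Equation 5.1: p(D) = 1 - [1 - p(R)]^n.
# The parameter values below are arbitrary examples.

def p_discussed(p_recall, n):
    """Probability that an item known to n members is recalled and mentioned by at least one of them."""
    return 1.0 - (1.0 - p_recall) ** n

group_size = 3
for p_recall in (0.2, 0.5, 0.8):
    p_shared = p_discussed(p_recall, n=group_size)   # every member knew the item beforehand
    p_unshared = p_discussed(p_recall, n=1)          # only one member knew the item
    print(f"p(R) = {p_recall:.1f}   p(D|shared) = {p_shared:.3f}   p(D|unshared) = {p_unshared:.3f}")
```

Running the sketch shows the advantage of shared information at every intermediate value of p(R), and that the advantage disappears at the extremes of 0 and 1, as noted above.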

Equation 5.1 is relatively simple, and its predictions are easily derived without the aid of a computer. However, the CIS model itself makes additional predictions that Equation 5.1 does not reveal. Specifically, it predicts the likelihood of each new piece of information raised during discussion being either shared or unshared information.5 To anticipate, it suggests that over the course of discussion the probability of mentioning additional items of shared information gradually declines, while the probability of mentioning additional items of unshared information gradually increases. By way of analogy to the ballistic trajectory example discussed above, the CIS model is able to predict the entire “discussion entry trajectories” of shared and unshared information, not just the cumulative effect of those trajectories (i.e., the total amounts of shared and unshared information that will be discussed).

These additional predictions are derived by applying a computational algorithm that takes into account the sequential nature of group discussion: the fact that information is brought into discussion one piece at a time. The calculations required by this algorithm are not difficult when considered individually. They entail nothing more than addition, subtraction, multiplication, and division (for a full description, see Larson, 1997, especially Case 3 in the Appendix). What is difficult is the very large number of them that is needed, even for problems of modest proportion. For example, when three people each hold just nine shared and three unshared items of decision-relevant information, as many as 136,135 intermediate calculations are necessary, the results of which, when cumulated, yield the handful of probabilities (18 in this case) that fully describe the predicted discussion entry trajectory of shared information.6 This surprisingly large number arises because of the very large number of possible orders in which nine items of shared information and 3 + 3 + 3 = 9 items of unshared information might be raised during discussion. Obviously, this is well beyond what reasonably can be calculated by hand.
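The flavor of these calculations can nevertheless be conveyed with a much simpler special case. The sketch below (again Python, with names chosen only for illustration) computes the exact probability that the item entering discussion at each serial position is shared, but only under the simplifying assumption that p(R) = 1.00, so that every held item is available for sampling. It is not the algorithm published in Larson (1997), which also handles imperfect recall and is what drives the calculation count so high; the 18 values it prints describe the full trajectory only for this special case.

```python
# Exact discussion-entry probabilities for the special case p(R) = 1.00.
# Each not-yet-mentioned shared item can be sampled by any of the three members,
# whereas each not-yet-mentioned unshared item can be sampled by only one member.

GROUP_SIZE, SHARED, UNSHARED = 3, 9, 9

dist = {0: 1.0}   # dist[j] = probability that exactly j shared items have been mentioned so far
for position in range(1, SHARED + UNSHARED + 1):
    p_shared_here = 0.0
    new_dist = {}
    for j, prob in dist.items():
        shared_left = SHARED - j
        unshared_left = UNSHARED - (position - 1 - j)
        w_shared = GROUP_SIZE * shared_left               # opportunities to sample shared items
        w_unshared = unshared_left                        # opportunities to sample unshared items
        p_next_shared = w_shared / (w_shared + w_unshared)
        p_shared_here += prob * p_next_shared
        if p_next_shared > 0:
            new_dist[j + 1] = new_dist.get(j + 1, 0.0) + prob * p_next_shared
        if p_next_shared < 1:
            new_dist[j] = new_dist.get(j, 0.0) + prob * (1 - p_next_shared)
    dist = new_dist
    print(f"position {position:2d}: P(shared) = {p_shared_here:.3f}")
```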

The problem is quite manageable, however, when the computational algorithm is translated into a computer simulation. I wrote just such a simulation using a general purpose computer programming language called BASIC (Larson, 1997).7 That simulation solves the complete discussion entry trajectory predicted by the CIS model, given specific values for several input parameters (e.g., group size, number of shared items, number of unshared items, p(R), etc.). The solutions computed by this simulation can be evaluated by comparison to empirical observation.

Figure 5.1 illustrates such a comparison. It displays the computer simulation predictions for, and the empirical results actually obtained from, 24 three-person teams of physicians that worked together to diagnose two hypothetical patient cases (Larson, Christensen, Franz, & Abbott, 1998). The information about each case was presented in three specially constructed video-taped interviews with the patient, with each physician in the team seeing a different interview. These video tapes were designed so that the physicians all learned some information about the case that the other two members of their team also learned (shared information) and some that no one else learned (unshared information). However, the physicians were not told which of the information they saw was shared and which was unshared. After viewing privately their separate video tapes, the three physicians met in a conference room to discuss each case and to decide as a group what disease was most likely to be producing the symptoms displayed by the patient.

The horizontal axis in Figure 5.1 refers to the sequential order in which the various items of case information were initially mentioned during discussion (i.e., the first item mentioned, the second item, the third, etc.), with each successive position referring to the introduction of a new, not-yet-mentioned piece of information. The solid circles are the computer simulation predictions about the proportion of teams in which shared information would be introduced at each item serial position (equivalent to the probability of mentioning shared information at each position). As can be seen, that proportion was expected to be high initially, but gradually decrease as more and more information was brought into discussion. By contrast, the proportion of teams in which unshared information would be introduced at each item serial position was predicted to be low initially, but gradually increase (the introduction of unshared information is not shown in Figure 5.1, but can be computed by subtracting each of the plotted values from 1.00). These predictions are close to what was actually observed. The proportion of teams that introduced shared information at each item serial position (open circles), and in particular the quadratic regression line that best fits those observations (thin line without markers), follow closely, and are statistically indistinguishable from, the curve defined by the computer simulation predictions. Overall, the computer simulation predictions account for 73 per cent of the variance in the observed proportions shown in Figure 5.1.


FIGURE 5.1 Predicted and observed proportion of group discussions in which shared information was brought out in each item serial position. Reprinted from Larson et al. (1998) with permission of the American Psychological Association.

In sum, the principal advantage of the formula translation approach to computer simulation is that it allows one to “do the math” much more quickly than would otherwise be possible. This, in turn, makes it easier to “play with” the parameters written into the simulation in order to uncover their expected effects. When the CIS model is implemented as a computer simulation, for example, it becomes easy to vary the recall parameter, p(R), and so examine the model's predictions about how the discussion entry trajectories of shared and unshared information might change when the decision-relevant information is either more or less difficult to remember. Epistemologically, there is no difference between (a) using a computer program to predict the content of group discussion under varying conditions of member recall, and (b) using a (different) computer program to predict the trajectory of an artillery shell under varying atmospheric conditions (cf. Simon, 1992).

Generative Process Modeling

A rather different approach to computer simulation is to model directly the operation of key processes hypothesized to generate a given type of behavior, then allow those processes to “run” (by running the program) so that the simulated behavior (program output) they produce can be observed. If this is done a large number of times, simulating in each instance the behavior of a single individual or group, then the simulation's predictions for individuals or groups in general can be obtained by cumulating the results across those many instances (i.e., just as inferences drawn from observations of real behavior are based on results cumulated across many instances). For example, predictions about the development of outgroup stereotypes might be obtained by modeling in each of a large number of simulated individuals a learning process, a memory storage process, and a sequence of encounters with different members of the outgroup (e.g., Kashima, Woolcock, & Kashima, 2000; Linville, Fischer, & Salovey, 1989; Queller, 2002; Read & Urada, 2003; Van Rooy, Van Overwalle, Vanhoomissen, Labiouse, & French, 2003). Likewise, predictions about the brainstorming productivity of groups might be obtained by modeling in each of a large number of groups the retrieval of ideas from long-term memory, the waxing and waning of attention paid to ideas put forth by others, and the forgetting of ideas held too long while waiting for a speaking turn (e.g., Brown, Tumeo, Larey, & Paulus, 1998; Coskun, Paulus, Brown, & Sherwood, 2000).

Three points should be noted here. First, simulations that follow the generative process modeling approach do not try to emulate every conceivable mental and behavioral process at play in real individuals and groups. Rather, they model only those processes presumed essential for explaining the phenomenon in question, and then only the most relevant features of those processes. Thus, like any theory, computer simulations are both a simplification and an abstraction of reality.

Second, although the modeled processes need not be represented in every detail, those details that are represented should be realistic, in the sense of being consistent with the current state of knowledge in the field. It is generally accepted, for example, that humans have two memory systems – long-term and working memory – and that the latter has a limited, though elastic, capacity. A simulation that includes as one of its components a model of group member memory should not be inconsistent with these facts. At the same time, computer simulation is valuable as a theory development tool precisely because it permits alternative theoretical ideas about the operation of underlying processes to be implemented and tested competitively. Thus, the theorist must be clear about which aspects of the simulation do and do not follow conventional wisdom.

Finally, the individual or group behavior being simulated is an emergent product that arises from the underlying process(es) modeled in the simulation. The generative process modeling approach to computer simulation explains behavior as a direct consequence of these underlying processes. This stands in contrast to the formula translation approach, which explains behavior in terms of mathematical or statistical relationships among variables (cf. Smith & Conrey, 2007).

To illustrate these points, I describe here a simulation that predicts the same discussion entry trajectories of shared and unshared information as predicted by the formula translation simulation outlined above, but does so via a generative process modeling approach. As will be seen, the predictions made via the two approaches are similar, but the generative process modeling approach is both easier and more practical to implement.

Thus, let us consider once again a three-person group in which each member holds nine shared and three unshared items of decision-relevant information. For simplicity, we will assume that during discussion the members can recall all of the information that they individually hold (i.e., p(R) = 1.00). As previously described, the CIS model views group discussion as a sampling process, wherein the content of discussion is obtained by members sampling from the pool of information they collectively hold. Taking a generative process modeling approach, we might try to simulate the essential details of this process. That is, we might simulate the successive, random selection (without replacement) of one discussion item after another, continuing until there is either no more information left that has not already been sampled, or until some other stopping criterion is reached.8 By simulating a large number of group discussions in this way, each involving the same number of people, with the same recall capacity, and holding the same mix of shared and unshared information, inferences can be drawn about what the CIS model predicts for such discussions in general under these conditions. Specifically, the probability of the first, second, third, etc., new discussion item being either shared or unshared information can be inferred directly from the proportion of simulated group discussions in which the first, second, third, etc. new piece of information sampled actually was either shared or unshared information, respectively.

To see how this sampling process might be simulated, it is important first to recognize that in real three-person groups there are three opportunities to sample (i.e., mention during discussion) each piece of shared information (again, because every member holds that information), but only one opportunity to sample each piece of unshared information (because only one member holds it). Thus, in a discussion among three people who collectively hold nine pieces of shared and 3 + 3 + 3 = 9 pieces of unshared information, there are initially 3 × 9 = 27 opportunities to sample shared information, but only 1 × 9 = 9 opportunities to sample unshared information.

Now, suppose we let the integers 1–27 represent each of the 27 opportunities to sample shared information, and the integers 28–36 represent the nine opportunities to sample unshared information. Next, consider the following two lines of pseudo programming code:9

(1) Let X be any random integer between 1 and 36, inclusive.

(2) If X ≤ 27, then [Discuss Shared]; Otherwise [Discuss Unshared].

I will refer to “≤ 27” in the second line above as the decision rule, and set aside for the moment the question of where X might come from. Let us suppose, however, that at any given moment X, whatever its source, can be depended upon to be one of the integers 1–36, with each of those integers being equally likely to occur (i.e., p(1) = p(2) = p(3) = … = p(36) = 1/36).

Next, let us imagine in a single execution of this pseudo code that X happens to be an integer in the range 1–27 (e.g., suppose it is 15), so that the computer performs the action “[Discuss Shared].” We will take this as a simulation of actually sampling and mentioning shared information during group discussion. But if one piece of shared information has now been mentioned, then the pool of not-yet-mentioned information from which the next discussion item will be drawn has decreased in size by one piece of shared information (i.e., it now comprises eight shared and nine unshared items). Consequently, there remain only 3 × 8 = 24 opportunities to sample another item of not-yet-mentioned shared information, but still 1 × 9 = 9 opportunities to sample an item of not-yet-mentioned unshared information. Paralleling what was done above, we might now use the integers 1–24 to represent the 24 remaining opportunities to sample shared information, and the integers 25–33 to represent the (still) nine opportunities to sample unshared information. To obtain the second discussion item, we would then simulate sampling from this slightly shrunken pool of information by executing a modified version of the pseudo code given above, one that uses the decision rule “≤ 24,” and where the value of X is constrained to be one of the (again, equally likely) integers 1–33.

Of course, in the original execution of the pseudo code as described in the first sentence of the preceding paragraph, X might have been in the range 28–36, not 1–27 (e.g., suppose it was 31 instead of 15). Had that occurred, the computer would have performed the action “[Discuss Unshared],” which we would have taken as a simulation of sampling and mentioning unshared information during group discussion. Further, the pool of not-yet-mentioned information would have decreased by one piece of unshared (not shared) information, so that for the next discussion item there would still be 3 × 9 = 27 opportunities to sample shared information, but only 1 × 8 = 8 opportunities to sample another item of not-yet-mentioned unshared information. In this case, we would continue to use the integers 1–27 to represent the (still) 27 opportunities to sample shared information, but use the integers 28–35 to represent the eight remaining opportunities to sample unshared information. Thus, to obtain the second discussion item in this alternative scenario, we would simply execute again the original version of the pseudo code given above (i.e., with the decision rule “ ≤ 27”), but this time constrain the value of X to be one of the integers 1–35.

The same basic routine would be used to obtain all subsequent discussion items. In each case, we would keep track of how the pool of not-yet-mentioned information shrinks as a result of removing the item selected, and change accordingly the decision rule and/or the constraints on X in the pseudo programming code. In this way, it is possible to simulate the sequential entry of shared and unshared information into a single group discussion. And by repeating this entire procedure from the beginning, we can simulate any number of such discussions. Importantly, with this approach it is unnecessary to perform all of the many thousands of computations needed to derive the exact probabilities associated with the formula translation approach. Rather, by modeling the step-by-step sampling process that is posited by theory to generate the content of group discussion, and by doing so for a large number of simulated discussions, we can closely approximate those probabilities simply by observing the proportion of times the computer actually performs the actions “[Discuss Shared]” and “[Discuss Unshared]” when sampling the first, second, third, etc. discussion items.
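Pulled together, the entire routine amounts to only a few dozen lines of code. The sketch below is written in Python rather than the BASIC used for the original simulation, with made-up names throughout (it is not the DISM-GD source). It simulates many discussions for the three-person, nine-shared/nine-unshared example with p(R) = 1.00 and tallies how often the item entering at each serial position is shared.

```python
# A generative process modeling sketch of the sampling routine just described.
# Python's random.randint plays the role of the pseudo code's random integer X.

import random

GROUP_SIZE = 3
N_SHARED = 9
N_UNSHARED = 9      # 3 + 3 + 3 unshared items
N_GROUPS = 10_000   # number of simulated group discussions

def simulate_discussion():
    """Return one discussion's entry sequence: True = shared item, False = unshared item."""
    shared_left, unshared_left = N_SHARED, N_UNSHARED
    sequence = []
    while shared_left + unshared_left > 0:
        opportunities = GROUP_SIZE * shared_left + unshared_left
        x = random.randint(1, opportunities)       # the random integer X
        if x <= GROUP_SIZE * shared_left:          # the decision rule
            sequence.append(True)                  # [Discuss Shared]
            shared_left -= 1
        else:
            sequence.append(False)                 # [Discuss Unshared]
            unshared_left -= 1
    return sequence

# Tally, for each item serial position, how often the entering item was shared.
counts = [0] * (N_SHARED + N_UNSHARED)
for _ in range(N_GROUPS):
    for position, is_shared in enumerate(simulate_discussion()):
        counts[position] += is_shared
for position, count in enumerate(counts, start=1):
    print(f"position {position:2d}: proportion shared = {count / N_GROUPS:.3f}")
```

With 10,000 simulated discussions the printed proportions closely approximate the exact probabilities, apart from small stochastic irregularities of the sort discussed below.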

All of this hinges, however, on the critically important assumption that X can be depended upon to be one of n equally likely integers within a specified range, where n is the total number of sampling opportunities (shared plus unshared) that exist at any given moment. This assumption is made plausible by obtaining X from a random number generator. A random number generator is a utility (function) embedded within many programming languages that furnishes, on demand, a sequence of numbers, one after another, that do not have a discernable pattern to them – for all intents and purposes they are randomly ordered.10 The values produced by random number generators are often decimal numbers in the range 0–1 (including 0 but excluding 1), with all values in that range being equally likely to occur. Simple arithmetic operations can be performed on these values in order to transform them to the range required by the simulation (e.g., to obtain integers in the range 1–36, compute [ X × 36] + 1, then drop the decimal portion of the number). Thus, in the present example, the first line of pseudo code given above implies obtaining a value for X from a random number generator, then transforming that value as necessary to yield an integer within the required range.
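As a concrete illustration (again in Python; other languages differ only in syntax), the transformation just described can be written in two lines. Python's random.randint, used in the sketch above, yields the same kind of equally likely integer directly.

```python
import random

x = random.random()                  # uniform decimal in [0, 1), with 1 excluded
integer_1_to_36 = int(x * 36) + 1    # multiply, add 1, then drop the decimal portion
```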

Random number generators are invaluable to generative process modeling, as they supply the stochastic feature that is central to how we understand all behavioral phenomena. Besides the tendency of groups to discuss one type of information more than another (e.g., shared vs. unshared), random number generators are critical for modeling such phenomena as the proclivity of individuals and groups to recall certain kinds of information more readily than others (e.g., attitude consistent vs. inconsistent information), to forget information with the passage of time, to be more or less talkative, and to choose certain courses of action (e.g., cooperation vs. competition) more often than others. In each case, randomly generated numbers, along with judiciously chosen decision rules about the values of those numbers, are used to instantiate a probabilistic element in the simulation.

I employed a random number generator – and several of the other ideas described above – in a second version of the computer simulation that predicts the discussion entry trajectories of shared and unshared information (Larson, 1997). Called the Dynamic Information Sampling Model of Group Discussion (DISM-GD), this generative process modeling simulation, just like the formula translation version, relies on inputs about group size, the amounts of shared and unshared information available, p(R), and several other parameters. Unlike the formula translation version, however, the generative process modeling version does not yield the exact, mathematically computed probabilities of shared and unshared information entering discussion at each item serial position. Rather, those probabilities are estimated from the observed proportion of simulated groups in which the actions “[Discuss Shared]” and “[Discuss Unshared]” are actually performed at each item serial position.

Figure 5.2 displays output from both versions of the simulation. The top panel displays the exact discussion-entry probabilities computed via the formula translation version, and the bottom panel displays the proportions obtained when the generative process modeling approach was used to simulate 1,000 separate group discussions. As can be seen, the two versions yield similar results. Both predict that shared information is more likely than unshared information to be brought up early in discussion, whereas the reverse is predicted later on. The one obvious difference between them is that the formula translation version yields a pair of smooth curves, whereas the curves produced by the generative process modeling version contain small irregularities. Those irregularities are a natural consequence of the stochastic nature of the generative process modeling approach, but their magnitude tends to decrease as the number of simulated group discussions increases.


FIGURE 5.2 Predicted probability of shared and unshared information entering discussion in each item serial position derived from the formula translation-based (top panel) and generative process modeling-based (bottom panel) computer simulations. Both simulations assume a three-person group that collectively holds 9 shared and 3 + 3 + 3 = 9 unshared items of information, and where the members are able to recall and mention all of that information during discussion (i.e., p(R) = 1.00).

What is not obvious from Figure 5.2 is that the formula translation version of the simulation took nearly 300 times longer to run than did the generative process modeling version: it took just 5 seconds to generate the proportions on the bottom, but nearly 25 minutes to calculate the probabilities on the top. This difference increases exponentially as the size of the problem increases (e.g., increasing by just 33 per cent the amounts of available shared and unshared information increases the run time to 6 seconds for the generative process modeling version, but to more than 24 hours for the formula translation version!). Further, the formula translation version took substantially longer to program initially. Thus, even when a theory can be expressed fully in mathematical form, there may still be significant practical advantages to using a generative process modeling approach.

Agent-based Modeling

The most recently developed approach to computer simulation is agent-based modeling. Agent-based modeling is an evolutionary outgrowth of generative process modeling. An agent-based model is one that simulates simultaneously multiple agents, or actors, who behave in ways that impact one another. Each agent is endowed with its own generative processes, and each can act autonomously vis-à-vis its environment. That environment might include various resources (e.g., food, money, information), along with opportunities for reward and punishment, but always includes the other agents in the simulation. Agents are thus a key element of the environment for one another, in the sense that their actions affect – and are affected by – the actions of others. Depending on the simulation, agents might affect one another either directly or indirectly. They affect one another directly when, for example, they exchange task-relevant information, engage in cooperative help-giving, or put forth arguments to persuade others. By contrast, agents affect one another indirectly when they act in ways that change some nonagentic feature of the environment that in turn impacts others (e.g., by consuming a nonrenewable resource). It is the central purpose of agent-based modeling to understand the cumulative effects of these direct and indirect influences over time.

Several characteristics of the agents that inhabit this type of computer simulation are worth noting (cf. Gilbert, 2008; Smith & Conrey, 2007). First, they are self-contained, discrete entities with flexible behavioral repertoires: they usually can act in more than one way. Second, agents typically possess local rather than global knowledge of their environment. For example, in a simulation involving many agents, each might be aware only of the actions of its closest neighbors, and may have access to information only about those resources and rewards that are near at hand. Third, agents display bounded rationality, gathering information and generating behavior by means of relatively simple rules. Complex computational routines, like those typical of the formula translation approach to computer simulation, generally are not used. Fourth, each agent acts autonomously according to its own objectives – agents are not under the command of a central authority. Finally, a given agent's behavioral repertoire may or may not be able to adapt to changes in its environment. Depending on the purpose of the simulation, the rules by which agents generate behavior might be fixed in advance and immutable during a given run of the simulation, or they might instead be learned and modified as a function of experience.

Agent-based modeling is well suited for simulating behavior both in large social networks (e.g., Axelrod, 1997; Kalick & Hamilton, 1996; Kennedy, 2009; Kenrick, Li, & Butner, 2003; Latané & Bourgeois, 2001; Schelling, 1971) and in small groups (e.g., Feinberg & Johnson, 1997; Kameda, Takezawa, & Hastie, 2003; Reimer & Hoffrage, 2005, 2006; Ren, Carley, & Argote, 2006; Rousseau & Van Der Veen, 2005; Stasser, 1988, 2000; Stasser & Taylor, 1991). Here I illustrate its application in the latter domain with a simulation I created to explore the effects of diversity among members’ problem-solving strategies on group problem-solving performance (Larson, 2007a, 2010).

The simulation is called ValSeek, after the value-seeking problems with which it is concerned. A value-seeking problem is one that requires problem-solvers to search for a solution from among a set of alternatives that vary in their underlying value or desirability. The goal is to find the alternative with the highest value. A real-world example is designing a portable consumer electronics product (e.g., a smartphone). A product's design refers not only to its visual and tactile features, but also to its functionalities. Each possible combination of features and functionalities represents a unique solution to the design problem. Because the combination of design elements ultimately selected will determine the product's appeal to customers, and so its success in the marketplace, finding an appealing design that can also be built reliably and inexpensively is a value-seeking problem.

The individual elements of a given solution alternative sometimes contribute additively to that alternative's value, but sometimes they contribute multiplicatively. The latter implies that some elements may be more (or less) useful when certain other elements are also present. Consequently, solving value-seeking problems requires that solution elements be considered in combination, not separately. But often the number of possible combinations is very large, so much so that it is impractical to evaluate all of them. Under these conditions, the strategies that problem-solvers use to sift through the myriad possibilities – considering some combinations, while ignoring others – can significantly affect the value they ultimately find.

Value-seeking problems are represented abstractly in the ValSeek simulation. A problem consists of nothing more than a set of solution alternatives that vary in their underlying value. Each alternative is represented as a string of binary digits (e.g., 1–1–0–0–1), with each digit being a different solution element, and each possible combination of elements being a different solution alternative. No particular meaning is given to either the solution elements or the solution alternatives, except that each alternative is randomly assigned a different value (expressed in arbitrary units). Thus, in a particular run of the simulation, 1–1–0–0–1 might be assigned more value than 1–1–0–1–1. Given a set of solution alternatives involving the same number of elements (e.g., there are 25 = 32 alternatives involving five binary elements), the problem is to sift through the alternatives in search of the one with the highest value.

ValSeek simulates group problem-solving as a collaborative activity among a small number of agents (up to six). Beginning at a randomly selected starting point, each agent independently searches through the set of solution alternatives in serial fashion, considering them one at a time, comparing each to the alternatives examined before it. As they go, agents can also communicate with one another when they make progress toward finding a high-value solution (see later). For each new alternative considered, the agent first determines its value (by looking it up in a table; ValSeek is not concerned with how value is determined, only that it is determined), then compares that value to the value of the best alternative previously considered. If the current alternative has more value, then that previous alternative is abandoned, and the current alternative becomes the (new) best alternative considered so far. On the other hand, if the current alternative has less value than the previous one, the previous alternative is retained (i.e., it remains the best alternative considered so far), and the current one is abandoned. Once the outcome of this comparative evaluation is known, the alternative to be considered next is determined.

The solution alternative that is considered next depends partly on the binary code of the alternative retained in the just-completed comparative evaluation, and partly on the agent's strategy for selecting to-be-considered alternatives. That strategy is implemented in the ValSeek simulation simply as an instruction for changing the elements of the binary code of the retained (best so far) solution alternative. An example is “flip one randomly selected element of the code,” where “flip” means change that element to its other possible state (i.e., from 0 to 1, or vice versa).11 For example, if as a result of the just-completed evaluative comparison the retained solution were 1–1–0–0–1, then a single execution of this strategy instruction might lead the agent to consider next the solution alternative 1–0–0–0–1 (i.e., the second element was randomly selected and flipped). The value of that new alternative would then be determined in the same manner as before, and it would be compared to the value of 1–1–0–0–1. Depending on the outcome of this new comparative evaluation, one of the two alternatives would be retained, the strategy instruction would be executed again, and another comparison of values would be made. This process would be repeated over and over until no new alternative can be found that has more value than the previously retained alternative.
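Before turning to the broader features of this process, a brief sketch may help make the mechanics concrete. The fragment below (Python, with made-up names; it is not the ValSeek source) implements the strategy instruction “flip one randomly selected element of the code” and a single comparative evaluation on a toy problem with randomly assigned values.

```python
import random
from itertools import product

def flip_one_random_element(solution):
    """Strategy instruction: flip one randomly selected element of the code."""
    i = random.randrange(len(solution))
    return solution[:i] + (1 - solution[i],) + solution[i + 1:]

# A toy value-seeking problem: 32 five-element alternatives, each randomly assigned a value.
values = {alt: random.random() for alt in product((0, 1), repeat=5)}

# One comparative evaluation: look up the candidate's value, then retain
# whichever of the two alternatives has more value.
retained = (1, 1, 0, 0, 1)                       # best alternative considered so far
candidate = flip_one_random_element(retained)    # alternative to be considered next
if values[candidate] > values[retained]:
    retained = candidate                         # the candidate becomes the new best so far
```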

This evaluation process has several important features. First, an agent's strategy instruction will, in general, permit it to consider only a subset of all possible solution alternatives – some will be considered, but some will be overlooked. Second, a group of agents may be endowed either with the same strategy instruction or with different instructions. For example, some agents might be given the instruction “flip all except one randomly selected element of the code” (e.g., 1–1–0–0–1 might become 0–1–1–1–0, where only the second element was not flipped). Different strategy instructions cause agents to move through the solution alternatives differently, considering different (though often overlapping) subsets of those alternatives as they go. Third, even when endowed with exactly the same strategy instruction, agents do not necessarily evaluate the same subset of solution alternatives, or do so in the same order. This is because each agent starts the process with a different, randomly selected alternative, and because the strategy instruction itself contains a strong stochastic component (e.g., given the retained solution 1–1–0–0–1 and the strategy instruction “flip one randomly selected element,” there are five equally likely solution alternatives that might be considered next).

Fourth, agents can communicate with one another whenever they find a solution alternative that has more value than the value of the best alternative they previously considered. These are occasions when communication often occurs in real groups, as they are the moments where it is most evident that progress is being made toward finding the solution with the highest value. Communication is operationalized in the ValSeek simulation as agents passing the identity (binary code) of their newly identified best solution alternative to the other agents in the simulation, who then compare its value to that of their own currently best alternative (that value might be either higher or lower). Depending on the outcome of this comparative evaluation, those other agents might either retain or abandon this new alternative, and then consider additional alternatives in the same manner as before. In this way, agents can influence the solution alternatives that other agents consider next. The agents’ propensity to communicate about newly identified solution alternatives is treated as a variable in the ValSeek simulation. Consequently, the results from simulated groups with a high propensity to communicate (e.g., those with extroverted, highly verbal members) can be compared to the results from groups with a low propensity to communicate (e.g., those with introverted, taciturn members).

Finally, the group's problem-solving activity terminates when no agent can find a new alternative that has more value than the value of its own currently best (retained) alternative. The group's collective solution is then defined simply as the best of the agents’ best alternatives. One consequence of this process is that while groups will choose the solution with the highest discovered value, they will not necessarily discover the solution that is objectively best, simply because they will not always evaluate every possibility. An important output from the ValSeek program is thus the proportion of simulated groups that do in fact identify the solution that is objectively best.
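The group-level process just described can likewise be sketched compactly. The version below (Python; made-up names, only two of the four strategy instructions, and a fixed number of unsuccessful tries as a stand-in for the “no better alternative can be found” stopping rule – it is not the ValSeek source) simulates many groups and reports the proportion that identify the objectively best alternative.

```python
import random

N_ELEMENTS = 5            # five binary solution elements, hence 2**5 = 32 alternatives
COMM_PROPENSITY = 0.5     # probability that a newly found best alternative is communicated

def random_solution():
    return tuple(random.randint(0, 1) for _ in range(N_ELEMENTS))

def flip_one(solution):
    """Strategy instruction: flip one randomly selected element of the code."""
    i = random.randrange(N_ELEMENTS)
    return solution[:i] + (1 - solution[i],) + solution[i + 1:]

def flip_all_but_one(solution):
    """Strategy instruction: flip all except one randomly selected element of the code."""
    keep = random.randrange(N_ELEMENTS)
    return tuple(bit if i == keep else 1 - bit for i, bit in enumerate(solution))

def simulate_group(strategies, values, tries_per_round=25):
    """Simulate one group; return its collective solution (the best of the agents' best)."""
    best = [random_solution() for _ in strategies]    # each agent's best-so-far alternative
    improved = True
    while improved:                                   # stop when no agent managed to improve
        improved = False
        for a, strategy in enumerate(strategies):
            for _ in range(tries_per_round):          # surrogate for "no better alternative found"
                candidate = strategy(best[a])
                if values[candidate] > values[best[a]]:
                    best[a] = candidate
                    improved = True
                    if random.random() < COMM_PROPENSITY:        # pass the new best to the others
                        for b in range(len(strategies)):
                            if b != a and values[candidate] > values[best[b]]:
                                best[b] = candidate
    return max(best, key=lambda solution: values[solution])

def proportion_finding_optimum(strategies, n_groups=1000):
    """Proportion of simulated groups that identify the objectively best alternative."""
    hits = 0
    for _ in range(n_groups):
        alternatives = [tuple((n >> b) & 1 for b in range(N_ELEMENTS))
                        for n in range(2 ** N_ELEMENTS)]
        values = {alt: random.random() for alt in alternatives}  # randomly assigned values
        hits += simulate_group(strategies, values) == max(values, key=values.get)
    return hits / n_groups

# Example: a homogeneous four-agent group versus a (partly) heterogeneous one.
print(proportion_finding_optimum([flip_one] * 4))
print(proportion_finding_optimum([flip_one, flip_one, flip_all_but_one, flip_all_but_one]))
```

Varying COMM_PROPENSITY and the mix of strategy instructions in a sketch of this kind corresponds to the manipulations whose predicted effects are reported next.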

An example of the output from the ValSeek simulation is shown in Figure 5.3. It illustrates the simulation's predictions about the joint effect of two variables: (a) the degree of diversity among agents with respect to their problem-solving strategies; and (b) the agents’ propensity to communicate with one another when they find higher-value solution alternatives. In these simulations, groups of four agents each solved value-seeking problems involving five binary elements (like those described above). The diversity variable had two levels: agents were all endowed either with the same strategy instruction (homogeneous groups), or with different strategy instructions (heterogeneous groups). The instructions employed in the simulation were the two mentioned previously, along with two others: “flip two randomly selected, adjacent elements of the code,” and “flip all except two randomly selected, adjacent elements of the code.” The communication variable had three levels: agents exchanged information about newly discovered solution alternatives either 20 percent, 50 percent, or 80 percent of the time, to simulate groups whose members have either a relatively low, moderate, or high propensity to communicate, respectively.


FIGURE 5.3 Predicted performance for four-person groups with either heterogeneous or homogeneous problem-solving strategies, and whose members have either a weak (20 percent), moderate (50 percent), or strong (80 percent) propensity to communicate. Adapted in part from Larson (2007b).

Each data point in Figure 5.3 represents the proportion of simulated groups (out of 10,000) that identified the objectively best solution from the full set of 32 alternatives. Four different versions of the homogeneous group condition are represented in the figure, one for each different strategy instruction employed. These are labeled “A–A–A–A,” “B–B–B–B,” “C–C–C–C,” and “D–D–D–D” groups, respectively. As can be seen, regardless of the strategy instruction used, given the same propensity to communicate, homogeneous groups all performed similarly. More importantly, the homogeneous groups all performed considerably worse than the heterogeneous groups (the “A–B–C–D” groups). Further, and somewhat surprisingly, the performance of homogeneous groups worsened with increased communication. This is because increased communication among agents who all have the same problem-solving strategy increases the degree to which they all search through the solution alternatives in exactly the same way. Said differently, increased communication makes it more likely that four similar agents will perform as if they were just one agent. This does not occur, however, in heterogeneous groups – varying their propensity to communicate, at least within the range simulated here, does not harm their performance (though neither does it improve their performance much). Finally, across all conditions, simulated groups tended to perform better than their average member would have performed working completely alone (marked in the figure with the dotted line labeled “Average member baseline”). However, only the heterogeneous groups performed better than their best member would have performed working alone (marked with the dotted line labeled “Best member baseline”). The ValSeek simulation thus predicts that heterogeneous groups should demonstrate what I referred to elsewhere as a strong synergistic performance gain when solving value-seeking problems (Larson, 2007a, 2010). This and the other predictions shown in Figure 5.3 would have been extremely difficult to deduce had the ideas underlying ValSeek not been expressed as a computer simulation.

Lessons Learned

A theorist who wants to begin using computer simulation as part of his or her work faces several challenges. The biggest, I believe, is overcoming in one's own thinking the habits of mind associated with conventional modes of theorizing about individual and group behavior. As I hope the present chapter has made clear, modern computer simulation methods, particularly those that employ a generative process and/or agent-based modeling approach, emphasize the emergent nature of behavior, and call attention to the repeated and persistent interplay over time of the often manifold underlying processes that give rise to it. These approaches explain behavior by reference to the ability of the simulated processes actually to generate a facsimile of that behavior. In other words, when using a computer simulation to express theory, explanation inheres in generation. To create such a simulation it is therefore necessary that the theorist himself or herself learn to think in dynamical generative terms. In my experience, this is much easier said than done.

The difficulty is that “generative thinking” is largely incommensurate with traditional modes of theorizing. Like most researchers who study small group behavior, I was originally trained in variable-based theory construction, where explanation is rooted in covariation, not generation. That is, in a variable-based approach, emphasis is placed on the observed (or hypothesized) pattern of covariation among variables across cases, and it is the structural (usually causal) relationships among those variables that take center stage. These relationships are often depicted in diagrams with boxes and arrows, and are the basis for understanding behavior. Although theories of this sort have been extremely useful, they do not offer a complete picture of behavior. Importantly, because they are mostly static and cross-sectional, theories that focus on covariation do not speak in a very direct way to the inherently dynamic and longitudinal character of behavior generation. Despite this shortcoming, the variable-based approach has become so firmly entrenched as the way to theorize about behavior that it is difficult for most researchers to think in other potentially useful ways.

All of this makes it tough for anyone who wishes to get started in computer simulation actually to do so. Particularly troublesome is developing a computer simulation in a topic area where none currently exists, simply because it is hard to know how even to approach the problem. A more workable strategy, at least until one has gained some experience with computer simulation methods, is to create simulations in areas where there is already some form of template that can be used for guidance. Each of my own simulations described in this chapter initially took shape in this way.

For example, the development of DISM-GD, the simulation that predicts the complete discussion entry trajectories of shared and unshared information, was guided by the ideas in Stasser and Titus (1985, 1987). They described in natural language the core elements of the generative process assumed to produce the summary results they observed (i.e., the total amount of shared and unshared information eventually discussed by groups). I set out initially to express those same ideas in a computer simulation that modeled the underlying generative process, and found in the first few runs of that simulation that it made the same summary predictions as did the mathematical formulation given by Stasser and Titus (i.e., Equation 5.1). Once the nucleus of the simulation was written, it was relatively easy to add to it the ability to report the intermediate values associated with the full discussion entry trajectories of shared and unshared information, like those shown in Figures 5.1 and 5.2. I then further modified the simulation in order to explore other interesting issues, such as what happens when, instead of participating equally in the group's discussion, members participate differentially.12

In similar fashion, development of the ValSeek model was strongly influenced by the ideas in Hong and Page (2001), who published their work in an economics journal. They were concerned with sequential problem-solving situations, wherein one person first works alone on a value-seeking problem to find the best solution he or she can, then a second person takes over from where the first left off, then a third takes over from where the second left off, and so on. My goal initially was to replicate with an agent-based model the predictions they made for this scenario via a mathematical analysis. Once that was accomplished, my next step was to make the agents more fully interactive, and so better simulate what actually occurs in real groups. Thus, like DISM-GD, ValSeek started out as a replication of what already existed in another form, but then went beyond the ideas that gave rise to it.
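
Again purely for flavor, the sketch below renders the sequential scenario in miniature. Each agent's heuristic is a small repertoire of moves that alter one or two elements of a binary solution code; an agent improves the current solution with its own moves until it can do no better, and then hands that solution to the next agent. The random value landscape and the randomly generated repertoires are stand-ins of my own choosing – Hong and Page's formal treatment of agents' perspectives and heuristics is considerably richer, and the ValSeek code differs in many details – but the sketch captures the relay structure of their analysis.

Imports System

' A minimal, illustrative sketch of sequential problem solving by heterogeneous agents.
' It is NOT the ValSeek code; the landscape and agent repertoires are assumptions.
Module SequentialSearchSketch
    Private rng As New Random()

    Sub Main()
        Const bits As Integer = 10          ' length of the binary solution code
        Const numAgents As Integer = 3      ' problem solvers working one after another
        Const movesPerAgent As Integer = 6  ' size of each agent's repertoire of moves

        ' A value-seeking problem: every possible solution is assigned a value.
        ' A purely random landscape is used here for illustration.
        Dim landscape(CInt(2 ^ bits) - 1) As Double
        For i As Integer = 0 To landscape.Length - 1
            landscape(i) = rng.NextDouble()
        Next

        ' Each agent's heuristic is a repertoire of moves; different repertoires
        ' mean different agents get stuck at different points in the landscape.
        Dim repertoires(numAgents - 1)() As Integer
        For a As Integer = 0 To numAgents - 1
            repertoires(a) = RandomMoves(movesPerAgent, bits)
        Next

        ' Start from a random solution; each agent takes over where the previous one left off.
        Dim current As Integer = rng.Next(landscape.Length)
        Console.WriteLine("Starting value = {0:F4}", landscape(current))
        For a As Integer = 0 To numAgents - 1
            current = HillClimb(current, repertoires(a), landscape)
            Console.WriteLine("After agent {0}: value = {1:F4}", a + 1, landscape(current))
        Next
    End Sub

    ' Apply the agent's moves, keeping any move that improves the solution's value,
    ' and repeat until none of its moves yields a further improvement.
    Private Function HillClimb(ByVal start As Integer, ByVal moves() As Integer, ByVal landscape() As Double) As Integer
        Dim current As Integer = start
        Dim improved As Boolean = True
        While improved
            improved = False
            For Each mask As Integer In moves
                Dim candidate As Integer = current Xor mask   ' flip the bits named by this move
                If landscape(candidate) > landscape(current) Then
                    current = candidate
                    improved = True
                End If
            Next
        End While
        Return current
    End Function

    ' Build a repertoire of moves, each flipping one or two randomly chosen bits.
    Private Function RandomMoves(ByVal count As Integer, ByVal bits As Integer) As Integer()
        Dim moves(count - 1) As Integer
        For m As Integer = 0 To count - 1
            Dim mask As Integer = 1 << rng.Next(bits)
            If rng.Next(2) = 0 Then mask = mask Or (1 << rng.Next(bits))
            moves(m) = mask
        Next
        Return moves
    End Function
End Module

Because the agents' repertoires differ, a later agent can sometimes improve a solution on which an earlier agent was stuck. This is the kernel of the benefit of heterogeneity that Hong and Page analyzed, and that ValSeek was subsequently extended to explore with fully interactive agents.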

A final challenge worth mentioning here concerns software. There are a number of dedicated software platforms for creating computer simulations, including agent-based models (cf. Alessa, Laituri, & Barton, 2006). These can be very useful, especially for the novice, as they typically do not require any special programming skills. On the other hand, each of these platforms is designed for a particular genre of simulation, and so may or may not be suited to the needs of a theorist who wishes to construct a specific type of simulation. If an appropriate platform does not currently exist, it will be necessary to program the simulation from scratch using a general-purpose programming language.

I wrote the ValSeek simulation using the Visual Basic® programming language, and I wrote DISM-GD using its predecessor, Basic 7.0®. For anyone who does not already know a general-purpose programming language, but who would like to learn one in order to develop a computer simulation, Visual Basic is a good place to start. Visual Basic is a relatively easy-to-learn, object-oriented language that is taught in many high schools, colleges, and universities. I recommend taking a one-semester course in Visual Basic, although some may be able to learn it on their own with the aid of a good introductory textbook (e.g., Schneider, 2009).13 But be patient. You will not learn much about computer simulation per se from an introductory course. What you will learn instead is a set of skills that will be very useful later on when you do start your first computer simulation project.

Alternatively, rather than writing the program yourself, you might consider hiring a programmer. This tack presents its own set of challenges, however. In computer simulation, as in many areas of life, the devil is in the details. One particularly devilish problem in creating computer simulations is that, as suggested previously, ideas must be expressed with much greater precision than is normally found in natural language theories. As a consequence, when deciding how to implement an idea in a simulation, there are often important programming choices to be made about which our natural language theories offer little guidance. Further, these choices sometimes appear trivial on their face, despite having profound implications for the way the simulation operates. A programmer hired by the theorist to write the code – especially someone who is unfamiliar with the relevant natural language theories – may not always be able to tell the difference between trivial and theoretically significant programming choices, and so may not know when it is important to seek advice from the theorist. This, in turn, makes it difficult for the theorist to know for sure that the simulation has been implemented in exactly the way he or she intended. My own experience in communicating theoretical subtleties to hired programmers (mostly students from computer science departments) has been sufficiently frustrating that I usually prefer to do the programming myself.

Next Steps

For those who want to learn more about the details of computer simulation methods, there are a number of useful resources available, though little in the way of textbook treatments, and not much that focuses specifically on simulating behavior in small groups. Still, many of the techniques used to simulate behavior in larger social networks are readily adapted to simulating behavior in small groups. Axelrod and Tesfatsion (2006) provide a helpful annotated bibliography, and Tesfatsion (2010) maintains a companion website with many useful links to downloadable resources and demonstration software. Davis et al. (2007), Elliott and Kiel (2004), Frantz and Carley (2009), Gilbert (2008), Harrison, Lin, Carroll, and Carley (2007), Macy and Willer (2002), and Miller and Page (2004) also provide valuable insights. Among these, Davis et al. (2007) offer a particularly useful roadmap for developing theory using computer simulation methods. Their treatment, like those of the other works cited here, also elaborates a number of the themes introduced in the first section of this chapter.

Author's Note:

I would like to thank Nick Aramovich, Olga Goldenberg, Ryan Leach, Jared Majerle, and the two editors for their helpful comments on an earlier draft of this chapter. Correspondence regarding this chapter can be sent to the author at [email protected].

Notes

  1. ENIAC is an acronym for the Electronic Numerical Integrator And Computer. It was built at the University of Pennsylvania between 1943 and 1945, and operated until the mid-1950s. ENIAC contained 18,000 vacuum tubes, weighed 30 tons, and could compute in 25 seconds the complete trajectory of an artillery shell that, when fired, took 30 seconds to reach its target (Burks & Davidson, 1999). Although remarkably slow by today's standards, in 1945 ENIAC was 1,000 times faster than any other calculating machine then in existence (Goldstine, 1972).
  2. The translation process involves two main steps. The first is performed by the programmer when he or she translates the mathematical formula as expressed on paper into the notation required by the programming language that he or she is using. Anyone who has entered a formula into an Excel® spreadsheet is familiar with this. The second, more challenging step is performed automatically by the programming language itself. All computers operate on instructions expressed in binary code – long strings of 0s and 1s – often referred to as machine language. The primary function of any programming language is to translate the notation input by the programmer into the specific arrangement of 0s and 1s that is required to get the computer to perform the desired function. Thus, the formula translation process is ultimately one of converting an arrangement of symbols understandable to humans into the binary code understandable to machines. It is no coincidence that one of the first widely used programming languages developed to help accomplish this formula translation task draws its very name from that use: FORTRAN.
  3. Between these extremes are various degrees of partially shared information: information that is held by more than one, but not every, group member. Although I focus here on the extremes, the principles involved are perfectly general, and so can be applied as well to these intermediate levels of sharedness.
  4. This assumption may be unrealistic in absolute terms, but there is no reason to presume that shared information is inherently any more or less memorable than unshared information.
  5. It is important to realize that the CIS predicts only the entry into discussion of information that has not previously been mentioned. It does not predict either the thoroughness with which that information is discussed or its likelihood of being repeated (but see Larson & Harmon, 2007).
  6. The trajectory for unshared information can then be obtained by subtraction.
  7. I will have more to say about the ease and usefulness of learning a programming language near the end of this chapter.
  8. One interesting stopping criterion is the total amount of information brought into discussion, with discussion terminating when some fixed quantity, Y, has been reached (regardless of whether that information was originally shared or unshared). Y might be set to a high value to simulate groups whose members are all high in need for cognition, and a low value to simulate groups whose members are all low in need for cognition, under the assumption that high need for cognition groups will discuss more information than low need for cognition groups (cf. Cacioppo, Petty, Feinstein, & Jarvis, 1996).
  9. Pseudo programming code, or simply pseudo code, is an informal way of expressing the instructions contained in a computer program. Pseudo code does not have the syntax of a real programming language, but is more easily understood by humans. In principle, different programmers using different programming languages should be able to translate the ideas expressed in pseudo code into real computer programs that all perform the same function.
  10. Different random number generators use different algorithms to produce numbers. One involves computing the sequence of digits to the right of the decimal point in the irrational number π (3.1415926535897932384626433…). Of course, because they are mathematical algorithms, random number generators are deterministic – given the same set of initial conditions, the same sequence of numbers will be produced each time the generator is started. An important initial condition for most random number generators is a starting value called a “seed.” Given the same seed, the same string of numbers will be produced. But given different seeds, different strings of numbers will be produced. One way to improve the “randomness” of a random number generator is to change the value of the seed in a way that is itself hard to predict. For example, many generators consult the computer's internal clock for the current time of day (usually expressed to a fraction of a second) and base the seed on this value. By doing this, the generator will have a different seed each time it is started, unless it happens to be started at exactly the same time each day – a very unlikely event. Thus, whereas the algorithms employed by random number generators are deterministic, and so the numbers they produce can never be truly random, those numbers are nevertheless exceptionally difficult to predict, which makes them very similar to random numbers. For this reason, these deterministic but hard-to-predict numbers are often called pseudo random numbers.
  11. The stochastic feature of this instruction ensures that no special priority is given to any one element of the binary code. This is consistent with the generic nature of the value-seeking problems being modeled, where no inherent priority among solution elements is assumed.
  12. As it turns out, there is a hidden assumption underlying the model expressed in Equation 5.1. It implicitly assumes that members contribute equally to the group's discussion. In reality, this is hardly ever true. Relaxing this assumption yields results that predict an even stronger discussion bias in favor of shared information than is predicted by Equation 5.1 (cf. Larson, 1997, Table 2). The existence of this hidden assumption became clear only when I began using DISM-GD to explore the impact of differential participation rates among members.
  13. I was fortunate as an undergraduate student in the early 1970s to have taken an elective course in FORTRAN programming. Although I have not used FORTRAN in nearly 30 years, that course gave me a fundamental understanding of how programming languages work, which has since allowed me to learn several new languages on my own, including Visual Basic. There are many differences among programming languages, of course, but there are also commonalities, so learning a new language is not as difficult as it might seem – certainly it is a lot easier than learning a new natural language.

References

Alessa, L. N., Laituri, M., & Barton, M. (2006). An “all hands” call to the social science community: Establishing a community framework for complexity modeling using agent based models and cyberinfrastructure. Journal of Artificial Societies and Social Simulation, 9(4), 6. Downloadable at: http://jasss.soc.surrey.ac.uk/9/4/6.html.

Anderson, N. H. (1991). Contributions to information integration theory. Hillsdale, NJ: Erlbaum.

Axelrod, R. L. (1997). The dissemination of culture: A model of local convergence and global polarization. Journal of Conflict Resolution, 41, 203–226.

Axelrod, R., & Tesfatsion, L. (2006). A guide for newcomers to agent-based modeling in the social sciences. In K. Judd & L. Tesfatsion (Eds.), Handbook of computational economics, Vol. 2: Agent-based computational economics (pp. 1647–1658). Amsterdam: North-Holland.

Brodbeck, F. C., Kerschreiter, R., Mojzisch, A., & Schulz-Hardt, S. (2007). Group decision making under conditions of distributed knowledge: The Information Asymmetries Model. Academy of Management Review, 32, 459–479.

Brown, V., Tumeo, M., Larey, T. S., & Paulus, P. B. (1998). Modeling cognitive interaction during brainstorming. Small Group Research, 29, 495–526.

Burks, A. W., & Davidson, E. S. (1999). Introduction to “The ENIAC.” Proceedings of the IEEE, 87, 1028–1030.

Cacioppo, J. T., Petty, R. E., Feinstein, J. A., & Jarvis, W. B. G. (1996). Dispositional differences in cognitive motivation: The life and times of individuals varying in need for cognition. Psychological Bulletin, 119, 197–253.

Coskun, H., Paulus, P. B., Brown, V., & Sherwood, J. J. (2000). Cognitive stimulation and problem presentation in idea-generating groups. Group Dynamics, 4, 307–329.

Davis, J. P., Eisenhardt, K. M., & Bingham, C. B. (2007). Developing theory through simulation methods. Academy of Management Review, 32, 480–499.

Elliott, E., & Kiel, L. D. (2004). Agent-based modeling in the social and behavioral sciences. Nonlinear Dynamics, Psychology, and Life Sciences (Special Issue: Agent-Based Modeling), 8, 121–130.

Estes, W. K. (1975). Some targets for mathematical psychology. Journal of Mathematical Psychology, 12, 263–282.

Feinberg, W. E., & Johnson, N. R. (1997). Decision making in a dyad's response to a fire alarm: A computer simulation investigation. In B. Markovsky, M. J. Lovaglia, & L. Troyer (Eds.), Advances in Group Processes, 14, 59–80.

Frantz, T. L., & Carley, K. M. (2009). Agent-based modeling within a dynamic network. In S. J. Guastello, M. Koopmans, & D. Pincus (Eds.), Chaos and complexity in psychology: The theory of nonlinear dynamical systems (pp. 475–505). New York: Cambridge University Press.

Gilbert, N. (2008). Agent-based modeling. Thousand Oaks, CA: Sage.

Goldstine, H. H. (1972). The computer: From Pascal to von Neumann. Princeton, NJ: Princeton University Press.

Hastie, R., & Stasser, G. (2000). Computer simulation methods in social psychology. In H. Reis & C. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 85–114). Cambridge: Cambridge University Press.

Hong, L., & Page, S. E. (2001). Problem solving by heterogeneous agents. Journal of Economic Theory, 97, 123–163.

Harrison, J. R., Lin, Z., Carroll, G. R., & Carley, K. M. (2007). Simulation modeling in organizational and management research. Academy of Management Review, 32, 1229–1245.

Kalick, S. M., & Hamilton, T. E. (1986). The matching hypothesis reexamined. Journal of Personality and Social Psychology, 51, 673–682.

Kameda, T., Takezawa, M., & Hastie, R. (2003). The logic of social sharing: An evolutionary game analysis of adaptive norm development. Personality and Social Psychology Review, 7, 2–19.

Kashima, Y., Woolcock, J., & Kashima, E. S. (2000). Group impressions as dynamic configurations: The tensor product model of group impression formation and change. Psychological Review, 107, 914–942.

Kennedy, J. (2009). Social optimization in the presence of cognitive local optima: Effects of social network topology and interaction mode. Topics in Cognitive Science, 1, 498–522.

Kenrick, D. T., Li, N. P., & Butner, J. (2003). Dynamical evolutionary psychology: Individual decision rules and emergent social norms. Psychological Review, 110, 3–28.

Krause, U. (1996). Impossible models. In R. Hegselmann, U. Mueller, & K. G. Troitzsch (Eds.), Modeling and simulation in the social sciences from the philosophy of science point of view (pp. 65–75). Dordrecht: Kluwer.

Larson, J. R., Jr (1997). Modeling the entry of shared and unshared information into group discussion: A review and BASIC language computer program. Small Group Research, 28, 454–479.

Larson, J. R., Jr (2007a). Deep diversity and strong synergy: Modeling the impact of variability in members’ problem-solving strategies on group problem-solving performance. Small Group Research, 38, 413–436.

Larson, J. R., Jr (2007b). A computational modeling approach to understanding the impact of diverse member problem-solving strategies on group problem-solving performance. Invited paper presented at the annual Society for Experimental Social Psychology Small Groups Research Preconference. Chicago, IL.

Larson, J. R., Jr (2010). In search of synergy in small group performance. New York: Psychology Press.

Larson, J. R., Jr, Christensen, C., Franz, T. M., & Abbott, A. S. (1998). Diagnosing groups: The pooling, management, and impact of shared and unshared case information in team-based medical decision making. Journal of Personality and Social Psychology, 75, 93–108.

Larson, J. R., Jr, & Harmon, V. M. (2007). Recalling shared vs. unshared information mentioned during group discussion: Toward understanding differential repetition rates. Group Processes and Intergroup Relations, 10, 311–322.

Latané, B. (1996). Dynamic social impact: Robust predictions from simple theory. In R. Hegselmann, U. Mueller, & K. G. Troitzsch (Eds.), Modeling and simulation in the social sciences from the philosophy of science point of view (pp. 287–310). Dordrecht: Kluwer.

Latané, B., & Bourgeois, M. J. (2001). Successfully simulating dynamic social impact: Three levels of prediction. In J. P. Forgas & K. D. Williams (Eds.), Social influence: Direct and indirect processes (pp. 61–76). New York: Psychology Press.

Lewandowsky, S. (1993). The rewards and hazards of computer simulations. Psychological Science, 4, 236–243.

Linville, P. W., Fischer, G. W., & Salovey, P. (1989). Perceived distributions of the characteristics of in-group and out-group members: Empirical evidence and a computer simulation. Journal of Personality and Social Psychology, 57, 165–188.

Macy, M. W., & Willer, R. (2002). From factors to actors: Computational sociology and agent-based modeling. Annual Review of Sociology, 28, 143–166.

Miller, J., & Page, S. E. (2004). The standing ovation problem. Complexity, 9(5), 8–16.

Myung, I. J., & Pitt, M. A. (2002). Mathematical modeling. In J. Wixted (Ed.), Stevens’ handbook of experimental psychology (3rd ed., Vol. 4, pp. 429–460). New York: Wiley.

Ostrom, T. M. (1988). Computer simulation: The third symbol system. Journal of Experimental Social Psychology, 24, 381–392.

Queller, S. (2002). Stereotype change in a recurrent network. Personality and Social Psychology Review, 6, 295–303.

Read, S. J., & Urada, D. I. (2002). A neural network simulation of the outgroup homogeneity effect. Personality and Social Psychology Review, 7, 146–159.

Reimer, T., & Hoffrage, U. (2005). Can simple group heuristics detect hidden profiles in randomly generated environments? Swiss Journal of Psychology, 64, 21–37.

Reimer, T., & Hoffrage, U. (2006). The ecological rationality of simple group heuristics: Effects of group member strategies on decision accuracy. Theory and Decision, 60, 403–438.

Ren, Y., Carley, K. M., & Argote, L. (2006). The contingent effects of transactive memory: When is it more beneficial to know what others know? Management Science, 5, 671–682.

Rousseau, D., & Van Der Veen, A. M. (2005). The emergence of a shared identity: An agent-based computer simulation of idea diffusion. Journal of Conflict Resolution, 49, 686–712.

Schelling, T. C. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1, 143–186.

Schneider, D. I. (2009). An introduction to programming using Visual Basic 2008 (7th ed.). Upper Saddle River, NJ: Prentice Hall.

Simon, H. A. (1992). What is an “explanation” of behavior? Psychological Science, 3, 150–161.

Smith, E. R., & Conrey, F. R. (2007). Agent-based modeling: A new approach for theory building in social psychology. Personality and Social Psychology Review, 11, 87–104.

Stasser, G. (1988). Computer simulation as a research tool: The DISCUSS model of group decision making. Journal of Experimental Social Psychology, 24, 393–422.

Stasser, G. (2000). Information distribution, participation, and group decision: Explorations with the DISCUSS and SPEAK models. In D. R. Ilgen & C. L. Hulin (Eds.), Computational modeling of behavior in organizations (pp. 35–156). Washington, DC: American Psychological Association.

Stasser, G., & Taylor, L. A. (1991). Speaking turns in face-to-face discussions. Journal of Personality and Social Psychology, 60, 675–684.

Stasser, G., & Titus, W. (1985). Pooling of unshared information in group decision making: Biased information sampling during discussion. Journal of Personality and Social Psychology, 48, 1467–1478.

Stasser, G., & Titus, W. (1987). Effects of information load and percentage of shared information on the dissemination of unshared information during group discussion. Journal of Personality and Social Psychology, 53, 81–93.

Stasser, G., & Titus, W. (2003). Hidden profiles: A brief history. Psychological Inquiry, 14, 304–313.

Sun, R. (2009). Theoretical status of computational cognitive modeling. Cognitive Systems Research, 10, 124–140.

Tesfatsion, L. (2010). On-line guide for newcomers to agent-based modeling in the social sciences. http://www.econ.iastate.edu/tesfatsi/abmread.htm

Van Rooy, D., Van Overwalle, F., Vanhoomissen, T., Labiouse, C., & French, R. (2003). A recurrent connectionist model of group biases. Psychological Review, 110, 536–563.

Winquist, J. R., & Larson, J. R., Jr (1998). Information pooling: When it impacts group decision making. Journal of Personality and Social Psychology, 74, 371–377.
