18

THE ANALYSIS OF GROUP
INTERACTION PROCESSES

Dean E. Hewes

UNIVERSITY OF MINNESOTA

Marshall Scott Poole

UNIVERSITY OF ILLINOIS URBANA-CHAMPAIGN

Group interaction is a process. It is not merely a set of messages, nor is it only a series of messages. It is a series of messages that influence subsequent group interaction and/or reflect underlying rules of interaction, such as phases that sequentially structure group interaction (cf. Hewes, 2009d; Poole, 1983b). This was recognized by some of the original framers of group behavior (Bales & Strodtbeck, 1951; Fisher, 1970; McGrath & Altman, 1966), and it was showcased in classic, if formidable, essays on the mathematics of group processes (Arrow, Karlin, & Suppes, 1960; Coleman, 1964). If temporal patterns of interaction are central to the study of groups, then to understand groups fully, it is important to have methods for characterizing and testing theories of group interaction.

As easy as it may be to say this, when we set about to identify temporal patterns in interaction, we find that most commonly used analytical methods, for example, analysis of variance for experimental data, regression for survey data, and nonparametric statistics, are simply not geared for the task. To study temporal patterns in group interaction requires us to do two things: (a) identify possible interaction patterns, and (b) conduct tests of hypotheses about those patterns. For example, a common temporal model of group decision-making posits that groups go through a sequence of phases during the decision process. In their classic model, Bales and Strodtbeck (1951) posited that groups solve problems in a three-stage process composed of (a) an orientation phase dominated by information sharing, fact-finding, and characterization of the problem, followed by (b) an evaluation phase in which members consider alternative solutions to the problem and conflicts occur over which alternative should be chosen, concluding in (c) a control phase in which members come to a common decision, exerting control over one another and over their common environment. To assess whether this model fits the group's interaction, we must first find a way to identify these three phases (if they occur at all) and then find a way to test whether the observed sequence of phases fits this pattern.

When we first started studying group interaction in the early 1970s (for Hewes) and the late 1970s (for Poole), there were few widely accepted methods for the analysis of temporal patterns. Most researchers improvised and devised ways to get at temporal patterns that utilized traditional methods. For example, when Bales and Strodtbeck (1951) tested their problem-solving phase model, they devised a way to code behavior corresponding to what would be expected for each of their three phases (orientation, evaluation, and control), namely the famous Interaction Process Analysis coding system (Bales, 1950). They coded live discussions (audio recorders were not available then!) using this system and ended up with a series of coded statements. They then divided their discussions into three equal segments (corresponding to their three phases) and conducted t-tests to compare the levels of each type of code for the first, second, and third segments, respectively. Bales and Strodtbeck found that there were more orientation acts in the first segment, more disagreement and opinion acts in the second segment, and more solution and agreement acts in the third segment, which they took as evidence in favor of their model. Most studies of group interaction in the 1970s utilized improvised methods, which had the disadvantage of lack of standardization and thus might not yield comparable results across studies. During the 1980s and subsequently, scholars began to systematize their methods for interaction analysis, to the point where Poole, Van de Ven, Dooley, and Holmes (2000) could describe four different approaches to the analysis of processes, along with methods of capturing process data and statistical methods for analyzing it.
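The segmenting logic of the Bales and Strodtbeck approach can be sketched in a few lines of code. This is a minimal illustration, not the original procedure: the function names and the toy coded sequence are ours, with O, E, and C standing in for orientation, evaluation, and control acts.

```python
# Sketch of the segmenting approach: divide a coded discussion into
# three equal segments and tally code frequencies within each segment.
from collections import Counter

def segment_counts(codes, n_segments=3):
    """Split a sequence of coded acts into equal segments and
    count the occurrences of each code type per segment."""
    seg_len = len(codes) // n_segments
    segments = [codes[i * seg_len:(i + 1) * seg_len]
                for i in range(n_segments)]
    return [Counter(seg) for seg in segments]

# Hypothetical coded discussion: O = orientation, E = evaluation, C = control
codes = list("OOOEOOEOE" "EOEECEEEC" "ECCECCCCC")
first, second, third = segment_counts(codes)
print(first["O"], second["E"], third["C"])  # → 6 6 7
```

The model predicts exactly this profile: orientation acts peak in the first segment, evaluation acts in the second, and control acts in the third; Bales and Strodtbeck then compared segment means statistically.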

This chapter will discuss two major methods for the analysis of interaction, specifically models of sequential contingencies, such as Markov process models and lag-sequential analysis, and phasic analysis. Both methods facilitate inquiry into the patterns of sequential dependencies between and among coded communication acts.

This essay will focus mainly on general descriptions of these methods rather than the details of actually conducting the analysis. These details can be found in other sources (Bakeman & Gottman, 1997; Hewes, 1980; Poole et al., 2000). Group processes (and all temporal processes) are typically more complicated than static data. For this reason there is a need to adapt and improvise methods for process analysis. So there are no “cookbooks” for process analysis, the way there are for analysis of experiments or survey data (although things are becoming somewhat standardized and maybe there will be in the future). In this chapter we will try to give you a sense of the state of the current art in interaction analysis of group processes, as well as a glimpse of the future.

In the next section we define process and consider the specific questions we must ask to study processes. Subsequent sections address these questions, including a brief section on coding and categorizing the events that describe the process, followed by sections on methods for identifying and describing patterns in interaction processes, and concluding with a section on explaining why patterns occur. Specifically we will discuss sequential contingency analysis and phasic analysis.

What is a Process?

What is a process? Nicholas Rescher (1996; see also Teichmann, 1995) offers a succinct and inclusive definition:

A process is a coordinated group of changes in the complexion of reality, an organized family of occurrences that are systematically linked to one another either causally or functionally … A process consists in an integrated series of developments unfolding in joint coordination in line with a definite program. Processes are correlated with occurrences or events: Processes always involve various events, and events exist only in and through processes.

(p. 38)

This definition has several implications. First, processes involve change and unfold over time, which necessitates research designs involving longitudinal study of one or more meetings or discussions. Second, processes are indicated by one or more series of events. An event is a “happening, occurrence or episode … a change … or composite of changes” (Mackie, 1995, p. 253). Teichmann (1995, p. 721) comments that “ ‘process’ is to ‘change’ or ‘event’, rather as ‘syndrome’ is to ‘symptom’.” A process underlies a collection of events or changes, providing “some sort of unity or unifying principle to it” (Teichmann, 1995, p. 721). To infer that a process is the unifying principle underlying a sequence of events, it is necessary to identify patterns in the events that reflect the process in question and test whether those patterns, as opposed to other possible patterns, hold.

For example, Bales and Strodtbeck's problem-solving model is an example of what has been termed a unitary sequence (Poole, 1981) or life-cycle (Poole & Van de Ven, 2004) model of group process. This model assumes that the phases of the process will occur in a set order because of logical necessity or institutional rules that govern the problem-solving and decision-making process. For instance, it is logical to assume that we cannot develop solutions for a problem before we have characterized it and identified relevant facts, so it is first necessary to orient the group before moving to a phase of debate about solutions (evaluation). In the same vein, it is necessary to debate solutions before coming to a final solution in the control phase. This logical sequence serves as an underlying generative mechanism of the process that produces a three-phase sequence of activities (orientation–evaluation–control), and so accounts for the observed series of events. (A generative mechanism refers to rules or other factors that produce a particular effect; in our case, the observed patterns in group interaction processes. As we describe more of these, you will notice that each explanation for group processes uses somewhat different elements in its generative mechanism.) Another generative mechanism that could produce a unitary sequence is a set of institutional rules that require the group to engage in activities in a certain order. For example, juries are required to first decide on guilt or innocence before deciding on a sentence for a defendant.

Mohr (1982) originally differentiated variance and process approaches in social scientific research. In general terms, a variance theory explains change in terms of relations among independent variables and dependent variables, while a process theory explains how a sequence of events leads to some outcome. Figure 18.1 shows a pictorial comparison of the two approaches, which are described and distinguished in greater detail in Poole et al. (2000; see also Poole, 2007). Variance and process approaches require us to adopt quite different research strategies and methods of analysis.

Explanations in variance theories take the form of causal models that incorporate these variables (e.g., X causes Y which causes Z). A variance theory can be tested using well-known experimental or survey approaches (see Chapter 2, this volume). A variance theory based on the functional theory of group decision-making, for example, would explain the effectiveness of group decisions as a function of variables such as the amount of evaluation of solutions and task complexity (Gouran & Hirokawa, 1996; Hollingshead et al., 2005). Functional theory posits that the amount of positive and negative analysis of consequences of solutions a group conducts should positively affect decision-making effectiveness. Task complexity should have a negative relationship with effectiveness and should interact with problem analysis so that more problem analysis for complex problems will enhance effectiveness (Gouran & Hirokawa, 1996; Orlitsky & Hirokawa, 2001).


FIGURE 18.1 Variance and process theories exemplifying the decision functions theory

To test this theory we could set up an experiment with two conditions, one where groups were given a high-complexity task and one where they were given a low-complexity task, holding other elements of the task constant. Effectiveness of group decision-making could be measured by comparing the actual decisions the groups made in each condition with a normatively correct decision as determined by outside task experts. We could measure problem analysis and solution evaluation either by having subjects rate the amount of each that their groups engaged in (a post-test) or by recording the group discussions, coding them for problem analysis and solution evaluation, and then taking the total number of statements of each type as measures (observational measurement). Standard statistical analysis could be employed to determine whether relationships among variables were consistent with those hypothesized by the theory.

In contrast, a process theory focuses on a series of events that bring about or lead to some outcome and attempts to specify the generative mechanism that could produce the event series. Explanations in process theories take the form of theoretical narratives that account for how one event led to another and that one to another, and so on to the final outcome. A process theory is tested by identifying or measuring observable events and determining whether the relations among events are what would be expected if the generative mechanism in the process theory were in operation.

As depicted in Figure 18.1, a process theory based on functional decision-making theory would posit that groups make a decision by engaging in a series of phases of activity that begins with problem analysis, then transitions to criteria specification, then to alternative solution generation, to positive and negative evaluation of solution consequences (in light of criteria), and, finally, to a choice of the best decision (we will call this a “unitary sequence” model, as opposed to a “multiple sequence” model in which groups might pass through several different sequences of phases). The generative mechanism that determines the order in which these activities are undertaken by the group is the principle of logical necessity, as described above. The degree to which the group adhered to this order and also carried out the activities in each phase completely and effectively is what determines the ultimate effectiveness of the decision.

To test this process theory of group decision-making, we could record the discussions of the groups in the experiment just described. Performance of decision functions would be measured using a coding system that identified each function. To ascertain whether the group performed the functions in the specified order, the discussion could be divided into five segments, and the relative proportion of acts corresponding to each of the five functions (problem analysis, criteria specification, alternative generation, etc.) that occurred in each segment could be computed. If a group's process was generated by the hypothesized logical necessity model, then there should be relatively more problem analysis acts than any other type in the first segment, more criteria development acts than any other type in the second segment, more solution development acts than any other type in the third segment, and so on. The degree to which a group's relative proportion of acts in each segment was consistent with this pattern could be calculated. The process theory could then be tested by ascertaining whether consistency with the hypothesized pattern was positively related to decision-making effectiveness.
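The segment-proportion procedure just described can be sketched as follows. This is an illustrative sketch under assumptions of our own: the function names, abbreviations, and consistency score are invented for the example, not taken from functional theory itself.

```python
# Divide a coded discussion into five segments, compute the relative
# proportion of each decision function per segment, and score consistency
# with the hypothesized unitary sequence (each phase's function should be
# the modal act type in its segment).
from collections import Counter

# Hypothetical abbreviations: problem analysis, criteria specification,
# alternative generation, evaluation, choice
FUNCTIONS = ["PA", "CS", "AG", "EV", "CH"]

def segment_proportions(codes, n_segments=5):
    seg_len = len(codes) // n_segments
    props = []
    for i in range(n_segments):
        seg = codes[i * seg_len:(i + 1) * seg_len]
        counts = Counter(seg)
        props.append({f: counts[f] / len(seg) for f in FUNCTIONS})
    return props

def unitary_consistency(props):
    """Fraction of segments whose modal function matches the
    hypothesized order PA -> CS -> AG -> EV -> CH."""
    hits = sum(1 for i, p in enumerate(props)
               if max(p, key=p.get) == FUNCTIONS[i])
    return hits / len(props)

# A made-up discussion that follows the unitary sequence perfectly
codes = ["PA"] * 8 + ["CS"] * 8 + ["AG"] * 8 + ["EV"] * 8 + ["CH"] * 8
print(unitary_consistency(segment_proportions(codes)))  # → 1.0
```

The resulting consistency score (here 1.0 for a perfect fit) could then be correlated with decision-making effectiveness across groups, as the text suggests.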

We could go further and determine whether level of task complexity led to differences in group decision processes. For example, the groups engaged in a highly complex task might be expected to deviate more from the unitary sequence of phases in the logical necessity model than groups engaged in a task low in complexity. In this case we might test for whether low complexity groups adhered to the unitary sequence, whereas high complexity groups exhibited multiple sequences.

As this example illustrates, process theories are tested by identifying patterns in the temporal sequencing of group interaction and then evaluating whether these patterns are consistent with those that would be expected if the generative mechanism posited by the theory were in operation. This emphasis on the study of temporal patterns rather than relationships among variables is what distinguishes process research from variance research.

Analysis of group interaction processes is guided by four questions that aid in matching descriptions of patterns in group processes to the theories that might account for them. (1) What facets of interaction (types of events) should we measure? (2) Can we measure them reliably and validly? (3) What temporal patterns are there in the observed sequence of events? (4) What kind of processes account for these temporal patterns and what generates these processes? In the next section we will address the first two questions, which are well covered by Meyers and Seibold in this volume (Chapter 17), only insofar as required to set the stage for this chapter. Our primary focus will be methods for addressing questions 3 and 4, which will be discussed in subsequent sections.

Getting the Data: Coding Social Interaction

The first step in interaction analysis is to identify the events that make up the sequence by coding group interaction. Meyers and Seibold (this volume) give a detailed discussion of the coding process; see also Folger, Hewes, and Poole (1984), Neuendorf (2002), Krippendorff (2004), and Krippendorff and Bock (2009). It is important that the coding be reliable and the coding system valid, as discussed in their chapter.

Reliability and validity of codings are not just important for their own sake, they are required to meet the assumptions of interaction analysis methods. Sequential contingency analysis hinges on the identification of sequential relationships among acts, and it assumes that each category of act is coded with high reliability. Hence reliability must be determined in terms of individual categories in order to detect potentially serious sources of bias in sequential patterns (Hewes, 1985). Unitizing reliability is also critical, as Meyers and Seibold point out. Phasic analysis assumes that phases are indicated by coding categories, so if the categories are not reliable and valid, there will be error in phase identification.

With these concerns in mind, the remainder of this section will describe an example we will use throughout this chapter. Poole and Dobosh (2010) analyzed conflict management in two jury deliberation sessions, the first being the trial phase, in which the jury determines guilt or innocence on the various counts registered against the defendant, and the second the penalty phase, in which the jury determines the penalty that should be applied to the guilty party. The case involved murder and the defendant was eligible for the death penalty, so there was the potential for considerable tension and conflict in the deliberations (see SunWolf, 2007, for a more complete description of the trial).

Poole and Dobosh coded transcripts of the two jury deliberations with a coding system designed to capture the climate of decision-making interaction, the Group Working Relationships Coding System (GWRCS; Poole & Roth, 1989a). An abridged set of categories from this system is shown in Table 18.1. The GWRCS was derived through detailed analysis of group decision-making sessions coded with multiple category systems (Poole, 1983b). Rather than coding at the act level, units are defined in terms of a set time period such as 30 seconds or, as in this case, ½ page of transcript. A time interval instead of specific thought or idea units is employed because the climate of interaction can best be identified by considering how members act and react toward each other; hence, coding based on extended intervals of interaction in which there are interchanges among members is required. Poole and Dobosh found 99 per cent agreement on unitizing reliability. Interrater reliability measured with Cohen's kappa was 0.97. There is also evidence for the validity of the GWRCS because it has been shown to have systematic relationships to related categories in task and conflict coding systems (Poole, 1983a; Poole & Roth, 1989a; Poole, Holmes, & DeSanctis, 1991). The analysis in Poole and Dobosh employed both sequential and phasic analyses, and simplified versions of their analysis will be used to illustrate the methods discussed in subsequent sections.

Table 18.1 The Group Working Relationships Coding System (GWRCS)

1. Focused work: periods when members are primarily task-focused and do not disagree with one another, instead working together with little expression of conflict
2. Critical work: periods when members disagree with each other, but the disagreements are centered on ideas, and no opposing sides are differentiated
3. Opposition: periods in which disagreements are expressed through the formation of opposing sides; the existence of a conflict or disagreement is openly acknowledged during these periods
4. Open discussion: a mode of resolution of opposition that involves mutual engagement of parties in problem-solving discussions, negotiation, or compromise
5. Relational integration: periods when the group is not task-focused; these exhibit tangents, joking, and positive socioemotional interaction

Table 18.2 Distributional structure of the working relationships in Deliberations 1 and 2

  Deliberation 1 Deliberation 2
Focused work (FW) 188 (42.2 %) 34 (6.0 %)
Critical work (CW) 146 (32.7 %) 147 (26.3 %)
Opposition (OPP) 30 (6.7 %) 127 (22.5 %)
Open discussion (OD) 21 (4.7 %) 217 (38.8 %)
Integration (INT) 61 (13.7 %) 11 (2.0 %)

Before proceeding to consider various methods of analyzing group processes, let us first present the distributional structure of the various types of interaction that occurred in the two deliberations. Distributional structure refers to the total numbers of acts in each category (i.e., how the interaction is distributed across the categories). It collapses the process into a summary picture of the discussion, and thus provides useful background to process analysis, much as a table of descriptive statistics does for a more sophisticated statistical analysis. The distribution of acts across categories for the two deliberations is shown in Table 18.2. As the table shows, there is more focused work and integration in Deliberation 1 than in Deliberation 2. On the other hand, there is more opposition and open discussion in Deliberation 2 than in Deliberation 1. The analysis of processes presented below will shed some light on the meaning of these differences.

Identifying Patterns in Group Interaction

Recall that our third question was: What temporal patterns are there in the observed sequence of events? Broadly, there are two approaches to answering this question: sequential contingency analysis (which includes Markov chain analysis, semi-Markov processes, and lag-sequential analysis) and phasic analysis. The two genres of techniques differ in that sequential contingency analysis focuses on patterns among individual events at the micro-level, while phasic analysis focuses on larger segments of interaction with common functions. We will discuss them in turn and then in the next section turn to explaining the patterns that we identify.

Sequential contingency analysis

The most widely used sequential analytic technique is sequential contingency analysis, which was introduced quite early in the study of group behavior (Bales, 1950; Leik & Meeker, 1975). Sequential contingency analysis describes group interaction in terms of conditional probabilities, that is, the probability that one type of event or act will follow another type of event. For example, what is the probability that a focused work act will be followed by an opposition act? If that probability differs from chance, then there is a sequential contingency relationship between the events.

There are several methods of doing sequential analysis, including Markov process models (Hewes, 1980), an extension of them called semi-Markov process models (Hewes, Planalp & Streibel, 1980), and lag-sequential analysis (Bakeman & Gottman, 1997).

We will primarily discuss Markov process models since they are the foundation for the others. Markov process models take a set of possible events or acts (in this case the categories of the GWRCS) and model the odds that various sequences of acts occur in the group interaction. For example, what are the odds that a period of focused work will be followed by a period of critical work and then that there will be a period of opposition? Markov process models can be constructed to describe the probability that every possible sequence of events of various lengths (two, three, four, etc.) will occur. These models identify some sequences as highly probable and some as having low probability of occurrence. This depicts the pattern of group interaction at the micro-level, where acts are related to other acts preceding and following them.

To define Markov process models somewhat more formally, they are used to predict the probability distribution of a set of events (in this case occurrence of the five interaction types) at time t + k based on the probability distribution of the same set of events at time t. So in this case we will use the probability distribution of the five GWRCS acts at time t to predict the probability distribution of the five GWRCS acts at a future time t + k. The model is portrayed in a transition matrix (T) that indicates the probabilities of transition from one interaction type to another over time. Table 18.3 shows a transition matrix for the GWRCS categories. The types along the side represent the events at time 1 and the types along the top the events at time 2. So in Table 18.3, the element q12 represents the probability that a unit of focused work (at time 1) is followed by critical work (at time 2), while the element q11 represents the probability that a unit of focused work is followed by a second unit of focused work. Thus, this model describes sequential interdependencies among units, a description of the local structure of action in terms of which units tend to lead to (or be responded to by) other units. The probabilities in each row must sum to one, since each previous unit must be followed by some other unit (except for the very last unit in the sequence).

Table 18.3 First-order Markov transition matrix for the Group Working Relationships Coding System, predicting probability of transition from unit at time 1 to unit at time 2

  FW2 CW2 OPP2 OD2 INT2
FW1 q11 q12 q13 q14 q15
CW1 q21 q22 q23 q24 q25
OPP1 q31 q32 q33 q34 q35
OD1 q41 q42 q43 q44 q45
INT1 q51 q52 q53 q54 q55

FW, focused work; CW, critical work; OPP, opposition; OD, open discussion; INT, integration; subscript 1 = time 1; subscript 2 = time 2.
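The prediction step just described (using T to project the distribution of act types k units ahead) can be sketched numerically. The transition probabilities below are invented for illustration; they are not estimates from the jury data.

```python
# A minimal sketch of Markov prediction: given a transition matrix T
# (rows sum to one) and the distribution of act types at time t, the
# distribution at time t + k is obtained by multiplying by T, k times.
import numpy as np

T = np.array([  # rows: FW, CW, OPP, OD, INT at time t; columns: time t + 1
    [0.60, 0.20, 0.05, 0.05, 0.10],
    [0.25, 0.50, 0.15, 0.05, 0.05],
    [0.05, 0.20, 0.45, 0.25, 0.05],
    [0.20, 0.25, 0.10, 0.40, 0.05],
    [0.30, 0.20, 0.05, 0.05, 0.40],
])

p_t = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # start in focused work
p_t3 = p_t @ np.linalg.matrix_power(T, 3)  # distribution three units later
print(p_t3.round(3))
```

Because each row of T is a proper probability distribution, the projected distribution also sums to one at every lag; this is the sense in which T fully characterizes the process.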

Markov chain models are only valid representations of sequential dependencies if they satisfy three assumptions: order, stationarity, and homogeneity. For the model to fit, all three assumptions must be satisfied. We will briefly discuss how these assumptions were tested, referring the reader to the discussion in Hewes (1980) and Poole et al. (2000); for a more technical discussion see Bishop, Fienberg, and Holland (1975).

Before turning to the three critical tests, we will describe how the Markov transition matrix T is computed from the coded data. Since a Markov chain models the dependence of subsequent units on prior units, the sequence of coded units is lagged one unit and then the lagged sequence is cross-tabulated with the original sequence to produce a matrix in which the rows are the lagged interaction units and the columns are the original sequence, as shown in Table 18.3. Then each entry in the cross-tabulated matrix is divided by its row sum, giving the probability that the unit at time 1 will be followed by each of the five units at time 2 (note that these probabilities sum to one). This gives us a first-order matrix, in which each unit at time 2 is dependent only on the distribution of units at time 1. If a first-order matrix adequately describes the data, it means that each interval of interaction is influenced only by the interval immediately preceding it.
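The estimation procedure just described (lag, cross-tabulate, row-normalize) is straightforward to implement. The short coded sequence below is made up for illustration; it is not from the Poole and Dobosh data.

```python
# Estimate a first-order Markov transition matrix from a coded sequence:
# cross-tabulate each unit with its successor, then divide each row by
# its row sum.
import numpy as np

CATEGORIES = ["FW", "CW", "OPP", "OD", "INT"]
IDX = {c: i for i, c in enumerate(CATEGORIES)}

def transition_matrix(codes):
    counts = np.zeros((5, 5))
    for prev, curr in zip(codes, codes[1:]):  # pair each unit with its successor
        counts[IDX[prev], IDX[curr]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1  # avoid dividing by zero for unused categories
    return counts / row_sums

codes = ["FW", "FW", "CW", "CW", "OPP", "OD", "FW", "CW", "FW", "FW"]
T = transition_matrix(codes)
print(T[IDX["FW"], IDX["CW"]])  # probability that FW is followed by CW → 0.5
```

Each row of the resulting matrix is the conditional distribution of what follows that category, exactly as in Table 18.3.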

It is also possible that a unit is dependent on the distribution of the preceding two units, which represents a second-order process. In this case we would cross-tabulate the lag 2 sequence, the lag 1 sequence, and the original sequence (usually all possible combinations of the acts at time 1 and time 2 would be ranged along the side to form the rows, and the final act in the sequence would be across the top as the columns). Table 18.4 shows a partial second-order transition matrix for the GWRCS. The entire matrix would contain 125 cells. This could be repeated for cases where a unit is dependent on the distribution of the preceding three acts (a third-order process, which would have 625 cells), four acts (a fourth-order process, which would have 3125 cells!), and so on. Eventually the matrix would have so many cells that there would not be enough data to estimate the probabilities within the cells; studies suggest that the statistics for testing the model perform well with expected values as low as 1 per cell. So with five categories we would need at least 25 acts to estimate the matrix in Table 18.3, and it would be better to have more than that. As you can imagine, we would need a very long meeting to have enough acts to estimate the probabilities for the fourth-order Markov process.

Table 18.4 A partial table for a second-order Markov process

  FW3 CW3 OPP3 OD3 INT3
FW1–FW2          
FW1–CW2          
FW1–OPP2          
FW1–OD2          
FW1–INT2          
CW1–FW2          
CW1–CW2          
CW1–OPP2          
CW1–OD2          
CW1–INT2          
OPP1–FW2          
OPP1–CW2          
……          

FW, focused work; CW, critical work; OPP, opposition; OD, open discussion; INT, integration; subscript 1 = time 1;subscript 2 = time 2.

Order

The first assumption that must be tested to fit a Markov chain is that the process has a definite order, as defined in the previous paragraph. To fit models of different orders, it is necessary to be able to model dependencies between time 1, time 2, time 3, and so on. If the unit at time 2 is dependent only on the distribution of units at time 1, then the data fit a first-order process; if the unit at time 3 is dependent on the distribution of units at times 1 and 2, then the data fit a second-order process; if the unit at time 4 depends on the distribution of units at times 1, 2, and 3, then the data fit a third-order process, and so on.

It is important to have a hypothesis about the order of the process. For instance, our previous example of the unitary model of group decision-making (driven by logical necessity, in which problem analysis must come first, and only then can we specify criteria, and then we are in a position to develop solutions, etc.) suggests that we should have a first-order process. Each functional statement of problem definition or solution development is either preceded by an occurrence of that same type of event, indicating that the group is still performing the same function, or by one from the function immediately preceding it in the sequence (e.g., a criteria specification statement should follow a problem analysis statement). All other sequences of events should be much less likely. Lower-probability sequences would be solution development followed by criteria specification or problem analysis. This reasoning can be extended for each temporally adjacent pair of acts. Further, problem analysis should have a low probability of being preceded by any other functional event, since it theoretically begins the sequence.

The logic of the test for order is to compare the fit of models of successive order. The fit of the first-order model is compared to that of the zeroth-order model (a model that assumes no sequential structure in the event sequence). Then the fit of the second-order model is compared to that of the first-order model, the third-order model to the second-order model, and so on. A number of specific tests for the assumptions of Markov chains have been developed, but the most straightforward approach uses loglinear models (Bishop, Fienberg, & Holland, 1975). 1
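A simplified version of the first comparison (zeroth- versus first-order) can be sketched with a likelihood-ratio statistic. This is a bare-bones illustration of the logic, not the full loglinear machinery of Bishop, Fienberg, and Holland; the count matrix is invented, and scipy is assumed to be available.

```python
# Likelihood-ratio (G-squared) test of zeroth- vs first-order structure:
# under the zeroth-order model, transitions are independent of the prior
# act, so expected counts are row totals times the column marginals.
import numpy as np
from scipy.stats import chi2

counts = np.array([  # invented observed transition counts (lagged x original)
    [40, 12,  3,  2,  8],
    [15, 30, 10,  4,  2],
    [ 2, 10, 20, 12,  1],
    [ 5,  8,  6, 15,  1],
    [10,  5,  2,  1, 12],
], dtype=float)

row_tot = counts.sum(axis=1, keepdims=True)
col_prob = counts.sum(axis=0) / counts.sum()
expected = row_tot * col_prob  # expected counts under no sequential dependence

mask = counts > 0
g2 = 2 * (counts[mask] * np.log(counts[mask] / expected[mask])).sum()
df = (counts.shape[0] - 1) ** 2  # (k - 1)^2 for a k-category process
p_value = chi2.sf(g2, df)
print(round(g2, 1), df, p_value < 0.05)
```

A significant result (small p-value) means the first-order model fits better than the zeroth-order model; the same comparison logic is then repeated between each successive pair of orders.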

The order of the conditional probabilities is important both theoretically and descriptively. If the zero-order model fits best, then that indicates that there is no sequential pattern, a result that would run contrary to hypotheses concerning the group phases as illustrated above. It would also indicate that there was no reciprocity or connection between group members’ statements. First- and second-order processes indicate, among other things, that members are responding to one another. For instance, a second-order process suggests that there is a response to the first member's statement and that there is then a response to the first two statements. This suggests that members are paying attention to the discussion, because their response takes the preceding two statements into account. Third- and higher-order processes suggest even more complicated relationships. To our knowledge, however, no study of interaction structure has identified processes beyond the third order, even when there was a sufficient number of acts to test for higher orders. This may be due to (a) the nature of conversational rules, which specify that we should attend to immediately preceding statements, and (b) limitations in short-term memory that prevent people from holding more than a few statements in mind at a time.

Stationarity

The second assumption necessary for a Markov process to hold, stationarity, is that the same transition matrix T operates throughout the entire discussion. Logically, T can only be used to model the event series if it does not change over the course of time. The logic of the test for stationarity is as follows. First, the sequence of units is divided into shorter segments and transition matrices are computed for each segment. Typically the test is done by dividing the sequence in half; sometimes it is divided into thirds or quarters or even more segments; the only requirement is that the segments be long enough to estimate the Markov transition matrices for each. Second, these transition matrices are compared to each other. The null hypothesis for this test is that any differences between matrices are due to chance alone. If the null cannot be rejected, then the assumption of stationarity is supported. On the other hand, if we reject the null, it implies that the nature of dependencies among units changes as the discussion proceeds, and the Markov chain does not offer an adequate representation. Loglinear analysis can also be used to test for stationarity. 2

Homogeneity

The final assumption of a Markov process is that the same model holds for all subgroups of the sample; this is termed the assumption of homogeneity. Generally, homogeneity is evaluated by comparing the transition matrices of subgroups that might differ, with the null hypothesis that there is no difference between subgroups, that is, that the Markov process is homogeneous. Procedurally, the test for homogeneity would first partition the sample into meaningful subgroups. Contingency and transition matrices can then be computed for each subgroup and compared across subgroups. Again, loglinear analysis could be used to conduct these comparisons. For example, one might compare the sequential structure for groups who differ in terms of their adherence to the unitary sequence (logical necessity), as in our example earlier. Groups with highly complex tasks may have different transition matrices from those carrying out tasks lower in complexity. If so, representing group interaction with a single transition matrix is biased and obscures real processes.
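The stationarity and homogeneity tests rest on the same comparison: transition tables computed from different slices of the data (time segments for stationarity, subgroups for homogeneity) are tested against the pooled table. The sketch below is a deliberate simplification of that logic (it ignores row-wise conditioning and small-cell corrections); the function names and toy sequences are our own.

```python
import math
from collections import Counter

def transition_table(seq):
    """Observed transition counts for one coded sequence."""
    return Counter(zip(seq, seq[1:]))

def tables_g2(segments):
    """G^2 statistic comparing transition tables across segments.
    Under the null, each segment's counts are proportional to the
    pooled counts: small values support stationarity (segments are
    time slices of one discussion) or homogeneity (segments are
    sequences from different subgroups)."""
    tables = [transition_table(s) for s in segments]
    pooled = Counter()
    for t in tables:
        pooled += t
    n = sum(pooled.values())
    g2 = 0.0
    for t in tables:
        m = sum(t.values())
        for cell, obs in t.items():
            expected = pooled[cell] * m / n   # proportional-allocation null
            g2 += 2 * obs * math.log(obs / expected)
    return g2
```

For a stationarity check one would pass the two halves of a single discussion; for homogeneity, the sequences from the subgroups being compared.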

Details

Before wrapping up our discussion of the assumptions of sequential dependence analysis, an important question should be raised. In what order should these three assumptions be tested? They are like a house of cards in that if any one assumption is violated, the model does not hold. Despite the fact that all three of these assumptions can be tested separately, it is better to test them together because that would let us pick up possible interaction effects among these assumptions. To give an example of one possible interaction effect, the order of the process for high complexity groups might not be the order for the low complexity groups. In this case there is an interaction between the order and homogeneity assumptions. No group studies to our knowledge have tested for these interaction effects among the three assumptions, even though it would be simple to use loglinear models to do so (Hewes, 2009d).

An example

Table 18.5a shows the Markov process model for Deliberation 1 (D1, the trial phase) in the Poole and Dobosh (2010) study, while Table 18.5b shows the model for Deliberation 2 (D2, the penalty phase). Evaluating the assumptions of Markov models indicated that both deliberations were best modeled as a first-order Markov process that was stationary. The two deliberations, however, were not homogeneous, and so it was not possible to develop a single Markov model for both deliberations. The relative magnitudes of the probabilities in the two matrices give us some indication of the nature of the group interaction in each and also possible differences. Here we will venture a single interpretation and refer you to the full article for additional detail.

Table 18.5 First-order Markov transition matrices for jury
Deliberation 1

image

FW, focused work; CW, critical work; OPP, opposition;
OD, open discussion; INT, integration; subscript 1 = time 1;
subscript 2 = time 2.

D1 and D2 show clear differences in how conflict was handled by the group. Focused work was more likely to sustain itself in D1 than in D2. In the transition matrix for D1, the most probable transitions for focused work were into more focused work or into integration, which was likely to transition back into focused work. By contrast, the D2 transition matrix shows a 0.41 probability that focused work would transition to either critical work or opposition, and a smaller probability that it would transition to integration and from integration back into focused work. This suggests that D1 sustained cooperation more than D2. It also suggests that conflicts may have been suppressed in D1. Opposition in D1 was most likely to lead to tabling and open discussion, but the distributional structure shown in Table 18.2 suggests that tabling was more common than open discussion, suggesting avoidance of conflicts. In D2, on the other hand, there was a much greater probability that opposition would sustain itself and move into open discussion. This suggests that in D2, while there was more conflict, it was generally routed in constructive directions through open discussion.

This is an example of what we can learn through modeling sequential dependencies in group interaction. If you compare the tables some more, you may well see other differences, some of which are discussed in Poole and Dobosh (2010).

Other methods for studying sequential contingencies

The basic three assumptions for Markov chain analysis apply equally to several other forms of sequential analysis that allow scholars to address additional questions about group interaction. Semi-Markov processes model sequential dependencies like Markov processes do, but they also include the actual durations of periods between acts. In any group discussion there are lulls as well as periods when the interaction picks up speed. Semi-Markov analysis allows researchers to include this in the model. This enables researchers to study nonverbal effects, such as the pacing of group discussions, on group effectiveness. Semi-Markov models have been shown to model group decision-making better than regular Markov models do (Hewes et al., 1980).

Lag-sequential analysis has been used extensively in social psychology and other areas to model sequential dependencies. As its name implies, this technique is concerned with lags. For example, in a lag-one analysis we examine the odds that one type of event immediately precedes the same or some other type of event (a first-order process). In a lag-two analysis we look at the odds that some type of event is preceded by some type of event two intervals in the past (not a second-order process), and so on for higher lags. So, for example, we might find a two-lag relationship between problem analysis and criteria specification. Any effect the intervening act might have on the relation between the lagged act and its “partner” is disregarded by lag-sequential analysis. So the act in between the problem analysis act and the criteria development act is assumed to have no effect on their relation.
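The core lag computation is simple enough to sketch in a few lines. The hypothetical coding (P = problem analysis, C = criteria specification, S = solution development) and the function names below are our own, chosen to mirror the example in the text.

```python
from collections import Counter

def lag_table(seq, lag):
    """Counts of event pairs separated by exactly `lag` positions.
    Whatever happens in between is ignored -- the defining (and
    criticized) simplification of lag-sequential analysis."""
    return Counter(zip(seq, seq[lag:]))

def lag_prob(seq, antecedent, consequent, lag=1):
    """Estimated P(consequent at t+lag | antecedent at t)."""
    table = lag_table(seq, lag)
    base = sum(c for (a, _), c in table.items() if a == antecedent)
    return table[(antecedent, consequent)] / base if base else 0.0

# Hypothetical coded discussion cycling through the three functions:
acts = list("PCSPCSPCS")
```

In this toy sequence problem analysis is always followed one act later by criteria specification and two acts later by solution development, illustrating a lag-one and a lag-two relationship respectively.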

Advocates for lag-sequential analysis such as Bakeman and Gottman (1997) argue that lag-sequential analysis is simpler than Markov and semi-Markov process models because there is no need to assess the assumptions of order, homogeneity, and stationarity. We would, however, caution that this is not actually the case. Any probabilistic model of group behavior (or anything else for that matter) has to establish that it fits the data, and this requires tests for the assumptions of Markov models. Lag relationships are embedded in interactions that are modeled by Markov processes and therefore they are simply taken out of context when a singular lagged relationship is identified. It is more advisable, we believe, to fit the appropriate Markov model and interpret lagged relationships in terms of the larger structure of interaction.

Another form of sequential analysis is moderated dependency analysis. This relatively new method describes the connections between coded events in terms of the mental processes (cognitive, emotional, etc.) of individual group members. It assumes that the probability of an evaluation statement following an orientation statement, for example, is dependent on some underlying mental process, such as rule-following, that may be dynamic and undetectable through analysis of observed coded behavior alone. When the underlying mental process guiding members’ contributions to the group interaction is shared, as in the case of a shared mental model of how decisions should be made (Klimoski & Mohammed, 1994; Langan-Fox, 2003), then hypotheses like those we advanced previously can be tested using coded data only. One way to account for group interaction unfolding according to the unitary model is to posit that members have a shared mental model of logical reasoning that guides their behavior (Honeycutt & Poole, 1994; Pavitt & Johnson, 2001).

However, it is often the case that members do not share the same rules or mental processes. Some members might hold the logical reasoning model, while others hold a model that decisions are really occasions for bargaining. Such members could be expected to interject comments that are not “logical” given the current flow of discussion. So we would expect “fuzzy” phases or even periods in which no coherent phase structure occurred; Poole and Roth (1989a) observed exactly this type of noncoherent period in the phases that they mapped. Over time, members might adjust their models or rules and interaction might become more coherent.

Moderated dependency analysis would still focus on sequential dependencies, but it also requires researchers to have a model of what interaction would occur if members with a given set of rules participated in the discussion. Since this cannot be constructed from the data, it requires building a simulation model of what the dependencies would look like if members with the hypothesized mental models interacted with each other and gradually adjusted their models to one another (see Larson, this volume). Hewes (2009a, 2009b, 2009c, 2009d, 2009e) discusses moderated dependency analysis in more detail.

Limitations of sequential contingency analysis

The various forms of Markov techniques offer important insights into the nature of group interaction. Some limitations, however, must be noted. First, in groups, as opposed to dyadic interaction, an individual's remarks may not be in response to the immediately previous remarks (Hewes, 2009a). Since individual group members do not have complete control over when they get a turn to talk, they may be responding to events not in the immediate past. They may instead have waited for their turn to speak. Sequential contingency techniques cannot describe or test for this aspect of group interaction. Second, the sequential analysis techniques discussed so far assume influence between events moves “from left to right.” Events of type a at t1 are said to engender events of type b at some later point in the discussion, moving from left to right as we read a page in English, with each later event determined only by the preceding few events. An alternative approach supposes that group members are working from a socially shared script or mental model such as the logical sequence (cf. Bales & Strodtbeck, 1951; Hewes, 2009d; Poole & Baldwin, 1996). In this case, group communication might be viewed as unfolding based on group members’ knowledge of that script, to which they refer while discussing. This is a hierarchical, rather than a left-to-right, organization, in that members are not only reacting to preceding statements, but trying to shape the discussion based on this more general model. The large literature on shared mental models in groups suggests that this is a plausible alternative or addition to sequential structuring. If so, then phasic analysis is an appropriate alternative or addition.

Phasic analysis

The hypothesis that group processes such as decision making or long-term group development occur through a series of phases has a long and varied history (see, e.g., Bales & Strodtbeck, 1951; Fisher, 1970; Hirokawa, 1985; Poole, 1981, 1983b; Poole & Baldwin, 1996; Poole & Dobosh, 2010; Poole & Roth, 1989a, 1989b; for studies of decision-making phases; Wheelan, 2005, provides a good summary of work on group development). A phase is defined as a coherent period of group interaction and activity which serves an identifiable function, such as a period of problem definition, solution evaluation, integration or conflict.

As we have noted, many theories of group processes posit a unitary sequence of phases which all groups are presumed to follow, such as the normative sequence for group problem-solving (problem analysis, criteria definition, solution development, decision, implementation) or Tuckman's forming–storming–norming–performing sequence for long-term group development. As was previously discussed, the generative mechanism behind these sequences is logical necessity (a group has to form before conflicts can emerge, then it has to resolve its conflicts by coming to agreement on norms, etc.) or institutional rules. However, a good deal of empirical research indicates that group processes sometimes follow multiple paths (Poole & Baldwin, 1996; Poole & Dobosh, 2010). The unitary sequences seem to be normative ideals that group members try to follow in organizing their activities. In studies of multiple sequences in group decision-making, about 30 per cent of groups are observed to follow simple, normative sequences, 25 per cent focus mainly on solutions, while the remainder follow more complex decision paths, often recycling to previous phases or taking phases “out of order”. Both the nature of the group task and social factors such as group size and cohesiveness are related to the complexity of group decision paths, with social factors accounting for a greater share of the variance (Poole & Roth, 1989b).

How do we identify phases in group processes? Most scholars rely on direct analysis of group interaction. Phases are higher order constructs that are indicated by the behaviors that typically occur during the phase. So, for example, in an orientation phase we might expect to observe a lot of questions, suggestions about how to tackle the task and organize the group, and expressions of uncertainty. Relevant coding systems are typically utilized to code the group interaction, resulting in a string of coded acts. This string is then partitioned into phases based on some criterion, and the function of the phases is identified from the particular combinations of acts that occur within them.

One common way to partition strings of interaction into phases is to divide the string into the same number of equal-length segments as there are phases in the theoretical model. So Bales and Strodtbeck (1951) divided their coded discussions into three segments and then tested for expected differences between segments in acts that indicated the three phases of orientation, evaluation, and control. The pattern of differences supported their prediction, so Bales and Strodtbeck concluded that their model of problem-solving fitted the data.
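The segmentation strategy can be sketched as follows: partition the coded string into k near-equal segments and tabulate act frequencies within each, which can then be compared against the predicted phase profile. The function names and the toy codes (O, E, C for orientation, evaluation, control) are illustrative only.

```python
from collections import Counter

def equal_segments(seq, k):
    """Partition a coded sequence into k near-equal-length segments,
    the classic Bales-and-Strodtbeck-style division."""
    n = len(seq)
    cuts = [round(i * n / k) for i in range(k + 1)]
    return [seq[cuts[i]:cuts[i + 1]] for i in range(k)]

def segment_profiles(seq, k):
    """Frequency of each act type within each segment -- the raw
    material for testing predicted between-segment differences."""
    return [Counter(seg) for seg in equal_segments(seq, k)]
```

Support for the three-phase model would then show up as orientation acts concentrated in the first segment's profile, evaluation acts in the second, and control acts in the third.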

Partitioning group interaction into equal-length segments is a workable approach to phasic analysis. However, if groups follow several possible sequences of phases or if they “loop back” and revisit earlier phases (e.g., as they attempt to decide guilt or innocence, jurors realize they do not know the law that is at the basis of their decision, so they loop back to orientation to familiarize themselves with the law, then proceed with their decision-making), then simply dividing the group discussion into equal segments is likely to obscure complexities and lead to misleading conclusions. In addition, even if there is a single sequence of phases, if the phases are of different lengths (e.g., a long orientation phase and a short criteria specification phase), taking equal-length segments may also result in errors in identifying the sequence of phases.

To handle this problem, Poole and Holmes developed flexible phase mapping (Holmes & Poole, 1991; Poole & Roth, 1989a). Flexible phase mapping makes no assumptions about the number of types of sequences which will be found; only one type may be found, or many types may emerge, depending on the diversity of activities in the groups. Phase mapping is carried out through a series of incremental steps of data transformation and parsing. Coded interaction is transformed into phase indicators. The sequence of phase markers is then parsed into a phase map by an algorithm that basically “crawls” along the coded string and marks places at which indicators of a phase different from the current phase begin to occur. This yields a phase map with some long phases and, often, numerous short “phases” of four or fewer acts. The phase map is then smoothed to eliminate very short “phases” and to yield a map with more substantial phases. These steps are governed by precise rules for the data transformations at each step, and an application, WinPhaser, is available to conduct the analysis (Holmes, 2000; for more detail, see Holmes & Poole, 1991).
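The parsing and smoothing steps can be caricatured in a few lines. This is only a schematic of the idea, not WinPhaser's actual rules, which are more elaborate; the run-length representation, the merge-into-preceding-phase rule, and the `min_len` threshold are our own choices (the text's "four or fewer acts" motivates the default).

```python
def parse_phases(indicators):
    """Crawl along the string of phase indicators and start a new
    phase whenever the indicator changes, yielding (phase, length)
    runs -- a bare-bones version of the parsing step."""
    runs = []
    for ind in indicators:
        if runs and runs[-1][0] == ind:
            runs[-1] = (ind, runs[-1][1] + 1)
        else:
            runs.append((ind, 1))
    return runs

def smooth(runs, min_len=5):
    """Absorb runs shorter than min_len into the preceding phase --
    a crude stand-in for the actual smoothing rules."""
    out = []
    for phase, length in runs:
        if out and (length < min_len or out[-1][0] == phase):
            out[-1] = (out[-1][0], out[-1][1] + length)
        else:
            out.append((phase, length))
    return out
```

For example, a lone Opposition act inside a long run of Focused Work indicators would survive the parse but be absorbed during smoothing, leaving a single Focused Work phase.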

In their phasic analysis, Poole and Dobosh first used the codings as direct indicators of a phase. So Focused Work indicated a phase of Focused Work, Opposition an Opposition phase, and so on. With more granular coding systems, several different codes and even combinations of codes occurring within a few acts of each other can be used as phase indicators. For example, Poole and Roth (1989a) used combinations of codes from two contiguous acts as phasic indicators. They utilized a ten-category coding system and so there were 100 possible combinations, each of which was an indicator of one of seven phases.

Poole and Dobosh used a first pass of the phase definition algorithm to divide the two deliberations into a primary set of phases. In a second pass, the phase data were smoothed in two respects. First, short phases of two or fewer units that were surrounded by a single type of phase were merged into that phase. Second, where relatively short phases alternated, they were merged into a phase that was identified as the combination of the two. So, for example, if a 30-unit period of critical work was broken by several periods of focused work of three or four units, it was labeled “critical work-focused work.” This resulted in a layered analysis in which the basic phase sequence and then the smoothed phase sequence are displayed. It should be noted that in some cases flexible phase analysis will also demarcate periods when no coherent phases can be identified due to an admixture of disparate and unrelated phasic indicators; these are termed “null” or nonorganized periods. Owing to the nature of the GWRCS, there were no null periods in the deliberations.

In addition to delimiting phases, it is also often useful to mark “breakpoints” (Poole, 1983b), such as procedural discussions, votes, meals, or temporary adjournments. Breakpoints (Fisher & Stutman, 1987; Poole & DeSanctis, 1992; Poole & Roth, 1989a) have been shown to offer opportunities for groups to make discontinuous shifts in their approach to their tasks and relationships.

High-level phase maps of the two deliberations studied by Poole and Dobosh (2010) are shown in Figure 18.2. As the maps indicate, there were fewer and longer phases in the trial deliberation than in the penalty deliberation. Most of the trial deliberation had relatively low levels of conflict, as indicated by the predominance of Critical Work and Focused Work–Critical Work phases (the Focused Work–Critical Work phase consisted of a number of cycles of the two activities). There were only a few, short, open oppositions in the trial phase. By contrast, deliberations in the penalty phase exhibited a great deal of opposition (which signals open conflict and confrontation), which tended to be handled through open discussion. In general there was more variability among phases in the penalty deliberation than in the trial deliberation.

image

Key: FW=Focused Work; CW=Critical Work; OPP=Opposition; OD=Open Discussion; INT=Integration; V=Decision Point; combination phases are indicated by dashes, e.g., FW-CW

FIGURE 18.2 Phase sequences for D1 and D2 (normalized to the same length) with phases in proportion to their length in the original.

Differences between the two deliberations may be traceable to the different tasks they involved. In the trial phase, the jury had to make determinations on more than 20 counts (the points at which they did so are indicated by “v”s on the timeline). This enabled jurors to focus initially on those they could agree on, building a sense of progress and perhaps a degree of group cohesiveness. When there was opposition on a count, the jury tended to drop it and go on to an item they could agree on, then circle back to those they could not, saving the most contentious issue for last. This is reflected in the cycling in much of the discussion and in the brief integration periods which often marked a decision on a count, a sort of “celebration” before moving on to the next count. In the penalty phase there was a single decision – death penalty or an alternative life sentence – and the only alternative to discuss was whether there were mitigating circumstances, a matter tied directly to sentencing. Given different positions on the major task at hand, this deliberation exhibited open conflict throughout, as jurors grappled with their differences and tried to find solutions. Ultimately they could not agree and so the defendant received a life sentence instead of the death penalty.

One of the strengths of phasic analysis is that it enables us to get a “bird's eye” view of the decision process. Differences due to task, group composition, and other variables can be related to process in order to test their impact (see Poole & Roth, 1989b, for an example of this). But as the previous discussion indicates, phasic analysis can be even more informative if other information, such as the content of the discussion or key breakpoints, is combined with the information from the phasic analysis.

The jury deliberation example compares only two group decision processes, but a number of studies have mapped and compared larger samples (Nutt, 1984; Poole & Roth, 1989a). In such cases it is helpful to characterize the sequences so they can be empirically compared. At least three types of characterizations are useful. First, the phase sequences can be sorted into a typology that differentiates them qualitatively. For example, Poole and Roth (1989a) found three major types of decision paths in a sample of 47 diverse decisions: unitary, solution-centered, and complex. Typologies are often derived via the researcher's qualitative analysis of the paths, as in Nutt (1984). There are also empirical methods, such as optimal matching, that calculate the similarity of phase sequences (Abbott, 1990; Sankoff & Kruskal, 1983). This similarity data can then be statistically analyzed to derive clusters of similar paths that can form the basis for identification of types. These types can then become dependent variables, which are predicted by independent variables such as task or level of group cohesiveness (Poole & Roth, 1989b), or independent variables used to predict outcomes such as effectiveness and satisfaction (Nutt, 1984; Sambamurthy & Poole, 1992).
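At its core, optimal matching quantifies the similarity of two phase sequences as an edit distance: the minimum number of insertions, deletions, and substitutions needed to turn one sequence into the other. The unit-cost version below is a sketch of that idea; published optimal matching applications typically assign substantively motivated costs to the different operations, which this simplification omits.

```python
def optimal_matching(a, b):
    """Unit-cost edit (Levenshtein) distance between two phase
    sequences, computed with the standard dynamic program over
    two rolling rows of the cost table."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # delete x
                           cur[j - 1] + 1,           # insert y
                           prev[j - 1] + (x != y)))  # substitute x -> y
        prev = cur
    return prev[-1]
```

The resulting pairwise distances can then be fed to a clustering routine to derive the kinds of path typologies described above.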

A second useful type of variable consists of indices that measure the acts that occur in sequences, such as the level of idea development, the proportion of conflict in the sequence, or whether criteria defined early in a session were used to evaluate solutions later on. Many hypotheses about group processes involve predictions about relative levels of various acts. For example, Tuckman (1965) posited that groups that did not carry out their developmental phases (forming, storming, norming, and performing) adequately were destined to repeat them in the future in order to complete unfinished work in building the group. It would be possible to test this hypothesis by assessing the amount and proportion of time spent by a sample of groups in each phase of the group development sequence. Groups that devoted a good deal of their time to each phase and had no proportionately short phases would be expected to be more cohesive and effective than groups that gave short shrift to certain phases.
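As one illustration, the proportion of time a group spends in each developmental phase, the kind of index that could be used to test Tuckman's repetition hypothesis, can be computed directly from a phase map represented as (phase, length) runs. The representation and function name are our own.

```python
from collections import Counter

def phase_time_shares(phase_map):
    """Proportion of the total discussion spent in each phase type,
    given a phase map as a list of (phase, length) runs."""
    totals = Counter()
    for phase, length in phase_map:
        totals[phase] += length   # pool repeated visits to a phase
    n = sum(totals.values())
    return {phase: count / n for phase, count in totals.items()}
```

A sample of groups scored this way could then be compared on cohesiveness and effectiveness, with proportionately short phases flagging the short shrift Tuckman's hypothesis warns about.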

A third useful type of variable measures summary properties of sequences, such as how long they are, the degree to which they match a particular ideal type sequence, or the complexity of the sequences. Such variables can be used to characterize the process and also enable distinct event sequences to be compared. For example, Poole and Roth (1989b) assessed the “unitariness” of decision-making paths – that is, the degree to which they resembled the ideal decision-making sequence – and used this as a dependent variable, finding that task and level of group cohesiveness were good predictors of the degree to which a decision path fit the normative unitary sequence.

Phasic analysis, then, gives us a more macro-level view of group process. It traces the broader functional contours of a process and gives us purchase on the question of whether a process unfolds as it “should” according to normative models. Many normative models of processes follow the recipe: first you should do this, then you should do that, and so on. These are fairly high-level, coarse-grained statements in that they do not often tell one exactly how to do things at the level of execution (possibly because there is more than one way to “do this” well or correctly). Phases are fairly coarse-grained constructs, and so phase maps offer a connecting point between normative models of a process and how that process is actually executed by a group. Phasic analysis can shed light on the question of whether normative models of how group processes should unfold correspond to how they actually unfold.

How Do We Explain Observed Patterns?

Once we have identified one or more patterns in group interaction, the next question is: What kind of generative mechanisms and factors account for these temporal patterns? Having worked with and taught interaction analytic techniques for decades, we have observed that those versed in more traditional methods of experimental and survey analysis have some difficulty thinking of hypotheses related to temporal patterns. Most of us tend to think of variables as states of behavior, personality, attitudes, beliefs, values, intentions, or states of groups (cohesiveness, motivation, etc.) rather than as temporal patterns or sequences of those states, yet learning to think sequentially is crucial to analyzing group interaction effectively. One must be able to ask and answer why-questions to explain sequential patterns or their role in determining group outcomes. Why do phases come in some specified order, or in no particular order? Why do cycles occur and what, if anything, causes them to stop or start? Why do nonstationary transition probabilities, changes in the order of those processes, or their nonhomogeneity occur?

Many studies simply identify patterns empirically, as described in the previous section, and use ex post facto narratives to explain them. This is fine as a first step, but not as a last one. The next step is to identify the causal factors and/or generative mechanisms that engender those patterns. In the remainder of this section we discuss how to advance and specify generative mechanisms to explain patterns in interaction.

One approach to explaining interaction patterns is to identify the “motors” that shape the overall pattern of group interaction. Poole and Van de Ven (2004) and Van de Ven and Poole (1995) defined four basic generative mechanisms that explain how processes, including group interaction processes, unfold over time. Two of these have been alluded to already in our discussion of the unitary and multiple sequence models of phase development, and here we will briefly define the four models in more general terms.

First, a life-cycle model depicts the process of change as progressing through a necessary sequence of phases. The order and content of these phases is prescribed and regulated by logic orby norms prefigured at the beginning of the sequence. The unitary sequence model we have discussed is a life-cycle model.

A teleological model views group process as the product of a cycle in which members formulate goals, then take action based on those goals, evaluate the results, and modify their actions or goals in order to stay on track toward their desired end (McGrath & Tschan, 2004). The observed sequence of interaction that emerges from this cycling process will vary depending on the adjustments that are made by members as the session proceeds. The multiple sequence model discussed previously is a product of a teleological model. Members have a goal (e.g., to make a decision) and mental models of how to reach that goal. If all members share the same goal and model, then the group discussion will follow the unitary sequence (provided the model held by members is the logical one). If the group runs into a problem or a conflict breaks out, then the sequence may be more complex. The sequence will also be complex if members have different goals or mental models.

A dialectical model of development is generated by conflicts that emerge when a thesis and antithesis collide to produce a synthesis, which in time becomes the thesis for the next cycle of a dialectical progression. For example, a group may become too cohesive and integrated (thesis), which results in the inability to think creatively in order to craft an effective response to some problem the group faces (antithesis – note it is in contradiction with the thesis) (Sawyer, 2007). The group's attempt to remain in its comfort zone of integrated cohesiveness lowers its creativity, which results in a major failure. As a result, several members are terminated and the group has to reconstitute itself so that it can be more flexible and creative while maintaining trusting relationships (synthesis). The conflict that drives the dialectical model is not simple interpersonal conflict or arguments over ideas. It stems from tensions and oppositions between conflicting demands on the group, such as the need to integrate versus the need for members to have some independence, or the need to have structure versus the need to adapt to changing circumstances (Johnson & Long, 2002).

Finally, an evolutionary model of development consists of a repetitive sequence of variation, selection, and retention events in the group (for an application of this thinking in organizations, see Monge & Poole, 2008). We might, for example, consider solution options as generated and selected through an evolutionary process. Ideas would be suggested and would compete against other ideas in the group's discourse. Variation in an idea would occur during the discussion and there might even be a sort of cross-fertilization as ideas were combined and extended. Some ideas would drop out (be selected out), while others would thrive and be retained by the group in its final solution and in the memory of members. When a similar situation occurs in the future, the same idea might be revived as part of the population of ideas the group was entertaining.

Of these four models, only the first two have been actively applied to the study of group processes, but it is clear that the remaining two could be applied. Poole et al. (2000) defined empirical criteria that could be used to determine if a given model held. For example, for a life-cycle motor to hold, there must be a single, invariant sequence of phases and we must be able to identify the logic, norm, shared mental model, or other structure that specifies and enforces the sequence. For a dialectical model to hold, there must be two competing demands that can be empirically documented and, usually, subgroups within the group that emphasize them. The resolution of the tension cannot be the “victory” of one or the other demand, but must instead be something novel that may contain aspects of the demands, but represents an emergent result. The various criteria defined in Poole et al. (2000) refer in part to properties of the interaction patterns, but also require additional evidence. The moderated dependency sequential model outlined above incorporates these directly into the developmental model, but for the other models defined by Poole and colleagues, the other factors operate in addition to the sequential model.

It is also possible that combinations of the four motors may influence group processes. For example, a group might apply a shared mental model of the logical sequence as it made a decision, resulting in a unitary sequence, but the content of the decision might be shaped by an evolutionary process in which various ideas compete for acceptance in each phase. In this case the life-cycle model sets the overall course of the decision process, while an evolutionary model governs specific micro-level decisions that determine the content of the final decision.

An alternative to fitting entire models is to posit hypotheses about variables that might influence sequential dependencies. This is a less ambitious strategy that can yield more definitive results with less effort and time. One type of hypothesis is to make theoretical predictions that explain why a sequential process should be homogeneous, and/or of some specified order, and/or stationary. Poole and Dobosh expected that the two deliberations they studied would be second order, that is, that the sequential structure of the discussions would consist of three contiguous statements. They posited this because earlier work by Karl Weick (1979) and others proposed that the double interact, a sequence composed of an act, a response, and a response to that response, was the basic organizing unit of interaction. It turned out, as we saw above, that this hypothesis was not borne out.
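As a sketch of how a second-order hypothesis like this could be checked, the stdlib-Python function below (using an invented two-code sequence in place of real coded data) computes the G² goodness of fit of a first-order model to the table of overlapping triples. Under that model the first and third units are conditionally independent given the middle unit, so a large G² favors a second-order structure such as the double interact.

```python
import math
from collections import Counter

def first_order_fit(seq):
    """G^2 fit of the first-order model [U1U2][U2U3] to the table
    of overlapping triples (U1, U2, U3). Expected counts follow
    from conditional independence of U1 and U3 given U2; a large
    G^2 (chi-square with k(k-1)^2 df) favors a second-order model."""
    triples = Counter(zip(seq, seq[1:], seq[2:]))
    n_ij, n_jk, n_j = Counter(), Counter(), Counter()
    for (a, b, c), cnt in triples.items():
        n_ij[(a, b)] += cnt        # (U1, U2) margin
        n_jk[(b, c)] += cnt        # (U2, U3) margin
        n_j[b] += cnt              # U2 margin
    g2 = 0.0
    for (a, b, c), obs in triples.items():
        expected = n_ij[(a, b)] * n_jk[(b, c)] / n_j[b]
        g2 += 2 * obs * math.log(obs / expected)
    k = len(set(seq))
    return g2, k * (k - 1) ** 2    # statistic, degrees of freedom

first = first_order_fit(list("AB" * 20))    # strictly first-order pattern
second = first_order_fit(list("AAB" * 20))  # needs two units of history
```

On the alternating sequence the first-order model fits perfectly (G² of zero), while the period-three sequence, whose next unit depends on the previous two, yields a large G², signaling second-order structure.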

A second type of hypothesis is to make theoretical predictions that anticipate violations of the assumptions of a sequential model that result from static input values. For example, Poole and Dobosh expected the Markov processes and phase models for D1 and D2 to differ because in D1 the jury had to make decisions about more than 15 individual counts against the defendant, whereas in D2 it had to make only two decisions: whether to impose the death penalty and, if not, what penalty to impose on the defendant. The structural differences between tasks should make the two sequences nonhomogeneous. Poole and Dobosh also expected D1 to be nonstationary because of all the decisions required of the group, but it turned out it was in fact stationary.
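A homogeneity prediction of this kind can be sketched as a G² test in stdlib Python. The function below (with invented two-code sequences standing in for real coded data) tests whether two sequences share the same lag-1 transition structure, that is, whether the next unit is conditionally independent of which group produced it, given the prior unit.

```python
import math
from collections import Counter

def homogeneity_g2(seq_a, seq_b):
    """G^2 test of homogeneity across two coded sequences. Under
    the homogeneous model the next unit (U2) is conditionally
    independent of the group, given the prior unit (U1); a large
    G^2 (chi-square with k(k-1) df for two groups) rejects it."""
    counts = Counter()                      # (group, prior, next)
    for g, seq in enumerate((seq_a, seq_b)):
        for a, b in zip(seq, seq[1:]):
            counts[(g, a, b)] += 1
    n_ij, n_ig, n_i = Counter(), Counter(), Counter()
    for (g, a, b), c in counts.items():
        n_ij[(a, b)] += c                   # margin over groups
        n_ig[(a, g)] += c                   # margin over next units
        n_i[a] += c                         # margin over both
    g2 = 0.0
    for (g, a, b), obs in counts.items():
        expected = n_ij[(a, b)] * n_ig[(a, g)] / n_i[a]
        g2 += 2 * obs * math.log(obs / expected)
    k = len(set(seq_a) | set(seq_b))
    return g2, k * (k - 1)                  # statistic, df (2 groups)

same = homogeneity_g2(list("AB" * 20), list("AB" * 20))
diff = homogeneity_g2(list("AB" * 20), list("AABB" * 10))
```

Identical transition structures yield a G² of zero, while sequences with clearly different transition probabilities yield a large G², which would support a nonhomogeneity prediction like Poole and Dobosh's.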

Third, researchers can make theoretical predictions about dynamic variables that cause changes in stationarity and order. Rather than say that groups differ in the degree of stationarity based on some property that precedes group discussion, for instance, one might posit that sequential patterns of interaction may change as goals are met or not during the interaction or as the psychological or social-emotional states of the group change. For example, Poole and Dobosh might have posited that the longer it takes a jury to make progress on a decision, the lower the morale of that group (cf. Hewes, 2009d; SunWolf, 2007). As morale drops, jury members might disengage from one another, lowering the order of the sequential process from a first-order process, where members are influencing each other, to a zero-order process, where individuals are not reacting to one another. This would make the deliberation nonstationary. To provide evidence that morale was related to the nonstationarity, the jury's interaction could be coded for statements that reflect levels of morale. The same reasoning could be applied to issues of order and interactions between those two assumptions and homogeneity.
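A minimal sketch of how such a drop in order might show up in data: split a coded sequence in half and compute a lag-1 dependency G² within each half. A large statistic early and a near-zero one late would be consistent with a shift from a first-order toward a zero-order process. The sequences below are invented to make the contrast visible.

```python
import math
from collections import Counter

def lag1_dependency(seq):
    """G^2 statistic for lag-1 dependency within one segment
    (a G-test of independence on the transition table). Values
    near zero suggest a zero-order process; large values suggest
    first-order dependency (compare to chi-square, (k-1)^2 df)."""
    n = len(seq) - 1
    pairs = Counter(zip(seq, seq[1:]))
    row, col = Counter(seq[:-1]), Counter(seq[1:])
    g2 = 0.0
    for (a, b), obs in pairs.items():
        expected = row[a] * col[b] / n
        g2 += 2 * obs * math.log(obs / expected)
    return g2

# Invented deliberation: tight question-answer coupling early,
# unresponsive turn-taking (no systematic lag-1 pattern) later.
early = list("QA" * 20)
late = list("QQAQAAQQAAQAQQAA" * 3)
```

With two codes the critical chi-square value at one degree of freedom is 3.84, so the early segment's large G² indicates first-order dependency while the late segment's small G² is consistent with a zero-order process, the kind of within-discussion change the morale hypothesis predicts.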

The approaches discussed in this section are certainly not the only possible ones. What they do illustrate, though, is how far researchers can go beyond simply describing sequences. They also illustrate how sequential techniques can be integrated with theory and how sequential thinking can produce new directions for small group research.

Conclusion

Interaction analysis methodologies offer a broad range of possibilities for studying group interaction patterns. As this chapter shows, the logic behind interaction analysis differs from that behind the analysis of experiments and surveys. Whereas the latter rely on traditional statistical techniques such as ANOVA, analysis of interaction processes requires describing and explaining patterns over time. To identify and explain patterns adequately requires special methods such as the various types of sequential dependency analysis and phasic analysis. These methods, while not that difficult to master, require us to learn different ways of thinking about group processes. We believe, however, that the effort is well worth it, because these methods can open up a whole new world of group dynamics.

Notes

  • 1 To test for order using loglinear analysis, we would fit models incorporating three terms, U1, U2, and U3, where U1 represents the time 1 units, U2 the time 2 units, and U3 the time 3 units. If a zeroth-order process holds, then the best fitting model in a hierarchical loglinear analysis would be [U1][U2][U3]; the notation for this model indicates that units at all lags are independent of each other. For the first-order process the hypothesized model is [U1U2][U2U3], which indicates that each unit is interdependent only with the immediately preceding unit. For the second-order process the hypothesized model is [U1U2U3][U1U2][U2U3], which indicates that units at time 3 are dependent on the distributions of units at times 1 and 2; the first-order terms [U1U2] and [U2U3] also are included in the model because second-order processes also include first-order dependencies.
  • 2 To test for stationarity using loglinear analysis, the first step is to segment the sequence of interaction units (henceforth referred to simply as a “sequence”) into several pieces and compute a transition matrix for each segment as described above. Once this has been done, a statistical test for the assumption of stationarity can be conducted by fitting a loglinear model for the U1 × U2 × T contingency table formed by stacking the contingency matrices for the three segments according to temporal order. In this table U1 represents the rows (prior events, time 1) of the table, U2 the columns (second events, time 2) of the table, and T (time) the number of segments (where segments are arrayed in the order of temporal occurrence). The model to be fit is

L = [U1U2][U1T]

This model is missing the term [U1U2T], which would indicate dependency between time and the structure of the contingency matrices, that is, nonstationarity. The term [U1T] is included in the model because the row probabilities must sum to 1 for each time period; this creates an artifactual association between time and row values that this term models. If the model fits the data, then the assumption of stationarity is supported.
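The stationarity test in this note can be sketched in stdlib Python (with an invented two-code sequence standing in for real coded data). The expected counts below follow directly from the model [U1U2][U1T]: given the prior unit, the next unit is independent of the time segment.

```python
import math
from collections import Counter

def stationarity_g2(seq, n_segments=3):
    """G^2 test of stationarity: fit [U1U2][U1T] to the stacked
    transition tables of the segments. A large G^2 (chi-square,
    k(k-1)(T-1) df) means the missing [U1U2T] term is needed,
    i.e., the process is nonstationary. Transitions straddling
    segment boundaries are dropped for simplicity."""
    cut = len(seq) // n_segments
    counts = Counter()                      # (segment, prior, next)
    for t in range(n_segments):
        end = (t + 1) * cut if t < n_segments - 1 else len(seq)
        part = seq[t * cut:end]
        for a, b in zip(part, part[1:]):
            counts[(t, a, b)] += 1
    n_ij, n_it, n_i = Counter(), Counter(), Counter()
    for (t, a, b), c in counts.items():
        n_ij[(a, b)] += c                   # [U1U2] margin
        n_it[(a, t)] += c                   # [U1T] margin
        n_i[a] += c                         # U1 margin
    g2 = 0.0
    for (t, a, b), obs in counts.items():
        expected = n_ij[(a, b)] * n_it[(a, t)] / n_i[a]
        g2 += 2 * obs * math.log(obs / expected)
    k = len(set(seq))
    return g2, k * (k - 1) * (n_segments - 1)

stationary = stationarity_g2(list("AB" * 30))
shifting = stationarity_g2(list("AB" * 15 + "A" * 15 + "B" * 15), 2)
```

The uniformly alternating sequence fits the stationary model exactly, while the sequence whose transition structure changes midway yields a large G², which would lead us to reject stationarity.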

References

Abbott, A. (1990). A primer on sequence methods. Organization Science, 1, 375–392.

Arrow, K. J., Karlin, S., & Suppes, P. (1960). Mathematical methods in the social sciences. Stanford, CA: Stanford University Press.

Bakeman, R., & Gottman, J. M. (1997). Observing interaction. Cambridge: Cambridge University Press.

Bales, R. F. (1950). Interaction process analysis. Reading, MA: Addison-Wesley.

Bales, R. F., & Strodtbeck, F. L. (1951). Phases in group problem-solving. Journal of Abnormal and Social Psychology, 46, 485–495.

Bishop, Y. M., Fienberg, S. E., & Holland, P. W. (1975). Discrete multivariate analysis: Theory and practice. Cambridge, MA: MIT Press.

Coleman, J. A. (1964). Introduction to mathematical sociology. New York: Free Press.

Fisher, B. A. (1970). Decision emergence: Phases in group decision-making. Communication Monographs, 37, 53–66.

Fisher, B. A., & Stutman, R. K. (1987). An assessment of group trajectories: Analyzing developmental breakpoints. Communication Quarterly, 35, 105–124.

Folger, J. P., Hewes, D. E., & Poole, M. S. (1984). Coding social interaction. In B. Dervin & M. Voight (Eds.) Progress in communication sciences (Vol. 4, pp. 115–161). New York: Ablex.

Gouran, D. S., & Hirokawa, R. Y. (1996). Functional theory and communication in decision-making and problem-solving groups: An expanded view. In R. Y. Hirokawa & M. S. Poole (Eds.), Communication and group decision-making (2nd ed., pp. 55–80). Thousand Oaks, CA: Sage.

Hewes, D. E. (1980a). Analyzing social interaction: Some excruciating models and exhilarating results. In D. I. Nimmo (Ed.) Communication yearbook IV (pp. 123–141). New Brunswick, NJ: Transaction Press.

Hewes, D. E. (1980b). Stochastic modeling of communication processes. In P. R. Monge & J. N. Cappella (Eds.), Multivariate techniques in human communication research (pp. 393–427). New York: Academic Press.

Hewes, D. E. (1985). Systematic biases in coded social interaction data. Human Communication Research, 11, 554–574.

Hewes, D. E. (2009a). The influence of communication processes on group outcomes: Antithesis and thesis. Human Communication Research, 35, 249–271.

Hewes, D. E. (2009b). Dual-level connectionist models of group communication: Formalism, justifications and unanswered questions. Paper presented at the National Communication Association Convention, Chicago, IL.

Hewes, D. E. (2009c). Developmental processes in group decision making: A dual-level connectionist theory. Paper presented at the National Communication Association Convention, Chicago, IL.

Hewes, D. E. (2009d). Emotional dynamics in group communication: A dual-level connectionist theory. Paper presented at the National Communication Association Convention, Chicago, IL.

Hirokawa, R. Y. (1985). Discussion procedures and decision-making performance: A test of a functional perspective. Human Communication Research, 12, 203–224.

Hirokawa, R. Y. (1990). The role of communication in group decision-making efficacy: A task contingency perspective. Small Group Research, 21, 190–204.

Hollingshead, A. B., Wittenbaum, G. M., Paulus, P. B., Hirokawa, R. Y., Ancona, D. G., Peterson, R. S., Jehn, K. A., & Yoon, K. (2005). A look at groups from the functional perspective. In M. S. Poole & A. Hollingshead (Eds.) Theories of small groups: Interdisciplinary perspectives (pp. 21–63). Thousand Oaks, CA: Sage.

Holmes, M. (2000). WinPhaser, program available from Michael Holmes, Ball State University, Muncie, IN.

Holmes, M., & Poole, M. S. (1991). The longitudinal analysis of interaction. In B. Montgomery & S. Duck (Eds.) Studying interpersonal interaction (pp. 286–302). New York: Guilford.

Honeycutt, J., & Poole, M. S. (1994). Procedural schemata for group decision-making. Paper presented at the National Communication Association Convention, New Orleans, LA.

Klimoski, R., & Muhammad, S. (1994). Team mental model: Construct or metaphor? Journal of Management, 20, 403–437.

Krippendorff, K. (2004). Content analysis: An introduction to its methodology (2nd ed.). Thousand Oaks, CA: Sage.

Krippendorff, K., & Bock, M. A. (2009). The content analysis reader. Los Angeles: Sage.

Johnson, S. D., & Long, L. M. (2002). “Being a part and being apart”: Dialectics and group communication. In L. R. Frey (Ed.) New directions in group communication (pp. 25–42). Thousand Oaks, CA: Sage.

Langan-Fox, J. (2003). Skill acquisition and the development of a team mental model. In M. A. West, D. Tjosvold, & K. G. Smith (Eds.) International handbook of organizational teamwork and cooperative working (pp. 321–359). London: Wiley.

Leik, R. K., & Meeker, B. F. (1975). Mathematical sociology. Englewood Cliffs, NJ: Prentice-Hall.

Mackie, P. (1995). Event. In T. Honderich (Ed.) The Oxford companion to philosophy (p. 253). New York: Oxford University Press.

McGrath, J. E. & Altman, I. (1966). Small group research: A synthesis and critique of the field. New York: Holt, Rinehart & Winston.

McGrath, J. E., & Tschan, F. (2004). Dynamics in groups and teams: Groups as complex action systems. In M. S. Poole & A. H. Van de Ven (Eds.) Handbook of organizational change and innovation (pp. 50–72). New York: Oxford University Press.

Mohr, L. (1982). Explaining organizational behavior. San Francisco: Jossey-Bass.

Monge, P. R., & Poole, M. S. (2008). The evolution of organizational communication. Journal of Communication, 58(4), 679–692.

Nutt, P. C. (1984). Types of organizational decision processes. Administrative Science Quarterly, 29, 414–450.

Orlitsky, M., & Hirokawa, R. Y. (2001). To err is human, to correct for it divine: A meta-analysis of research testing the functional model of group decision-making effectiveness. Small Group Research, 32, 313–341.

Pavitt, C. & Johnson, K. K. (2001). The association between group procedural MOPs and group discussion procedure. Small Group Research, 32, 595–623.

Poole, M. S. (1981). Decision development in small groups I: A test of two models. Communication Monographs, 48, 1–24.

Poole, M. S. (1983a). Decision development in small groups: II. A study of multiple sequences in group decision making. Communication Monographs, 50, 206–232.

Poole, M. S. (1983b). Decision development in small groups III: A multiple sequence theory of decision development. Communication Monographs, 50, 321–341.

Poole, M. S. (2007). Generalization in process theories of communication. Communication Methods and Measures, 1, 181–190.

Poole, M. S., & DeSanctis, G. (1992). Microlevel structuration in computer-supported group decision-making. Human Communication Research, 19, 5–49.

Poole, M. S., & Dobosh, M. (2010). Exploring conflict management processes in jury deliberations through interaction analysis. Small Group Research, 41, 408–426.

Poole, M. S., Holmes, M., & DeSanctis, G. (1991). Conflict management in a computer-supported meeting environment. Management Science, 37, 926–953.

Poole, M. S., & Roth, J. (1989a). Decision development in small groups IV: A typology of group decision paths. Human Communication Research, 15, 323–356.

Poole, M. S., & Roth, J. (1989b). Decision development in small groups V: Test of a contingency model. Human Communication Research, 15, 549–589.

Poole, M. S., Seibold, D. R., & McPhee, R. D. (1985). Group decision-making as a structurational process. Quarterly Journal of Speech, 71, 74–102.

Poole, M. S., Seibold, D. R., & McPhee, R. D. (1996). The structuration of group decisions. In R. Y. Hirokawa & M. S. Poole (Eds.), Communication and group decision-making (2nd ed., pp. 114–146). Thousand Oaks, CA: Sage.

Poole, M. S. & Van de Ven, A. H. (2004). Theories of organizational change and innovation processes. In M. S. Poole & A. H. Van de Ven (Eds.). Handbook of organizational change and innovation (pp. 374–397). New York: Oxford University Press.

Poole, M. S., Van de Ven, A. H., Dooley, K., & Holmes, M. E. (2000). Organizational change and innovation processes: Theory and methods for research. New York: Oxford University Press.

Rescher, N. (1996). Process metaphysics: An introduction to process philosophy. Albany, NY: State University of New York Press.

Sambamurthy, V. & Poole, M. S. (1992). The effects of variations in capabilities of GDSS designs on management of cognitive conflict in groups. Information Systems Research, 3, 224–251.

Sankoff, D., & Kruskal, J. B. (Eds.). (1983). Time warps, string edits, and macromolecules: The theory and practice of sequence comparison. Reading, MA: Addison-Wesley.

Sawyer, K. (2007). Group genius: The creative power of collaboration. New York: Basic Books.

SunWolf (2007). Practical jury dynamics 2. Charlottesville, VA: Lexis-Nexis.

Teichmann, R. (1995). Process. In T. Honderich (Ed.) The Oxford companion to philosophy (p. 721). New York: Oxford University Press.

Tuckman, B. W. (1965). Developmental sequence in small groups. Psychological Bulletin, 63, 384–399.

Wheelan, S. A. (2005). The developmental perspective. In S. A. Wheelan (Ed.) The handbook of group research and practice (pp. 119–132). Thousand Oaks, CA: Sage.

Weick, K. (1979). The social psychology of organizing. Boston, MA: Addison-Wesley.
