17

CODING GROUP INTERACTION

Renee A. Meyers

UNIVERSITY OF WISCONSIN AT MILWAUKEE

David R. Seibold

UNIVERSITY OF CALIFORNIA–SANTA BARBARA


In the early 1980s, several group communication scholars at the University of Illinois (Dean Hewes, Bob McPhee, Scott Poole, and the second author) and their graduate students (including the first author) sought to peer deeper into “the black box” of group interaction processes. Their work was both theoretical (Structuration Theory perspective: Poole, Seibold, & McPhee, 1985; Socio-Egocentric Model: Hewes, 1986) and empirical (coding group argument, group development, group decision-making, group valence). Graduate courses, research team projects, and collegial conversations centered on how best to analyze and explain communicative processes in decision-making groups. The methodological solution to wide-ranging questions in this vein almost always involved coding group interaction. The questions could not be answered adequately by predicting from inputs to outputs alone, by querying group participants about what they would say (or said they did) if they were in a group setting, or by asking students to write communicative responses to scenarios. To understand the complexity and seeming ambiguity of group communicative processes, it was necessary to scrutinize the data produced during members’ discussions.

This commitment to coding group interaction has framed our research program for over two decades. We still believe that one of the best ways to understand group communication is to code the discourse. Knowing that the majority almost always wins in group decision-making interaction (Davis, 1973) provides only half a picture. We need to know why – exactly how is it that they accomplish “winning” communicatively? And if the minority should prevail, we need to know how they persuaded the majority to change its stance (Gebhardt & Meyers, 1995; Meyers, Brashers, & Hanner, 2000). Similarly, as Poole (Poole, 1981, 1983a, 1983b; Poole & Baldwin, 1996) has so elegantly demonstrated, we cannot assume that groups go through predictable stages of development until we have thoroughly examined how development is constituted in, and through, the group's interaction.

In short, coding group interaction provides a method for examining deeper-level communicative processes that help explain surface-level input–output predictions. A useful way of thinking about these layers is offered by Prosser and Trigwell (1999). They posit that both surface- and deep-level comprehension are necessary for learning about a phenomenon, and that the greatest insight is realized when the two are taken together. For example, when we discover that an input predicts a group output (e.g., group member expertise predicts a group outcome; Woolley, Gerbasi, Chabris, Kosslyn, & Hackman, 2008), we understand this phenomenon primarily on a surface level. That is, we know that this prediction will hold true much of the time. However, if we wish to understand more fully how experts in the group (those members with more knowledge or skill) accomplish this task, we must look below the surface and rigorously examine what occurs in their group decision-making discussions. What we might find are several different interactive paths that group experts and nonexperts co-construct to produce outcomes. As we discover and uncover these paths, and determine their effectiveness, we build both theoretical and practical knowledge to explain the initial input–output relationship.

Researchers who code group interaction do so because they ask questions that require rigorous observation of group communicative processes. How do group members argue? How do groups communicate in conflict? How do group members share (or not) information? How are participative comments connected (or not) to other comments in group discussion? How do members influence others with verbal messages in the course of the group's symbolic exchange? Moreover, they do this work because they want to know how communicative processes impact (or do not impact) group outcomes.

In this chapter, we describe how to code group interaction. We focus primarily on content analytic methods (rather than qualitative coding procedures) since these are the methods we most often employ in our own research. We provide detailed descriptions of the coding procedures so that readers can replicate them. To prevent induction of insomnia, however, we also highlight these descriptions with examples of our own experiences (successes and failures) using these methods. After explaining the mechanics of how to code group interaction, we discuss its benefits and drawbacks, and we close by identifying innovative practices for future research. In many ways our chapter can be read as a companion to the treatment of quantitative coding of negotiation behavior provided by Weingart, Olekalns, and Smith (2006).

Procedures for Coding Group Interaction

As previously noted, the decision to code group interaction is predicated on the research question. Questions about the functions, distribution, patterns, and structures of group communication invite the close observation that coding allows. Bakeman and Gottman (1986) described this form of systematic observation:

This approach typically is concerned with naturally occurring behavior observed in naturalistic contexts. The aim is to define beforehand various forms of behavior – behavioral codes – and then ask observers to record whenever behavior corresponding to the predefined codes occurs. A major concern is to train observers so that all of them will produce an essentially similar protocol, given that they have observed the same stream of behavior.

(Bakeman & Gottman, 1986, p. 4)

Coding group communication involves at least five steps: (a) transcribing the discussion data; (b) unitizing the interaction data; (c) developing a coding scheme; (d) coding the data; and (e) determining coding reliability. Another step, although one that is less often undertaken by content analytic researchers, is determining the validity of the coding. Each of these procedures is detailed next, augmented by examples of our successes and our struggles with these methods.

Transcribing the discussion data

Transcription requires that either audio-taped or video-taped group discussion data be available. Video tapes provide a more complete picture because they afford accurate identification of speakers. Moreover, although it is possible to code directly from audio tape or video tape, it is much easier to code from transcriptions. So after collecting the data on tape, transcription can begin.

Transcribing group discussion data typically is time consuming and laborious. It is especially difficult to transcribe from videotape, so you may need to transfer video-taped data to audio tapes. We did this for a large dataset that we collected, and although it took a great deal of time, it made the transcribing process much easier. Hiring expert transcribers is the most efficient method for completing this task, but they are often quite expensive. We have used student transcribers in some of our investigations, utilizing payment or extra credit (in group communication classes) as enticements. Depending upon the competence and motivation of the students, this practice has proven both successful and disastrous for us. We have had students do an expert job (this happened more often when we were paying them), and we have had students return the video tapes un-transcribed or transcribed so poorly they were unusable. The lesson learned – if you are going to utilize students, especially undergraduates – is to identify competent, motivated, and conscientious individuals whom you trust to perform the task to your standards. One way to accomplish this is to invite the best students from your class one semester to serve as undergraduate research assistants for you the following semester. Meeting regularly with them as a group throughout the research process, and allowing them to share their insights and ideas regarding the research, fosters greater commitment and involvement from them, and often, a passion for research that they never imagined.

In addition, if student transcribers (or even experts for that matter) are employed, it is essential to provide very detailed instructions for accomplishing the transcription task (see Appendix A for an example of instructions we have used). We also found it useful to transcribe at least some of the data ourselves. Although we did not relish this lengthy and laborious task, it turned out to be quite important. Doing transcription illuminates the data in ways that even a close reading of the ensuing transcript may miss, and we think it benefits all researchers studying group discussion to complete at least some transcription of their own data.

To ease the cognitive load of transcribing, we typically do not ask transcribers (especially students) to identify the names of the speakers, nor do we require them to sort out complex multiple-speaker episodes. After the transcriptions are completed, we view the video tape with transcript in hand and insert the name of the speaker next to each turn-at-talk. To the best of our ability, we sort out multiple-speaker episodes – especially those involving fast-paced interruption sequences and talk-over message acts – by reading backwards and forwards in the transcript to identify potential speakers. When this process proves futile, the turn is assigned as a generic “multiple speakers” unit.

The final stage of transcribing requires punctuating the data. Expert transcribers do this as a matter of course, but you may want to recheck their interpretations. If you have employed student transcribers, you certainly should verify their punctuation choices. We watch the video tape with the transcript in hand. Using members’ natural pausing, voice inflections, and intonation to determine sentences, questions, and exclamations, we insert periods, commas, capitalization, question marks, and exclamation points at appropriate places in the text. Although there is some debate as to whether this practice influences unitizing, Auld and White (1956) found that researchers who unitized a punctuated and capitalized transcript produced units that agreed at 0.93 with those of researchers who unitized a nonpunctuated transcript.

Unitizing the discussion data

Once the group communication dataset is transcribed and punctuated, the next step is to unitize the data. Unitizing is the process of identifying units to be categorized or rated (Folger, Hewes, & Poole, 1984), and it occurs in several steps.

Specifying the discourse unit

The first step in unitizing discussion data is to identify the unit that best fits the research question, coding scheme, and data analytic tools to be employed (Folger et al., 1984; Guetzkow, 1950). In much of our research, we study members’ arguments in group discussion (for a summary, see Seibold & Meyers, 2007), so we unitize as close to natural talk as possible.

In specifying the discourse unit, it is important to note both grammatical and functional considerations. Relying too heavily on functional considerations places greater interpretive demands on the coders (Auld & White, 1956; Hatfield & Wieder-Hatfield, 1998; McLaughlin, 1984), so we first defined our unit according to grammatical descriptors. Recognizing that group talk is often neither grammatically correct nor bound by grammatical rules, however, we also developed functional guidelines. We found that the discourse unit that best fit our parameters was the utterance or thought unit (McLaughlin, 1984). This discourse unit treats grammatically correct, rule-bound statements as co-equal with functionally appropriate, rule-independent structures such as incomplete sentences, interrupted statements, co-produced agreements, functional dependent clauses, and singular words that function as complete thoughts. Since all of these structures are found in group argument discourse, the thought unit was deemed most appropriate for our research objectives (e.g., see Meyers, Seibold, & Brashers, 1991).

Specifying unitizing rules

Once the unit is determined, grammatical and functional unitizing rules are specified. Especially pertinent for determining thought units are rules about when to separate, and when to allow, run-on statements, because more than one thought unit can occur in a single sentence. Grammatically, we identified two transition markers (after brushing up on our English grammar) as especially important to this task – coordinating and subordinating conjunctions. If definitions for these grammatical markers no longer come immediately from memory, as is the case for us, consult Appendix B. These types of conjunctions were identified as key for determining sentence division.

We identified sentences containing these types of conjunctions as containing functioning independent clauses (see Appendix B for examples), and we separated such sentences into multiple units. By contrast, sentence structures in which two clauses were joined by parallel or syllogistic construction (e.g., if/then, on the other hand) remained single units because the two parts of the statement did not function as independent clauses. We also developed rules about stand-alone utterances, false starts, introductory phrases, and interruptions (see Appendix B).

Identifying units

Once the unitizing rules are specified, unitizers (at least two individuals unfamiliar with the objectives of the research) are trained to identify the discourse units. The unitizers must first familiarize themselves with the unitizing rules, and then practice identifying units on discussion data similar to, but not included in, the final dataset. In the early days, our unitizers worked on paper copies of transcripts, placing a slash mark after each unit. Today, our unitizers identify the units in an electronic file by placing a keyboard return after each unit. Unitizers must work independently on the sample transcript. When the unitizing task is complete, they meet to talk over their unitizing decisions. Discussion at these meetings centers on sorting out differences, but this discussion process also is useful for reinforcing the reasoning behind similarities in unit identifications. Once 80 per cent reliability is reached in practice, each unitizer is given the final data and asked to identify the units. A note of caution: we have learned from experience that, although unitizing is not a difficult task, it is quite monotonous. So it is important to ask unitizers to do this work in increments and to stop when they feel fatigued.

Unitizing reliability

When the unitizing task is completed, it is time to compute reliability between the unitizers. The most common formula for determining unitizing reliability is Guetzkow's (1950) index of unitizing disagreement. It is based on the premise that two unitizers (A and B) each unitize a text into a specifiable number of units (OA and OB, respectively). The formula is U = (OA − OB)/(OA + OB), and it indicates the discrepancy between either unitizer and the best estimate of the true number of units (the average of the two unitizers’ estimates).
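
For readers who wish to compute the index, a minimal sketch in Python follows; the unit counts are hypothetical:

```python
def guetzkow_u(n_units_a, n_units_b):
    """Guetzkow's (1950) index of unitizing disagreement.

    n_units_a, n_units_b: total number of units identified by
    unitizers A and B in the same text. Lower values indicate
    less disagreement about the number of units.
    """
    return abs(n_units_a - n_units_b) / (n_units_a + n_units_b)

# Hypothetical counts: A identifies 412 units, B identifies 398
print(round(guetzkow_u(412, 398), 3))  # 0.017
```

A value this small indicates that the two unitizers segmented the text into nearly the same number of units.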

Folger et al. (1984) contended this formula fails to quantify unit-by-unit agreement between coders. They argued, “Guetzkow's index only shows the degree to which two coders identify the same number of units in a text of fixed length, not whether those units were in fact the same units” (p. 123, emphasis added). They proposed an alternative method for computing unitizing reliability which involved segmenting the text into objective units that are smaller than the actual units. By segmenting more finely than the majority of actual units, the possibility of having two or more actual units occur within one objective unit is minimized. When the text has been objectively segmented, each objective unit is scrutinized to see if the unitizers agreed on the existence of an actual unit within that objective unit. If both coders agreed that an actual unit occurred, it is counted as an agreement. Reliability is then computed utilizing an index based on coder disagreement such as Scott's (1955) pi.

Folger et al. (1984) also suggested that it may be unnecessary to compute unit-by-unit reliability if “one is using an exhaustive coding scheme and Guetzkow's (1950) index is quite low, perhaps .10 or below” (p. 124). Such a score indicates little overall disagreement between the two unitizers. For example, we found that a score of 0.03 using Guetzkow's formula yielded a reliability of 0.90 using the Folger et al. procedure and Scott's (1955) pi. Our reading of much of the content analysis literature indicates that most researchers use only Guetzkow's formula to determine unitizing reliability, and that most unitizing reliabilities fall below the 0.10 criterion.
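
The unit-by-unit procedure can also be sketched in code. In this illustration, each objective unit (e.g., a clause) is scored 1 if a unitizer marked a unit boundary within it and 0 otherwise, and Scott's pi corrects the observed agreement for chance; the boundary decisions below are hypothetical:

```python
from collections import Counter

def scotts_pi(codes_a, codes_b):
    """Scott's (1955) pi: agreement between two coders, corrected for
    the agreement expected by chance from the pooled distribution.

    codes_a, codes_b: parallel sequences of labels, one per objective
    unit (here 1 = a unit boundary falls in this objective unit,
    0 = no boundary).
    """
    n = len(codes_a)
    p_obs = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement uses marginals pooled across both coders
    pooled = Counter(codes_a) + Counter(codes_b)
    p_exp = sum((count / (2 * n)) ** 2 for count in pooled.values())
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical boundary decisions for ten objective units
unitizer_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
unitizer_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]
print(round(scotts_pi(unitizer_a, unitizer_b), 2))  # 0.8
```

The two unitizers agree on nine of ten objective units, but pi discounts the portion of that agreement attributable to chance.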

At this point in the process, the data are transcribed and unitized, and you are now ready to construct the coding scheme and complete the coding tasks. In the next section, we take you through that process.

Developing the Coding Scheme

Examination of relevant literature

Developing a coding scheme is a theoretical act and represents what the investigator thinks should be extracted from the discussion data (Bakeman & Gottman, 1986). As Bakeman and Gottman explain, “It is, very simply, the lens with which he or she has chosen to view the world” (p. 19). A vital part of the development of any coding scheme within the content analytic tradition is to become familiar with relevant literature so as to build upon previous work. Sometimes a literature search reveals an existing scheme that fits your research purposes, or a scheme that can be adapted to fit. At other times, no relevant coding schemes emerge. When this happens (a common occurrence), it is necessary to create your own scheme. This is a creative, innovative, and intellectually challenging process.

The first step in creating your own coding scheme is to develop a familiarity with the principal theoretical and conceptual strands found in your literature review, and to draw those out in an organized form. In early development work on the Conversational Argument Scheme (CAS; see Appendix C), Canary, Seibold, and colleagues (Canary, Ratledge, & Seibold, 1982; Seibold, Canary, & Tanita-Ratledge, 1983; Seibold, McPhee, Poole, Tanita, & Canary, 1981) began by reading three prominent and representative argument theories: Toulmin (1958), Perelman and Olbrechts-Tyteca (1969), and Jackson and Jacobs (1980). In addition, when Meyers (1987) joined the research team, cognitive theories of argument in psychology and in communication were consulted (e.g., Burleson, 1981; Burnstein, 1982; Hample, 1985; Vinokur, Trope, & Burnstein, 1975). Taken together, this literature provided a strong foundation for conceptualizing group argument, and for developing a scheme that would capture that representation.

Developing the scheme

Once the relevant literature is digested, it is time to sketch out a tentative coding scheme. This scheme sets out initial categories based on your reading of the literature and your conceptions of the discourse unit of interest. This preliminary scheme is then used for an initial coding of the data. Revisions are made as deemed appropriate. It is often helpful to ask a colleague to work with you during this process so that ideas for categories and revisions can be discussed. This coding and revision process continues until you (and your colleague) have constructed what you believe to be an exhaustive coding scheme. Working independently, the two of you use this final scheme to code a sample of group discussion data. When finished, compare your coding choices, talk through similarities and differences, and make revisions to the scheme if necessary. If revisions are made, you must repeat the coding process. When no revisions are needed, you have a final scheme that can be used for training coders.

The goal is to develop a coding scheme that is exhaustive (contains a category for all possible units in your data) and exclusive (contains no overlapping categories). Most coding schemes are designed to place each unit into only one category. It is possible to ask coders to place content into more than one category, but this places greater interpretive burdens on the coders. If you determine that multiple coding of single units is necessary, explicit coding rules must be provided to help coders navigate that process. An “Other” category is often added to schemes for placement of units that do not fit elsewhere.
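
As an illustration of these two properties – not of the CAS itself, which requires trained human coders – a toy assignment function with hypothetical keyword rules shows how ordered, non-overlapping checks keep a scheme exclusive while an “Other” fallback keeps it exhaustive:

```python
def assign_category(unit_text, rules):
    """Assign a unit to the first matching category.

    rules: an ordered mapping of category name -> predicate over the
    unit text. Checking rules in a fixed order means each unit gets
    exactly one category (exclusive); the 'Other' fallback guarantees
    every unit gets some category (exhaustive).
    """
    for category, matches in rules.items():
        if matches(unit_text):
            return category
    return "Other"

# Hypothetical toy rules, purely for illustration
rules = {
    "Agreement": lambda u: u.lower().strip(" .!") in {"yes", "right", "i agree"},
    "Question": lambda u: u.rstrip().endswith("?"),
}
print(assign_category("I agree.", rules))        # Agreement
print(assign_category("Why that one?", rules))   # Question
print(assign_category("Let's move on.", rules))  # Other
```

Real interaction coding is, of course, an interpretive judgment rather than a keyword match, but the same exhaustive-and-exclusive logic governs how human coders apply a scheme.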

Finally, it is important to develop category definitions and rules to help coders interpret the categories reliably. It is helpful to capture these definitions and rules in a single document (Canary, 1992). See Appendix C for CAS category definitions, and Appendix D for examples of some of the coding rules that were created to help coders manage the coding task.

Determining validity of the coding

Poole and colleagues (Folger, Hewes, & Poole, 1984; Poole & Folger, 1981; Poole, Folger, & Hewes, 1987; Poole, Keyton, & Frey, 1999) have long advocated the importance of determining the validity of coding. Yet many content analytic researchers overlook this task, and in truth, we too have seldom focused on determining the validity of our coding.

Although there are many types of validity (see Folger & Poole, 1980), one that is particularly relevant to coding group interaction is representational validity – verifying that categories reflect meanings that are present in the culture/group being investigated (Poole & Folger, 1981). Poole et al. (1987) distinguished observer-privileged meanings from subject-privileged meanings. Observer-privileged meanings are those available to observers from the outside (e.g., researchers or “blind” coders), and subject-privileged meanings are those available to insiders and participants. Establishing categories that reflect subject-privileged meanings is paramount to establishing representational validity. Of course, as Poole et al. note, “Clearly, a coding scheme designed to capture subject-privileged meanings is harder to design than an observer-privileged system” (Poole et al., 1987, p. 106). One method they have advocated for establishing representational validity is using multidimensional scaling for paired comparisons of interactive passages (for a more complete explanation of this method, see Poole & Folger, 1981).

An example of establishing representational validity from our research would be asking research participants to define assertions (statements of fact or opinion). If they chose terms similar to those of the researchers, there would be greater assurance that the researched and the researchers understood assertions in the same way. This category would be said to have representational validity because it would depict the concept similarly for both parties in the research process.

Researchers may be reluctant to determine the validity of their coding because it adds work to an already fairly onerous process. Moreover, representational validity should be re-assessed each time the scheme is applied to a new interactive context. Yet, as Poole and Folger (1981) indicated, it is “precisely because there is such a tremendous investment of time and effort in coding, [that] validity is a crucial issue” (p. 39). Indeed, by verifying coding validity, a researcher can establish greater confidence in the coded data and the conclusions drawn from it.

Coding process

Training coders

The complexity of the scheme and the competence of the coders will dictate the length and difficulty of the coder training process. In past work employing the CAS, training has typically taken 50–60 hours. This scheme admittedly is complex, and we often use graduate student coders as well as senior-level undergraduates. Our experience indicates that, with proper training, both sets of students make competent coders. We typically train four coders for data sets of 40 or more group discussions.

The first step in training is familiarization with the coding scheme. We usually ask coders to do background reading on the research topic prior to working with the scheme. Content knowledge is an asset. We have found that students who have debated or studied argumentation theory made good coders with the CAS. We also require coders to read through the scheme, category definitions, and coding rules carefully. In the first training session, we discuss the coders’ understanding of the scheme, address their questions, and work to shape consensual interpretations of the categories. Then we give each coder an identical excerpt of data that is similar to, but not included in, the final dataset, and we ask each coder to code that data independently before the next training meeting. We code these data too. At the next session, we compare all codes (including our codes). Differences are discussed and revisions are made to the scheme categories or rules when all coders agree that the revision is necessary, and when it does not deviate from the theoretical underpinnings of the scheme. Coders then receive another excerpt and the cycle is repeated. Our experience with the CAS training is that coders spent approximately ten hours in private coding each week, and discussion sessions lasted approximately four hours per week over a five-week period.

After each coding interval, simple percentage-of-agreement reliability checks are computed. In our training with the CAS, reliability levels began at 45 per cent agreement and rose to 80 per cent by the end of the five weeks. At this level, we decided that coders were adequately prepared, and training sessions were terminated. Each coder was provided with a final revised copy of the coding scheme, the coding rules, coding protocols (either hard copy or electronic), and half of the transcripts and video tapes. Coders were instructed to read through the coding scheme categories and rules before coding each transcript and to access the video tape if needed. They were asked to return two coded transcripts each week until the task was completed. This schedule was used to ensure that coders would not forget the rules or categories, but it also set a pace that would guard against coder fatigue. When each coder had finished coding all the transcripts, the coded transcripts were compared for points of disagreement. All disagreements were clearly marked on the coding sheet and returned to the coders. They discussed each disagreement until a consensual agreement on a final single code was reached.
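
The agreement checks and disagreement marking described here are straightforward to compute. In this sketch the category labels are drawn from the CAS, but the coded sequences themselves are hypothetical:

```python
def percent_agreement(codes_a, codes_b):
    """Simple percentage agreement between two coders over the same units."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def disagreements(codes_a, codes_b):
    """Unit indices where the two coders chose different categories,
    to be marked and returned to the coders for discussion."""
    return [i for i, (a, b) in enumerate(zip(codes_a, codes_b)) if a != b]

# Hypothetical codes for five units, using CAS category labels
coder1 = ["Assertion", "Elaboration", "Agreement", "Assertion", "Challenge"]
coder2 = ["Assertion", "Response", "Agreement", "Assertion", "Objection"]
print(percent_agreement(coder1, coder2))  # 0.6
print(disagreements(coder1, coder2))      # [1, 4]
```

The list of disagreeing unit indices corresponds to the marked coding sheets that coders then discuss until a single consensual code is reached.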

One of our early disappointments with use of the CAS was that coders were only able to achieve moderate category-by-category reliabilities (Meyers & Brashers, 1995). Moreover, they appeared to be simplifying the coding process by reducing the number of utilized categories (Meyers et al., 1991). Low reliabilities pose a central problem in this type of research because they indicate that the scheme could not be used by others to attain similar results. Moreover, if coders were dealing with the coding complexities by reducing the number of categories they utilized, then the validity of our results was also in question. So we entertained modifications to the coding procedures and created a multistage coding process for the CAS (Meyers & Brashers, 1995) that involved first parceling the interaction data and then coding them in successive iterations. This created a more prolonged coding process, but was less frustrating for the coders and resulted in improved reliability.

Multistage coding procedures

The initial task in the multistage procedure is to parcel the data so that a more coherent picture of the group's argument is available. This task involves three levels of parceling. First, the nonargument statements are sorted from the argument statements. At the second level, all argument statements are separated according to the final decision alternative they support. Decision alternatives are identified initially, and trained coders next read through the transcripts and code each statement according to the decision alternative it favors.

At the third level of data partitioning, messages are further distinguished according to lines of argument (based on similar content features). This stage of parceling is accomplished in two steps. First, a category system identifying various lines of argument is constructed. In the initial investigation, we derived lines of argument by asking group participants, prior to entering group discussion, to generate lists of arguments that pertained to the task. Three judges then sorted these arguments into content categories and labeled each category. If two of three judges agreed that a content category existed, we treated it as a separate category; these consensual, labeled categories became the coding scheme for identifying lines of argument in this dataset. Alternatively, these categories could be deductively derived from the task scenario using the procedures described above for creating coding schemes (Lemus & Seibold, 2008).

Trained coders, using this set of argument content categories, classified each argument unit in the group discussion. Once they completed the coding, we used low-tech highlighters to color-code each content message unit. In more recent investigations, we have used computer highlighting to accomplish the same task. The color highlighting provided a visual picture of how group members moved from one content topic to another, and indicated when arguments were new or merely continuations of arguments offered earlier in the discussion. This procedure helped coders sustain a cognitive representation of the entire argument as it developed and persisted over a given period of time. We also have used text-based software to provide coders with these options in our analyses of argument in computer-mediated groups (Lemus & Seibold, 2008).

Successive coding iterations

Once the data were parceled so that the structure and organization of the groups’ arguments were clear, further coding unfolded in six iterative sessions (refer to Appendix C for category names). First, coders placed each message statement into one of the four global-level categories contained in the scheme – Arguables, Convergence Markers, Disagreement-relevant Intrusions, or Delimitors. (They had done coding of the Nonarguables at an earlier stage.) Second, the coders returned to the data to categorize each Arguable statement into one of the six subcategories – Assertion, Proposition, Elaboration, Response, Amplification, or Justification. Third, they returned to the data to code all Convergence Marker statements as Agreements or Acknowledgements. Fourth, coders recoded all Disagreement-relevant Intrusions into Objections or Challenges. Fifth, they recoded Delimitors as Frames, Forestall-secure, or Forestall-remove. Finally, they recoded Nonarguables as Process, Unrelated Statements, or Incompletes.
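
The six iterations imply a two-level category structure. The sketch below records that hierarchy, with labels as given in this chapter (see Appendix C for the authoritative scheme), and checks that a subcategory code is consistent with the global code assigned in the first iteration:

```python
# Two-level category hierarchy used across the coding iterations;
# labels follow the chapter's description of the CAS.
CAS_HIERARCHY = {
    "Arguables": ["Assertion", "Proposition", "Elaboration",
                  "Response", "Amplification", "Justification"],
    "Convergence Markers": ["Agreements", "Acknowledgements"],
    "Disagreement-relevant Intrusions": ["Objections", "Challenges"],
    "Delimitors": ["Frames", "Forestall-secure", "Forestall-remove"],
    "Nonarguables": ["Process", "Unrelated Statements", "Incompletes"],
}

def valid_subcode(global_code, subcode):
    """True if a later-iteration subcode is consistent with the
    global-level code assigned to the same unit in iteration one."""
    return subcode in CAS_HIERARCHY.get(global_code, [])

print(valid_subcode("Arguables", "Assertion"))   # True
print(valid_subcode("Delimitors", "Assertion"))  # False
```

A consistency check of this kind can flag units where the iterations drifted apart before reliabilities are computed.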

For each iteration, the coders practiced with data extraneous to the investigation, and coding choices were discussed until they were able to apply the codes reliably. For each set of tasks, coders returned to the same transcripts but focused on different aspects of the group argument. They not only fractionated their task into manageable parts, but with each additional reading of the transcript they became increasingly familiar with the complete discussion. Appendix E offers a short example of the final coded results.

Employing these multistage procedures required that more time be devoted to the coding process. However, we were happy to discover that they also resulted in improved reliabilities (Meyers & Brashers, 1995, 1998). In the next section, we discuss the process of determining coder reliability.

Determining reliability

Scholars debate the best basis for computing coding reliability (Krippendorff, 2004; Lombard, Snyder-Duch, & Bracken, 2002, 2004). Most researchers use one of three formulas: Cohen's (1960) kappa, Scott's (1955) pi, or Krippendorff's (1980, 2004) alpha. All of these measures provide a more conservative estimate of inter-coder reliability than does percentage agreement. Although there is no firmly established tradition for what constitutes acceptable reliability, most researchers would agree that 0.80 or higher (using one of these three formulas) is clearly acceptable. Depending upon the complexity of the data, the scheme, and the consequences of being wrong, reliabilities between 0.67 and 0.80 are considered moderately acceptable (Krippendorff, 2004). Reliabilities below those levels raise questions as to whether further use of the scheme could yield consistent results, as well as the representativeness of the coded data.
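
To see why the chance-corrected indices are more conservative, consider a minimal sketch of Cohen's kappa with hypothetical codes: two coders agree on nine of ten units, yet because one category dominates both coders' distributions, kappa falls into the moderately acceptable range rather than the clearly acceptable one:

```python
from collections import Counter

def percent_agreement(codes_a, codes_b):
    return sum(a == b for a, b in zip(codes_a, codes_b)) / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Cohen's (1960) kappa: observed agreement corrected for the
    agreement expected from each coder's own marginal distribution."""
    n = len(codes_a)
    p_obs = percent_agreement(codes_a, codes_b)
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_exp = sum(freq_a[c] * freq_b[c]
                for c in set(codes_a) | set(codes_b)) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical codes: one dominant category, nine agreements in ten units
a = ["Arguable"] * 8 + ["Convergence"] * 2
b = ["Arguable"] * 7 + ["Convergence"] * 3
print(percent_agreement(a, b))       # 0.9
print(round(cohens_kappa(a, b), 2))  # 0.74
```

Here 90 per cent agreement corresponds to a kappa of only 0.74, because much of the raw agreement could have arisen by chance when most units fall into a single category.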

In sum, coding group interaction is a complex, time-consuming, but intellectually stimulating process. We aver that it is one of the best ways to get a firm grasp on what is happening in the group's discussion. Coding interaction provides opportunities to study discourse patterns, distributions, and structures. It brings greater coherence to seemingly chaotic conversations, and suggests linkages to group outcomes. Perhaps most important, coding group interaction stimulates new research questions, poses new communicative puzzles to solve, and uncovers often hidden elements of group discussions. In the next section, we reflect more fully on the benefits and drawbacks of this method.

Critical Reflection on Coding Group Interaction

Given the daunting details of interaction coding, you may be asking yourself, “Why would I want to use this method? Aren't there easier and less time-consuming methods that involve fewer challenges?” Of course there are (see review in McGrath & Altermatt, 2001). Both of us have used other methods at various times in our research careers. But when we are curious about what group communication really looks like, wonder how it manifests in team situations, or have questions about whether communication occurs as theorists have hypothesized, we feel compelled to do a close analysis of actual group interaction. When our research objective is to discover the possibilities of group communication in all its complexity, messiness, and sedimented nature, we always return to interaction coding.

Benefits of coding group interaction

We find three primary benefits from coding group interaction: (a) it provides a picture of the distribution of communicative acts; (b) it showcases the interactive structure of the discourse; and (c) it makes detection of communicative patterns and sequences possible. Identifying distributions, structures, and patterns in discussions helps us to understand both the development and predictability of interaction processes. In addition, these findings help us to explain unexpected interaction functions or outcomes (see description of process statements and humor sequences below), and to rule out alternative explanations for group decisions.

Distribution of communicative acts

Much of our work using the CAS has focused on identifying the distribution of discrete argument acts in group discussions (Seibold & Meyers, 2007). Coding argumentative discourse in groups, thought unit by thought unit, has allowed us to obtain a more exact picture of this distributive structure. As previously mentioned, we initially were surprised, and a bit disappointed, to find that student groups discussing hypothetical tasks produced a relatively simplistic distribution of argument acts (Assertions, Elaborations, Agreements). This finding raised new questions for our research program. Do groups argue simplistically? Is this a function of their student status and/or the hypothetical task? Is this distribution related to the coding scheme or to group processes?

As indicated, the complexity inherent in the CAS influenced our decision to first focus on the coding scheme and procedures associated with its use. We constructed multistage coding procedures that resulted in a more complex distribution of argument acts. We believe that some of the early distributive simplicity was due to coder confusion and fatigue. But more recent investigations suggest additional answers. Coding of online student groups working on consequential tasks also showed a more complex distribution of argument acts (Lemus et al., 2004). So perhaps some of the earlier distributive simplicity was due to task type.

Recently we have begun work coding an actual jury discussion using the CAS, and initial findings suggest that the distribution of argument acts is much more complex, and may even support revisions to the present version of the CAS (Huber, Johnson, Hill, Meyers, & Seibold, 2007; Kang, Meyers, & Seibold, 2008; Meyers & Brashers, 2010; Meyers, Seibold, & Kang, 2010). We have discovered forms of Process statements that we have not seen elsewhere. For example, although the jury produced traditional types of Process statements (orienting the group to its task or specifying the process the group should follow), they also generated argument-relevant process statements. Specifically, jury members discussed how legal terms could or should be defined, how the arguments should be considered in time, and how legal definitions and restrictions could or should be used. These Process statements certainly were not employed to organize and facilitate group decision-making (as traditional Process statements do). Rather, they functioned to explore definitional possibilities, identify viewpoints, and set the groundwork for the group to be able to do its arguing work. Hence, we think there may be varying forms of Process statements. We currently are puzzling over how to code these statements and whether to add additional Process categories to the scheme.

Interactive structures

Coding interaction also provides an avenue for observing group communication structures. Early work by Canary, Brossmann, and Seibold (1987) revealed four group argument structures: simple, compound, eroded, and convergent. Simple arguments followed a straightforward argument pattern (assertion, elaboration, amplification, and so forth). Compound arguments included extended arguments, embedded arguments, and parallel arguments. Eroded arguments broke down or fell apart. Convergent arguments used others’ points to create an argument through agreement or tag-team communication. Moreover, Canary et al. found that groups reaching consensus had greater proportions of convergent argument structures than did dissensus groups, in which eroded structures were more prominent.

Similarly, in a study of differences and similarities in subgroup argument, Meyers, Brashers, & Hanner (2000) found that majority and minority subgroups produced different argument structures. Majorities were more likely than minorities to build their argument structures around convergence statements (and tag-team arguments), and they were less likely to disagree. Minority subgroups produced arguments with more disagreement messages to defend their positions against the unified majority. These differing subgroup structures suggest that some patterns may be unique to the interactive status of the group members.

Finally, Lemus et al. (2004) coded computer-mediated group (CMG) interactions to test the predictive utility of argument structures. Based on analyses of 477 distinct argument structures across eleven CMGs, the researchers found that the development of argument structures was a significant predictor of the success or failure of decision proposals. When argument structures in support of a decision proposal were more argumentatively developed than were argument structures against a decision proposal, CMG members were likely to endorse the decision proposal. Conversely, when argument structures in opposition were more argumentatively developed than argument structures in support of the decision proposals, CMG members were not likely to endorse the decision proposal. This work has been extended in subsequent studies (see review in Seibold, Lemus, & Kang, 2010).
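The decision rule suggested by the Lemus et al. (2004) finding can be sketched in a few lines. The numeric "development" score here is a hypothetical stand-in, not the authors' actual measure:

```python
def predicted_endorsement(dev_for, dev_against):
    """Sketch of the decision rule reported by Lemus et al. (2004):
    a proposal is likely endorsed when the argument structures
    supporting it are more argumentatively developed than those
    opposing it. The development scores are hypothetical counts,
    not the measure the authors actually used.
    """
    if dev_for == dev_against:
        return None  # the finding licenses no prediction either way
    return dev_for > dev_against
```

The point of the sketch is the comparison itself: it is the relative development of the opposing argument structures, not the raw number of arguments on either side, that predicted proposal success.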

Although coding group interaction can be time consuming, it also provides exciting insights that are potentially unavailable using other methods. For example, in coding German work groups’ decision making, we found that participants complained frequently and that complaining encouraged more of the same, thereby producing a cycle of complaining behavior (Kauffeld & Meyers, 2009; Lehmann-Willenbrock & Kauffeld, 2010). The prominence of this unhappy discourse surprised and interested us. We think that other methods would not have enabled us to uncover complaining behaviors in quite the same way. Would team members responding to a survey, a focus group, or an interview admit, “Yes, I complain all the time”? Would they be able to recall what they complain about, the form of those complaints, or how others in the group spur production of complaining cycles? Coding the actual interaction allowed us to view complaint behavior as it occurred, unmediated by members’ recall, biases, or perceptions of prosocial norms.

Likewise, in an investigation of minority subgroup influence in teams (Meyers et al., 2000), we wanted to know what these subgroups can do communicatively to get their proposals accepted by the group. To discover the answer to this question, we had to analyze (code) the actual group interaction. What did we find when we did this close analysis? Minority subgroups can “win” by sustaining a consistent line of argument throughout discussion. Refusing to change direction was a strategy that worked. These results have important applications for social justice and ethical decision-making, and they are simple strategies we can teach our students. Only by coding group interaction could we best understand how group members use communication to fashion a winning proposal (even when they are in the underdog position).

Communication patterns and sequences

Although much of our work has been focused on argument distribution and structures, researchers using the CAS in other communicative domains have attended more to identifying sequences of argument (see Canary & Sillars, 1992; Canary, Brossmann, Sillars, & LoVette, 1987; Canary, Weger, & Stafford, 1991; Ellis & Maoz, 2002). Recent work on complaints in work groups also is germane. Using Kauffeld's (2006) act4teams® scheme to code organizational group decision-making interaction, Kauffeld and Meyers (2009) found that a complaint followed by a supportive statement begets yet another complaint, resulting in a repetitive complaining sequence in which the discussion becomes increasingly negative. Alternatively, when complaints received no support or were followed by statements that moved the group back to its task, the complaining stopped.

Another example of how coded interaction can lead to discovery of communicative sequences comes from a recent examination of humor in these same work groups. Using data coded with the act4teams® scheme (Kauffeld, 2006), Hebl et al. (2009) showed that humor most often occurs in sequences of humor statements–laughter–humor statements. Less common, but significant in their occurrence in these work groups, were sequences of humor–laughter–terminating discussion or humor–laughter–empty talk. Hence the sequences discovered in these data suggest that humor can serve both positive and inhibitive functions. Coding the interaction and subjecting it to sequential analysis allowed us to explore the sequences and patterns that are shaped by, and shape, group humor.
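Detecting sequences such as complaint cycles or humor–laughter chains begins with tallying how often each coded act follows another. A minimal sketch of such lag-1 transition counting, using hypothetical codes and data:

```python
from collections import Counter

def lag1_transitions(codes):
    """Count lag-1 transitions between adjacent coded acts, the raw
    input to the kind of sequential analysis described above
    (see Bakeman & Gottman, 1986, for the full method)."""
    return Counter(zip(codes, codes[1:]))

# Hypothetical coded sequence from a work-group discussion.
seq = ["Complaint", "Support", "Complaint", "Support", "Complaint",
       "Task statement", "Solution", "Task statement"]
trans = lag1_transitions(seq)
```

A full sequential analysis would then test whether a transition (say, Complaint followed by Support) occurs more often than chance given the base rates of each code, but the transition tally above is the data structure everything else builds on.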

Limitations of coding group interaction

Several limitations are evident in the preparation of data and the coding process. First, and as we noted, transcription is a lengthy, and often tedious, task. Second, development of one or more coding schemes, and attendant rule books, is a large undertaking. As Bakeman and Gottman (1986) state, “developing an appropriate scheme (or schemes) is often an arduous task. … There is no reason to expect this process to be easy” (p. 46). Third, training coders is neither simple nor quick, especially in projects involving large datasets of groups. Such time and resource requirements give many researchers pause. Fourth, achieving acceptable reliabilities can be difficult if the coding scheme is complex or highly interpretive. The reputation of content analysis rests on acceptable reliability, so if you find you have not achieved that, revision to the coding scheme or training process is typically required. Fifth, data from the coding process are often nominal in form, which can limit the types of statistical analysis that can be performed on the data.

As with any method, coding group interaction has benefits and drawbacks. Yet we think that the complexity of group interaction is best illuminated when investigated with commensurate tools. Coding group interaction is one method that enables us to explore group interaction in all its complexity and to discover its structures, distributions, sequences, and links to group outcomes.

However, it is possible to simplify this process. One option may be to investigate communication produced in online environments. Online interactions offer immediate transcriptions of team members’ statements, and remove the tedium and/or expense involved in transcribing face-to-face (f2f) group discussions. In addition, there are no interruptions, talkovers, or incomplete statements in online data, which are often difficult to code. In these ways (and others), coding online discussions may be less time consuming and easier than coding f2f discussions. Conversely, in online discourse, it is harder to interpret emotion or paralanguage (laughter, for example). Emoticons can help but not everyone uses them, or uses them in the same way. Still, given current trends toward more global and dispersed teams, we must continue to investigate both f2f and online groups if we are to best understand communication practices in teams.

Constructing simpler coding schemes is another way to address some of the complexity issues. For example, Poole, McPhee, and Seibold (1982) utilized a coding scheme with only two categories (positive and negative valence) to code comments in group decision-making interactions. This simple coding scheme yielded important information about the amount of this type of support (or lack thereof) for specific decision proposals. So simple coding schemes can also provide very useful information about communication in groups.

Constructing simpler coding schemes is particularly pertinent if you want to code interaction in situ. Coding group interaction as it occurs demands a scheme with fewer categories that address very specific communication behaviors. Suppose, for example, you wished to code ‘humor’ in team meetings and no recording equipment was permitted to be used to capture the data. You might develop a coding scheme of three categories: (a) positive humor, (b) negative humor, and (c) participant identification. This very narrow set of codes may allow for ‘on the fly’ coding but you would still want to assign two coders to code the discussion so as to check reliability of the codes later. If your definitions of positive and negative humor were well honed, and coders could reliably identify these two types of humor, you could answer some very interesting questions from these three categories alone. Who most often initiates humor in these groups? What types of humor are most common? Do all members contribute to humor production or is the distribution skewed? Are there positive and negative humor leaders? Do groups differ in frequency and type of humor production? All of these questions, and others, could be answered with this simpler category system.
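With only these three pieces of information per humor act, several of the questions above can be answered with simple tallies. A sketch using hypothetical records (the participant labels and humor types are illustrative only):

```python
from collections import Counter

# Hypothetical 'on the fly' records, one (participant, humor_type)
# tuple per coded humor act, as described in the scheme above.
acts = [("P1", "positive"), ("P3", "positive"), ("P1", "negative"),
        ("P2", "positive"), ("P1", "positive"), ("P3", "negative")]

# What types of humor are most common?
by_type = Counter(t for _, t in acts)

# Do all members contribute, or is production skewed?
by_member = Counter(p for p, _ in acts)

# Is there a positive-humor leader?
positive_leader = Counter(p for p, t in acts if t == "positive").most_common(1)
```

Even this toy dataset shows the kinds of answers available: the type tally reveals whether positive or negative humor dominates, and the per-member tally reveals whether humor production is concentrated in one or two members.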

If you were using the CAS, and you wanted to simplify the process, you could employ the five primary categories only (Arguables, Reinforcers, Promptors, Delimitors, and Nonarguables) as your coding scheme, thereby making coding much less complex. Indeed, in cases where the answers to your research questions do not depend on fine descriptions of the data, this is both a viable, and more efficient, process to follow. Coding with these five categories can still produce interesting findings regarding group argument (albeit at a more macro-level). Moreover, struggles with intercoder reliability should be markedly reduced.

As is surely evident, there are always tradeoffs in the research process when coding is involved. How can you best answer your research question versus how much time and effort can you afford? What is the best way to code the data versus how many resources can you muster? Answers to these and many other conflicting questions will frame your decisions about scheme development and use. Regardless of whether your coding scheme is simple or complex, micro-level or macro-level, the end goal for all content analytic group communication researchers is the same – understanding and explaining group interaction practices.

Innovations to Advance Future Research

Three innovations would greatly enhance this form of research: computerized transcription of data, coding via computer software and/or by a computer, and development of a cyberspace structure for archiving and retrieving group data and research tools. We hope that each of these innovations will come to fruition soon, and we speculate on that potential in this final section.

As we discussed, one of the most time-consuming processes involved in coding group interaction is data transcription. It is sometimes possible to code from actual videotapes, but it is not easy to do so. Hence, it would be particularly helpful if software that transcribed video-recorded interactions were developed. Although voice recognition software is currently available, it is not accurate enough to transcribe group discussions. If computer scientists, working with group communication specialists, could develop software to accomplish this task accurately, the coding of group interaction would be significantly enhanced.

Second, coding itself is difficult and time consuming. However, recently developed computer software programs now make it possible for researchers to code transcribed or video data more efficiently at the computer. For example, the INTERACT system designed by a German firm, Mangold International (Mangold, 2005; www.mangold-international.com), allows a researcher to view a video of a group discussion and code directly from a customized keyboard. Figure 17.1 depicts a screenshot of the software with the coding units, video, time duration of each unit, identification of group member speaking, and the category assigned by the coder (this figure is taken, with permission, from Lehmann-Willenbrock & Kauffeld, 2010). The keyboard in the bottom right corner is programmed to contain the coding categories, group members’ identifications, and some keys for cutting and editing the video.

If needed, the coder can stop the videotape or replay sections. This system works best if the unit of analysis is the sentence, turn-at-talk, or other larger unit, so that unitizing and categorizing can occur at the same time. If the unit selected is smaller than the sentence, it may be necessary to unitize the data prior to the computerized coding task. For more information, and relevant citations, see Kauffeld, 2006; www.mangold-international.com/en/service/publications/some-citation-references.html.


FIGURE 17.1 INTERACT Coding System
(from Lehmann-Willenbrock & Kauffeld, 2010 ).

Such computer software programs can help streamline the research process. Even more useful would be a software program that could directly assign units into relevant categories. If this were possible, the lengthy tasks of training coders and of coding the interaction would become unnecessary. Currently, the second author is working with Noshir Contractor and Scott Poole on a project that would use the Structuration Argument Theory version of CAS to test this possibility.1

Finally, it would be useful for group researchers to have access to datasets of group discussions. Finding sets of groups (especially naturally occurring groups), and getting permission to video tape them, is increasingly difficult. Currently, the first author and two other communication colleagues (Joe Bonito, John Gastil) have begun work on development of a GroupBank in cyberspace where all things group-related could be housed, including group data, coding schemes, coded data, measures of group process and outcomes, among other materials. This project is still in its early stages, but if it comes to fruition, it will offer a place where group researchers from any discipline can access group data and coding schemes.

This GroupBank would make the study of group processes easier and more efficient, and hopefully more enticing for faculty and graduate students alike. It would also allow for greater collaboration among group researchers across disciplines. Investigators could share findings, combine results, add to data, work together on datasets, and develop an interdisciplinary learning community to extend, and enhance, the current Interdisciplinary Network for Group Research venue.

Conclusion

In this chapter, we described how to code group members’ communication. We provided descriptions of coding procedures and highlighted these details with examples of our successes and struggles using these methods. We discussed the benefits and limitations associated with this method, and closed by identifying innovations related to coding that could advance research in the future.

We continue to believe that coding group interaction is one of the best ways to investigate group discussion. Although this method is not without its drawbacks (especially time and resources), it affords opportunities to view communicative distributions, discourse sequences, and interaction structures that other methods do not illuminate. Choice of method is always predicated on the research question being asked, and this method is especially suited to group communication puzzles and challenges.

Note

  1. Poole and Contractor are working with colleagues to develop GroupScope, an analytical tool that reduces the task of studying large dynamic groups (LDGs) to manageable proportions. Advances in computational video, audio, and text analysis, and in middleware are enabling these researchers to construct an integrated analytical environment for management and analysis of the huge and complex datasets needed to study LDGs. The authors of this chapter, and other colleagues, will be aiding their study of human interaction systems by making the Conversational Argument Coding Scheme (CACS; Canary & Seibold, 2010) among the GroupScope multiple coding options. The argument coding processes noted in this chapter will be facilitated through first-order annotation procedures. Transcripts of group discussions will be annotated by human coders, and machine learning will utilize this coding to associate various first-order cues with classifications in the CACS usually accomplished through the iterative procedures we have described.

References

Auld, F., & White, A. M. (1956). Rules for dividing interviews into sentences. Journal of Psychology, 42, 273–281.

Bakeman, R., & Gottman, J. M. (1986). Observing interaction: An introduction to sequential analysis. Cambridge: Cambridge University Press.

Burleson, B. R. (1981). A cognitive-developmental perspective on social reasoning processes. Western Journal of Speech Communication, 45, 133–147.

Burnstein, E. (1982). Persuasion as argument processing. In H. Brandstatter, J. H. Davis, & G. Stocker-Kreichgauer (Eds.), Group decision making (pp. 103–124). New York: Academic Press.

Canary, D. J. (1992). Manual for coding conversational arguments. Department of Speech Communication, Pennsylvania State University, University Park, PA.

Canary, D. J., Brossmann, B. G., & Seibold, D. R. (1987). Argument structures in decision-making groups. Southern Speech Communication Journal, 53, 18–37.

Canary, D. J., Brossmann, B. G., Sillars, A. L., & LoVette, S. (1987). Married couples’ argument structures and sequences: A comparison of satisfied and dissatisfied dyads. In J. W. Wenzel (Ed.), Argument and critical practices: Proceedings of the fifth SCA/AFA conference on argumentation (pp. 475–484). Annandale, VA: SCA.

Canary, D. J., Ratledge, N. T., & Seibold, D. R. (1982). Argument and group decision-making: Development of a coding scheme. Paper presented at the annual meeting of the Speech Communication Association, Louisville, KY.

Canary, D., & Seibold, D. R. (2010). Origins and development of the conversational argument coding scheme. Communication Methods and Measures, 4(1–2), 7–26.

Canary, D. J., & Sillars, A. L. (1992). Argument in satisfied and dissatisfied married couples. In W. L. Benoit, D. Hample, & P. J. Benoit (Eds.), Readings in argumentation (pp. 737–764). New York: Foris.

Canary, D. J., Weger, H., Jr., & Stafford, L. (1991). Couples’ argument sequences and their associations with relational characteristics. Western Journal of Speech Communication, 55, 159–179.

Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.

Davis, J. H. (1973). Group decision and social interaction: A theory of social decision schemes. Psychological Review, 80, 97–125.

Ellis, D. G., & Maoz, I. (2002). Cross-cultural argument interactions between Jews and Palestinians. Journal of Applied Communication Research, 30, 181–194.

Folger, J. P., Hewes, D. E., & Poole, M. S. (1984). Coding social interaction. In B. Dervin & M. Voight (Eds.), Progress in the communication sciences (Vol. 4, pp. 115–161). Norwood, NJ: Ablex.

Folger, J. P., & Poole, M. S. (1980). Relational coding schemes: The question of validity. In M. Burgoon (Ed.), Communication yearbook 5 (pp. 235–247). Newbury Park, CA: Sage.

Gebhardt, L. J., & Meyers, R. A. (1995). Subgroup influence in decision-making groups: Examining consistency from a communication perspective. Small Group Research, 26, 147–168.

Guetzkow, H. (1950). Unitizing and categorizing problems in coding qualitative data. Journal of Clinical Psychology, 6, 47–58.

Hample, D. (1985). A third perspective on argument. Philosophy and Rhetoric, 18, 1–22.

Hatfield, J. D., & Weider-Hatfield, D. (1978). The comparative utility of three types of behavioral units for interaction analysis. Communication Monographs, 45, 44–50.

Hebl, M., Pederson, J., Hill, R., Meyers, R. A., Kauffeld, S., & Lehmann-Willenbrock, N. (2009). Exploring humor in task groups. Paper presented at the annual conference of the International Communication Association, Chicago.

Hewes, D. E. (1986). A socio-egocentric model of group decision making. In R. Y. Hirokawa & M. S. Poole (Eds.), Communication and group decision-making (pp. 265–291). Beverly Hills, CA: Sage.

Huber, J., Johnson, M., Hill, R., Meyers, R. A., & Seibold, D. R. (2007). Examining the argument process in jury decision making. Paper presented to the Group Communication Division, National Communication Association, Chicago.

Jackson, S., & Jacobs, S. (1980). Structure of conversational argument: Pragmatic cases for the enthymeme. Quarterly Journal of Speech, 66, 251–265.

Kang, P., Meyers, R. A., & Seibold, D. R. (2008). Examining argument in naturally occurring jury deliberations. Paper presented at the Third Annual Conference of the Interdisciplinary Network for Group Research (INGRoup), Kansas City, KS.

Kauffeld, S. (2006). Kompetenzen messen, bewerten, entwickeln [Measuring, evaluating, and developing competencies]. Stuttgart: Schäffer-Poeschel.

Kauffeld, S., & Meyers, R. A. (2009). Complaint and solution-oriented circles: Interaction patterns in work group discussions. European Journal of Work and Organizational Psychology, 18, 267–294.

Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Beverly Hills, CA: Sage.

Krippendorff, K. (2004). Reliability in content analysis: Some common misconceptions and recommendations. Human Communication Research, 30, 411–433.

Lehmann-Willenbrock, N., & Kauffeld, S. (2010). The downside of group communication: Complaining cycles in group discussions. In S. Schuman (Ed.), The handbook for working with difficult groups: How they are difficult, why they are difficult and what you can do about it (pp. 33–53). San Francisco: Jossey-Bass/Wiley.

Lemus, D. R., & Seibold, D. R. (2008). Argument development versus argument strength: The predictive potential of argument quality in computer-mediated group deliberations. In T. Suzuki, T. Kato, & A. Kubota (Eds.), Proceedings of the 3rd Tokyo conference on argumentation: Argumentation, the law and justice (pp. 166–174). Tokyo: JDA.

Lemus, D. R., Seibold, D. R., Flanagin, A. J., & Metzger, M. J. (2004). Argument in computer-mediated groups. Journal of Communication, 54, 302–320.

Lombard, M., Snyder-Duch, J., & Bracken, C. C. (2002). Content analysis in mass communication research: An assessment and reporting of intercoder reliability. Human Communication Research, 28, 587–604.

Lombard, M., Snyder-Duch, J., & Bracken, C. C. (2004). A call for standardization in content analysis reliability. Human Communication Research, 30, 434–437.

Mangold, P. (2005). Interact handbook. Arnstorf: Mangold Software & Consulting.

McGrath, J. E., & Altermatt, T. W. (2001). Observation and analysis of group interaction over time: Some methodological and strategic choices. In M. A. Hogg & R. S. Tindale (Eds.), Blackwell handbook of social psychology: Group processes (pp. 525–556). Malden, MA: Blackwell.

McLaughlin, M. L. (1984). Conversation: How talk is organized. Beverly Hills, CA: Sage.

Meyers, R. A. (1987). Argument and group decision-making: An interactional test of persuasive arguments theory and an alternative structurational perspective. (Doctoral dissertation, University of Illinois, 1987). Dissertation Abstracts International, 49, 12.

Meyers, R. A., & Brashers, D. E. (1995). Multi-stage versus single-stage coding of small group argument: A preliminary comparative assessment. In S. Jackson (Ed.), Argumentation and values: Proceedings of the ninth SCA/AFA conference on argumentation (pp. 93–100). Annandale, VA: SCA.

Meyers, R. A., & Brashers, D. E. (1998). Argument and group decision-making: Explicating a process model and investigating the argument-outcome link. Communication Monographs, 65, 261–281.

Meyers, R. A., & Brashers, D. E. (2008). Extending the conversational argument coding scheme: Categories, units, and coding procedures. Paper presented at the annual meeting of the National Communication Association, San Diego, CA.

Meyers, R. A., & Brashers, D. E. (2010). Extending the conversational argument coding scheme: Argument categories, units, and coding procedures. Communication Methods and Measures, 4(1–2), 27–45.

Meyers, R. A., Brashers, D. E., & Hanner, J. (2000). Majority/minority influence: Identifying argumentative patterns and predicting argument-outcomes links. Journal of Communication, 50, 3–30.

Meyers, R. A., Seibold, D. R., & Brashers, D. (1991). Argument in initial group decision-making discussions: Refinement of a coding scheme and a descriptive quantitative analysis. Western Journal of Speech Communication, 55, 47–68.

Meyers, R. A., Seibold, D. R., & Kang, P. (2010). Analyzing argument in a naturally occurring jury deliberation. Small Group Research, 41, 452–473.

Perelman, C. H., & Olbrechts-Tyteca, L. (1969). The new rhetoric: A treatise on argumentation. (J. Wilkinson & P. Weaver, Trans.). Notre Dame, IN: University of Notre Dame Press.

Poole, M. S. (1981). Decision development in small groups I: A comparison of two models. Communication Monographs, 48, 1–24.

Poole, M. S. (1983a). Decision development in small groups II: A study of multiple sequences in decision-making. Communication Monographs, 50, 206–232.

Poole, M. S. (1983b). Decision development in small groups III: A multiple sequence theory of decision development. Communication Monographs, 50, 321–341.

Poole, M. S., & Baldwin, C. (1996). Developmental processes in group decision making. In R. Y. Hirokawa & M. S. Poole (Eds), Communication and group decision making (2nd ed., pp. 215–241). Thousand Oaks, CA: Sage.

Poole, M. S., & Folger, J. P. (1981). A method for establishing the representational validity of interaction coding systems: Do we see what they see? Human Communication Research, 8, 26–42.

Poole, M. S., Folger, J. P., & Hewes, D. E. (1987). Analyzing interpersonal interaction. In M. E. Roloff & G. R. Miller (Eds.), Interpersonal processes: New directions in communication research (pp. 220–256). Newbury Park, CA: Sage.

Poole, M. S., Keyton, J., & Frey, L. R. (1999). Group communication methodology: Issues and considerations. In L. R. Frey, D. S. Gouran, & M. S. Poole (Eds.), The handbook of group communication theory and research (pp. 92–112). Thousand Oaks, CA: Sage.

Poole, M. S., Seibold, D. R., & McPhee, R. D. (1985). Group decision-making as a structurational process. Quarterly Journal of Speech, 71, 74–102.

Prosser, M., & Trigwell, K. (1999). Understanding learning and teaching: The experience in higher education. Buckingham, UK: SRHE and Open University Press.

Scott, W. A. (1955). Reliability of content analysis: The case of nominal scale coding. Public Opinion Quarterly, 19, 321–325.

Seibold, D. R., Canary, D. J., & Tanita-Ratledge, N. (1983). Argument and group decision-making: Interim report on a structurational research program. Paper presented at the annual meeting of the Speech Communication Association, Washington, DC.

Seibold, D. R., Lemus, D. R., & Kang, P. (2010). Extending the conversational argument coding scheme in studies of argument quality in group deliberations. Communication Methods and Measures, 4(1–2), 46–64.

Seibold, D. R., McPhee, R. D., Poole, M. S., Tanita, N. E., & Canary, D. J. (1981). Argument, group influence, and decision outcomes. In G. Ziegelmueller & J. Rhodes (Eds.), Dimensions of argument: Proceedings of the second SCA/AFA summer conference on argumentation (pp. 663–692). Annandale, VA: SCA.

Seibold, D. R., & Meyers, R. A. (2007). Group argument: A structuration perspective and research program. Small Group Research, 38, 312–336.

Toulmin, S. E. (1958). The uses of argument. Cambridge, UK: Cambridge University Press.

Vinokur, A., Trope, Y., & Burnstein, E. (1975). A decision-making analysis of persuasive argumentation and the choice-shift effect. Journal of Experimental Social Psychology, 11, 127–148.

Weingart, L. R., Olekalns, M., & Smith, P. L. (2006). Quantitative coding of negotiation behavior. In P. Carnevale & C. K. W. de Dreu (Eds.), Methods of negotiation research (pp. 105–119). Leiden: Martinus Nijhoff.

Woolley, A. W., Gerbasi, M. E., Chabris, C. F., Kosslyn, S. M., & Hackman, J. R. (2008). Bringing in the experts: How team composition and collaborative planning jointly shape analytic effectiveness. Small Group Research, 39, 352–371.

Appendix A: Transcription Instructions (Meyers, 1987)

  1. Transcribe the group discussion exactly as you hear it on the tape. Transcribe each word even if the sentence does not make sense to you. Be as complete as possible.
  2. Each time a different speaker talks, start a new line on the transcription sheet – even if the person just says “yes” or “no.”
  3. If one group member interrupts another, place three dots (…) at the point in the sentence where the person is interrupted, and transcribe the interruption on the next line. If the first person (the member who was interrupted) continues the earlier statement, start a new line with three dots again (…) to indicate completion of the earlier statement and finish transcribing the interrupted member's statement on that line.
  • Example
  • John: I think we should advise him to board the plane because if he …
  • Tim: I disagree completely with that idea.
  • John: … doesn't take the trip, he will always regret it.
  4. If two or more people talk at once (which is common in group discussion), transcribe each person's remarks to the best of your ability. Put each person's statement on a separate line and use quotation marks at the beginning and end of each statement to indicate that the statements are simultaneous.
  • Example
  • John: “Right, that makes sense to me.”
  • Mary: “Sure I think that's OK.”
  • Tom: “I guess I can go along with that.”
  5. Do not worry about which member is talking at any given time. When you have finished transcribing the discussion discourse, I will go back through the transcripts while watching the videotape and place members’ names next to each transcribed line.
  6. Finally, remember that some parts of the group conversation may not make sense or may appear disorganized. Do not worry about that. Group discussion sometimes appears disjointed and muddled. Just transcribe the conversation as you hear it as completely and carefully as you can.

Appendix B: Unitizing Rules (Meyers, 1987)

  1. A unit is any statement that functions as a complete thought or change of thought.
  2. A unit is typically defined as any statement that contains a subject (explicit or clearly implied) and predicate/verb (explicit or clearly implied) and/or can stand alone as a complete thought (including terms of address, acknowledgments, nonrestrictive dependent clauses, etc.), as indicated next.
  3. Simple sentences constitute separate units. They contain a subject and predicate and constitute a complete thought.
  • He should go to the doctor.
  • She should go to University Y.
  4. Independent clauses constitute separate units. They are a subset of a sentence, contain a subject and predicate, and can stand alone as a complete thought. Divide compound sentences into separate units when independent clauses are connected with coordinating conjunctions such as those that follow, or if the two parts of the sentence can stand alone as two complete thoughts.
  • Additive: and, also, besides, moreover, furthermore, in addition, etc.
  • Opposing: but, yet, however, rather, nevertheless, instead, on the contrary, on the other hand, etc.
  • Alternative: or, either/or, nor, neither/nor, etc.
  • Temporal: then, next, afterwards, previously, now, meanwhile, subsequently, later, thereafter, henceforth, etc.
  • Causal: for, so, therefore, thus, consequently, hence, accordingly, as a result, otherwise, perhaps, indeed, surely, clearly, etc.
  • Example
  • He should go to the doctor and he should postpone his vacation.
  • This sentence contains two independent clauses and should be unitized as two units:
  • He should go to the doctor
  • And he should postpone his vacation.
  5. Functioning independent clauses constitute separate units. In group discussion, individuals often make statements that function as complete and independent thoughts (i.e., serve as independent clauses) even though grammatically they would not be classified as such. These statements often begin with dependent clause conjunctions – because, like, since, so – and are therefore, in a strict grammatical sense, dependent, rather than independent clauses. But when these types of clauses function in group talk as complete and independent thoughts, they should be unitized as separate units. Consider as separate utterances these types of functioning independent clauses which are joined with explanatory subordinating dependent conjunctions such as the following:
  • when, whenever, because, just because, like, since, although, though, while, as, after, before, unless, until, in order that, so, so just, it's like, etc.
  • Example
  • I think he should go to the hospital, just because I think he is seriously ill.
  • This statement contains one clear independent clause, and one “functioning” independent clause and should be unitized as two separate units:
  • I think he should go to the hospital
  • Just because I think he is seriously ill
  • Example
  • I think he should go to University Y, because, it's like he would have so much pressure at the other university.
  • This statement contains a clear independent clause, and a “functioning” independent clause. It should be unitized as two separate units:
  • I think he should go to University Y
  • Because, it's like, he would have so much pressure at the other university.
  6. Agreement/disagreement (yeah, right, no, no way) is counted as a separate unit if it stands alone and functions as a complete and independent thought (i.e., it is not part of a connecting statement that contains a subject and verb).
  • Example
  • No way should he go to University X – is unitized as a single unit.
  • No way! He should go to University X – is unitized as two units.
  7. Multiple agreement/disagreement spoken in immediate succession by the same person (yeah, right, uh-huh) should be unitized as a single unit.
  8. False starts or introductory phrases do not count as separate units, and should be unitized with the next complete statement.
  • Example
  • Well, I put, I put, I think I, I put 4 in 10 for this one – is unitized as a single unit.
  9. Phrases like “you know,” “I guess,” “I mean,” and “isn't it,” when preceding a statement or added onto the end of a statement, are not considered separate units.
  10. Interruptions are considered separate units if they contain a complete thought. If a statement is interrupted and a complete statement is evident both before and after the interruption, it is unitized as two units. If a statement is interrupted, and only one complete statement is evident – what precedes the interruption or what follows the interruption does not constitute a complete unit – it is unitized as only one unit.
  • Example
  • Ann: He should board the plane because …
  • Vic: I don't think that's a good idea.
  • Ann: … because he needs a vacation
  • This sequence contains three separate units. Both statements before and after the interruption are complete independent thoughts.
  • Example
  • Ann: He should board the plane …
  • Vic: I don't think that's a good idea.
  • Ann: … right away.
  • This sequence contains only two units. Following the interruption, Ann merely completes her initial statement and does not produce a second independent thought.
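The splitting logic in Rules 4 and 5 can be sketched mechanically. The snippet below is a hypothetical first-pass unitizer, not part of Meyers (1987): it only splits a speaking turn before the listed conjunctions, leaving the harder judgments (false starts, interruptions, agreement tokens) to a human coder. The marker list and function name are our own illustration.

```python
import re

# Hypothetical sketch of Rules 4-5: split a turn into candidate units
# before coordinating/subordinating conjunctions. A negative lookbehind
# prevents splitting inside the compound marker "just because".
SPLIT_MARKERS = [
    "and", "but", "or", "so", "because", "just because", "since", "although",
]

def rough_unitize(turn: str) -> list[str]:
    # Longest markers first so "just because" wins over "because".
    markers = sorted(SPLIT_MARKERS, key=len, reverse=True)
    pattern = (
        r"(?<!just),?\s+(?=(?:" + "|".join(re.escape(m) for m in markers) + r")\b)"
    )
    parts = re.split(pattern, turn.strip().rstrip("."))
    return [p.strip() for p in parts if p.strip()]

units = rough_unitize(
    "He should go to the doctor and he should postpone his vacation."
)
# → ['He should go to the doctor', 'and he should postpone his vacation']
```

On the “functioning independent clause” example from Rule 5, the same call splits “I think he should go to the hospital, just because I think he is seriously ill.” into two units, matching the hand-unitized version above. A real pass would still require human review, since these conjunctions do not always introduce a complete thought.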

Appendix C: Conversational Argument Scheme (CAS) (from Meyers & Brashers, 2008, Figure 2)

I. Arguables

A. Generative mechanisms

  • 1. Assertions: Statements of fact or opinion.
  • 2. Propositions: Statements that call for support, action, or conference on an argument-related statement.

B. Reasoning activities

  • 3. Elaborations: Statements that support other statements by providing evidence, reasons, or other support.
  • 4. Responses: Statements that defend arguables met with disagreement.
  • 5. Amplifications: Statements that explain or expound upon other statements in order to establish the relevance of the argument through inference.
  • 6. Justifications: Statements that offer validity of previous or upcoming statements by citing a rule of logic (provide a standard whereby arguments are weighed).

II. Convergence seeking activities (reinforcers)

  • 7. Agreement: Statements that express agreement with another statement.
  • 8. Acknowledgment: Statements that indicate recognition and/or comprehension of another statement, but not necessarily agreement with another's point.

III. Disagreement-relevant intrusions (promptors)

  •  9. Objections: Statements that deny the truth or accuracy of any arguable.
  • 10. Challenges: Statements that offer problems or questions that must be solved if agreement is to be secured on an arguable.

IV. Delimitors

  • 11. Frames: Statements that provide a context for and/or qualify arguables.
  • 12. Forestall/secure: Statements that attempt to forestall refutation by securing common ground.
  • 13. Forestall/remove: Statements that attempt to forestall refutation by removing possible objections.

V. Nonarguables

  • 14. Process: Non-argument related statements that orient the group to its task or specify the process the group should follow.
  • 15. Unrelated: Statements unrelated to the group's argument or process (tangents, side issues, self-talk, etc.).
  • 16. Incompletes: Statements that do not contain a complete, clear idea due to interruption or a person discontinuing a statement.
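For analysis, unit-level CAS codes are often collapsed into counts at the superordinate-category level. The sketch below is simply one convenient software representation of the 16 codes and the five categories above; the dictionary layout and function names are our own illustration, not part of the scheme.

```python
from collections import Counter

# The 16 CAS codes grouped under their superordinate categories,
# transcribed from the scheme above.
CAS = {
    "Arguables": ["Assertion", "Proposition", "Elaboration", "Response",
                  "Amplification", "Justification"],
    "Convergence seeking": ["Agreement", "Acknowledgment"],
    "Disagreement-relevant intrusions": ["Objection", "Challenge"],
    "Delimitors": ["Frame", "Forestall/secure", "Forestall/remove"],
    "Nonarguables": ["Process", "Unrelated", "Incomplete"],
}

# Reverse lookup: unit-level code -> superordinate category.
CODE_TO_CATEGORY = {code: cat for cat, codes in CAS.items() for code in codes}

def category_profile(coded_units):
    """Collapse a sequence of unit-level codes into category counts."""
    return Counter(CODE_TO_CATEGORY[code] for code in coded_units)

profile = category_profile(
    ["Assertion", "Elaboration", "Agreement", "Objection", "Assertion"]
)
# profile: 3 Arguables, 1 Convergence seeking, 1 Disagreement-relevant intrusion
```

A profile like this supports the kind of distributional comparisons (e.g., of majority versus minority subgroups) described in the chapter, without discarding the unit-level codes themselves.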

Appendix D: Coding Rules for Using the CAS (Meyers, 1987)

  1. If the function of the statement is clear, code it into the appropriate category using the number code.
  2. If the function of the statement is not immediately clear, coding should proceed along the following sequence:
  • Arguables
  • Reinforcers
  • Promptors
  • Delimitors
  3. Attributions of meaning should be limited to the text as much as possible. If a cogent idea is readily inferred from the statement in the text, code it into the appropriate category. When meaning is not evident in a given utterance, read ahead in the transcript to ascertain the meaning assigned to the utterance by the group, or read previous parts of the transcript to determine if prior conversation provides a context. If a statement is not cogent or is impossible to interpret, do not infer its meaning. Instead, code it in the Non-arguable category.
  4. Questions should be coded according to their function in the group's argument.
  a. Questions which call for conferral, support, or action on an argument-related issue should be coded under the category Proposition. These include:
  i. Requests for additional information, clarification, justification, or support
  1. How do you know that is true?
  2. Do you have any evidence for that?
  3. Why do you say that?
  ii. Requests for direct action
  1. Why don't we talk about this argument a little more?
  2. Do you think we should consider Tom's argument valid?
  3. What do you think about Sam's statement?
  b. Questions that reflect statements of the speaker's opinion should be coded in the appropriate category. These are usually indirect Assertions that state the speaker's opinion and should be coded in the Assertion category.
  i. You don't really believe he should have the operation, do you?
  ii. C'mon, how can you really think he should board the plane?
  c. Questions that relate to non-argument-related issues (how to organize the discussion, simple requests for repetition, off-track questions, etc.) should be coded in the Non-Arguable category.

Note: Additional coding rules can be found in Canary (1992).
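Once two coders have independently applied the CAS to the same unitized transcript, intercoder reliability is typically reported with a chance-corrected index such as Scott's pi (Scott, 1955, in the reference list). The computation can be sketched as follows; the code is our own minimal illustration, not from the chapter.

```python
from collections import Counter

def scotts_pi(coder1, coder2):
    """Scott's pi for two coders' nominal codes over the same units.

    pi = (P_o - P_e) / (1 - P_e), where P_o is observed agreement and
    P_e is expected agreement from the pooled code distribution.
    """
    assert len(coder1) == len(coder2) and coder1, "need same-length, nonempty codings"
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Expected agreement pools both coders' marginal distributions.
    pooled = Counter(coder1) + Counter(coder2)
    expected = sum((count / (2 * n)) ** 2 for count in pooled.values())
    return (observed - expected) / (1 - expected)

coder1 = ["Assertion", "Assertion", "Agreement", "Objection"]
coder2 = ["Assertion", "Elaboration", "Agreement", "Objection"]
pi = scotts_pi(coder1, coder2)  # observed = .75, expected = .28125
```

Because P_e is built from the pooled distribution, pi penalizes agreement that would arise by chance when a few categories (e.g., Assertions) dominate the discussion, which is common in CAS-coded data.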

Appendix E: Sample Coded Transcript Using Multistage Procedures (from Meyers & Brashers, 1995)

[Image: sample coded transcript, not reproduced here]

A/NA indicates Argument/Non-Argument message code.

DA indicates Decision Alternative code. In this case, there were three decision alternatives: Risky (R), Cautious (C), and Neutral (N).

In the transcript, each Content code would be highlighted in a different color.
