NOTES

Introduction

1 Often, these social science fields or subfields assume that this approach characterizes work in the natural and physical sciences; and given the sense of inferiority carried by many social scientists vis-à-vis those other modes—captured in the language of “soft” versus “hard” sciences—they seek to emulate what they perceive as “true,” and better, scientific practices. (In French, interestingly enough, the distinction is drawn between sciences dures (hard) and souples (literally, supple), the latter fitting nicely with the notion of flexibility in interpretive research.) That many of those other sciences do not conduct their work following the steps of “the scientific method” goes unnoticed. We do not have space to pursue this fully here, but we return to the point at the end of the chapter when we take up textbook presentations of research methods versus their practice.

2 This formulation omits the possibility of a critical realist perspective (see, e.g., A. Collier 1994), which we do not take up in this volume for reasons of space and because we think that, for purposes of research design, the critical realist would be unlikely to find traditional, positivist designs problematic. For alternative perspectives on critical realism and design, see Blaikie (2000).

3 That this is not a tale only of behavioralism triumphant is clear in Mirowski's (2003) analysis of post–World War II US developments in science and its curriculum, with growing implications for social science worldwide under pressure, today, to follow the US model.

4 Although both qualitative–positivist and qualitative–interpretive methods use one or more of the same three methods in generating data—observing(-participating), talking (interviewing), reading documents—the difference between them is most clearly seen in what they do with data once they have them in hand. If one looks closely, however, one can see that either positivist or interpretive sensibilities inform what researchers indeed do—not only in analyzing data, but also in generating them; so even the orientation toward and enactment of data generation methods is different, as the example of interviewing, above, illustrates.

5 We intentionally cite Ferguson's book here, for several reasons: it is a useful example as it crosses disciplinary boundaries both because of its feminist theoretical argumentation and because several disciplines trace their theoretical origins to Weberian bureaucracy theory; and it is an interesting bit of interpretive theoretical work that has profound implications for empirical analyses of and interventions in organizations and management practices.

6 See, e.g., Mihic et al. (2005). We do know it is also an issue in others of the human sciences (How 2011).

7 We note that the Western Political Science Association has an organized section entitled “Political Theory and Its Applications.” The section “welcomes papers at the intersection of political theory and empirical concern, creating a critical dialogue between theory and practice in which events push our thinking further and intellectual labor is performed to conceptualize historical and contemporary developments.” Many of the methods of analysis used by political theorists are also used by field researchers analyzing linguistic materials generated from observations, interviews, and other sources: deconstruction, semiotics, poststructuralism, and other exegetical ways of treating texts. Much like anthropologists learning their research methods, these scholars learn and manifest their methods in implicit ways, through the reading, discussing, and writing of texts, rather than through methods courses. Perhaps for these reasons, as well as structural ones having to do with the political science discipline, “theorists” are not recognized as having methods. Our thanks to Mary Bellhouse, Anne Norton, and Elizabeth Wingrove for educating us in these matters.

8 The term “positivism” encompasses the three initial nineteenth-century schools of positivist thought—social positivism, evolutionary positivism, and critical positivism (or empirio-criticism)—along with the early-twentieth-century logical positivism that superseded them, which emphasizes verification, and mid-twentieth-century post-Popperian neopositivism, which emphasizes falsifiability (see Hawkesworth 2006a, P. Jackson 2011). Reading across disciplines today, we find both verificationist and falsificationist philosophies present, and so we have chosen to use the broader positivist label here. It is often the case that methods texts that treat their subject in keeping with positivist presuppositions make no mention of it.

9 The term “interpretivism” encompasses a broad array of schools with a variety of specific methods that are united by their constructivist ontological and interpretive epistemological presuppositions. For discussion, see Chapter 2.

10 John Van Maanen (2011) has recently added a fourth term to this trilogy—head-work—in reference to the conceptual work that informs research. This includes prior knowledge of both a theoretical–academic and an experiential kind, which we take up in subsequent chapters, although without his terminology.

11 Paradoxically enough, in some influential feminist political theory, to be a “subject” implies having existence and agency. Ella Myers (personal communication, 1 November 2010) notes that Judith Butler, drawing on Foucault, argues that “the status of . . . ‘subject’ always carries a double-meaning: . . . one is simultaneously ‘subject to’ constraining conditions and a ‘subject’ (not object) who is capable of action within those conditions (which are enabling and not only constraining).” This makes agency “a key dimension of what it is to be a subject, even within a social constructionist frame such as Butler's.” As Butler (1997: 17) writes, power both initiates the subject and constitutes the subject's agency, so that the subject is “neither fully determined by power nor fully determining of power (but significantly and partially both).” Our thanks to Ella Myers for help on this point.

12 We acknowledge that there are other forms of positivist research that are not overtly variables-based, such as some forms of historical analysis. (To see the extent to which some positivist historical analysis follows a variables-based logic, see the brief discussion by Falleti, 2006.) Still, we have not seen interpretivist researchers challenged as to why they did not undertake process tracing or how they dealt with selection bias in their choices of period or event, whereas we have witnessed many such encounters concerning variables and have heard even more stories about such challenges from others.

13 In doing so, we hear political scientist Raymond Duvall's words in our ears, and we thank him for continuing to sound this caution!

14 Nor do we delve into whether practicing positivist or interpretivist researchers can articulate their philosophical presuppositions, beyond the brief discussion on page 4. The extent to which members of each group do so reflects, to some extent, the hegemonic position of the former and the “minority” position of the latter. In “bicultural” fashion (see Bell 1990), interpretivists need to be conversant with the culture and language of the majority as well as with their own. Whereas scholars doing positivist research are rarely called upon to articulate the methodological presuppositions underlying their research, interpretivists often, if not usually, need to do so. We would hope for a certain “bilingualism” and the ability to “code-switch” for both groups, in the name of better communication.

We also do not discuss whether researchers believe in these presuppositions. As a colleague expressed to one of us, in his experience, it is rare at a conference to hear claims of general laws from positivist, behavioral researchers. Instead, they recognize their studies as imperfect descriptions of a complex world.

15 For an interesting historical fictional account of this period, set in 1660s Oxford, see Pears (1997).

1  Wherefore Research Designs?

1 In other words, settings do not determine methodology! Many kinds of settings, as well as methods, can fit either positivist or interpretivist approaches.

2 Just to be clear, we see a distinction between understanding the broad treatment of a concept in a particular body of research literature, on the one hand, and a priori concept formation, on the other.

3 One exception is the fifth edition of Singleton and Straits (2009), whose revised chapter on ethics, repositioned more prominently from near the end of the book in previous editions to the third chapter, goes somewhat beyond IRB issues.

4 For those who assume that ethics concerns do not apply to their form of research because they are not interacting directly with living human beings, see the cautionary tales in Marks (2005) and Wylie (2005).

5 We are indebted to Lee Ann Fujii for helping us bring out these implications.

2  Ways of Knowing: Research Questions and Logics of Inquiry

1 Our thanks to Markus Haverland for noting that this needs saying—and for saying it!

2 These cases, in which research questions develop after the research has, in a sense, already started, pose problems for many US IRBs, although this is a long-common way of doing participant observer research and is not (yet) problematic for EU member states. We take this up in Chapter 7.

3 We thank Joe Soss for help drawing out this point.

4 Patrick Jackson (personal communication, Toronto, 1 September 2009) notes that we should be cautious in invoking Peirce's ideas for interpretive methodological purposes, given their origins in positivist thought. Peirce's ideas about abduction, however, apparently changed between his early writings and his later ones (Benjamin Herborth, personal communication, Potsdam, 12 September 2009). (Friedrichs and Kratochwil, 2009: 715, also imply contending interpretations of Peirce's intended meaning.) We suspect that Herborth's point resolves Jackson's. What is presented here would seem to be in keeping with Peirce's later views. Jackson's point, however, supports the research reality that non-interpretive research can also begin with a puzzle or surprise. But abductive reasoning enables methodologists to articulate a number of characteristics particular to interpretive research which the logic and language of inductive reasoning do not explain, such as the focus on puzzles as the starting point and the iterative–recursiveness of the research process.

5 See also Glynos and Howarth (2007) on retroduction, used synonymously; but see Blaikie (2000) for a usage that distinguishes between the two. For historical background, see Menand (2001). Kuhn's notion of “puzzle-solving” (1996/1962, chapter 4) appears to be a rather different activity from the puzzling that launches abductive inquiry.

6 Thanks to Xymena Kurowska (personal communication, 16 July 2010) for suggesting a turn to etymology and metaphoric meaning to “normalize” the resonance of abduction as a concept.

7 As discussed in Chapter 6, researchers assess these provisional explanations using various checks on their own sense-making.

8 Once again, Joe Soss has helped us articulate what was just below the surface.

9 Although Campbell (1989: 8, original emphasis), in his foreword to a revised edition of Yin's widely read text on case study research, claims that hermeneutics means “giving up on the goal of validity and abandoning disputation as to who has got it right,” it does not follow from the hermeneutic circle that disputation is abandoned: even among members of an interpretive community who share an understanding as to how data are to be interpreted, disagreements over interpretations are possible. They would be resolved, however, by appeal to those shared understandings (e.g., by appeal to ethnographic data, rather than to statistical analyses). Contra Campbell, this general reasoning about disputation applies to both interpretive and positivist research: there can be different ways to interpret statistical results, for example, and such disagreements are resolved by logic and argumentation within a shared epistemological framework.

10 Hatch and Yanow (2008) used the metaphor of painting styles to try to evoke differences in research approaches, seeing in Jackson Pollock's drip paintings this same trace or echo of human action which interpretive researchers seek to grasp.

11 This “front-loading,” i.e., working out issues in advance of data collection, means that quantitative research can be comparatively easy to write up, by contrast with both qualitative and interpretive research. Statistical testing means that the “logic” advanced in the design is either supported or refuted, and “writing up” means reporting and assessing the implication of the “results.” Without the apparatus of significance testing, both qualitative and interpretive research requires different kinds of attention to the meaning of the evidence, such that writing is, literally, less “formulaic.” Finally, as is well recognized, quantitative research also tends to take up less space because “findings” are often presented as equations rather than in the “word” detail necessary to many forms of qualitative and interpretive research.

12 This is one of the things that distinguishes research interpreting theoretical texts—e.g., readings that seek to make sense of Aristotle or Arendt—from the interpretive analysis of empirical data, including the documentary texts that might be drawn on in either historical or contemporary research.

13 See also Sandberg and Alvesson (2011) on what they call “gap-spotting” in existing literature as the source of research questions.

14 Our thanks to Lee Ann Fujii for the analogy and other help articulating this point.

15 In positivist social science traditions, theorizing is understood as “formal” in language and logic, with mathematical systems that are abstract and impersonal still considered the ideal in several disciplines (although see Whitehead, 1997/1925, on the “fallacy of misplaced concreteness”: the notion that mathematical formulations are more concrete than descriptions of lived experience). These systems are often described by their creators and advocates as “parsimonious” or “elegant,” displaying a clarity in their postulated causal relationships that other modes of theorizing supposedly cannot achieve. See Lincoln (2010) on these differences and their connections to knowledge accumulation.

16 This does not mean that positivist modes of research do not begin with puzzles, too. But textbook discussions of design, even if they do mention puzzles as sources of research questions, typically do not engage abductive logic. They are more likely to emphasize the “stages” of research, presented in a linear fashion (e.g., Singleton and Straits 1999).

3  Starting from Meaning: Contextuality and Its Implications

1 Others include the historical turn (McDonald 1996), cultural turn (Bonnell and Hunt 1999), pragmatist turn (S. K. White 2004), and so on. What they share is a repositioning of meaning-expression and -communication, along with interpretation, at the center of theorizing about ways of seeing (J. Berger 1972) and knowing.

2 There is an extensive literature on proper question construction and phrasing in survey instruments, intended to control for “interviewer effects” and other forms of researcher influence on participants’ responses, something that does not trouble interpretive researchers in the same way, given their different methodological presuppositions about social realities and the ways in which these can be known. We discuss these points further in Chapter 6.

We do not mean to suggest, however, that survey researchers are completely unconcerned with context. They are, for instance, attuned to changes in meaning over time and whether, with repeated surveying, this would require changing questionnaire language as particular phrases or words become outdated. For instance, in a 2012 version of a survey initially conducted in 1972, should the researcher replace “women's liberation,” used when the question was first asked, with the more common contemporary phrase “the women's movement” (Conway et al. 2005)? A survey researcher would be concerned with whether “women's liberation” means the same thing in 2012 as it did in 1972 or if it has dropped out of use altogether, thereby rendering the question useless.

Less discussed are the cultural assumptions underlying survey methodology. Standard techniques may not be possible in countries or with populations that have little experience with surveys. Tessler and Jamal (2006: 436) describe how, in Egypt, those they approached “wanted to think through their responses very carefully” (which was hugely time-consuming), asked for follow-up explanations, or wanted to hear the surveyor's opinion before responding—things not customarily accepted in survey research. And random selection also worried them! Rudolph (2005) describes administering a survey in rural India in 1957, assuming the interaction would involve only one resident and one “woman within” as respondent—only to discover that it took a village, so to speak, to deliberate over the questions and provide answers. See also Chabal and Daloz (2006: 177–84).

3 Williams contests the value of this approach: “There is nothing [in the text] to help the reader decide what is of value in the situation, what they [sic] will find insightful, or on what basis they [sic] might do so” (2000: 219). From our perspective this position misses the point of thick description in this specific situation, which is to enable readers to compare the study context to their own. Moreover, the criticism paints a rather passive portrait of readers that is inconsistent with seeing them as more active meaning-constructors, as suggested by reader-response theory (e.g., Iser 1989) and other interpretive presuppositions.

4 These purposes have been discussed across a wide range of theoretical and methodological fields, among them feminist theorists and researchers (e.g., Cancian 1992, Ackerly et al. 2006, Hawkesworth 2006b); critical legal studies (e.g., Halley and Brown 2003); critical race theory (e.g., Crenshaw 1995, Delgado and Stefancic 2001); critical theory (e.g., Prasad 2005, Ch. 9); and action research (e.g., Greenwood and Levin 2007).

5 Williams (2000: 215) argues that interpretive researchers do generalize in a form he calls “moderatum generalization.” Noting that Geertz wants to “say something of something” (p. 213), he argues that Geertz is “inferring from specific instances to the characteristics of a wider social milieu” (p. 212). We do not dispute this understanding of generalization, but we note how it is tied to context and, as important, is not in the service of building general, a-historical, a-cultural theory. For a brief discussion of Geertz’ understanding of the value of the general in relation to the particular, see Adcock (2006: 60–3).

6 This orientation toward contextual meaning-making is seldom acknowledged in general research methods texts, leading their treatment of design to be implicitly, if not explicitly, positivist (Schwartz-Shea and Yanow 2002). Interpretive researchers consulting such texts will find little guidance for producing designs that link meaning and context, along with some advice (e.g., the need to define concepts a priori) that would sever that connection.

7 It is important to note, also, that “thickness” is a relative measure, not an absolute one. For one, both the level and the kind of detail have to be pertinent to the research question: one would not likely report the number of tiles in the ceiling, for instance, in a research project focused on, say, a school principal's management style. Additionally, one needs to take account of one's readers and what they already know, or can be reasonably assumed to know, about the subject. Such judgments lead to accounts that may be “thicker” in some parts than in others.

8 “Local knowledge” is often credited to Geertz (1983), but it has many conceptual antecedents, especially in the field of urban, regional, and international (development-related) planning and its 1970s emphasis on participation in planning and design (see, e.g., Arnstein 1969, Gans 1968, Peattie 1970, and Piven and Cloward 1977).

9 “Formal” models include game-theoretic and other forms of theorizing using mathematical tools. We are not sure how it is that mathematical theorizing has claimed exclusive ownership of the adjective “formal.” Referring to other modes of theorizing as “non-formal” (see, e.g., Aldrich et al. 2008: 834) is presumptuous, at best.

10 This choice of terminology may confuse those readers acquainted with parallel debates in the field of International Relations. In an influential article, Wendt (1998) distinguished between causal and “constitutive” theorizing. Wendt, however, argued for and used a constructivist ontology combined with an objectivist epistemology—putting him at odds with the constructivist ontological and interpretivist epistemological approach articulated in this book.

11 Some scholars identified with interpretive research have used the language of “mechanisms” in their efforts to explain the distinctive ways that qualitative or interpretive research can contribute to causal explanation (see, e.g., Lin 1998). Informed as we are by the arguments developed here, however, we do not find this approach to be helpful. It is also not clear how “mechanisms” in that literature is different from its meaning and use by positivist–qualitative researchers in the comparative case study literature.

4  The Rhythms of Interpretive Research I: Getting Going

1 For a thought experiment on a positivist reviewer's encounter with an interpretive manuscript, see Schwartz-Shea (2006: 90–1).

2 This is the case even when conducting a pilot study, whose results may be used to modify the research instrument prior to beginning the full research project.

3 There are exceptions to this separation of data collection and analysis among positivist–qualitative researchers, most notably in work by Ragin (1997) and Brady and Collier (2010). As the latter put it, “. . . many qualitative researchers view the iterated refinement of hypotheses in light of the data to be essential” (2010: 329, original emphasis). This view has not, however, been incorporated yet in most research design textbooks or course discussions, which tend to articulate the more classic model.

4 This is the sort of advance preparation and practice undertaken by improvisers in theater and music (Renaissance, jazz, and other forms), which lays the groundwork on which flexible, adaptive responses in the field can be built. See discussion at the end of this chapter.

5 For a comparison between positivist–qualitative and interpretive(–qualitative) ethnography, see Schatz (2009). The fact that ethnographic, case study, and some other forms of research can be informed by either positivist or interpretive presuppositions is what has given rise to the terminological distinction between qualitative and interpretive methodologies, a point discussed in this book's introduction.

6 But see Russell et al. (2002: 14) on rapport and its conceptual difficulties: “[N]eo-positivist claims about the technical function of rapport in field research rest on assumptions about the possibility of collecting ‘accurate’ or ‘unbiased’ data from and about one's subjects.”

7 Paying participants is a debated topic in academic ethics, with practices varying across the social sciences. In experimental research in psychology and economics, it is accepted practice to pay subjects for their participation; in psychology, undergraduate student subjects often receive course credit for participating in experiments. In field settings in other social sciences, it is usually frowned upon (not only by IRBs). That Walby (2010) paid the men he interviewed is of note.

8 This perspective on researcher identity is a far cry from the common positivist view that factors such as personal contacts or language skills are methodologically irrelevant to case selection: “[T]hese features of a case have no bearing on the validity of the findings stemming from a study” (Gerring 2007: 150). That perspective follows from the methodological assumption that “the case should stand for the population” (Gerring 2007: 147), a position consistent with the goal of building general theory, which leads to the severing of a “population” from its context. From an interpretive perspective, this denial of the embodied aspects of research obscures the ways in which researcher characteristics—gender and race-ethnicity, but also able-bodiedness, age, and other factors—may affect access (and, ultimately, the character of social science knowledge).

This is a point that has been central to feminist theory and methods, including the debates on standpoint, as well as to science studies (see Haraway 1991, Harding 1993, Hartsock 1987, Hawkesworth 2006a). Gerring (2007: 146) does recognize that the contemporary social science knowledge base is skewed by attention to “a few ‘big’ (populous, rich, powerful) countries.” He goes on to argue that “a good portion” of the disciplines of economics, political science, and sociology, in particular, is built primarily on familiarity with one country, the US. One might consider the extent to which not treating contacts and language skills as methodologically relevant is responsible for producing this skewing and how seeing such access-related issues as linked, methodologically, to knowledge claims might remedy the problem.

9 She traces her ability to deal with stressful experiences in this way to prior experience as intake director at a crisis center for homeless teens in New York City—another example of the unanticipated ways in which prior knowledge can play a role in research.

10 Our thanks to Lee Ann Fujii, Tim Pachirat, Joe Soss, and Dorian Warren for pointing us to these and other sources.

11 A rhizome is a plant form which reproduces by sending out shoots underground, each of which might give rise to a new plant. The term was introduced by Deleuze and Guattari (1987) as a way of highlighting features that entail connections among multiple nodes which can be entered at any point (a concept theorized also as “networks”). It has since caught on as a way of describing a form of research process.

12 We note that conducting survey research among hard-to-reach groups (e.g., those trying to avoid calling attention to themselves, such as immigrants without official papers) poses its own difficulties.

13 “Small n” research is used across many fields, from sociology (Ragin 2005) to business (S. Jackson 2008), from history (Snow et al. 2004) to medicine (Cowan et al. 2004). In the field of comparative politics, the literature devoted to the proper selection of cases is voluminous. What constitutes “a case” is the first question; the possibilities include individuals, decisions, social groups, events, and countries, among others. For an overview of the nine possible selection techniques suited to a “small n” case study approach, see Gerring (2007, Chapter 5). The concern with “selection bias” in the choice of country case studies was classically articulated by Barbara Geddes (1990; see also D. Collier et al. 2004).

14 Additionally, the selection of cases according to “most similar” or “most different” design logics assumes that what is “similar” and “different” can be determined by an external judge, the researcher. As one of the manuscript reviewers put this point, in reference to the epigraph at Chapter 3: “[T]he fact that we don't know whether or not Indians prefer cold milk in the morning also means that we don't really know what a similar or different case is, in most circumstances, at least not without an already-established ‘thick’ knowledge of contextual factors” (original emphasis). Our thanks to the anonymous reviewer for this key point.

15 We note the implicit and completely unreflective bias in methods textbooks and discussions towards “Western” values of openness when it comes to scientific inquiry, as well as to the implied impartiality and accuracy of governmental and other sorts of data. Tessler and Jamal (2006) and other essays in “Field Research Methods in the Middle East” (2006) provide several examples. See also Sadiq and Monroe (2010).

16 Compiling a complete sampling frame is challenging in other areas as well. US pollsters once found landline phone lists to be comparatively complete for certain tasks (e.g., predicting voting patterns), but cell phones have eliminated this possibility. For particular populations, such as the homeless, immigrants, or others who fear authorities, complete lists are also problematic. Fear of authorities can also have a historic reference point: synagogue and other groups in The Netherlands do not compile lists of members today because of what was done with those lists during World War II.

17 By this positivist logic, a researcher can and should be concerned only with choosing the “best case” for testing theoretical propositions developed a priori. This is not to say that positivist researchers don't encounter obstacles to achieving this ideal (e.g., available sampling frames are inadequate or panel data are distorted by attrition), but that these obstacles are not theorized in relation to researcher identity. (Our thanks to an anonymous reviewer for pushing us toward a more subtle treatment of these points.) For a related discussion of the relevance of researcher identity to knowledge generation, see Note 8, this chapter.

18 This point is what the title of Lincoln and Guba's (1985) seminal text Naturalistic Inquiry emphasizes—investigating research participants in their “natural” settings. We have not used this terminology for reasons explained in the introduction.

19 The matter of researcher control seems not to be problematized in methods discussions of these kinds of research, an unintended consequence, perhaps, of extending experimental logic to such settings.

20 The citation is to Charles Ragin. 1987. The Comparative Method: Moving Beyond Qualitative and Quantitative Strategies. 164–71. Berkeley: University of California Press.

21 On listening skills in interviewing and sense-making, see, e.g., Forester (2006), Spradley (1979), and Weiss (1994). Forsey (2010) calls for a shift of emphasis to include “participant listening,” and not just participant observing.

22 Use of multiple interviewers or research assistants is still relatively rare in US interpretive research, although it is more common in Europe, with the demands of EU funding driving multi-state studies. The time- and labor-intensive nature of the work for the primary investigator means that “measures of productivity” that are increasingly being developed to assess university researchers, which are attuned to survey and laboratory work, may systematically undervalue interpretive research and researchers.

5  The Rhythms of Interpretive Research II: Understanding and Generating Evidence

1 Thus far, neopositivists, i.e., those endorsing Popperian falsifiability over logical positivism's verification criterion to demarcate science from non-science, would agree; our discussion is consistent with the idea that observation is theory-laden. (For a clever explication of this relationship, see Shapiro's, 2004: 26, example of questions and evidence about a woman saying “I do” in a conventional marriage ceremony.) However, neopositivists do not take the next step, to the co-generation of data, which is elaborated in the rest of this section.

2 This is a point where our introductory caveats matter! We are drawing attention to the implications of the commonplace notion of the “collection” of data and to the positivist philosophical presuppositions that imply that scientists’ theories (can) “mirror” the world. No doubt many practicing positivist scientists are aware of the ways in which the research questions they formulate “create” or “generate” their data. Experiments are purposely set up by scientists to produce data relevant to their questions. Survey researchers emphasize the ways in which their phrasing of survey questions produces particular answers (e.g., Zaller 1992, Walsh 2009). Yet it does not follow that they have necessarily rejected the overarching goal of approximating “reality” through objectivist means.

3 “Pure” autoethnography, in which the researcher uses his own experience in the setting in question as a vehicle for understanding, is a key exception to the idea of data that are co-generated. But some autoethnographies also incorporate others’ sense-making. See, for example, Greenhalgh's (2001) study of her own experiences with illness and medical diagnoses. In other respects, autoethnographies generally follow the lines sketched out in these chapters for thinking about research design, and we do not otherwise single them out.

4 The first citation is to J. L. Austin. 1962. How To Do Things with Words. Oxford: Oxford University Press. Watson's argumentation with respect to the implications of a speech-acts-and-presentation-of-self take on field research is that it is only the context of daily, ethnographic immersion in a setting that enables researchers to situate what they hear in interviews: “. . . the people who supply us with information would be far more circumspect about what they tell us if they saw us as a person they knew and encountered everyday in the workplace rather than as ‘that researcher from the university up the road’” (2011: 210, on organizational ethnography). This leads him to doubt the utility of interviews for understanding, when those interviewed are only encountered in interview events. On the tradeoffs between interviewing and ethnographic research, see Soss (2006).

5 We note in passing that little has been written about the “lies” that researchers tell in the field, typically in the context of masking parts of their non-research identity. See Ellis (1995).

6 This produces what Harding (1993) has called “strong” objectivity—strong because prior knowledge and embodiment are theorized, rather than ignored. We discuss objectivity further in Chapter 6 .

7 Although we have not found a published statement to the effect that qualitative (i.e., non-numerical) data are inferior, that quantitative data are better appears to be a tacit assumption among many positivist researchers. This inference is based on the emphasis on measurement in standard methods texts, the ubiquitous equation of science with “measurement,” the unreflective privileging of “hard” over “soft” data, the apologetic tone of some qualitative researchers when presenting their work at conferences, and qualitative researchers’ efforts to model their research after quantitative practices. For example, King et al. (1994: 23, 25) state that data “can be qualitative or quantitative in style,” but they then proceed to discuss the need for improving the character of qualitative data in terms of their measurability.

In the second edition of their edited volume, Brady and Collier (2010: 325) state that a “piece of [qualitative] data that begins as an isolated causal-process observation can subsequently be incorporated into a rectangular data set”—presumably in order to improve the status of word-data by rendering them in tabular form similar to that of quantitative data sets. Ragin (1997) has been one of the most forceful voices speaking against this oft-tacitly accepted ideal.

8 Our thanks to Shaul Shenhav for helping us draw out the implications of our thinking.

9 Shaul Shenhav (personal communication, 2 June 2011) draws a distinction between intertextuality in interpretive methods and in positivist ones. In the former, it is “a process of interaction between the scholar's mind and the object of investigation, . . . a living process where the researcher brings whatever he has [acquired] . . . to help him to understand the object of investigation or to address the [research] questions he has. In [positivist] methods this process is rather different. You have many predefined guidelines, the potential arenas are much more narrow, creativity is bounded by predefined procedures, [and so on]. In other words, intertextuality in [positivist] methods is restricted to predefined arenas (data sets, statistical procedures, accepted visualizations . . .). Obviously, it affects the mapping and exposure. . . . While both in quantitative and qualitative [positivist and interpretive] methods you have to work very hard to make sense of what you find, in qualitative–interpretative approaches the human efforts for each study start right at the beginning when you [begin to look for this kind of] intertextuality. . . . The difference is not about numbers or the deductive-inductive dichotomy and it is different from Charles Ragin's way of seeing the two approaches. It is more a question of mind-set or cognitive schemes applied while doing research.”

10 This does not mean that interpretive researchers do not study similarities. They often ask: What are the shared, yet tacitly known assumptions of members of this group, that make for common ground? Sir Geoffrey Vickers (n.d.) once observed, in fact, that social scientists pay more attention to the “mismatched signal” than they do to the “matched signal”—i.e., to differences, rather than to the shared assumptions and values, including with respect to what needs to remain unspoken, that make a social unit work.

11 “Auditing” also appears in interpretive methodological discussions to designate a similar process of keeping track of major decisions during the research. The term derives from anticipating an “audit” of one's manuscript by future reviewers (Schwartz-Shea 2006).

12 This is one among many reasons why the sort of “data archiving” championed by quantitative and some qualitative scholars (“Data Collection and Collaboration” 2010) is problematic for interpretive research. Additional considerations include the sheer volume of notes generated in the field, the cost of transcription, issues of academic freedom, and, most important, ethical concerns, including promises of confidentiality and the need to protect participants from possible harm. See Chapter 7.

6  Designing for Trustworthiness: Knowledge Claims and Evaluations of Interpretive Research

1 Debates over the general utility of significance testing have even spilled over onto the pages of the New York Times (Carey 2011). For a scholarly review see Gill (1999).

2 The problem of construct validity is especially clear in secondary data analysis where researchers adapt indicators that were created by another researcher for her specific purposes to their own research needs. This problem is relevant to data archiving, discussed in Chapter 7. (For an in-depth, nuanced discussion of different measures of construct validity, see Adcock and Collier, 2001.)

3 A National Science Foundation report on standards for assessing qualitative research describes replicability thus: “The description of the methodology should be sufficiently detailed and transparent to allow other researchers to check the results and/or reanalyze the evidence. All reasonable efforts should be undertaken to make archival and other data available to scholars” (Lamont and White 2009: 85). As discussed in this section, “checking the results” presumes a “mirroring of the world” and the irrelevance of researcher identity (see also Chapter 4, Note 8).

4 See, e.g., King et al.’s (1994: 31–2, 151–68) emphasis on estimating and reporting uncertainty attributable to “measurement error,” a preoccupation for positivist researchers that drives their research design. Behind this assumption lies an aspect of the “unity of science” debates—that the social world can be understood through the same sorts of general, a-historical, a-cultural laws that are understood to characterize the natural and physical sciences. This is an older, and now generally rejected, understanding of how those sciences do their work. For more, see Cat (2010).

5 Although the degree of stability of social phenomena is an empirical question, the positivist gestalt, in searching for causal laws, encourages a neglect of context that often produces a-historical, presentist research agendas.

6 Although some interpretive researchers might emphasize the stability of reified patterns and institutions in their studies, at the more philosophical level these are also understood as humanly constructed and historically constituted and, therefore, potentially changeable, although not necessarily with ease. See P. Berger and Luckmann (1966) for the classic discussion of objectification and reification in the context of social construction processes.

7 This is a methodological point that we do not have the space to discuss in detail, although we emphasize that it concerns interpretation and interpretive communities, not the character of human “nature” (as in the “rational man” arguments in economics). For a philosophical treatment of this understanding of interpretive processes as similar across humans, see Fay (1996), Chapters 1, “Do you have to be one to know one?” and 4, “Do people in different cultures live in different worlds?”

8 The relationship between forms of distance and types of bias has not been engaged in textbook discussions of objectivity. On the other hand, this is the idea at the heart of the notion of “going native” which textbook discussions of qualitative methods so often warn against: that a researcher, in dwelling physically in close proximity to those being studied, would lose the cognitive–emotional “distance” required to study them. All manner of epistemological assumptions are built in to this phrase, as well as ontological ones concerning the character of “member” and “stranger,” not to mention a residual colonialist paternalism or even racism (Nencel and Yanow 2011).

9 Experiments have found confirmation bias to affect the judgment of research subjects such as “political experts” (e.g., intelligence analysts, Tetlock 2005), as well as of psychologists and other research scientists (e.g., Shadish 2007). In their report on the National Science Foundation Workshop on Interdisciplinary Standards for Systematic Qualitative Research, Lamont and White (2009: 85–6) treat confirmation bias as a problem in the analysis of qualitative data, presumably because of this experimental literature, although they neither cite that literature nor clarify why confirmation bias should be a specific concern in qualitative research. To guard against it they suggest that researchers test their “novel insights or facts” against evidence gathered independently from the case under study or taken from other cases developed by other researchers.

10 Ethnographers and participant observers often draw on observations of the sort that Webb et al. initially designated “unobtrusive” (e.g., noting the amount or character of laundry hanging to dry on a line behind an apartment building as indicative of the kinds of people living there, their age, size of families, etc.). But the original 1966 Unobtrusive Measures was retitled in the 1981 edition as Nonreactive Measures in the Social Sciences, and the discussion in the text itself clearly signals that the authors were aiming not only at an unobtrusive observer, an accustomed role for participant observers, but one who could achieve uncontaminated, objective observations. Title and text also signal that they intended “measures” to be taken more literally than as a synonym for indicators, for instance.

Reactivity is also understood as a threat to “external validity” (Campbell and Stanley's, 1963, phrase for generalizability): if reactivity combines with the independent variable to cause an effect in the experimental or quasi-experimental setting, the findings of that research may not generalize to, or obtain in, other settings or groups in which researchers are not present. Placebos and their measured effects in medical research, on which there is an extensive literature, are a standard way in which positivists assess reactivity.

11 As survey researchers know, however, respondents often are perplexed by survey questions or categories when these seem not to fit their experiences or views, and they try to get the researcher to clarify them in ways that would mean adapting or adjusting the questionnaire. Researchers in those areas of the world where surveys are not commonly part of the societal culture may find their expectations of respondents’ compliance thwarted, with various impacts on their research timetables as they are asked not only to explain questions that are not understood, but to offer their own personal answers to the questions so that respondents can know better how to frame their answers (Tessler and Jamal 2006; see also Rudolph 2005). Researchers who alter or explain the questions to survey respondents have, within the methodological parameters of survey research, introduced bias into their survey results. In this view, researchers are not meant to make “ad hoc” responses to individuals even if, in the researcher's judgment, such responses facilitate respondent understanding of what they are being asked.

12 Again, this is an idea that carries over from experimental research design, in which researchers certainly do not—to the best of our current knowledge—interact with cells in Petri dishes, for instance, affecting research outcomes (although this is precisely the point that Heisenberg, in his “uncertainty principle,” was articulating with respect to measuring distances in physics and the ways in which the act of measuring itself affects that which is being measured).

13 The concept of reflection also has a place in practice studies and, even more specifically, in management studies, due to the writings of Donald Schön on reflective practice (e.g., 1983). Although methodological reflexivity shares a sense with reflective practice—both of them, after all, are intended to turn the reflector's attention back onto prior acts and to attend to sense-making in and of those acts—reflexivity as a methods practice has received much more elaboration and specification than reflective practice, and they are not identical in their implementation.

14 For instance, one of us is relatively short in height, which, combined with her gender, leads some people in US settings to “see” her as non-threatening and to open up in conversations. But the same traits lead others to question her competence as a researcher. She has learned to anticipate such construals of her identity and to prepare ahead of time a variety of possible responses.

15 Just to underscore the point, we hold that hearing or listening should have equal standing with seeing—what Forsey (2010) calls “a democracy of the senses”—and that emotions can also be a source of knowing and knowledge generation. Consider the interview participant who, seeing the interviewer's eyes well up with tears in response to a pain-filled narrative, decides to open up even further and share additional personal experiences that she might have otherwise withheld (see, e.g., Bayard de Volo 2009).

16 This approach to reflexivity is associated with Bourdieu, who does not emphasize reflection by the researcher on how her personal characteristics may have affected data generation and analysis processes. Instead, he emphasizes “the systematic exploration of the ‘unthought categories of thought which delimit the thinkable and predetermine thought’ [citation to Bourdieu], as well as guide the practical carrying out of social inquiry. . . . What has to be constantly scrutinized and neutralized, in the very act of construction of the object, is the collective scientific unconscious embedded in theories, problems and (especially national) categories of scholarly judgment” (Bourdieu and Wacquant 1992: 40, original emphases).

17 To the extent that those working with others’ databases do not investigate and report how those data were originally generated, they fail to enact this key value.

18 Conflating transparency with replicability, as in the NSF Report (Lamont and White 2009, quoted in Note 3, this chapter), is unwarranted because it is logically possible to be committed to transparency (i.e., being forthcoming about how one did one's research) without assuming that others can duplicate either the reported research processes or their associated results. Interpretive research endorses transparency as essential to science even as it sees the standard of replicability as inconsistent with interpretive presuppositions.

19 Harrell (2006) has a brief, but very useful overview of the evolution of the methodological debates in anthropology concerning the ways in which the researcher represents those studied. Starting from a third-person authoritative voice, criticized as “colonial” and “patronizing” in its treatment of “natives,” anthropology underwent a “crisis in representation” in which researchers doubted whether they could “speak for others” (Alcoff 1991, Clifford and Marcus 1986; for overview and analysis, see Atkinson et al. 2003).

Reflexivity in research manuscripts can appear stunted or seamless depending on, perhaps, author uncertainty over these unresolved methodological issues and, frankly, the writing talents of particular authors. Researchers in various fields have tried self-consciously experimental reporting styles and degrees of self-revelation in their reflexive accounts, some of which have been criticized by other interpretive methodologists as narcissistic navel-gazing. We find Alvesson and Sköldberg's (2000: 246) critique on this point useful: they are “against the type of self-reflection that leaves little energy left over for anything else,” a highly subjective assessment which, if we understand it, is a position we ourselves tend to share.

20 We would, in fact, argue that it begins before, at least during the proposal development stage of research, if not even earlier during degree-related coursework and even in prior experiences, to the extent that these inform the subsequent development of a research question. Lest this line of thinking lead to an infinite pre-research regress, however, we formulate our discussion in the context of a researcher formally developing and carrying out a research project.

21 Forthcoming volumes in this series will engage several of these, including postcolonial analysis, narrative analysis, interviewing, and ethnography.

22 Transcripts of interviews or sections of texts may also be sent back when the initially negotiated permission to quote or cite needs to be confirmed, e.g., when promised confidentiality cannot be kept. This is not what the US literature on member-checking typically refers to, although it is one aspect discussed in the German-language literature (Beate Littig, personal communication, January 2011).

23 King et al. (1994: 19, n. 6) note that this “is probably the most commonly asked question at job interviews in our department and many others.”

24 We take this specific language, “empirical implications of theoretical models” (EITM), from the National Science Foundation-funded summer institutes that offer training in this approach. See Aldrich et al. (2008).

25 Shah (2006: 212), quoting Howard Becker, remarks that “‘putting on a show’ becomes difficult to sustain for individuals who tend to be more drawn in by the social reality that is more important to them than the researcher's presence.” Watson (2011: 210–12) argues a similar point in noting that the researcher's ongoing presence makes the “manufactured data” of one-shot interviews and focus groups less likely to occur.

7  Design in Context: From the Human Side of Research to Writing Research Manuscripts

1 The Western “imperialism” of categories (Rudolph 2005) and concepts (see Schaffer 1998) is another, albeit related matter.

2 Malinowski's diaries, published in 1967, were shocking at the time because the texts revealed the fieldwork methods pioneer of anthropology to have had racist and perhaps classist attitudes, along with an active sexual imaginary. See Geertz (2010: 15–20).

3 Attention to sexual harassment and rape of women in the US military and among news journalists as we were preparing the final version of this manuscript, in Spring 2011, reminds us that both the event and one's emotional responses to it are also silenced in the methods literature, along with other dimensions just discussed. That both are real is known among researchers. We think it time that this conversation, too, come out of the closet, at the very least to prepare newer field researchers so they can think carefully about their movements in the field.

4 Which is not to say that those who can pass unnoticed (or who are “forced to pass,” as Hamilton puts it) are not themselves challenged, at times, by the unfetteredly able for not being visibly “handicapped”! As Karen Mogendorff (2010: 330) writes, in answer to why she declines to use the bus seat reserved for the disabled, “[A]s long as I am walking it is apparent to everyone in the bus that I am entitled to sit in the seat reserved for disabled people. It is when I sit in the seat reserved for disabled people that my right to sit there is sometimes contested; then it is not visible to the untrained eye that I am entitled to use disability arrangements.” See M. Jones (1997), Hamilton (1997), Lingsom (2008).

5 Such “normalization” may, in fact, be beginning. As we were preparing this book, we learned of a new field of study, “crip theory” (McRuer 2006; thanks to Lisa Johnson for pointing us toward the idea and this work). Growing out of disability studies, most of this work is being conducted in philosophy, literary theory, and intersections with feminist and queer theories. But its methodological and methods implications cannot lag far behind (e.g., M. L. Johnson 2011). Historian Mary Felstiner's (2000) essay, concerned with rheumatoid arthritis, illustrates other issues of physical impairment (for its implications for research, see the section entitled Shift Key, 278–80). For a personal account of growing blindness and research, see Krieger (2005a, b). In the book, she wrote: “Because my vision has been gradually growing worse, last summer I took a series of lessons in the use of a blind person's white cane. . . . A man came out to my house. He walked with me along the streets nearby, showing me how to use the cane, feel the sidewalk, go up and down steps, know if a car was parked across a driveway and then how to get around it. As I walked with him, I learned to listen” (Krieger 2005b; emphasis added). Another area of silence in design and methods discourses concerns aging bodies in the field, a topic suggested by Harry Wels in light of the research of social gerontologist Kees Knipscheer on aging dancers. This and other discussions, in particular with Mike Duijn and sparked by conversation with him and with Erwin Engelman, led to a 2009 methodological seminar, “The body in the field,” organized by Wels and the second author (VU University, Amsterdam, 3 April).

6 The rest of this section draws on research published in Yanow and Schwartz-Shea (2008).

7 These are widely understood as having harmed participants, who were selected along racial lines among prisoners; but recent research (Shweder 2004) calls this view into question. We do not have the space here to review this more fully.

8 On IRB “mission creep,” see Gunsalus et al. (2007).

9 We accept the point raised by one reader of an earlier draft that an English version of the form makes sense for an English-reading board. But the same evaluative purpose might have been served through a summary of the form's contents, which are usually fairly standard; a full, formal translation seems unnecessary—and it is in keeping with the ethnocentric myopia and thoughtlessness of the required US telephone number.

10 As a result of IRB policies, US field researchers enjoy less autonomy today than they did in the past. Unlike the journalists mentioned in the section's opening, field researchers must obtain ethics approval before proceeding—even if the ultimate decision is that their research is of minimal risk and therefore adjudged to be “exempt” from some or all IRB requirements. Under these strictures, it is possible that some earlier, path-breaking interpretive field research would have been disallowed. Some scholars (e.g., Shweder 2006) also question whether the current review system has unnecessarily curtailed the principle of academic freedom so central to US higher education (and elsewhere).

11 Most IRBs have "amendment" procedures if researchers decide they need to change their study designs. For example, the University of Utah website states: "The IRB requires an amendment to note any changes related to an approved study. The amendment must describe the modification(s) requested including reasons for the change, whether the modification will increase or decrease the risk of harm to the subject, and whether the consent form requires modification" (http://www.research.utah.edu/irb/submissions/amendments.html, accessed 9 July 2011). If "any changes" is read literally, it would clearly make interpretive research infeasible. If the logic of interpretive research design (and the ways in which flexibility and researcher judgment may be essential to the protection of research participants) were understood by IRB reviewers and staff, such language would have to be modified.

12 See the Association for the Accreditation of Human Research Protection Programs, Inc. (AAHRPP) website at www.aahrpp.org/www.aspx (last accessed 9 July 2011).

13 The burgeoning literature includes several special issues of journals across disciplines and practice areas, among them The ANNALS of the American Academy of Political and Social Science (in 2005), Northwestern University Law Review (in 2007), Qualitative Inquiry (various), and Social Science & Medicine (in 2007). For an optimistic view of the possibility of educating one's IRB about the particularities of ethnographic research, see Librett and Perrone (2010). Among other things, they decry IRB conflation of ethics with research validity (p. 737), a key point with which we fully agree.

14 As this book goes to print, existing IRB policies are under federal review. We do not know whether they are likely to be changed and if so, how these changes might affect what we have written.

15 Advocates of archiving argue that "user-access controls" can protect confidentiality (Elman et al. 2010). Yet releasing information to an archivist could be understood as violating the researcher's promise to her research participants. Requiring interpretive researchers to archive their research materials—e.g., interview transcripts and field notebooks, the forms that their "data" come in—could make certain kinds of research projects undoable, as it limits the kinds of confidentiality promises that researchers can legitimately make.

16 Elman et al. (2010: 23) propose that a variety of electronic forms of qualitative data might be archived, including "interview tapes, text files (transcripts from interviews, focus groups, and oral histories; case notes; meeting minutes; research diaries), scans of newspaper articles, images of official documents, and photographic, audio, and video materials." To the extent that data archiving is mandated rather than voluntary, such policies raise issues of academic freedom. They also imply an unlimited research budget, something not available to all researchers; such requirements privilege those at elite schools with greater access to such funding and outside grants, as well as those doing more traditional forms of research for which such funding is more readily available.

17 He also observes that some archaeologists, revisiting previous excavations, have made interesting reinterpretations of such detailed fieldnotes.

18 McHenry further notes that the database was developed not on the basis of direct experience of events in India but, instead, on reportage in the New York Times.

19 As mentioned in Chapter 6, Note 2, a key question when researchers reuse indicators developed for other research purposes is construct validity—whether the existing indicator is congruent with the new user's own theoretical understandings of the concept so measured. One contributor to the American Political Science Association's Political Methodology Section's listserv argued: "Grabbing someone else's numbers and running analyses on them should no longer be acceptable in political science (or anywhere else, for that matter)" (Monday, 10 January 2011, 2:20 pm, POLMETH list). This position would seemingly put him at odds with those in the same association advocating for data archiving.

20 See Sadiq (2009: 35–7) on the limitations of data quality in non-Western settings.

21 One of us heard the hourglass notion in 1981–1982 from her dissertation advisor Suzanne R. Thomas-Buckle, then at MIT.

22 That we speak of "writing up" field research notes is an oddity. Police "write one up" for an offense by describing the event in detail; a "write-up" is a summary; "write this up" means to turn informal language into formal language, or notes into a formal report. Perhaps it is another way of referring to the detailed character of this sort of writing.

23 The best way for those endeavoring to learn more about the crafting of interpretive research writing is to read published interpretive research, as so much depends on disciplinary traditions, particular journals or book publishers, and genres (i.e., article- versus book-length manuscripts). Methods chapters also make for excellent ways to learn, inductively, about what is required in such research. William Foote Whyte's appendix to the second edition of his Street Corner Society (1955) is a classic, and we have recently become enamored of Liebow's (1993) appendix, too (thanks to Reviewer 1 of this manuscript). To some extent this is a matter of personal reading preferences, but our own favorites include Pierce's (1995) and Lin's (2000) appendices, Fujii's (2009) second chapter, Shehata's (2006) and Pachirat's (2009a) explanations of why they did what they did in their ethnographies (Shehata 2009, reproduced there as Chapter 6; Pachirat 2011), and Soss's (2006) discussion of how he thinks about interview-based and other field research.

8 Speaking across Epistemic Communities

1 They also name two other forms of mixed methods: posing both quantitative and qualitative research questions; and combining research questions developed in participatory fashion with research questions that are "preplanned."

2 She claims that multi-method team research balances “internal validity and external validity . . ., gaining comparative breadth without sacrificing qualitative depth” (Poteete 2010: 33).

3 There is also a double-blind, peer-reviewed, online International Journal of Mixed Methods for Applied Business and Policy Research based in Australia and published by Academic Global Publications; but as of this writing, it seems not to have published any articles since its 2009 inception, and its domain statement does not add any further clarification of what is meant by “mixed methods.”

4 This is manifested, for instance, in conversations between non-anthropology doctoral students wanting to do ethnographic research and their advisors, who raise the specter of lowered job prospects if the students are not able to demonstrate mastery of quantitative methods alongside their field research. The misunderstanding, repeated in King et al. (1994) and other works, about how the natural and physical sciences actually conduct their research and about the role of "the scientific method" within them has been critically assessed by others (see, e.g., Becker 2009), and we will not repeat those analyses here. We note, however, the negative effect of such misunderstandings on research designs, evident in discussions of the proper sequencing of research "segments," taken up below.

5 This might appear to be similar to the “multiple methods” advocated by Ostrom and her collaborators. In their research, however, the mixing all takes place within a positivist framework for the purpose of lessening the assumed tradeoff between internal validity/causal inference and external validity/generalizability (see Poteete 2010).

6 Empirical research shows that the same variability holds for randomized clinical trials (e.g., Epstein 1996).
