8

SPEAKING ACROSS EPISTEMIC COMMUNITIES

This book has been an effort not only to elucidate what it means to conduct research informed by an interpretive approach, but also to enable scholars from different epistemic communities to converse with one another. For the sake of scientific discourse about substantive research topics that matter to their researchers, scholars need not only to be able to recognize distinctive research approaches, including those designed from an interpretive perspective; they need also to be able to communicate with one another about the varied contributions of the research, even when—or perhaps especially when—those conversations cross epistemic communities and their respective methodological presuppositions.

At the beginning of the book we deferred one methodological topic that, in some of its uses, calls for putting interpretivism and positivism in direct engagement with each other: mixed methods. We pick that up now, briefly, engaging both its referential ambiguities and its challenges to logics of inquiry, and then turn to another arena of conversation that potentially crosses epistemic communities: research design reviews. In conclusion, we pick up some threads woven throughout the book, including the matter of the abilities and training required to conduct interpretive research.

Designing for “Mixed Methods” Research

In some research arenas, primarily those in which interpretive or qualitative methods are still challenged as not fully legitimate modes of doing science (i.e., labeled as “soft” or treated as useful only as a preliminary stage subordinate to the “hard,” quantitative research), arguments have arisen on behalf of “mixed methods.” What it means, precisely, to “mix” methods—or, even more specifically, what the use of the phrase “mixed methods” signals—is, however, not always spelled out, leading to one sort of potential confusion originating in linguistic meanings, both denotative and connotative. A second sort of confusion emerges at a methodological level.

At the denotative level of language, ethnographic and participant observation practices in the various disciplines that use them (whether in positivist or in interpretivist approaches), for example, have long “mixed” “methods”—if by methods one means the three fundamental processes drawn on in generating data: observing (with whatever degree of participating), talking to people (including more formal interviewing), and examining research-relevant materials. As these kinds of approaches have not been thought of as mixing methods, this insight points to one of the difficulties in parsing the meaning of the term. Does “mixed methods” refer only to mixing methods of data generation, as when realist interviews are used in combination with a survey questionnaire in a single research project? Or does it mean “mixing” at analytic stages, as well, as in the combination of semiotic squares and category analysis? Or “mixing” in the same project and research question, or across projects and questions addressing the “same” research topic?

In one stream of work, the term “mixed methods” has come to denote combinations of “quantitative” and “qualitative” methods in the same research project, at whatever phase (data collection, analysis). In this context, what is “mixed” has acquired a wide range of specific referents. In the inaugural issue of the Journal of Mixed Methods Research, for instance, editors Tashakkori and Creswell (2007a: 4) list seven types of mixing. Their categorization scheme includes techniques that combine data collection and data analysis (e.g., focus groups and surveys; statistical and thematic analyses), as well as different forms of data (e.g., numerical and textual), types of sampling procedures (probability and purposive), and “two types of conclusions (emic and etic representations, ‘objective’ and ‘subjective,’ etc.).”1 Another exploration of mixed methods (R. Johnson et al. 2007) identifies nineteen different definitions. The term has also been used, particularly in the comparative politics subfield of political science, in reference to the admixture of distinctive forms or traditions of research in the same study, such as the combination of case study methods with either formal modeling or statistical analysis (Ahmed and Sil 2009, Chatterjee 2009).

Still other forms of mixture appear under the name “multiple methods,” promoted in particular by Nobel Laureate Elinor Ostrom and her colleagues (Poteete et al. 2010). Poteete (2010), who uses “multi-method research” seemingly with the same meaning, adds, in our reading, two other ways of defining the combining of methods. In one, the mix entails tacking back and forth between different traditions of research while focusing on the same research topic (common pool resources, for Ostrom and her co-authors). In Ostrom's research agenda, for example, findings derived from field observations were reformulated as formal models and then tested in a laboratory setting. For Poteete (2010: 29), this means that “the generality [a.k.a. generalizability] of field observations [from their setting of origin to another] can be evaluated using experiments.” In a second usage, multiple methods refers to teams of researchers with expertise in different methods who collaborate on a large project—a design common in clinical research—each team contributing its particular methods expertise to the study of a common question. Poteete (2010) describes multi-disciplinary research teams, such as the International Forestry Resources and Institutions research program that combines the work of natural scientists, economists, and others who “conduct field research in forest sites using a common set of data-collection protocols and contribute data to a common database” (Poteete, 2010: 32), as well as multi-sited research teams investigating gender–state relations, such as the Research Network on Gender Politics and the State.2

Beyond its jumble of denotative uses, the term also conveys other meanings. “Mixed methods” appears to be held out in some research communities across the social sciences as a solution to the perceived crisis concerning the scientific status of traditional qualitative—i.e., interpretive—methods. In this use lie both a move to combine quantitative and qualitative or interpretive methods in the same research project and an implicit strategic rationale for that move. This strategy can be seen in the editorial statement of the new Journal of Mixed Methods Research, begun in 2007, which defines both its aims and “mixed methods” quite broadly as the publication of articles that “explicitly integrate the quantitative and qualitative aspects of the study” as used in “collecting and analyzing data, integrating the findings, and drawing inferences using both qualitative and quantitative approaches or methods” (Journal of Mixed Methods Research 2007).3 The strategic rationale can also be heard in less formal discussions of the need for mixing methods. What is animating this movement is not necessarily the needs of a research question that calls for such mixing, but a climate in many social sciences today that disparages qualitative research, not to mention interpretive research. In some disciplines, this seems to be a response to the challenge posed by King et al. (1994) to make qualitative research resemble more strongly the scientific apparatus those authors present, normatively, as characteristic of natural and physical science (the singular used intentionally). The feeling among those who adopt this view appears to be that adding quantitative methods to research projects whose approach is, at heart, qualitative or interpretive will somehow still critiques of them as “lesser” forms of science.4

In some disciplines, debates about “mixed methods” appear to be increasing. Some, such as Poteete (2010: 28, emphases added), assert claims about the desirability of multi-method research as if they were uncontroversial:

The complementarities of qualitative and quantitative research are well known. In multi-method research, the strengths of one method can compensate for the limitations of another. Furthermore, confidence in findings increases when multiple types of evidence and analytical techniques converge.

This statement assumes that those strengths and weaknesses are self-evident and widely accepted, as well as that their pairing will cancel out each other's limitations. Although we argued (in Chapter 5) that intertextuality, in drawing on multiple types and/or sources of evidence, can enhance the trustworthiness of an interpretive analysis, we do not imagine this author or other advocates of mixed methods have this kind of mixture within an interpretive approach in mind.

For other scholars, mixed-methods inquiry promises to be a new paradigm for research, although they recognize that it still poses many unresolved questions (Greene 2008). Specifically, some argue that these various types of combinations of methods create additional problems for the scientific quality of such research. Ahmed and Sil (2009: 2, original emphases), for instance, worry that the emphasis on multi-method research “is quickly turning into a new dogma that researchers must, or ideally should, incorporate ‘all means available’ to validate their work.” Arguing that “there is no epistemologically sound reason to elevate [multi-method research] above others” (meaning single-method approaches to research; 2009: 3), they speculate that the movement to advance the mixing of methods “may ultimately hurt the quality of scholarship, producing thin case studies, shoddy datasets, and unsophisticated models” (2009: 5).

This concern points to a second kind of confusion, one raised by the use in a single research project of methods underpinned by different ontological and/or epistemological premises. From our perspective, mixing methods within a single methodology (e.g., combining a survey with regression analysis, or metaphor analysis with semiotics) is, on the face of it, unproblematic. But when the methods that are mixed rest on different, and conflicting, notions of social realities and their know-ability, the mixing can produce research that is not logically consistent or philosophically coherent (a point made also by Blaikie 2000 and P. Jackson 2011: 207–12). To wit, such mixing requires accommodating both realist and constructivist ontological positions and objectivist and interpretivist epistemological ones within research addressing a single question, and these combinations pose tremendous difficulties of logic. A researcher who uses a survey instrument, for instance, with its a priori concept formation and analytic presuppositions of a singular reality, along with its promise of producing results that closely mirror social realities, would be violating those principles by also drawing on data-generating and analytic methods grounded in local knowledge and multiple social realities—in addressing the same research question. That is, in such a combination, the search for the singular truth would require disregarding or dismissing the very ambiguities and multiplicities of meaning-making on which interpretive research questions often center. And vice-versa, a researcher who is interested in a meaning-focused inquiry exploring the lived experiences of particular persons in particular settings or as recounted in archived documents would be contradicting the ontological and epistemological presuppositions underlying those approaches in turning to a data analytic technique (e.g., quantitative content analysis) that strips away that very context.

In such situations, we have “mixed methodologies” more than “mixed methods.” It is hard, if not impossible, to square research that rests on constructivist ontological presuppositions and interpretive epistemological ones with research that rests on realist ontological and objectivist epistemological ones. Blaikie (2000: 274) makes the same point in saying,

[W]hat cannot be done is to combine data that are produced by methods that each deal with different (assumed) realities. It is not possible to use data related to a single “absolute” reality to test the validity of data related to multiple “constructed” realities, regardless of what methods are used in each case.

Ontological and epistemological presuppositions affect the very articulation of a research question itself. In the “combining” or “mixing” of approaches, the question is likely to transmogrify. It can happen that a researcher wants to explore a research topic that encompasses several research questions, each of which necessitates adopting a different approach. We see this, for example, in research exploring changes in social welfare policies (the research topic), in which the researcher wants to measure the economic impact of the changes on various categories of recipients (a quantitative research question), as well as to understand what the new policy means to welfare recipients from the perspective of its effects on their everyday lives (an interpretive or qualitative research question; see, e.g., Schram 2002). In such a situation, the specific research question itself shifts, and along with it, the design that outlines a plan to address that question. Blaikie (2000: 274) also sees the possibility of taking up what he terms qualitative and quantitative methods in sequence, “possibly with switches between approaches/paradigms,” as long as the ontological assumptions within each are the same. The resulting research can be said to mix methodologies within a single research topic, and perhaps to mix methods within a single methodology; but it does not mix methodologies within a single research question.5

Equally important is the implication for interpretive methods of some methodological discussions of mixed methods research, in which the preeminent design issue concerns the appropriate sequencing of methods to be mixed. The mixture under discussion in this particular literature clearly concerns qualitative (or interpretive) and quantitative components of a study. The central question is whether qualitative/interpretive and quantitative components are to be undertaken simultaneously or sequentially, and if the latter, in which order. But in these discussions the distinctions between approaches (i.e., positivist–qualitative and interpretive–qualitative) are submerged in ways that tend to emphasize a logic of inquiry and nomenclature that is more positivist than interpretive in tone (e.g., invoking the need for “sampling” and “consistency”; see Collins et al. 2007, Tashakkori and Creswell 2007b). Such treatments not only subordinate qualitative or interpretive research to quantitative research. They leave little room for fleshing out the interpretive component's logic of inquiry in the research design, including its associated standards (such as reflexivity), to the fullest, depriving interpretive–qualitative methods of their scientific grounding. The consequence is a weakening of their scientific standing, precisely the opposite of what the mixed methods movement has stipulated as its intended achievement.

We hasten to note that advocating for “methodological pluralism” is not the same as arguing for mixed or multi-methods research. The former constitutes an appeal within a discipline to give equal standing to all research that draws on one or more of the full range of methods in use within that discipline. Within human or social geography, for instance, it would mean accepting research conducted on the basis of qualitative methods, such as walking the terrain (e.g., Hall 2009), as well as research using quantitative methods, as having claims to equal scientific standing. When interpretive purposes and presuppositions, and their scientific status, are not well understood, confusions of methodology and methods with respect to what is getting mixed, and what is mixable, are more likely to occur.

Crossing the Boundaries of Epistemic Communities: Proposal Review and Epistemic Communities’ Tacit Knowledge

With a draft of the research design in hand, researchers of all epistemological persuasions may seek feedback from interlocutors, whether from classmates or a thesis or dissertation advisor (in the case of graduate students) or from a mentor or colleague. In order to focus on matters methodological in these sorts of “reviews,” we bracket such issues as the significance of the research question and the adequacy of the literature review, each of which will be judged within the context of the specific area of research being proposed, and look instead at the kinds of questions a research design might raise in general.

The following kinds of questions are commonly on the minds of many readers of all manner of research designs, including those proceeding along interpretivist methodological lines:

  1. What is the purpose of your research?
  2. What is the relationship between theory and empirical research (or data) in your project?
  3. Where are your independent and dependent variables?
  4. Where is your control group?
  5. What sorts of causal relationships does your research intend to explore?
  6. What manner of prediction can/will you make on the basis of this project?
  7. Are your findings going to be generalizable?
We hope the preceding chapters of this book make clear that all but the first two of these questions are inappropriately asked of an interpretive research project—and that the inability of the researcher to engage those questions is not a fault of the research design or a manifestation of the unpreparedness and inadequacy of the researcher. Instead, this kind of cross- or mis-communication is a manifestation of what happens when designs for interpretive research are read by members of other sorts of epistemic communities who are unfamiliar with their methodological grounding.

Assessments of the design will be shaped by its readers’ assumptions about the general purposes of research and the forms of explanation that are accepted within the discipline in which it is proposed—that is, within which the particular conversation about the topic and its problematics is taking place—and at times, even within an epistemic community within that discipline. As we have noted, research designs present choices, along with the argumentation that explains and justifies their selection among alternatives, in light of the intended purposes of the research project. Readers will be looking for decisions and choices—of settings, events, actors, times of year for the study, and so forth—that are justified in light of the stipulated research question and which make sense as ways of addressing and exploring it. An interpretive researcher might be asked, for example, for additional justification for the choice of a particular participant role or of a particular archive as a starting point, given the research question defined. In advancing their rationales, authors communicate certain things to their readers, frequently without spelling those out. Often, these are not spelled out because the author is writing for members of the epistemic community of which she is a member, and these ideas are part of the tacit knowledge they share and therefore need not be said.

Within a single epistemic community, with its shared understandings of research purposes and customary practices, feedback will likely be appropriate to the methodological presuppositions underlying the research design. When proposal readers and reviewers come from epistemic communities other than the one in whose presuppositional context the researcher has been working and writing, however, communication may be stymied by any of the sorts of issues we have taken up so far. This can happen when readers-evaluators and author do not share the assumptions and presuppositions common within an epistemic community concerning what constitute appropriate and expected research procedures for the question at hand. A deductive, positivist approach, for instance, with its operationalized variables and promise of refuted or supported hypotheses, implies an architectural or engineering blueprint that can be executed with precision (assuming the competence of the researcher). An abductive, interpretive approach, with its recursive–iterative flexibility and promise of substantive insights about a particular case, implies a more improvisational tack to be taken in response to local, situational social, political, and cultural realities. When the latter sort of research design is read by those expecting the former, who are not attuned to its own logic of inquiry, it may be negatively assessed (as may its written “products” later on when submitted for journal review, etc.) for not meeting the criteria of the first sort of methodological approach.

Such judgments can be made, for instance, when what constitutes the purpose(s) of research is a matter of disagreement across epistemic communities—from contributing to generalizable knowledge to providing insights about the case under study to providing knowledge that will aid in emancipation. Imagine, for instance, how a reader familiar with survey research design, with its orientation toward realist–objectivist knowledge, might be surprised by an ethnomethodological design focusing on the details of participants’ meaning-making practices in their daily lives. Interpretive projects may be put at a disadvantage if funders or other reviewers expect interpretive research designs to include positivist methods (or a justification for their absence, a possible outcome of increasing attention to “multi-method research”). Given standard research proposal page limits, it can be challenging both to fully develop the logic of inquiry for an interpretive project and to explain the methodological inconsistency of positivist methods with the articulated research question (and, hence, their absence). We note that quantitative researchers are seldom asked for such explanation, although it might equally be an occasion to ask them to explain why they are not also using interpretive or qualitative approaches to address their research question.

Depending on research purpose, “design” can be understood as an unvarying roadmap or as a flexible plan for guiding situated improvisation in response to local circumstances. Social science reviewers outside of cultural-social anthropology often have not understood the methodological centrality of design flexibility and its necessity for the proposed research project (see, e.g., Ragin et al. 2004, Lamont and White 2009; cf. Becker 2009). In the current environment of methodological multiplicity, intended purposes need to be carefully communicated by proposal writers and attended to by proposal reviewers in their assessments. Increased awareness of the scientific grounding of these several methodological approaches should lead to proposals being evaluated according to the standards appropriate to the specific logic of inquiry of each.

Should there remain doubters among our readers as to the scientific grounding and contributions of interpretive research—something we have until this point asserted implicitly, without making it the explicit subject of argumentation—we have one thing more to say. You may have noticed that at the same time that we have been citing recently published literature, we have also cited works published in the 1940s through 1960s. What they, at the time of publication, called qualitative research and which we have been calling interpretive, such as Becker et al.’s on physicians (1977/1961), Dalton's on managers (1959), Roy's on shop floor workers (1959), Whyte's on the social organization of neighborhood life (1955/1947), and others of that vintage, remains widely read and cited, outstanding examples of what can be achieved through interpretive research methods. Liebow's Tally's Corner (1967), for instance, not only remains in print, but “has been translated into multiple languages and has sold more than a million copies” (J. Kelly 2011). As one of the reviewers of this manuscript remarked, these works “have stood the test of time—still read, still taken seriously, after all these years.” This is, he noted, a criterion “that is so often proposed by positivist [researchers] as the mark of real science. . . . [That these older works are still] being read now, so many years after their publication, gives a strong warrant for the methods by which they were done.”

Practicing Interpretive Research: Concluding Thoughts

A certain degree of mythologizing characterizes discussions of research design. Across the social sciences, the correspondence between “promises” made in formal research designs and what appears in published research is variable.6 Experienced researchers know that what gets done in the field or in the archive often does not match what was proposed in the research design. Formal methodological discussions contrast with what researchers know “informally,” in practice, and what they reveal when they talk among themselves (or “let down their guard,” as Gerring, 2007: 148, puts it). Defining scientific purposes exclusively in terms of generalizable knowledge may contribute to such mythologizing among those disciplines or epistemic communities that hold on to that image of science. Research methods textbooks and course syllabi, and perhaps course discussions as well, in some cases also convey this notion that “science” is uniform, and universal, in its prosecution. IRB practices on many US campuses add to this sense of the timelessness and placelessness of science: the imagined ideal-typical form of scientific inquiry is being further reified and mythologized, extended as fact to the non-experimental social sciences. Moreover, as discussed in Chapter 7, campus IRBs may make efforts to control the variability across research designs and their implementation as they seek to assess finished research projects at random against the designs that had been approved.

Still, as noted at the outset of this book, research designs are central to the scholarly gate-keeping processes that characterize the modern university system. Others decide whether the individual achieves the Ph.D., obtains time off from teaching to pursue a project, or receives the grant for travel and other expenses necessary to conduct the research. Independent scholars unaffiliated with universities, colleges, or research organizations are also likely to be subjected to such gate-keeping processes when they seek support for their endeavors. A research proposal with a coherent logic of inquiry articulated in its design is more likely to pass muster with such gatekeepers if there is broader understanding of the scientific bases for both interpretive and positivist approaches.

Given its density, we have not delved deeply in this book into the philosophical terrain which provides the ontological and epistemological underpinnings of the unspoken assumptions behind the reviewers’ evaluative questions listed above. We hope that at this point, it is clear that these questions are not generic, applicable to all research designs, but are, rather, reflective of particular philosophical—methodological—assumptions about reality and its know-ability; about the possibility of standing outside that which one is studying and generating scientific knowledge of it from that point; indeed, about the very meaning of “science” and the character of being “scientific.” On the one hand, the differences between positivist and interpretive approaches presented and discussed in this book may seem subtle—a “mere” tweaking of such terms as “validity” and “trustworthiness.” On the other hand, these differences reflect radically different conceptions of the role of the social sciences in society, perhaps best captured by entertaining the idea of the social sciences as “human sciences” (Pachirat 2006, Polkinghorne 1983).

We hope to have provided a way to think about the differences in logics of inquiry across various approaches to science and a conceptual vocabulary for naming and talking about those differences. As we said at the outset, we are pluralists: we do not think that interpretive research designs hold for all modes of doing science any more than we think that positivist ones are universal in their application. Although all scientists may share a belief in and a value orientation toward the systematicity and suspension of faith to be followed in the pursuit of knowledge, the ways we go about enacting both systematicity and doubt, along with the standards and criteria we hold up to evaluate those processes, vary across epistemic communities. We do not wish a world of inquiry governed by “methodism,” the slavish attention to the dictates of technological, methodological, and philosophical purity, but we do wish a social scientific world that makes a place at its table for interpretive and qualitative modes of doing research alongside other modes. With meaning-making and the understanding of ambiguity and multiplicity at their center, interpretive methodologies make essential contributions to knowledge. Research design concepts and processes that recognize these aspects better serve those researchers committed to them.

Finally, some scholars who recognize the skill that is needed to do “sophisticated quantitative research”—we can point to several “boot camps” set up to train graduate students in statistics and other “advanced” analytic methods—hold that “anyone” can do interpretive (or qualitative) work, no special training required. We are hopeful that the discussion in this volume shows that this is far from the case—that knowing how to observe, how to listen, how to ask, including of archival materials, and which choices these entail and how to think about and make them are learned skills, mastered only with repeated practice. Carol Cohn (2006: 106–7) remarks on the fit between her personal proclivities and her choice of research methods: she is genuinely interested in others, she says, temperamentally; a listener, conflict-avoidant, attentive to feelings, and compelled to honest openness about her views—all traits related to skills used in interpretive research. We join with Forsey (2010: 560) in holding that there are “important links between methodology and the personality traits of a social researcher.” There are reasons beyond the merely intellectual that some are led to master and enjoy regression analyses, while others are led to master and enjoy narrative analyses. Such a view is in keeping with research on various kinds of intelligence, not all of them held in equal measure by all (Gardner 2006, Goleman 1995). In seeking to explicate the concepts and processes entailed in designing interpretive research, we have engaged in skill-related discussions only briefly (in Chapter 4). We encourage interested scholars to seek out the kinds of readings, courses, and exercises that foster such learning, which will in itself lead to a greater understanding, from the inside, of the interpretive research design concepts and processes we have explored here.
