INTRODUCTION

[Such research is characterized by an] intensive focus on the empirical world; on seeing and understanding behavior in its particular and situated forms. Data that do not stay close to the events, actions, or texts being studied are always suspect. There is a hostility to generalizations at any level that are not connected to description, to immersion in substantive matter. . . . The preference for descriptive material and observation made us suspicious of . . . material torn from the context of their creation. Action was too situated, too contextual to be understood at the high levels of much macroanalysis. Meanings were often not assuredly understandable without an experience with those we were describing.

—Joseph R. Gusfield (1995, xii)

How does one begin to design an empirical research project? Many scholars across the social sciences, socio-cultural anthropology perhaps excepted, would reply with the steps associated with “the scientific method”: articulate your hypotheses, define your concepts, operationalize these in the form of variables, establish the relationships among the latter, and then plan to test them in your research setting, checking for validity, reliability, and generalizability. This is the formula for research design found in most methods textbooks. And yet this way of proceeding does not describe very well a whole segment of scientific research: that conducted under the heading of interpretive social science, a term increasingly being used in some disciplines or fields of inquiry to refer to qualitative social science in the Chicago School tradition. This is research that, similar to 1920s–1960s anthropology and sociology field research conducted at the University of Chicago, focuses on specific, situated meanings and meaning-making practices of actors in a given context, as described in the epigraph by Joseph Gusfield, reflecting on his own experiences and role there, then (see also Calhoun 2007: 26–33). It is to address this missing conversation that we have written this book.

Given the increasingly inter- and cross-disciplinary research and publishing practices within the social sciences, what it means to do interpretive empirical research needs articulation and development such that scholars from various epistemic communities can appreciate the full extent of its practices and, in particular, their methodological underpinnings. This volume lays out the grounding for the design of research projects that build on interpretive methodological presuppositions, with such scholars, among them newer researchers, as our imagined readers. Notable within this group are those reviewing interpretive research, whether for thesis, dissertation or ethics committee assessments (such as Institutional Review Boards, IRBs, in the US), funding or publication reviews, or promotion and tenuring evaluations.

Research design is about making choices and articulating a rationale for the choices one has made. As a term, “design” evokes expectations of a carefully formulated plan. Many elements are common across research designs, whatever ontological and epistemological presuppositions inform the specific work. But these seemingly common elements can mask significant differences in approaches to research. We engage here both those elements that are shared and those that are clearly distinctive to interpretive research designs. If this distinctiveness is not understood, an interpretive research design may be judged by those unfamiliar with its premises to be weak, sloppy or underdeveloped, rather than adequate, well-developed or even “strong.” In attending to and articulating the differences between design forms, it becomes crucial at times to explore other terms for design concepts that are well known, but which inadequately express ideas that are central to interpretive research processes. Other vocabularies make these differences clear, and they articulate the design concepts’ underlying ideas in ways that more closely fit interpretive presuppositions. Engaging alternate terms can help both interpretive researchers and reviewers of various sorts from other epistemic communities understand the philosophical grounding of interpretive research and its design requirements.

In writing this book, then, we had three broad readerships in mind. One of these is graduate students, who in particular need information about interpretive concepts and processes so that they can do empirical research that genuinely allows for an interpretive approach without having their confidence undermined at this stage of the game by uninformed critiques. These include, for instance, comments that suggest that interpretive research does not stand on its own, being useful only as a preliminary stage to generate information that can serve as the basis for a quantitative study; or criticisms that inquire about the variables used in the study, misunderstanding the purposes of interpretive research, which is not variables-based. The treatment of interpretive research design presented here counters prevailing misinformation about the methodological grounding for “qualitative” methods (even among research methods textbook authors) and the widespread ignorance of interpretive methods. The volume discusses interpretive methodologies’ and methods’ distinctive concepts and processes and the reasoning that underlies them in ways that enable students to think and talk about the particulars of the interpretive research designs they are developing or conducting. In several places, discussions of interpretive approaches are situated adjacent to discussions of positivist approaches to the same topic, especially when methodological concepts from the latter are widespread and commonly used. Through that contrast, we hope to make clear the claims and processes of both approaches.

Second, we are writing for more experienced researchers—academics, policy analysts, independent scholars, and consultants—who apply for funding, for research-related release time, and/or for other resources to conduct such research (e.g., entrée/access to field settings). They will find here a way to talk about interpretive methodologies and methods that can be useful in those applications. Such research-speak is needed in order to explain the rationale behind the more flexible, open-ended approach to research design that is common in this sort of empirical research, manifest, for example, in the lack of formalized hypotheses and random sampling. More flexible approaches and the absence of hypotheses, variables, and sampling are commonplaces in social or cultural anthropology, where interpretive methodologies have received their fullest expression in the conduct of research. When used in disciplines in which other methodological approaches are dominant, these commonplaces often are treated as outliers, and even as signs of poorly designed research. As a result, the research proposal, as well as subsequent manuscripts, is often found wanting. Yet these characteristics of interpretive research designs are neither haphazard nor sloppy, but systematic (i.e., “rigorous”) in their own right, as we explain in Chapters 1–6 of the book.

Third, those teaching research methods courses will find the book useful, for the same reasons, for curricular purposes. Most treatments of research design across the social sciences (social-cultural anthropology excepted) take a variables-based, hypothesis-testing, (quasi-)experimental approach to the topic that is quite different from the word-based, abductive, field and archival research approach common to interpretive empirical work. Most methods textbooks, even when presenting and discussing qualitative methods, lack a full understanding of the ways in which many kinds of qualitative research design, let alone interpretive ones, are different from “traditional” research designs—the latter influenced by the forms and logic of inquiry dominant in economics, psychology, and other fields that follow “positivist”-inflected methodological argumentation (which is not to say that those fields do not have their own forms of interpretive research; see, e.g., McCloskey 1985 in economics, Giorgi et al. 1983 or Wertz 2005 in psychology).1

Our approach is informed by a science studies or sociology of knowledge perspective that sees scientific work as a practice—and one that seeks to persuade others of the “goodness” of its findings. As such, we are asking ourselves, constantly, about the political (or power) dimensions of what scientists do, including social scientists. Although this statement is strongly reminiscent of Foucault's engagement with the intersections of knowledge and power (1984), we are influenced more by ethnographic analyses of various kinds of natural and physical scientific practices (see, e.g., Latour 1987, Latour and Woolgar 1988, M. Lynch and Woolgar 1990, Traweek 1992) and by the utility of bringing such a perspective to bear on the practices of social scientists (see, e.g., Brandwein 2000, 2006, Büger and Gadinger 2007, Woolgar et al. 2009, Yanow 2005).

Some points of clarification concerning concepts that run through this volume are in order. First, the discussion rests on a distinction between methodology and methods. Methodology commonly refers to the presuppositions concerning ontology—the reality status of the “thing” being studied—and epistemology—its “know-ability”—which inform a set of methods. It might be thought of, in a way, as applied philosophy. If methodology refers to a logic of inquiry, the conduct of the inquiry itself might be thought of in terms of the particular tools—the methods—with and through which the research design and its logic are carried out or enacted. So, in this sense, interviewing might be seen as a tool—a method; and it is one that can be informed by different, and often conflicting, methodological presuppositions.

A researcher can interview based on the belief that she is going to be able to establish “what really happened” in a setting. This reflects a realist–objectivist methodology that rests on three things: faith in the existence of an objective social world that is external to the researcher; the conviction that knowledge of that world can be achieved through observation from a point outside it; and the belief that this knowledge can yield an understanding of what the researcher holds to be the truth of that external world, an understanding that mirrors that world. Or a researcher can interview based on the belief that there are multiple perceived and/or experienced social “realities” concerning what happened, rather than a singular “truth.” In this view, the researcher would assume that event narratives are likely to vary depending on the perspective (political, cultural, experiential, etc.) of the persons being interviewed. This approach reflects a constructivist–interpretivist methodology that rests on a belief in the existence of (potentially) multiple, intersubjectively constructed “truths” about social, political, cultural, and other human events; and on the belief that these understandings can only be accessed, or co-generated, through interactions between researcher and researched as they seek to interpret those events and make those interpretations legible to each other.2

Attending to their methodological underpinnings makes it less reasonable to think of any method as an item in a “tool box,” a metaphor commonly found in textbooks that do not distinguish between methods and methodologies (if they discuss methodology or philosophy of science issues at all). Underlying the tools metaphor is an assumption of neutrality among methods: the researcher is methodologically—philosophically—agnostic as to whether she picks up an open-ended interview or a survey instrument to use in her research. Yet no method is methodologically neutral: each one—modeling, ethnomethodology, modes of interviewing, “styles” of participant observation—rests on the choices that researchers make when they enact their ontological and epistemological presuppositions. These are “pre-”suppositions less in time than in logic: they typically are part of researchers’ tacit knowledge (in Polanyi's 1966 sense), such that instead of being able to declare them at the outset of a research career, it is only when researchers reflect on research already conducted, and perhaps even published, that their knowledge of their own presuppositions becomes explicit (often when a colleague or reviewer points out the ontological and epistemological ground on which the research stands). This understanding leads to our second point: we focus, instead, on the language of methodological “approaches” and choices among them, rather than of tools. Referring to approaches emphasizes the inevitable intertwining of the many choices that a researcher makes in bringing research question, methodology, and methods together. These choices give expression to, or enact, the methodological approach—interpretive, positivist, critical realist, or some other—informing the work that a researcher carries out.

Third, we draw a distinction not only between quantitative and qualitative research and their attendant designs, but among quantitative, qualitative, and interpretive research. The older, and still widely known and used, two-part taxonomy developed at a particular point in time to demarcate University of Chicago–style observational and interview-based research from the kind of quantitative and survey-based research developed at Columbia University and the University of Michigan. As survey research instruments, statistical science, and the computer hardware and software that could process ever greater quantities of data further developed, along with behavioralist theories, “quantitative” research ascended over “qualitative” research in many social science departments and/or disciplines.3 As a consequence, researchers using qualitative methods came under increasing pressure to adopt the evaluative criteria central to quantitative ones. Qualitative research continues to use one or more of three common data generating methods: observing, with whatever degree of participating; talking to people (a.k.a. interviewing); and the close “reading” of research-relevant materials. But in many fields, it has grown to resemble less and less Chicago-School–style field research, drawing increasingly, instead, on analytic methods that enact positivist philosophical modes of scientific knowing (e.g., a realist ontology, the possibility of objective knowledge, generalizing universal laws). The bipartite “quantitative–qualitative” taxonomy of methods has, more and more, come implicitly to stand in as proxy for a distinction between positivist and interpretivist methodologies.

In many fields, the dual taxonomy has increasingly lost that sense of methodological difference, although in some, such as parts of sociology and educational studies, “qualitative” still carries its older meaning intact. In other fields, reflecting the “interpretive turn” that took place across the social sciences in the 1970s–1990s (see, e.g., Geertz 1973, Rabinow and Sullivan 1979, 1985, Polkinghorne 1983, 1988, Hiley et al. 1991), Chicago-School–style qualitative methods resting on a phenomenological hermeneutics that privileges local, situated knowledge and situated knowers have increasingly become known as “interpretive” research. This yields a three-part taxonomy of research approaches: quantitative–positivist methods drawing on realist–objectivist presuppositions, qualitative–positivist methods drawing on similar presuppositions, and qualitative–interpretive methods drawing on constructivist–interpretivist presuppositions.4 Properly speaking, then, we should use those three compound adjectives when describing methods; but to make the language simpler, we will use quantitative, qualitative, and interpretive, instead. In some places, where qualitative and interpretive methods are similar in their approaches to a topic, we use them together. In others, in order to emphasize interpretive design's distinctiveness, we contrast it with positivist design elements found in both qualitative and quantitative approaches.

In part because of this history, what gets included or counted as “interpretive empirical” research can be confusing. Does it include analyses interpreting theoretical texts, such as those seeking to understand the implications of Weber's writings from a feminist perspective (Ferguson 1984),5 for example, or of some other writer whose work is considered canonical or otherwise central to a discipline? Clearly, analyzing documentary materials, whether historical or contemporary, draws on similar methods of text-treatment and thought. This was precisely Taylor's (1971) argument: that in studying human actions, researchers render them as “text analogues” for purposes of analysis (see also Ricoeur 1971). And it is equally clear in how interpretive empirical scholars approach physical artifacts, such as governmental buildings and other built spaces in which acts of research interest take place (see, e.g., G. Mosse 1975, Yanow 2006a).

In political science, where we are most familiar with these issues and debates,6 the interpretation of theoretical texts is often explicitly framed as “non-empirical” research, leading political theory graduate students in some programs to be exempted from research methods courses required of all others (Schwartz-Shea 2003). But this understanding of textual analysis rests on meanings of “empirical” that are narrowly cast and increasingly contested. Political theorists interview (Bellah et al. 2007 [1985]), for instance; work in archives on contemporaneous materials in ways that parallel historical research (especially social history; see, e.g., Darnton 1984, 2003, Davis 1983) situating correspondence, diaries, paintings, and other texts and text-analogues in contemporary social, political, and cultural contexts (e.g., Ferguson 2011, Bellhouse 2011); and analyze college catalogues (Kaufman-Osborn 2006) or methodological practices (Norton 2004).7 In emphasizing that this book engages “interpretive empirical” research, we also have these kinds of work in mind (although we also note that the manuscripts reporting on such research often have a rather different “voice” from those reporting on field observations, likely due to different intended audiences and dissemination outlets, including conferences, journals, and book publishers).

Fourth, although it is itself something of a misnomer, we use the shorthand “positivist research” to refer to those forms of research that rest on realist ontological and objectivist epistemological presuppositions,8 in order not to have to repeat what is a linguistic and conceptual mouthful every time we want to refer to that kind of research; we do the same with “interpretive research.”9 Likewise, we use the phrases “positivist researcher” or “interpretive researcher” as shorthand references to the approach a researcher uses in a particular project. We do not intend thereby to equate a research approach with an individual's identity or to reify this link, as some researchers choose to move between approaches, depending on the research question they are engaging. Some researchers do specialize in one approach or another; for them, personal identity and research identity may be more intertwined than for others who are more ambidextrous, so to speak. The possibility and ease of such movement depends on an individual's inclination toward and specialization in certain forms of research, as well as on the breadth or narrowness of graduate methods training and what is made available to students as they are socialized to their discipline's practices. It can be challenging, for instance, to develop a “research ear” for both metaphor analysis and formal modeling and to master the technical intricacies of both. The ability of a single researcher to “mix” methods or methodologies—so-called mixed methods research—is related to this point. We defer a consideration of such mixing to Chapter 8.

Fifth, we make reference at times to phases of a research project, distinguishing “fieldwork” (which we use in reference to archival research as well as to its more traditional participant observer, ethnographic, and interviewing designation) from “deskwork” (more focused analytic activities, typically away from the field) and “textwork” (the more focused preparation of the research report).10 We do so in full recognition of the fact that these activities are intertwined: although fieldwork itself may be separate in both time and space from the other two phases, analysis often begins in the field, if not beforehand, and continues through the preparation of the research manuscript or presentation; and chunks of text may come directly from notes prepared in the field or from the research proposal. Still, we find it useful for heuristic purposes at times to mark and use this distinction.

Lastly, one of the things that makes the topic of research design so fraught with tension and miscommunication is that various epistemic communities often use the same word to mean different things—without recognizing those differences and, therefore, without understanding the reasons for the miscommunications that ensue. For instance, an experimentalist's understanding of what makes research valid differs from validity's meaning in other research approaches, reflecting different modes of thinking about the way(s) in which research is done. To take another example, in some cases, naturalist has been used to describe research on biological and physical topics in the understanding that those scientists can conduct their studies from positions outside of the research domain. There, the “behaviors” of plant cells, bacteria or rock and mineral formations are “natural” and indifferent to such observation and to the results of the study (e.g., Bevir and Kedar 2008; for an in-depth analysis of this latter point, see Oren 2006a). But a large section of the qualitative–interpretive research world uses naturalist to refer to precisely the opposite kind of research, in which the researcher is firmly positioned within the community and setting under study (e.g., Schatzman and Strauss 1973, Lincoln and Guba 1985, Erlandson et al. 1993, Athens 2010)! This research is “naturalist” in that the researcher engages in activities that are naturally occurring in such settings—e.g., observing people, talking to them, and/or taking part in the course of their everyday, “natural” activities in their own, “natural” settings, much as “ordinary” members of that setting would comport themselves.

In yet another instance, constructionism and constructivism are used in different disciplines, or even in different subfields of the same discipline, with different meanings. International Relations, for instance, has developed its own historically grounded use of these terms with their own particular meanings and reference points (see, e.g., Green 2002, Hopf 2002, P. Jackson 2002: 258, n. 12); but that field's use of these terms is often at odds with the broader methodological and methods literature. Similarly, experimentalists and others use the term subject in reference to persons who are the objects or units of study; whereas in other types of research, “subject” is seen as denying persons agency, and the terminology has shifted to “research participants.”11 Researchers working with these terms need to make themselves aware of such differences, as conversations often develop in which scholars end up speaking past each other because they assume that scholarly terms are being used to mean the same thing, when this is, in fact, not the case.

This discussion of language and nomenclature in the methods and methodological literature links to a different question: the meaning of “design” in this book's title. The word has two meanings; they are intertwined; and we have already been using them interchangeably and will continue to do so. On the one hand, interpretive research design—imagine the stress on the first word—could mean the outline of the steps a researcher would follow in planning a research project using an interpretive approach. This is the sense that marks much of Chapter 1; it is design as object, as noun. At the same time, interpretive research design—where the noun has almost the quality of a gerund—is somewhat more dynamic, emphasizing the thought processes and ensuing strategies that go into designing interpretive research. This is the meaning that informs much of the book and lies at the root of its subtitle—Concepts and Processes. If the reader finds our discussion of designing for interpretive research more narrative in its treatment by contrast with the typically more stepwise, procedural approach of traditional textbooks, it is due to these dual meanings and our emphasis on the second of the two.

The one area of interpretive methods that receives short shrift in this book is the more “creative” side of the methodological family: methods drawing on poetry, play-writing and performing, painting, and other artistic endeavors. Given our own empirical engagements in the political sciences (specifically, with public policy, public administration, political sociology, and feminist and gender studies) and in organizational studies, where research engagements tend to be rather traditional and such methods are not commonly found, we have not included specific examples of them, nor do we engage the particularities of the kinds of designs and justifications they require. The journal Qualitative Inquiry is a major source for such work, and we happily refer readers interested in such methods to the articles there and to their references.

In sum, researchers in the social sciences across the board need more effective preparation for designing research projects, whether in the field or in archives, that are shaped and supported by phenomenological, hermeneutic, and allied methodological presuppositions and argumentation. We hope the volume engages readers across the full spectrum of these disciplines, at both undergraduate and graduate levels, as well as those in “applied” or professional degree programs: educational studies, nursing and allied health studies, organizational studies, public administration, public policy analysis, urban and regional planning, and others too numerous to list. Because of the specific orientation we take, we anticipate that intersectionality scholars and feminist researchers, many of whose approaches intersect with and overlap interpretive ones, will also find the book speaking to their concerns.

A Sketch of the Book

As the first volume in the Routledge Series on Interpretive Methods, this book treats concepts and processes in interpretive empirical research design, and the methodological issues they raise, looking across methods of generating and analyzing data. Although it engages some very practical issues, such as the structure of research proposals, it is not a how-to volume, as many methods—especially of data analysis (e.g., ethnomethodology, semiotics, metaphor or category analysis; Feldman 1995, Yanow 2000)—follow specific logics of inquiry and require specific designs. We discuss some topics in an overview fashion, relying on other volumes in the series to flesh these out, each in ways appropriate to its own method.

Chapter 1 is devoted to the whys and wherefores of research design, and Chapter 2 then explores the logic of inquiry of interpretive research, with particular attention to where research questions come from. It sketches out abductive ways of knowing before turning to the methodological underpinnings of interpretive research: the ideas from hermeneutic and phenomenological philosophies that are enacted in various forms of meaning-focused, context-specific, interpretive research methods. Research designs, however, require not only a specification of a research question and a theoretical domain; they also need a specification of planned sources of evidence relative to that research question and domain, as well as a sense of how those data will be analyzed. Chapters 3, 4, 5, and 6 engage the kinds of issues that inform choices of data sources: contextuality, its several implications (e.g., for concept development, access, forms of evidence), and, finally, issues in evaluating the trustworthiness or “goodness” of an interpretive research project. Chapters 7 and 8 then take up issues that situate research designs in a broader context.

The rationale underlying the middle section of the book requires a bit more explanation. The design parts of an intended research project are often articulated in the context of a research proposal, as discussed in Chapter 1 and outlined there in Table 1.1. Most textbook discussions of research design explore it in linear fashion, following the contours of the completed outline of such a proposal. Because we are interested in the concepts and processes that go into thinking about interpretive research and what distinguishes it from other research approaches, we take a different tack in Chapters 3 to 6. We engage, instead, the kinds of issues a researcher thinking interpretively would need to consider in carefully formulating the steps of a plan. In doing so, we note the elements that are common to research proposals whatever their epistemological and ontological presuppositions. But we pay close attention to the significant differences that arise when one takes an interpretive approach.

Due to the practice, begun in the early 1970s, of requiring statistics courses in social science curricula, most, if not all, researchers today are familiar with the kind of research design that is typical of a positivist methodology, with its attendant concepts. Most methods textbooks, many of them required reading in graduate and some undergraduate coursework (see Thies and Hogan 2005), lay out its presuppositions, often designated “the” scientific method (as if there were only one). Many design concepts and terms, such as operationalization, sampling, and falsifiability, are, therefore, second nature to most researchers, who are not aware that these are grounded in positivist research methodologies and, therefore, less appropriate for other research approaches.

Because of the prevalence and dominance of these and other terms, positivist researchers, and even those doing interpretive research, may have difficulty recognizing this misfit. In Chapters 3 to 6, because of many researchers’ greater familiarity with positivist-informed concepts, we have situated our discussion of interpretive research characteristics and criteria in close proximity to those, on the whole, more familiar terms. This enables us to show where and how interpretive methodologies part company with those terms and to explain the ways in which interpretive researchers think about related concepts and processes. Interpretive researchers need a language for responding, for instance, to questions and comments that emerge from a positivist paradigm, such as: What is your independent variable? How did you operationalize that concept? Is that a falsifiable proposition?12 In articulating the reasons that those terms are not good fits for interpretive research design elements, we argue for certain concepts that are more directly linked to interpretive presuppositions and whose use helps surface those differences. In some cases, other concepts and terms better connect to and reflect interpretive presuppositions and extant research practices. We take this up at length in these four chapters.

Specifically, Chapter 3 explores the implications for designing research of the central characteristic that distinguishes meaning-focused inquiry from other approaches: the role of context. In interpretive research, meaning-making is key to the scientific endeavor: its very purpose is to understand how specific human beings in particular times and locales make sense of their worlds. And because sense-making is always contextual, a concern with “contextuality”—rather than “generalizability”—motivates research practice and design. In this chapter, we explain this concern and take up its implications for concept development and understandings of hypothesizing and causality. The three of these play out differently in interpretive research because of its emphasis on context and on the situatedness of both researchers and “researched.”

Context has further implications for the character of evidence: where and how am I going to find “my data,” what will those data look like, and, when I am interacting with research participants, what sort of researcher role will I assume as I co-generate those data with them? What emerges from this discussion is a fuller understanding of the necessity for flexibility in interpretive research design. These matters are explored in Chapters 4 and 5. The ways in which evidence is generated, and the ways in which such processes are discussed in an interpretive research manuscript, are key to how the trustworthiness of a researcher's knowledge claims will be evaluated by a diverse range of readers. Chapter 6 takes up various processes through which researchers designing interpretive projects can anticipate checking on their sense-making in the field, in data analysis, and in writing.

Chapters 7 and 8 move beyond the details of a research design itself to look at research designs in their broader contexts. In Chapter 7 we take up some of the largely silenced areas of field research: the play of emotions in the field, researchers’ sexuality, and, in particular, the “wheelchairedness” and other physical constraints under which some researchers work, all of which might well be anticipated in thinking through a research design but are commonly not spoken of. We also look at two issues gaining attention these days, human subjects protections and data archiving, both problematic from the perspective of interpretive methodologies, whether for procedural or ethical reasons. And we relate elements of a research design to sections of the manuscripts that report on the research. In Chapter 8 we consider “mixed methods” research before turning our attention to still broader issues involved when interpretive research crosses over to other epistemic communities, such as during reviews of various sorts.

To avoid misunderstanding concerning the book as a whole, we add three caveats. First, if some readers are expecting to find polemics here against modes of research other than interpretive ones, they will, we trust, be disappointed. While drawing contrasts between interpretive and positivist approaches can simplify the exposition of their respective designs, we have been at pains to avoid caricaturing positivist thinking and design, in particular, and we alert readers to possible simplifications where this arises. We do not see positivism as a negative development in the world of ideas or as a derogatory term. In fact, neither of us would be in our present positions or writing this book, for reasons of sex, in both of our cases, and, in one case, of religion, were it not for the heritage of social positivism's emphasis on universality having entered into the social and political world of its day. Both the French and American revolutions were fought for égalité/equality—for the 1789 Déclaration des droits de l'homme and the statement in the US Declaration of Independence, “We hold these truths to be self-evident, that all men are created equal,” the political manifestations of positivism's idea of universal scientific laws. Subsequent civil rights movements of all sorts have fought to realize that principle.

We use “positivism” as an umbrella term to refer to many types of research, from experimental research with its hypothesis-testing ideal, which seems to have set the gold standard for ideas about quantitative methods, to survey and other variables-based, statistical research, to studies conceived of as basically “descriptive,” including some forms of historical and comparative case study analysis. (See Note 7.) Given the behavioralist orientation dominating many social science graduate programs by the time we got there, both of us were trained in survey research design and/or statistical analyses of various sorts, one of us peregrinating further than the other along that path. Yet we are pluralists in our methodological convictions. While we are, ourselves, more inclined toward an interpretive methodological position, we hold that certain kinds of research questions lend themselves much better to survey research or experimentation, and it would be foolish to undertake, say, semiotic squares or ethnomethodological analyses to address these (e.g., because of time or other resource constraints, or simply because one wants information on a very focused matter across a large number of respondents, rather than in-depth, meaning-focused stories concerning their work or lives).

Our interest here is in laying out the methodological grounding for interpretive methods in the context of research designs, and in doing so in a way premised on the view that different modes of science are characterized by different standards and criteria of evaluation, even if all scientists share, in one way or another, an interest in the procedural systematicity and attitude of doubt that legitimate knowledge claims. Given the over 40-year prominence of behavioralist and statistical approaches to the full range of social sciences, two generations of scholars (at least, in the US) have been trained or educated largely without exposure to that grounding—or, for that matter, to the ontological and epistemological grounding of positivist-informed methods. Those researchers who would have been educated to a different, more pluralist way of looking at the social science world are, on the whole, no longer educating students or reviewing manuscripts, leading to a more monocular view of “science.” We would like to recover and build on the broader view that characterized scientific practices of earlier times.

Second, even as we write about the logic of interpretive inquiry, we take to heart cautions against “methodism”—a preoccupation with methods that subjugates the substantive issues under study to the dictates of technical requirements, as if these could somehow ensure the truth of knowledge claims.13 Graduate students, in particular, may sometimes be paralyzed by the imposed or felt need to conform to such dictates, when it should be their substantive concerns, instead, that motivate their research endeavors. When methodological awareness degenerates into a “check list” assessment process that ignores substantive issues, that is one indicator that methodism has taken over. Such a move should be resisted vigorously, in our view, for interpretive as well as positivist research projects. “How we know” is an essential part of science; but without a deep concern for the “what,” research would be a sterile exercise.

Finally, in treating positivist research approaches, we have engaged their representation in textbooks and other discussions, rather than delving into the detailed nuances of research practices such as those found in more sophisticated methodological analyses among positivist scholars (e.g., Brady and Collier 2010, articles in such journals as Evaluation Research, Organizational Research Methods, Political Analysis, Sociological Methods & Research) or in actual scientific practices. Our reasoning for doing so is that, on the whole, more students (and perhaps others) are likely to be introduced to research methods through methods textbooks than through the more nuanced methodological literature. And it is these ideas that have taken hold, broadly, often presenting a picture of “the scientific method” and other procedural issues in ways that do not always resemble what practicing researchers do. For example, “replication” is an often cited practice that is said to demarcate “true” science from “pseudo” science. Yet, as Zimmer (2011) reports in discussing publications in Science and The Journal of Personality and Social Psychology, it does not appear to be much practiced or valued, nor even effectual.

We recognize that the practices involved in the implementation of research designs are complex and that in its execution, research does not always implement initial plans exactly (an issue for IRBs; see Chapter 7). Moreover, in practice, there may be more overlap between interpretive and positivist research than our heuristic dichotomy (see Table 6.1) and our discussions here portray.14 Our purpose is to help those from diverse research communities recognize interpretive research as a distinctive logic of inquiry and to develop what this means at the design stage. All too often, interpretive research projects are acknowledged upon completion to be significant contributions to knowledge and/or practice, but the positivist language of design tends to foreclose that appreciation at the proposal stage, with a deleterious effect on funding.

And just as the positivist label elides huge differences in scientific practices, so, too, does the interpretive label. Interpretive schools and methods have family resemblances, in Wittgenstein's sense—but they also have specific differences. This variety limits the extent to which we can spell out specific designs or design principles. For this reason, the book rests at a certain level of generality, emphasizing concepts and processes of interpretive research design—albeit with concrete illustrations from published research—to achieve utility across a wide range of interpretive practices.

“Science” is not, and has never been, a single practice. Even within the natural and physical sciences, scientific processes and procedures are done differently by botanists and chemists, astronomers and zoologists. Moreover, what it has meant to do science and to be scientific has been changing over time, ever since natural philosophy developed and eventually turned into “science.”15 Interpretive social science takes its place within this panoply of meanings and practices. Our mission in this volume is to provide interpretive researchers, as well as those who review or teach such research, with the rationale to understand and argue for the logic of inquiry underlying this kind of science in ways that are consistent with interpretive methodological presuppositions and the methods that enact them. The “new” engagement with or (re)turn to interpretive methodologies and methods does not eschew design, rigorous systematicity or explanatory (constitutive) causality. None of these need be sacrificed in doing science that stays true to interpretive presuppositions.
