6

DESIGNING FOR
TRUSTWORTHINESS

Knowledge Claims and Evaluations
of Interpretive Research

Doubt concerning the trustworthiness of research claims is fundamental to understandings of science. Plans for subjecting these claims to doubt or to “testing,” a hallmark of science, are commonly built into research designs. But these practices are enacted in dramatically different ways in interpretive and positivist approaches. Much of the extant literature on research design assumes a front-loaded, standardized research process based on positivist conceptions of knowledge and positivist standards of evaluating knowledge claims and the research process that has produced them. These are at odds with the iterative, phenomenological–hermeneutic sense-making process at the heart of interpretive science, thereby producing a conundrum: How is one to assess an a priori design for a research process that is situated and iterative—that is, one that is inherently resistant to planning that fixes its details before the research commences? And in a more practical (or even political) vein, what design elements consistent with this logic will persuade proposal reviewers—many of whom are likely to believe that research designs should be fixed a priori in their concept development, hypothesis-based in their formulation, and unchangeable in their execution—of the trustworthiness of the project design, particularly given the upfront admission that it is expected to change?

This chapter aims to show that commonly accepted positivist standards for assessing research are limited in their applicability when it comes to interpretive research. These standards (also called criteria in the methods literature) are most appropriate and their logic especially clear for research conducted in a laboratory, with its focus on a particular understanding of causality, itself based on specific understandings about what is real and how reality can be known. These are the standards that have been extended to other venues. For interpretive research that is conducted with the goal of understanding contextualized meaning-making and which is based on another set of “philosophical wagers” (P. Jackson 2011) about reality and its know-ability, other standards, already in use, are more appropriate and need to be brought into play.

The commonly accepted positivist standards include “validity,” “reliability,” “replicability,” “objectivity,” and “falsifiability.” The first part of this chapter engages at some length their grounding in positivist research practices in order to show how they do not fit with the presuppositions of interpretive science and why, therefore, these indicators are not useful for assessing its trustworthiness. We next engage two issues that are often presented as particular problems for field researchers: “bias” and the “contamination” of field realities due to researcher presence. The conceptualization of the latter as problematic makes sense from a positivist perspective, as we discuss, but both issues have been understood as afflicting all field research without regard to the distinctive goals and underlying philosophies of interpretive field research. Interpretive researchers have developed their own criteria for assessing researcher sense-making, which we then take up. Finally, we return to the critiques of “bias” and “contamination,” engaging them this time in light of the preceding discussions.

Two items before we continue. First, as noted in the introduction, we engage here positivist standards as they are treated in textbooks, rather than in discussions in the more sophisticated methodological literature or as implemented in experienced researchers’ practices. Second, precisely because these terms are so widely known and so familiar,1 some of them, such as validity, have been taken up in research that is methodologically interpretive (e.g., Klotz and C. Lynch 2007: 20–2). But there, the terms convey meanings broader than the methods-textbook focus that we take here. Our discussions treat positivist usage, rather than the terms’ adaptations in some interpretive research projects.

Understanding the Limitations of Positivist Standards for
Interpretive Research: Validity, Reliability, and Replicability

In positivist research, the trustworthiness of researcher claims is discussed in two general ways that reflect positivist presuppositions about and goals for knowledge. The first focuses on the “validity” and “reliability” of operationalized variables and the general “replicability” of a study; the second focuses on “threats” to the goal of causal inference, which we mention here but take up in greater depth in the next section in the context of field research.

The general logic underlying the validity of a given variable (known as construct validity) concerns whether the particular indicator used by the researcher measures what it is supposed to measure. For example, is the learning of individuals in an organization, as measured by some before and after test, an adequate measure of “organizational learning”? Or, to take another example, are elections the best indicator of “democracy”? In either case, might other indicators, such as “collective practice” or a “universal franchise,” be better for articulating what is at stake in these key concepts? The congruence between a theoretical concept and its operational measure—that is, the validity of the construct—is essential to positivist research design, because if the measure is not valid, then the results of the empirical tests using that measure will not provide an assessment that is germane to the concept (and its attendant theory). When experimentalists add “gender” to their analyses, for instance, perhaps hoping to increase the variation that their research can explain, but operationalize it as “sex-of-subject,” the study's results speak to biological theories of sex differences but not to theories of gender, which construe that concept differently (Schwartz-Shea 2002). The operationalized measure (sex), in other words, is not germane to theorizing about the concept “gender.” Hence, the considerable care given in positivist designs to clarifying concepts and to their operationalization.2 This approach to validity assumes the kind of front-loaded research process discussed in Chapter 4, divorced from the meaning-making of research participants.

The reliability of a given variable, from the perspective of positivist presuppositions, rests on the idea that the same measurement procedure, carried out by two or more researchers working on a project (or even by the same researcher at another time within the same project or repeating it), can produce the same result (assuming the phenomenon under study has not changed). Reliability measures assess the extent of “measurement error” for a given variable. For example, “intercoder (or inter-rater) reliability” assesses the degree to which two or more researchers or research assistants assess observational, interview, or other data in the same way, as they code them using the categories established by the project's PI (Principal Investigator). The greater the extent of agreement between coders (or “raters”), the greater the reliability of the coding scheme for the variable in question. This reliability measure assumes that coder disagreement (i.e., coding the same observation differently) can be explained by human error in measuring the phenomenon being studied (and that explanation provides the rationale that legitimates the discrepancy—in this case, “normal” human error).
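
To make concrete how this reliability check is typically quantified, the following is a minimal sketch, written in Python; it is our illustration rather than an example drawn from the studies discussed here, and the coders, categories, and data are hypothetical. It computes simple percent agreement and Cohen's kappa (agreement corrected for chance) for two coders applying the same coding scheme to the same ten interview excerpts.

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Share of items the two coders placed in the same category."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Percent agreement corrected for the agreement expected by chance alone."""
    n = len(coder_a)
    observed = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: the probability that both coders assign the same category
    # independently, given each coder's own category frequencies.
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(coder_a) | set(coder_b))
    return (observed - expected) / (1 - expected)

# Hypothetical codings of the same ten interview excerpts by two coders
# trained on the PI's coding scheme.
coder_1 = ["conflict", "consensus", "conflict", "other", "consensus",
           "conflict", "consensus", "conflict", "other", "conflict"]
coder_2 = ["conflict", "consensus", "conflict", "consensus", "consensus",
           "conflict", "consensus", "other", "other", "conflict"]

print(percent_agreement(coder_1, coder_2))  # 0.8
print(cohens_kappa(coder_1, coder_2))       # ~0.69: agreement beyond chance
```

The higher the kappa, the greater the reliability attributed to the coding scheme; the positivist assumption, as noted above, is that the residual disagreement reflects “normal” measurement error rather than the coders’ distinct, situated interpretations.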

Replicability is a standard for assessing an entire research study (whereas reliability is applied to particular measures). It concerns the question of whether the same research project, from data “collection” to analysis, would, if carried out by another researcher, produce the same results. It is a practice taken from the laboratory sciences, where researchers might be seen rushing to their labs to try to replicate the results of newly published findings, as was the case with the reported discovery of cold fusion at the University of Utah. Replicability was central there and led to a scientific scandal: Utah scientists made public claims about the success of the research prior to peer review of the experiments (and even received money for it in a special allocation from the state legislature), but other laboratories were never able to replicate their results (Browne 1989). In the social sciences, replicability means, for example, that two different researchers should be able to apply the same statistical technique to a given quantitative data set and obtain the same results. For field research, the assumption would be that in the data “collection” process and in the analysis, different researchers with the same research question in hand should reach similar conclusions about which evidence matters and about the meaning of that evidence.3 Researcher characteristics are assumed to be irrelevant in both of these research processes.
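
As a minimal illustration of the replicability standard in its quantitative form (again our sketch, with hypothetical data, not an example from the sources cited), the fragment below has two “researchers” independently apply the same procedure, an ordinary least squares fit computed from scratch, to the same small data set. Because the procedure is fully specified and deterministic, both obtain identical results, and researcher characteristics play no role.

```python
def ols_fit(xs, ys):
    """Ordinary least squares slope and intercept for a single predictor."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    return slope, mean_y - slope * mean_x

# A given quantitative data set (hypothetical values).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

researcher_1 = ols_fit(x, y)
researcher_2 = ols_fit(x, y)   # a second, independent application of the same technique
print(researcher_1 == researcher_2)  # True: same data, same technique, same results
```

The contrast with interpretive field research is the point of this passage: there, neither the “data set” nor the analyst is assumed to be reproducible in this way.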

These three standards—validity, reliability, and replicability—make sense in the context of positivist assumptions about the stability of the social world and its know-ability by human researchers. They have been developed and applied in laboratory settings. Training of laboratory assistants, for example, is meant to control for any effect their physical presence in the lab might have on the conduct of the research; their personal characteristics are deemed irrelevant, making them interchangeable. The white lab coat, which serves to anonymize researchers and their bodies (Livingstone 2003), symbolizes (and to a great extent enacts) this ideal of researcher interchangeability. These and other practices, understood as providing the best assessments of positivist causality, have been extended and, as necessary, adapted to non-experimental settings, including field and archival research. In working with qualitative data, researchers on a project are likewise trained to code words in identical ways: one coder is (or can be trained to be) as good as any other. This interchangeability is precisely what survey researchers are trying to achieve in training assistants not to vary from the questionnaire they are administering, including not replying to requests for further explanation.

The utility of both reliability and replicability rests on the degree to which the social world is understood in terms of a relatively stable (and singular) truth that can be mirrored with ever greater accuracy in terms of general, a-historical, a-cultural laws.4 Some positivist social scientists retreat from this assumption by limiting the “scope” of their theories to specified time frames or cultural locations (as mentioned in Chapter 3). They also adjust reliability and replicability to these narrowed claims in not expecting the theory and its concepts to be reliably replicated outside of the project's specified scope.5 Even when they make these adjustments, however, the perspective on researcher characteristics (as irrelevant or contaminating unless controlled) remains intact.

These criteria and associated practices are ill-suited to interpretive research because it makes quite different assumptions about the stability of the social world and how researchers can know it. It has, therefore, developed quite different goals and a different logic of inquiry. With respect to (construct) validity, its “local knowledge” approach to concept development, its disinterest in measuring phenomena, and its constitutive understanding of causality, all focused on understanding meaning-making in context, put interpretive research at odds with that criterion's concerns, focused, as they are, on the adequacy of measures. Further, the standard of validity assumes there is a “real” meaning to data (whether in the form of words or of observations that are used to create numerical data sets) mirroring the world “out there” (see Rorty 1979), rather than seeing language as constituting meaning (the interpretive presupposition).

Furthermore, interpretive understandings of social phenomena as being dynamic and fluid, as well as historically constituted, are inconsistent with both concept reliability measures and requirements for replicability, resting as these do on a more stable, a-historical understanding of the social world.6 Reliability and replicability are additionally suspect from an interpretive perspective because neither researchers nor research participants are assumed to be interchangeable. A data “collection” process repeated at another time and/or place would not be understood as capable of guaranteeing the production of the same data: both researchers and participants are seen as “embodied” or situated, and that situatedness, which can be person-specific, plays a role in the co-generation of data. (This bears on the matter of data archiving, taken up in Chapter 7.) At the same time, however, interpretive researchers work within an implicit understanding that interpretive processes are similar across humans, as well as that researchers and researched are acting as members of their respective communities (academic and “local”), such that continuity of interpretation, as much as differences, is what warrants explanation.7

Both positivist and interpretive researchers anticipate differences in interpretations between researchers, then, but the understanding of the source of these differences changes across these two epistemic communities. Therefore, whether different interpretations constitute a problem in need of fixing (and if so, how to fix it) is at issue. The contrasting views on this point hinge on perceptions of the necessity and possibility of the researcher's control over the conditions of research, as well as on the meaning and implications of difference itself. The one sees differences as problematic and control as necessary, and it seeks to control for different interpretations by limiting the flexibility of the research design and the flexibility and judgment of researchers and/or making the latter as interchangeable as possible. The other sees different interpretations as inevitable, rendering control impossible, and of research interest. It seeks to build flexibility into its designs, making potential sources of difference between researchers as transparent as possible and using those differences to account for the generation of knowledge claims—as taken up later in this chapter.

The Problems of “Bias” and “Researcher Presence”:
“Objectivity” and Contrasting Methodological Responses

Birthed in experimental and statistical research traditions, validity, reliability, and replicability rest on the removal of researcher “presence” from research processes, an idea central to positivist-informed methods. It is based on the assumption that the researcher can generate knowledge of the research setting, its actors and their acts, its events, language, objects, etc., from a point external to it. This is what it means for both researcher and research to be “objective”: to stand outside the subject of study—meaning, to have both physical and emotional distance from it (Yanow 2006b). In laboratory research, this distance is enacted in a variety of research practices, including invariant scripts and protocols that strictly limit researcher interactions with their “subjects” while also requiring subjects’ compliance with experimental procedures. In survey research, this distance is enacted in attempts to control for “interviewer effects,” the influence of the survey-giver's demographic characteristics (e.g., age, race, other appearance factors) and demeanor (e.g., facial expressions, stance) on survey-takers’ responses. Additional controls built into the survey design, such as Likert scales or close-ended questions, seek to limit the response options available to those being surveyed. In these and other forms of positivist research, distance is enacted in the practice of assembling indicators for concepts a priori—without opening them to possible “contamination” by situational social realities—and then assessing their validity and reliability.

A researcher at a physical and emotional-cognitive remove from the people and issues being studied is held to be capable of gaining objective knowledge, a remove that controls for, if not eliminates, the potential for researcher bias. “Bias” implies that emotional and cognitive detachments have been breached. Since physical distance is seen as enabling, if not guaranteeing, emotional and cognitive “distance” (in a metaphorical sense), the lack of physical distance might be seen as engendering bias, which may materialize in research processes, from data “collection” to analysis.8 These forms of distance—of objectivity—are expected to be engaged in a positivist research design.

The contemporary positivist understanding of researcher bias can be traced to a set of psychological experiments on subject bias that began in the 1960s (Wason 1960), a line of research that continues today (e.g., Hergovich et al. 2010). Subjecting laboratory participants to a variety of tasks at various levels of specificity—from Wason's (1960) assignment to infer a rule applying to triples of numbers, to Taber and Lodge's (2006) asking subjects to read a series of research studies on gun control to assess their opinion change—researchers have found a form of bias in their reactions, termed “confirmation bias.” The phrase refers both to subjects’ intentional search for evidence that will confirm their prior convictions or beliefs (rather than disconfirm them) and to their evaluation of the character of that evidence. This form of bias may be seen as resulting from subjects’ lack of cognitive distance from the study topic: both evidentiary search and evaluation are seen as slanted, rather than following the ideal of a value-neutral search and assessment (Devine et al. 1990, Klayman and Ha 1987, Trope and Bassok 1982).

Translated into the context of researcher bias, confirmation bias—which might well combine cognitive involvement with emotional attachment—might be suspected to induce the researcher to select only that evidence that will confirm a prejudice for or against an argument (whether in data collection and/or analysis stages). Alternatively, a researcher might become too close, emotionally, to particular ideas or individuals (“going native”; see Note 8, this chapter), losing the affective distance perceived as necessary for non-biased assessments of evidence. In archival research, the concern is less with the physical presence of the researcher interacting with research materials than with the potentially biased framing of the research project—its theoretical, historical, and other modes of contextualizing—which the researcher brings with him to his reading of the documents (much as “reader response theory” would lead us to expect; see, e.g., Iser 1989). In laboratory research, both random assignment of subjects to control groups and double-blind procedures (in which neither subjects nor experimenters know what the theoretical model predicts) are intended to prevent confirmation bias (Shadish 2007: 48–9). That such procedures are not (usually) feasible in field, let alone in archival, research renders the problem of confirmation bias additionally serious, from this perspective.9

The difficulty with researchers’ physical presence in the research setting is tied not only to the potential biasing of research processes and analysis, but also to its potential to alter events in the field. This has been of empirical interest in the social sciences since the Hawthorne experiments of the 1920s–1930s, which demonstrated (among other things) the ways in which the workers studied responded more to the attention of managers and researchers than to the organizational climate factors researchers had set out to analyze (Mayo 1933; Roethlisberger 2001/1941). From a positivist, non-laboratory perspective, these results pose a challenge to the possibility that researcher presence can be neutralized. Non-neutrality threatens to undermine determinations of (positivist) causality: that is, whether the presence of the researcher herself, rather than the independent variable of interest, causes the effect perceived during the study. Campbell and Stanley (1963) called this type of problem “reactivity” (meaning, the ways humans react to the knowledge that they are being studied). It is deemed a threat to the “internal validity” of research findings—their trustworthiness as assessed in terms of whether the variable of interest (the independent variable) was the actual cause of observed change in the dependent variable (also understood as a problem in “causal inference”). It might be that human “reactivity” is what is actively causing different behaviors, instead. For these reasons, eliminating researcher presence is understood as desirable, for example through the use of “nonreactive measures” (Webb et al. 1981) or even disguised observation (where feasible; Allwood 2004).10

Methodological counsel such as this has led researcher presence to be widely understood not simply as irrelevant but as a contaminant in the research process (see Chapter 5 discussion). Inflexible survey instruments and experimental protocols are designed to produce physical and cognitive–emotional distance from research participants; researchers are expected not to adapt or adjust these in response to participants’ questions or demands. Underlying such instruments and protocols is the concern that without the sorts of controls which seek to regulate researchers, the latter will respond, in very human ways, to their human interlocutors and in so doing bias the results of the survey or experiment.11 Studies lacking such controls are, by this logic, at particular risk of bias.

From an interpretive methodological perspective, these conceptualizations of bias are problematic. First, the interpretive logic of inquiry has as its primary goal understanding research participants’ meaning-making in their own settings, precisely without the kinds of artificial controls these treatments of bias recommend. Second, researchers enter these field settings understanding that their embodied selves constitute the primary instrument for accessing and making sense of these individual and community meaning-making processes. Interpretive researchers and methodologists dispute the possibility of disembodied research, as if all researchers were interchangeable and as if they could conduct their research without interacting with situational participants and without having those interactions affect their interpretations and knowledge generation.12 This problematic conception of objectivity has been theorized at length among feminist scholars and philosophers of (social) science (e.g., Harding 1986, Hawkesworth 1988, 2006b, Longino 1990, Polkinghorne 1983). Such a position has been called the “god trick” (Haraway 1988, in response to Nagel's endorsement of the “view from nowhere,” the title of his 1986 book).

Without physical presence and absent an engagement—intellectual, surely, but at times also emotional—with members of the setting being studied, and even with its texts and other material objects, sense-making would hardly be possible. Controlling for researcher bias in such situations would seem to mean that researchers should aspire to be “blank slates” with no theoretical or other expectations, who can check their values, beliefs, and feelings—their own meaning-making—at the door. It also implies that they are incapable of monitoring and reflecting on their own learning, their own sense-making processes—that is, that they are trapped, unknowing, in their prejudices. The idea that researchers are incapable of recognizing bias and prejudice is logically inconsistent with the phenomenological and hermeneutic premises that underpin interpretive understandings of science. To presume that humans cannot be aware of their “biases” is to reject human consciousness—the possibility of self-awareness and reflexivity—and human capacity for learning.

Because of these methodological presuppositions, interpretive methodologists have long been involved in thinking through research practices that engage researcher meaning-making in relation to research trustworthiness, including the effects of researcher presence. These practices begin from the position that there is no place to stand outside of the social world that allows a view of truth unmediated by human language and embeddedness in circumstance. The search for knowledge, whether in the field or in the archive, begins wherever the scientist initially finds her- or himself (informed by research literatures and prior experience) and then proceeds toward new understandings of the research focus. This orientation toward processes of understanding privileges human consciousness as an inevitable and useful part of knowledge-making, and it accompanies the researcher's physical, cognitive, and emotional presence in and engagement with the persons and material being studied. The central feature of these methodological checks on sense-making is reflexivity, including analyzing how the researcher's identity—both as presented and claimed by the researcher herself and as perceived by others—may affect the research process (as the discussion in Chapter 4 attests). This is a key consideration at the design stage and continues as a methodological concern through the fieldwork, deskwork, and textwork phases of a project. Other checks on sense-making focus researchers’ attention and analysis explicitly on the connections between their own meaning-making processes and the data they generate and analyze in the process of developing and advancing knowledge claims.

Researcher Sense-Making in an Abductive Logic of Inquiry:
Reflexivity and Other Checks for Designing
Trustworthy Research

Because of their focus on situated, contextualized meaning-making, interpretive researchers emphasize the following in their research, concerns quite different from the bias-avoiding steps that characterize a positivist logic of inquiry:

•   bottom-up, in situ concept development;
•   constitutive understandings of causality;
•   the relevance of researcher identity in accessing sites and archives;
•   the need to improvise in response to field conditions; and
•   data co-generated in field relationships (as discussed in previous chapters).

The character of these hallmarks explains why a meaning-focused logic of inquiry requires flexibility in its design. Instead of faulting interpretive research designs for being open-ended, dynamic, and flexible, evaluative criteria need to assess how researchers deal with these characteristics in accounting for the research processes on the basis of which they assert their knowledge claims.

Even though the research process is expected to be dynamic and flexible, a great deal of procedural planning goes into interpretive research. The discussion of these procedural details in (or absence from) the design becomes one of the ways in which interpretive projects are evaluated. We have already engaged several in Chapters 4 and 5:

•   the relationship of researcher identity to choice of and access to field research sites;
•   researcher role(s) and the degree of participation in research involving participant observation;
•   mapping the site for exposure and intertextuality;
•   anticipating forms of evidence and analysis of their relationship to the research question; and
•   fieldnote practices.

Here, we take up three additional design elements, discussion of which reviewers of interpretive work increasingly expect to find in research manuscripts. The presence or absence of such discussion is often used as an evaluative criterion, suggesting the desirability of explicit, thorough, and thoughtful engagement:

•   reflexivity, perhaps the most important of the three, an interpretive counterpoint to positivist objectivity;
•   data analysis strategies and techniques; and
•   what is known in the qualitative methods literature as “member-checking.”

Engaging in these practices and making one's engagement explicit and as transparent as possible in the research manuscript is understood within the interpretive epistemic research community as contributing to the quality of interpretive research. Anticipating them in the design becomes further grounds for evaluating it (as well as the later research manuscript).

All three are about practices that researchers engage in as checks on their own sense-making. They are part of the standards to which interpretive research aspires and the criteria according to which it is evaluated: their presence in a research project can directly contribute to assessments of the trustworthiness of researcher knowledge claims. From a design perspective, these are largely enacted after a proposal has been accepted and the research is under way, during fieldwork, deskwork, and textwork phases. But their possible later use can be considered in advance, even if their particulars will of necessity change to reflect research facts on the ground as the study progresses.

Checking Researcher Sense-Making through Reflexivity

“Reflexivity” refers to a researcher's active consideration of and engagement with the ways in which his own sense-making and the particular circumstances that might have affected it, throughout all phases of the research process, relate to the knowledge claims he ultimately advances in written form. Reflexivity includes consideration of how the researcher's own characteristics matter and, where feasible, assessments of the ways in which his particular scholarly community and even the wider social milieu impact the research endeavor. The concept and practice have a complex history to which we cannot do justice. (For a brief history of reflexivity as an interpretive criterion for evaluating research, see Schwartz-Shea 2006; for a fuller one, Alvesson and Skoldberg 2000.) In what follows, we emphasize the pragmatic side of this concept rather than its considerable philosophical complexities.13

The essential components of reflexivity vary at different stages of a research project. At the design stage, reflexivity is enacted in systematic consideration of the researcher's characteristics (in, e.g., “demographic,” disciplinary, and other terms) and potential physical location in the field setting and what these might mean for access to persons and ideas and for researcher–participant interactions. Because the construction of researcher identity is interactive and dynamic, possibly changing over the course of the research project, reflexivity at the design stage is not predictive. But thinking ahead of time about possible identity issues, such as challenges of various sorts, can help a researcher later on in the field, if and when such challenges materialize.14

Once in the field, interactions begin (and analysis and sense-making continue), producing many possibilities for reflection. These include reflecting on:

•   how the researcher's chosen role and/or physical location on site might be shaping the kinds of information being accessed or blocked;
•   how the researcher and research participants are co-constructing the former's identity and what that appears to mean (at that point in time) for the co-generation of data;
•   how the researcher's presence or personal characteristics may be affecting particular interactions;
•   changes in degrees of participation along the observer-participant continuum;
•   the adequacy of initial mapping for exposure and intertextuality;
•   the development of the researcher's thinking as archival and/or other materials generate new understandings; and
•   possible revisions, big or small, in research design in light of field realities.

Reflexivity may also serve as a check on researcher ethical misconduct, as Librett and Perrone (2010: 745) argue: in not distancing researchers from their research participants, reflexivity strengthens their personal responsibility for the research and its outcomes. Much of this can and should be recorded in fieldnotes contemporaneously with the descriptions of conversations, setting, events, interactions, and documents that provide the context for researcher sense-making. In all cases, reflective notes need to be self-consciously tagged as researcher sense-making (as opposed to description, even as interpretive presuppositions mean that “description” is never a mirror but itself a theoretically-informed interpretive act).

Reflexivity is essential in the field, but it cannot and should not stop upon exiting the field. At deskwork and textwork stages, reflexivity continues as the fieldnote records of researcher–participant and researcher–documentary interactions are woven into a publishable manuscript. What makes reflexivity interpretive—some call this critical reflexivity—is the link to epistemological matters. This includes the self-monitoring of the researcher's own “seeing” and “hearing” in relation to knowledge claims, including theoretical expectations, as articulated in presentations of the research setting, actors, and so on in the research manuscript, as well as of his or her own emotional reactions to events, people, sites, documents.

This seeing, hearing, and feeling produces researcher understandings.15 The practice of reflexivity involves the self-conscious “testing” of these emerging explanations and patterns, including of what seems clear and what seems muddy at particular times in the field. Reflection may also reach both backwards in time—to contemplate initial theoretical expectations and past observations as understanding deepens—and forward as the researcher ponders emerging puzzles and/or silences and how field maneuvering might mean exposure to new people or documents that could shed light on these.

Reflexivity on the written page is methodologically significant for at least two reasons. First and foremost, reflexivity allows researchers to trace out the ways in which very specific instances of their positionality affect their research accounts and the knowledge they claim on the basis of those accounts. Pierce (1995), for example, explains that her greater degree of interaction with women than with men in the law offices where she conducted her research produced a generally flatter, less nuanced portrait of the men. Wood (2009: 130–1), in substantive contrast but with equal reflective detail, notes that although women participated in the insurgent organizations she was studying, the men were far more active in her field interview settings, often interrupting the women's narratives despite Wood's best efforts to intervene, all in all leading her to rely more on men in her research. Shehata (2006) observes that some research participants related to him in terms of his birthplace; others emphasized their common religion; and still others worried that he was a spy for the company administration. Lin (2000) reflects on her standing as an Asian-American interviewing in US prisons with few Asian-Americans: “[N]either staff nor prisoners had any reference point for my racial allegiances,” whereas “a white or black interviewer would have confronted more predictable problems, given the different racial mixes of white and black staff and prisoners at each prison” (2000: 189, 190). Black and white interviewees alike appealed to the similarities between their own racial groups and Asians, answering her questions in ways that were different from those a white or black researcher might have generated, given the “allegiances” implied by those racial identifications (2000: 189). In reflecting on the written page on processes shaping their knowledge claims, all four of these researchers enable their readers to assess how geographic and demographic positionalities shaped their knowledge generation and development. Reminding readers of the fluidity, open-endedness, and complexity of lived experience, critical reflexivity calls attention to the ambiguities and multifacetedness of meaning-making.

Second, a critical reflexivity calls on researchers to think deeply about the ways in which their own research communities are historically constituted, such that particular socio-political contexts shape, in previously unarticulated or unrecognized ways, the research questions asked or the very concepts used to investigate phenomena.16 Reflexivity may enable a researcher to grasp and explain how her initial assessment of the situation being studied was influenced by the socially-historically constructed understanding of the research community of which she is (seeking to become) a member. For example, C. Lynch's (2006) experiences in US social justice activities prior to graduate school gave her a basis for questioning the conventional academic wisdom that interwar peace movements were naïve, responsible for dangerous policies of appeasement and isolationism. Instead of privileging these experiences and assumptions in her analysis, she took a “strongly self-reflective stance” toward her own evidence and conclusions in order to “compare the logic of [peace movement] behavior against that of the ‘lessons’ taught me by the dominant narratives” (2006: 294, 292; see also C. Lynch 1999). Similarly, Oren (2006b: 220) reflects on the evidence-generating practices of the international relations (IR) scholarly community, himself included, to build an argument that data so produced are not neutral, despite widespread assumptions and/or claims among IR scholars to the contrary, because the “analytical concepts and coding rules [are] themselves historical subjects more than objective instruments without a history.” This argument parallels that of sociologists and others concerning the ways in which metaphors shape theoretical reasoning (e.g., Brown 1976, Gusfield 1976; see also Ghorashi and Wels 2009, Sykes and Treleaven 2009).

In these processes, reflexivity enacts a methodological value that underlies many interpretive criteria (in particular, those concerned with checking researcher sense-making during data generation and analysis): transparency of knowledge generation.17 Consider, for example, Gina Reinhardt's experiences having her marital status constantly challenged while she was in Mozambique, miles away from her fiancé in the US. This led her to make some key choices about her research. Reflecting on gender, race, the values that were important to her personally, and the choices she subsequently made, Reinhardt (2009: 297) lays out her reasoning explicitly in her published account.

In making her reasoning transparent, Reinhardt invites the reader to consider the extent to which her research choices might have affected the knowledge claims she advances in presenting her data and in their analysis. Paradoxically, reflexivity can serve to enhance the trustworthiness of the researcher's knowledge-generation processes even as its use might reveal research activities that challenge that trustworthiness. A reader may decide that what is revealed through such transparency weakens the knowledge being advanced—but its presence enables that judgment. Without such transparency, assessment of knowledge claims would be impaired. It is a key to the legitimacy of interpretive sense-making: rather than making the connection between process and conclusions appear seamless, reflexivity reveals and, where possible, analyzes the consequences of a reliance on a “human” research instrument.18

There is considerable variation in the practice of reflexivity, as it is still an emerging methodological idea with norms that vary by discipline (e.g., it is expected, and accepted, more in anthropology than in political science) and field (e.g., more in feminist research than in policy studies). Variation may also be due to debates over the extent to which the researcher's voice should be on display in a research manuscript. Such debates recognize the stakes involved in self-disclosure, including the power of the researcher at the deskwork and textwork stages to (re)present her knowledge claims, as well as varying degrees of comfort with self-revelation.19

Choices concerning reflexivity enact the researcher's accountability to those studied, to the evidence as he understands it, and to the value of transparency for reviewers and potential readers of the study. Rather than being (or being seen as) an exercise in vanity or self-indulgence, reflexivity should be understood and treated as a scientific activity at the heart of interpretive research. Reflexivity enacts the systematicity of interpretive research in a manner that is consistent with an interpretive logic of inquiry, and it puts researcher presence in the field site and the subjectivity of interpretation front and center for critical consideration, rather than trying to mask or ignore it. It is a significant marker of quality in interpretive research because it makes the research process and its claims more transparent, thereby maximizing the trustworthiness of the researcher's claims to knowledge as voiced in a research manuscript. Until the centrality of reflexivity to interpretive science is more widely understood, its anticipation in various aspects of a research project and explicit discussion in research designs (and later, in methodology or methods sections of research manuscripts) is desirable.

Checking Researcher Sense-Making during Data Generation
and Analysis

Because (as noted in Chapter 4) the major “instrument” for the conduct of interpretive research is the researcher him- or herself (as compared to the scripts and protocols that control positivist researchers as well as their “subjects”), skeptics ask: “How does the reader know that the researcher didn't look only for confirmatory evidence?” (Schwartz-Shea 2006: 104, original emphasis).

Investigators have developed a variety of strategies and techniques to check their sense-making processes during both data generation and data analysis phases of a research project. Because data generation and analysis are not entirely separable stages but are intertwined, researcher sense-making begins the moment the researcher enters the field, if not before,20 and continues after she exits and settles down to the deskwork and textwork that are, in other, front-loaded forms of research, traditionally considered the data analytic stage. A plethora of data analytic techniques may be brought into play during the fieldwork, deskwork, and textwork phases, depending on the research question and the form(s) of the data, e.g., metaphor analysis for word data or spatial analysis for spatial data. Space limits preclude taking up the particularity of these distinctive techniques here (for a listing of a couple dozen possibilities, see Yanow and Schwartz-Shea 2006: xx).21 These techniques and strategies vary in the extent to which they are designed to be used in both fieldwork and deskwork (e.g., the strategy of searching for negative cases recommended by Becker 1998) or only or primarily during textwork (e.g., deconstruction). They also vary in the extent to which they assume it is possible or necessary to return to the field to generate more evidence (e.g., some forms of grounded theory; see Charmaz 2006).

Because interpretive researchers do not seek to mirror the world, their primary concern in checking their own meaning-making is not focused on “getting the facts right,” as if there were only one version of that social reality. Rather, they are looking to articulate various experiences or viewpoints on the topic under investigation, in order to be able to understand its nuances more fully. Because they expect to learn about these over time, their task in checking their own sense-making concerns finding ways to suspend judgment or to avoid a “rush to diagnosis,” that is, to prevent themselves from settling too quickly on a pattern, answer or interpretation.

No single umbrella term has emerged as a label for the many techniques that have been developed to check researcher sense-making while analyzing data in the field, at the desk, or in writing. For example, Frank (1999: 97–8) describes how student teachers can learn to delay interpretation by dividing their fieldnotes between “notetaking” (descriptions) and “notemaking” (analytic comments)—although we hold that even in the process of describing persons, settings, events, and so on, the researcher is selecting which details are significant in terms of the research question, and such choice-making is at heart itself analytic. Others include “following up surprises” in the data (during the deskwork phase; Miles and Huberman 1984: 262) and searching for “negative cases” (during both phases; Becker 1998: 192–4) or for “tensions” in the emerging explanation (also during both phases; Soss, personal communication, 27 February 2011; for a review, see Schwartz-Shea 2006). The general idea is that the researcher consciously searches for evidence that will force a self-challenging reexamination of initial impressions, pet theories or favored explanations. Although not always articulated in terms of a “check” on researcher sense-making, some specific data analytic techniques, e.g., semiotic squares, operate in analogous ways (see Feldman 1995).

These techniques are aided by other interpretive research practices—the continual testing and revising of initial expectations, drawing on attention to inconsistencies arising from intertextuality and to silences in the data, i.e., what the researcher is not hearing in the field or seeing at the desk. Unlike front-loaded research, with its characteristic single test (e.g., administering the survey that will test hypotheses established a priori), field and archival settings provide the researcher with many opportunities for “testing” developing understandings of research puzzles while the research is under way.

An effective research design should demonstrate awareness of these general strategies and specific techniques for checking researcher sense-making. The researcher can indicate one or more that might be drawn on in the course of the research, as appropriate to the proposed methods of generating and/or analyzing data. Demonstrating familiarity with these practices marks the researcher as aware of the general issue of concern, as well as of the variety of field and archival methods that might be used to support and challenge sense-making at both the data generation and data analysis stages, even when particular practices to be used might not be specifiable at the design phase.

Checking Researcher Sense-Making through “Member-Checking”

“Member-checking” refers to the practice of sending or bringing written material involving the people studied back to them. These are commonly transcripts of interviews conducted with them; segments of a research manuscript (or a completed manuscript) reporting on an event in which they were involved or including something they said; or follow-up, face-to-face conversations over similar materials. The intention is to see whether the researcher has “got it right” from the perspective of members “native” to the situation or setting under study.22 Where appropriate, an interpretive design should indicate whether the researcher plans to conduct “member-checking” and, if so, why.

Going back to the people studied in this way is more than the journalistic practice of “fact-” or “quote-checking,” which implies that there is a singular social reality that can be captured by the reporter, as does the idea of getting the research narrative “right” or “wrong.” Neither of these is the sense in which this check on sense-making is used by interpretive researchers. Instead, it is used in recognition that research settings and sense-making of them may be quite complex, involving, for example, tacit knowledge, local vocabularies with local meanings, and/or positioned understandings of events and other things studied, the situated meaning of any of which the researcher may or may not have grasped well. The practice enacts the commitment to knowledge that takes into account situational actors’ own understandings of their experiences.

There is, however, considerable methodological debate over the details of this practice (e.g., Miles and Huberman 1994: 275–7, Emerson and Pollner 2002, Schwartz-Shea and Yanow 2009), including over whether some of its forms are inappropriate for some modes of research. One difficulty is that given the variety of perspectives in the field, seamless agreement among all group members about whether the researcher has “got it right” is improbable. D. Mosse's (2005, 2006) account of his efforts at member-checking in his ethnography of aid policy and practice in development organizations showcases the extent to which researcher purposes and situated interpretations may be embraced by some actors in the field and vigorously rejected by others. Project managers in one non-governmental organization took “strong exception” to his account (2005: ix), later filing formal objections with his university and then the professional anthropology association to which he belonged, even as some staff and workers elsewhere were sympathetic to his analyses.

Moreover, the language of “checking” with situational members implies that if they object to what the researcher has written, their understanding will prevail. This denies the researcher any epistemological purchase that might arise from information gleaned from exposure to other parts of the setting, adding layers of understanding that are not available to the objecting individual, or from the academic literature and the debates taking place there. We have not found methodological discussions advancing this approach that engage the variety of responses a researcher might expect from members “checked” or how these responses might be engaged in the written manuscript (for discussion, see Schwartz-Shea and Yanow 2009: 70–2). One example of how this might be done: Liebow (1993) published his informants’ comments on his text in his footnotes, even when they took issue with his representations. Another difficulty is that the interpretive stance of inviting research participants to share what they feel and think on their own terms is in tension with the ultimate authority and power of the researcher at the text-making stage to present her theoretical and empirical arguments without consultation with members or their participation. And even when writing is jointly conducted, it is typically the academic researcher who wields the pen, so to speak (cf. Down and Hughes 2009).

Given these debates, whether member-checking is appropriate to a particular project should be carefully considered. It may not be feasible if the distance between the field site and the researcher's home base makes returning there prohibitively expensive—and mail or email may not always be an appropriate substitute for a face-to-face visit. It may not be desirable if sharing a manuscript or parts of one with some research participants might threaten anonymity or the confidentiality of others. It may be most appropriate to the sorts of participatory-action research (PAR) projects in which participants come close to the status of co-investigators (see, e.g., Cahill et al. 2004, Cahill 2007, Greenwood and Levin 2007, Berg and Eikeland 2008, Sykes and Treleaven 2009)—and in fact, PAR designs may sidestep this issue entirely. Despite these complications—or, perhaps, because of them—we think the issue worth thinking through in a research design.

Doubt, Trustworthiness, and Explanatory Coherence

The interpretive attention to researcher sense-making responds to a key issue in the broader context within which scientific research is conducted—its central concern with the trustworthiness of researcher claims vis-à-vis the knowledge presented in the research manuscript. As examined in this chapter, this concern plays out in different ways in positivist and interpretive methodological perspectives, each approach responding to this challenge by developing practices to address doubt, trustworthiness, and—by implication—the quality of any study.

In positivist methodology, the attitude of doubt is enshrined in one of its most powerful design concepts—falsifiability. Its widespread acceptance means that reviewers of research projects and designs often apply this standard to all research studies, regardless of their philosophical underpinnings. As discussed in Chapter 5, this concept rests on the idea that research can objectively mirror or measure its study domain, a presupposition not accepted within interpretive research (because what constitutes data is understood as generated by the research question and co-generated with research participants).

The falsifiability standard also shows up less formally in a question that is often posed to researchers: What evidence would convince you that your analysis is wrong?23 When asked by scholars working from a positivist perspective (e.g., King et al. 1994), this question voices a Popperian sensibility about how best to assess (causal) hypotheses that make up particular theoretical models (Hawkesworth 2006a). The question presumes that a model's hypotheses can be specified, tested, and assessed with precision against something in the externally observable world. This objectively “collected” evidence (as contrasted with the interpretive perspective on evidence as co-generated) can then be used by the researcher to evaluate the model's posited causal relationships, such that these can be shown to be erroneous.

The expectation is that researchers should be able to spell out the empirical implications of their theoretical models24: for example, that in producing a collective benefit, male subjects will cooperate less than female subjects, implied by a sex-differences model tested in social dilemma experiments (Eckel and Grossman 1996, Schwartz-Shea 2002); or that candidates, once nominated, will move their platform promises closer to the median voter's position for the general election, implied by the median voter model of electoral competition (Downs 1957). By referencing the evidence from an experiment or from the historical record, the researcher can answer the question concerning whether he has been wrong in his characterization of the world (as represented in that a priori model). If male and female experimental subjects cooperate at the same rate or if a political candidate fails to move her platform positions toward the median voter (and yet still wins the election), the models’ predictions have been falsified, and the researcher knows he was wrong. (For a critical assessment of this logic, see Shapiro 2004: 28–36.)
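
Schematically, and only as our own hedged illustration of the falsification logic just described (the function and the numbers below are hypothetical, and an actual study would rely on statistical tests rather than a bare comparison of means), the reasoning runs roughly as follows:

```python
def directional_prediction_falsified(mean_cooperation_males, mean_cooperation_females):
    """Sex-differences model's a priori prediction: males cooperate less than females.
    Returns True if the observed pattern fails to match that prediction
    (in practice this judgment would rest on a statistical test, not a raw comparison)."""
    prediction_holds = mean_cooperation_males < mean_cooperation_females
    return not prediction_holds

# Hypothetical cooperation rates from a social dilemma experiment.
print(directional_prediction_falsified(0.42, 0.41))  # True: males did not cooperate less; falsified
print(directional_prediction_falsified(0.35, 0.50))  # False: the prediction survives this test
```

The point, for the contrast drawn in this section, is that the model and its empirical implications are fixed before the evidence is consulted and are then checked against it.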

In both research approaches, the question seeks to inquire into the trustworthiness of the researcher's analysis. The purpose of interpretive research, however, is not model testing, but the understanding of human meaning-making in context; the goal is not to erase ambiguities, but to understand their sources. For this approach, with its emphasis on immersion in human meaning-making in the field and in archives and its iterative sense-making processes, the question follows a different line of reasoning. Asked from an interpretive perspective, it seeks to inquire into the logic and explanatory coherence of the analysis, rather than the “goodness” of the model: How would you know if there were something else afoot in this situation that might be a better explanation of the puzzle you are seeking to explain?

Framed in this way, the issue is the adequacy of explanation and analysis—the explanatory coherence of the argument. To address this question, an interpretive researcher will point to (1) the consistency of evidence from different sources (the intertextuality of the analysis), (2) the ways in which conflicting interpretations have been engaged, and (3) the logic with which the argument has been developed. The first of these, consistency of evidence from different sources, builds on all the design themes laid out in these chapters which engage an orientation toward meaning-making and its ambiguities, particularly mapping to enable exposure and intertextuality. The second, engaging conflicting or contradictory interpretations, involves the deskwork and textwork in which the researcher points out, discusses, and analyzes the different interpretations (enabled by item 1) in terms of participants’ locations and identities, as well as the researcher's, using the many “clues” recorded and assembled in the fieldnotes. Conflicting interpretations are engaged in such a way that the research puzzle is “made sense of”—the “plot” is “resolved,” so to speak. The connection between the second and third items is “methodological” in its fullest sense: that is, method alone can never produce the denouement of entangled interpretations; that calls for authorial judgment and theorizing.

Answering this question, then, means recognizing evidence (generated through mapping for exposure and intertextuality) that might challenge the researcher's explanation, engaging it in the text, and accounting for it in the analysis. In Becker's words (1998: 210), the reason for searching out and engaging such inconsistencies is “to refine the portrait of the whole—in order to offer, in the end, a convincing representation of its complexity and diversity.” As in the other logic of inquiry, the researcher turns to a marshalling of evidence—only here, the answer rests more on the logic of argumentation, its overall explanatory coherence, than on the logic of statistical analysis.

“Researcher Contamination” and “Bias” Revisited

For the methodological practices associated with positivism, researcher presence and judgment are problematic. From the perspective of these practices, it appears that the ideal researcher would be invisible to those she studied (“disembodied”) in order to minimize her impact on them (see Pachirat 2009b). She would also be emotionally insulated from their reactions to her, as well as from her reactions both to them and to whether the results of empirical tests supported her theoretical expectations. Because this ideal is not humanly possible, positivist methodologies set up “controls” on research, researchers, and research subjects to contain or, ideally, entirely avoid researcher contamination and bias.

Given the positivist goals for knowledge—to achieve universal, a-historical causal laws—these methodological controls make sense. In contrast, from an interpretive logic of inquiry in which the researcher him- or herself is the primary “instrument” of data generation and sense-making and where iteration is intrinsic to the research process, these sorts of controls may stymie research or even stop it before it can get started. Research designs that seek to control for “contamination” and “bias” do not fit interpretive methodological concerns. The unsuitability of control-based design for interpretive research does not mean, however, that interpretive researchers are not concerned about the trustworthiness of their research. In the preceding section and previous two chapters, we have shown the sorts of methodological practices developed by interpretive research communities for achieving trustworthy research, yielding evaluative criteria that fit an interpretive logic of inquiry. These criteria, however, pose challenges for positivist understandings of and expectations for research design which often affect the evaluation of interpretive designs at the hands of reviewers of various sorts. In closing out this chapter, we engage some of these, showing how they appear differently with respect to matters of bias and research trustworthiness in the light of these two very different logics of inquiry.

Take, for example, the positivist methodological concern that researcher presence will interfere with the path to knowledge, threatening causal inference, in particular. In interpretive methodology, researcher presence is understood as inevitable and in some cases invaluable! For instance, should a researcher, whether in all ignorance or by intention, violate local expectations that attach to one or another of his demographic characteristics (e.g., sex, class), the resulting response may well be a key learning experience. This is a central concept in ethnomethodological and other norm violation research, and it is in keeping with Kurt Lewin's idea that the best way to understand something (e.g., an organization) is to try to change it (a point also made by feminist researchers, e.g., Cancian 1992: 633). Shehata (2006) illustrates this in noting that his intrusive presence and the extent to which he challenged social class taboos, often inadvertently, contributed greatly to how he came to understand the operation of social class in Egypt.

Another positivist concern is that research participants will “perform” for the researcher—act in ways that they would not naturally act if the researcher were not present. The intentional masking of “backstage” views, attitudes, and opinions by research participants is possible, perhaps even likely in some circumstances, as all persons (including researchers!) make decisions about what, and how much, to reveal about themselves, sometimes with strategic intent. With prolonged observation, researchers can come to see participants and their words and acts in context, which will put “performing for the observer” into perspective (Lincoln and Guba 1985: 304–5). Or, as Liebow (1993: 321) remarked, in the context of participant observation studies, “. . .one returns day after day and month after month to the study situation, and lies do not really hold up well over long periods of time.”25

But more than that: to underscore a point raised in Chapter 5, interpretive researchers are less likely to understand “performance” as a problem than to see it as data. Invoking Goffman's (1959) backstage–frontstage distinction advances one perspective on the matter: all of us foreground a “presentation of self,” seeking to keep other forms of self-knowledge private. The implication that is sometimes brought into play when this language is invoked—that backstage identity is somehow more real than frontstage presentation, or performance—is unwarranted from the perspective of interpretive research. When participants do “put on a show,” that response is itself of intrinsic interest to the researcher. For example, reading across interviews and observations intertextually, Allina-Pisano (2009) and Agar (1986) both found that research participants had exaggerated certain claims. Allina-Pisano (2009: 68) described the exaggerations she encountered in a rural village in Russia as “part of broader social narratives and a liturgy of lamentation that is shared above all with outsiders.” Agar (1986) analyzed the discrepancy between truckers’ widespread complaints about specific problems (complaints they contrasted with the then-popular movies portraying independent truckers as cultural heroes) and his own observations of the rarity of these problems as he traveled with the truckers and analyzed industry accident data. Treating these exaggerations as data enabled both authors to understand their study settings in ways they might not otherwise have been able to do.

Even more importantly, interpretive presuppositions contest the assumption that there exists some “pure” or “authentic” conduct on the part of research participants. Instead, all human conduct is understood in terms of the myriad historically constituted power relations that are part of all social settings. (For a theoretical framework that elaborates these ideas, see Scott 1990.) Researcher presence deserves attention and analysis, and whether it poses a problem or presents an opportunity should be assessed according to situational, contextual, and theoretical factors, rather than being assumed automatically to be an obstacle to trustworthy knowledge claims.

And then there is the concern about confirmation bias, that the researcher searches only for evidence that confirms her preferred answer to the research question. First, interpretive research does not, and cannot, rest on a search for or selection of data in any kind of perfectly controlled or random sense. Researchers give up such control when they enter research participants’ worlds; and randomization is impossible because of the limitations on compiling a complete list (the “sampling frame”) of everything that occurs in the field. Instead, by intentional strategy and design, interpretive practice means mapping the variety of people, places, events, texts, etc., to expose the researcher to multiple perspectives on the research question. Researchers offer “situated knowledges” (Haraway 1988), each related to location: knowledge from somewhere. Reading intertextually across the many forms of evidence (spatial, text-based, visual, numerical, experiential, etc.) attunes researchers to the complexities of lived experience. In the archives as well, the multiple “voices” from the texts of, for instance, individual authors, organizational task forces or community manifestos attest to struggles over meaning-making and narratives. Most pertinent to the concern with confirmation bias are the long-standing practices and checks on researcher sense-making discussed in this chapter. Interpretive researchers, too, search for “disconfirming evidence.” That this practice is not consistent with falsifiability, Type I and Type II errors, or other aspects of the positivist framework of knowing does not mean it is less systematic (or rigorous). Instead, these overlapping checks and research practices enact a methodological rigor consistent with a logic of inquiry focused on the interpretive purpose of understanding meaning-making.

Second, the question about researchers intentionally choosing evidence that supports their argument while ignoring evidence that undermines it evinces an anxiety that is not unique to interpretive research: researchers working in other methodologies are also capable of “cooking the books” (and there are plenty of examples of that from laboratory research; see, e.g., Resnik 1998). What keeps researchers honest is an unwritten, unspoken, yet nonetheless tacitly known and communicated ethical code, largely articulated only when it is broken. Interpretive scientists are as committed to honest practices as any other kind of scientist; deceitful practices know no methodological borders. Moreover, acknowledging issues in knowledge generation, interpretive researchers continue to strive for transparency in their sense-making, including through reflexive checks on those processes. Demonstrating familiarity with interpretive research sensibilities and practices in a research design signals that the researcher is aware of these many issues.

The central methodological point that we are seeking to underscore here is that interpretive researchers are not captives of what they see, hear, or read—they are not trapped by what people tell them any more than they are by their prejudices. They are alert to the possibility of partial knowledge and multiple perspectives. Neither of these can be avoided or controlled for. But they can be acknowledged, engaged, and analyzed. Reflexivity aids in this process as researchers ask not only about their own meaning-making but also about what they are not hearing, about the silences in their interviews, readings, and observations. Inquiring into the meanings of such silences, whether chosen or imposed, is a major marker of quality in interpretive research. This is not to claim that reflexivity is a panacea for the issues raised by knowledge that is perspectival, any more than positivist controls can achieve that logic's ideal of objectivity. No one can be fully transparent to herself (Fay 1996, Luft and Ingham 1955), and all research endeavors proceed based on some set of presuppositions. The interpretive commitment is to increase understanding of the ways in which the characteristics of individual researchers and their academic communities affect the production of knowledge in the human sciences. Research designs that discuss the role of reflexivity in the project communicate this commitment to reviewers and other readers.

Summing Up

Table 6.1 summarizes the discussion presented in Chapters 3 through 6, bringing together design concepts that are particular to a specifically interpretive research project (the first column) with those that commonly appear in discussions of research designs in general but which are, in fact, specific to positivist methodological assumptions (the second column).

TABLE 6.1 Contrasting approaches to research and its design

Interpretive Methodology | Positivist Methodology

Research orientation
  meaning-making | measurement
  contextuality (in re. knowledge) | generalizability (in re. knowledge)
  hermeneutic–phenomenological sensibility: explanatory description (answering “why?”) | prediction tied to causal laws (answering “wherefore?”)
  constitutive causality | mechanical causality

Design attitude
  abductive logic of inquiry: iterative, recursive, starting from surprise/puzzle/tension deriving from expectations vs. lived experiences | deductive logic of inquiry; inductive logic as precursor to deductive inquiry
  prior knowledge, expectations (experiential, theoretical) | clarity of model; prior experiential knowledge deemed irrelevant or potentially biasing
  dynamic flexibility in implementation of design as learning occurs | fixed, a priori design; control
  participants = agents with valued local knowledge; researchers as experts in processes of inquiry | participants = subjects, informants; researchers as subject-matter experts
  research as “world-making” | objective description

Getting going
  educated provisional sense-making; start with prior knowledge > the hermeneutic circle-spiral | theories > concepts > hypotheses > variables
  investigating | testing
  access questions; choices: of settings, actors, archives, documents, . . . (relational turn in field research; ethical and power dimensions; active learning in the field) | case selection; researcher in control (access is subordinated to selection)

In the field or archives
  mapping for exposure and intertextuality | sampling
  bottom-up, in situ concept development (learning) | a priori concept formation (separated from operationalization)
  exploration of concepts in ordinary language, local knowledge terms | operationalization of concepts
  revise design as needed | changed research question requires research re-design and re-start

Analysis of evidence
  hermeneutic sensibility: coherence, logic of argumentation, . . . | falsifiability

Evaluative standards
  trustworthiness | validity, reliability, replicability
  systematicity | rigor
  reflexivity, transparency; engagement with positionality | objectivity

The table shows the rough equivalences between the concepts and phrases that are central to these two different logics of inquiry and their enactment in research designs. The order of the entries from top to bottom represents, very roughly, the broad orientations of these two approaches to knowledge, the generation and analysis of evidence, and associated evaluative standards. This order, however, does not necessarily reflect the dynamic processes that characterize the actual conceptualization and implementation of research designs.

The table, particularly the comparison of the two columns, can assist those new to interpretive methodologies to understand and respond effectively, in a non-defensive way, to positivist interlocutors. For example, if a researcher's objectivity were challenged, that entry in the table under the positivist methodology column would lead him to an interpretive response opposite it under the interpretive methodology column: he might explain that, given that his research purpose focuses on meaning-making, his task is about understanding research participants’ worlds from their perspectives, rather than portraying an objective reality from a point outside their worlds. Or, if an interpretive study's “sampling” procedure is challenged, the table would lead the researcher to a discussion of mapping for exposure and intertextuality—concepts that can be used to flesh out the ways in which interpretive researchers search out variability and multiplicity (even as they lack the type of control implied by the sampling term).

The contrasts in terminology highlight some of the ways in which the concepts or phrases in the right-hand column, grounded as they are in positivist philosophical–methodological presuppositions, are inadequate for interpretive projects and at times even detrimental to their goals and sensibilities. The entries under the interpretive methodology column have a long history in interpretive literatures and research practices, although not all of them have been used in these ways before. We introduce them in this comparative context, drawing on interpretive methodological traditions, in ways that emphasize their continuity and consistency within an interpretive approach. We recognize that newer design concepts are bound to feel and sound strange by contrast with those that have been habitual research-speak. Only with widespread usage can new concepts acquire the recognition and legitimacy that will resolve this difficulty.
