5

THE RHYTHMS OF INTERPRETIVE
RESEARCH II

Understanding and Generating Evidence

Having thought through the locations in which and the actors or texts among whom or which he will search for evidence to address his research question, what sort of evidence should an interpretive researcher look for? What is its ontological status, and how does he indicate in the research design what he will be seeking? We continue here with the rhythms of interpretive research, making a second pass at matters of accessing settings and actors, researcher role, degrees of participation, and positionality within the setting, but now engaging another level in our hermeneutic–phenomenological circle-spiral.

Much as “empirical” has at times been understood (and perhaps in many corners of academia still is) to mean “quantitative” research alone, “evidence” has in the last decade or so often been construed narrowly as that which results from experimental research designs—especially under the influence of the contemporary “evidence-based” movement, such as in medicine, policy, and management (for critical assessments, see, e.g., Black 2001, Clements 2004, Parsons 2002, “The Evidence-Based Policy Movement” 2002, Trinder with Reynolds 2000). But “evidence” has broader meanings than that (as does “empirical”). For interpretive researchers, empirical evidence is understood as coming in a variety of forms, with no single form pre-judged as superior to another (e.g., privileging “hard” data over “soft” ones). Data occur in various shapes because they emerge from the varieties of human activity, from physical artifacts (trophies, paintings, built spaces) to acts (performance evaluation, company picnics, caring for others) by different actors (workers, scholars, organizations, governments, societies) to language use (in politicians’ speeches, magazine articles, political cartoons).

In this chapter, we discuss the character of evidence from an interpretive ontological and epistemological position. We engage not only the variety of its forms, but also the ways in which it is “generated” (rather than “collected”) and, in many circumstances, co-generated between researcher and research participants, as well as by researchers as they work with research-related documents and other materials. Field practices for generating evidence should be discussed in a research design, including how the researcher intends to map the study terrain (literally and/or figuratively) for exposure and intertextuality.

The Character of Evidence: (Co-)Generated Data and “Truth”

Positivist research designs appear to assume that the data for a project are in some sense lying around in a field, just waiting for a researcher to collect them. This ontological status of the material that constitutes evidence for a project hinges on an understanding of “evidence” that rests on the etymology of “data.” Because “data” means things that are “given” (as noted in Chapter 4), this understanding suggests that evidence exists independently of the research project that searches for it. The challenge for the researcher is to locate the data and collect them. This makes sense from a perspective that sees the research world and the researcher as completely separate, and separable. Moreover, analysis of those collected data means assessing findings’ proximity to “real world” situations—with what degree of certainty can they be said to provide an accurate reflection of the world studied?—leading to the expectation that all research designs should begin with hypotheses that are falsifiable (through whatever means of testing is appropriate to the methods being used).

But interpretive researchers see the research world and the researcher as entwined, with evidence being brought into existence through the framing of a research question and those actions in the research setting that act on that framing. In this view, data have no prior ontological existence as data outside of the framework of a research project: the research question is what renders objects, acts, and language as evidence—for that specific research question. Rather than being “given,” in this view data are created, at the very least by the research focus, which distinguishes among acts, events, persons, language, etc., that are relevant to the research and those that are not. Complete, all-encompassing perception and description are humanly impossible, whether in everyday or in research contexts: the “frames” or “lenses” in one's mind's eye filter out those elements of the perceptual world that are not central to concern in a given moment, and they “filter in” those elements that are relevant. Research is conducted with different foci, and in each of these the researcher superimposes a frame—the research question—on actors’ lived experiences of cultural, social, political, economic, and psychic realities, in the past, present, and/or over time.

Seeing, in other words, is always partial. That conceptual and perceptual act is a selective one, and the research question “tells” the researcher what the research-relevant data and their likely sources—places, events, persons, documents, other objects—are. Consider, for instance, the specks of dust on Martin Luther King's shoes as he delivered his “I have a dream” speech. Few scholars or commentators mention them because they are of no consequence for understanding what is deemed to matter about that event (Jacqueline Stevens, personal communication, 2 September 2010). Imagine, then, a forensic investigation in which such details might, in fact, be the very key to understanding!1

In selecting things and people, events and acts to attend to, as relevant to the research question at hand, the researcher may be said to “generate” data through research processes.2 But more, still, than that: from an epistemological perspective, the interpretive researcher is trying to understand things, events, and so on from the perspective of everyday actors in the situation or, in the case of archival research, the equivalent in the form of written or drawn representations of or reflections on contemporaneous activities. Admitting of the possibility (and legitimacy, from a scientific perspective) of local knowledge in the search for understanding contextualized concepts and actor meaning-making of events, etc., opens the door to knowledge generated by others besides the scientist. Sense-making by the researcher depends, in this view, on sense-making by those actors, who are called upon to explain these things and events to the researcher (whether literally, in interviews, or in the common conversations of everyday living, or less directly, in written or other records that constitute the material traces of acts, things, and words). We might then say that research project evidence is “co-generated” by actors and researcher together—a statement that could be extended to the interactions of a researcher with research-related texts, whether historical or contemporary. Texts’ authors or paintings’ artists—even those still living—may not be able to “talk back” to researchers directly, but interpretive scholars doing historical work have been innovative in developing intertextual and other techniques to check their own interpretations of those authors’ intended meanings as understood in their own times and by subsequent interpreters (e.g., Brandwein 1999, 2006, 2011; see, also, Fay 1996, Chapters 8 and 9).3

Understanding data to be co-generated means that the character of evidence in an interpretive project cannot be understood as objectively mirroring or measuring the world. The researcher is not outside that which is under study. Moreover, in field and archival research focused on meaning-making, the “research instrument” is the researcher in his or her particularity, as Van Maanen (1996) has argued with respect to ethnography. A methodological starting point for interpretive research design, then, is that both the researcher's prior knowledge and the embodiment of that knowledge may affect the data, whether these are interactively co-produced (as in interviews and participatory interactions) or where co-production means the researcher interacting with documents and/or other physical materials. A different researcher, possessed of different characteristics and prior knowledge, conducting the “same” set of interviews or examining the same materials, may (co-)generate data that vary in content and form from those produced by another researcher. Whereas this is what inter-rater reliability measures in positivist research are seeking to control for and prevent, in interpretive research it is not perceived as a threat to knowledge claims or research trustworthiness. Instead, researchers seek to be as transparent as possible about how they generated their evidence and the knowledge claims based on that, including the ways in which their own personal characteristics and background have contributed to that data generation. This is part of the notion of positionality discussed in Chapter 4 (see Chapter 6 for further discussion of how researchers check their sense-making).

Moreover, the potential existence of such differences among researchers says nothing about greater or lesser “accuracy” or “truth” of the data because it is expected that research participants respond to the particularity of researchers. For example, if a research participant is female, and male and female researchers interviewing her co-generate different interview evidence, it does not follow that one set of interview materials is “false” and the other “true.” Instead, differences may reflect gender power differentials (among many possible explanations), and the interpretive researcher would be expected to make these transparent, reflect on them, and consider their contribution to the knowledge claims advanced (discussed further in Chapter 6). Watson (2011: 210) pointedly observes that “[a]mong most academic researchers there is surely some awareness that philosophers like Austin (1962) long ago established that speech is action (and never just ‘saying’), and social scientists like Goffman (1959) showed that all communication has a ‘presentation of self’ dimension.”4

The scientific import of participants’ responses lies in the significance of what they and/or other situational materials narrate relative to the researcher's overall developing understanding—the parts having meaning in relation to the whole (as described in Chapter 2 relative to the hermeneutic circle). This requires a heightened transparency about analytic processes, achieved through reflexivity. Walby (2010: 645) comments on the “reflexivity [that] is part of the relationality of the encounter,” with respondents exercising reflexivity in the interview every bit as much as the researcher is. As Fujii (2010) shows, it is not that researchers (as with others) cannot detect lies, but that lies, rumors, inventions, denials, evasions, and silences are themselves potentially data that are relevant to the unfolding analysis.5 To put it somewhat differently, interpretive researchers are as interested in the frontstage as they are in the backstage, in Goffman's (1959) terms, or in what is made publicly legible, on view in the open square, as much as in what is hidden behind the façade or masked in the blind spot—to draw in the “Johari window” (Luft and Ingham 1955)—without attributing a “realer” ontological status to what is “behind” the presentation of self than to that very presentation. Although such situations are difficult to foresee, researchers might anticipate their possibility in the research design, depending on the research question, and discuss their potential implications for knowledge claims.

Positivist–qualitative researchers, by contrast, hold out for the possibility of objectivity in their interviews (and other data “collecting”)—that is, for their ability to generate knowledge from a point external, conceptually, to the research setting. This is what explains the treatment of researchers’ personal characteristics (including their prior knowledge) as irrelevant to knowledge creation (see Note 8, Chapter 4); and it is what, in that view, makes the replicability of interview and observational data by another researcher conceptually possible (discussed further in Chapter 6 and in connection with data archiving in Chapter 7). From this perspective, unless lies are being treated as a form of data, they are seen as errors that undermine accurate representation of what really occurred or beliefs or feelings truly held.

In interpretive research practice, researchers’ prior knowledge and personal characteristics are actively theorized in considering the trustworthiness of knowledge claims (see discussion in Chapter 6).6 At the design stage, researchers can try to anticipate concrete possibilities, e.g., how their persona may affect access to, and interaction with, various kinds of people or situations in particular research settings. Still, as Shehata (2006) details, researcher identity, too, is co-constructed: research participants are always “reading” the researcher's presentation of self, looking for signs (e.g., of equality or condescension, sympathy, fear, hostility) and interpreting and acting on them, much as we all do in ordinary, non-research, everyday life. This means that at the design stage, such anticipation can only be preparatory and conjectural, rather than predictive.

Understanding data to be co-generated also clarifies why use of criteria appropriate to positivist research designs, such as replicability (further discussed in Chapter 6), can mislead designers (and evaluators) of interpretive research. For example, if evidence is not understood as objectively mirroring or measuring the research world, the positivist standard of falsifiability, which rests on data findings’ close approximation of reality, needs to be replaced with other approaches to the assessment of interpretive researchers’ knowledge claims. The inadequacy of this standard for application to interpretive research is clearest in its statistical incarnation in the form of Type I and Type II errors, concepts that posit a “true” world (a “population”) that can be misrepresented by a randomly drawn sample in two possible ways: either the acceptance of a hypothesis that is contrary to that “true” world or the rejection of a hypothesis that accurately captures it. This statistical device requires researchers to assign probabilities to these types of errors (i.e., levels of statistical significance and their associated confidence levels, which express confidence in the accuracy of the sample data)—which further testifies to the positivist epistemological presupposition that researchers are capable of getting closer and closer to that singular truth. In interpretive research, to the contrary, the goal is not to ascertain the singular truth of the “research world” but its multiple “truths” as understood by the human actors under study (or as expressed through their various artifacts)—including the potential for conflicting and contradictory “truths.” The expectation, then, that all research designs should contain “falsifiable hypotheses” reveals a misunderstanding of the character and purposes of interpretive research.
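For readers who want the statistical device stated compactly, the two error types can be rendered in conventional textbook notation (our gloss, not drawn from the works cited), with H0 the null hypothesis:

```latex
% Type I error: rejecting a null hypothesis that is in fact true.
\alpha = P(\text{reject } H_0 \mid H_0 \text{ true})
% Type II error: failing to reject a null hypothesis that is in fact false.
\beta = P(\text{fail to reject } H_0 \mid H_0 \text{ false})
% The significance level fixes \alpha in advance; e.g., a 95% confidence
% level corresponds to \alpha = 0.05.
```

It is precisely this assignment of fixed probabilities to misrepresentations of a singular “true” population that the interpretive position outlined above declines to adopt.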

Forms of Evidence: Word-Data and Beyond

In a key departure from positivist researchers, who approach data generation in the expectation that the evidence created by research processes will, ideally, be capable of being turned into the quantitative indicators constituting a “data set,”7 interpretive researchers do not privilege quantitative forms of data over other forms. Instead, they engage data-generating tasks in the expectation that the evidence created by their research processes will typically retain in analysis the form it had in its origins. This is most often the form of word-data, but interpretive research also encompasses numerical data, such as accounting reports or accident statistics, when that is their situated form (e.g., Czarniawska-Joerges 1992, Gusfield 1981, Munro 2001).

Interpretive researchers might also generate quantitative indicators that “stick close to the ground,” so to speak, in the sense that the logic behind the indicator is transparent (as in a relative frequency measure, by contrast with a regression coefficient). For example, to compare pre- and post-“9/11” US media representations of Saudi Arabia, Azerbaijan, and Kazakhstan, Oren and Kauffman (2006: 119) used relative frequencies of words’ appearances, a statistic that adjusts the absolute number of references to thematic representations of those states (e.g., “oil exporter,” “political repression”) in terms of the total number of all references to each state (for the media sources selected from a wide range of newspapers). These statistics enabled them to show, for example, that references to Saudi-born terrorists had increased as a proportion of the (increased) overall coverage of Saudi Arabia after 9/11 (2006: 141). They go on, then, to more nuanced, interpretive readings of the changes in coverage of “terrorism” compared to other thematic representations (“oil exporter”) within and between states. (These examples put the lie to the idea that it is the use of numbers that marks the difference between quantitative and qualitative or interpretive methods!)
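The logic of a relative frequency measure of the kind Oren and Kauffman describe can be sketched in a few lines of code. The counts below are invented purely for illustration; they are not taken from their study.

```python
# A relative frequency adjusts the raw count of references to a theme
# (e.g., "terrorism") by the total number of references to the state in
# the media sources examined. All counts here are hypothetical.

def relative_frequency(theme_count: int, total_references: int) -> float:
    """Share of all references to a state that invoke a given theme."""
    if total_references == 0:
        return 0.0
    return theme_count / total_references

# Hypothetical pre- and post-9/11 counts for one state:
pre = {"terrorism": 4, "total": 120}
post = {"terrorism": 45, "total": 300}

pre_share = relative_frequency(pre["terrorism"], pre["total"])
post_share = relative_frequency(post["terrorism"], post["total"])

# Even though overall coverage grew, the comparison tracks the theme's
# share of that coverage, not the raw counts.
print(f"pre: {pre_share:.3f}, post: {post_share:.3f}")
```

The transparency the text points to lies in the arithmetic itself: any reader can verify a single division, whereas the logic compressed into a regression coefficient is far harder to inspect.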

This expansive and inclusive view of evidence-types has considerable importance for research design. A researcher needs to think in depth about how the forms of the evidence to be generated might relate to the initial research question. Does the research question point to the significance of meaning in the form of stories that might be gleaned from particular documents, letters, or reports identified? Might situational meaning be conveyed through the design and materials of a building or the layout of a neighborhood (which might add spatial analysis to a research design)? Or is the research question best explored through persons’ conduct, and if so, what does the researcher anticipate being able to observe, and why are such observations crucial to the project? As a brief example, a policy project could begin with government documents whose official, collective meanings might be contrasted with street-level workers’ understandings of those policies, the latter generated through interviews or, as in Maynard-Moody and Musheno (2003), directed story-telling. In turn, observations of those workers’ acts may reveal yet other ways to analyze both official policy and individual workers’ or clients’ articulated views. Because of the time-intensive character of these various modes of generating data in their several forms, a research design needs to demonstrate a sensitivity to the potential contribution that various data forms can and cannot make to the project.

A mismatch between research question and the choice of data-generating method(s) or of kinds of settings, actors, texts, etc.—or even of time of day or season—is the key failing of a research design. Such a mismatch does not arise from the necessary changes that field realities impose. Instead, it is a matter of design logic in preparing the research project, and it can—or should—be caught in reviewing the research design long before the researcher heads to the field. Mismatches are produced when the particular method chosen will not yield the kinds of data the researcher needs to address his question, or when the kinds of data the researcher anticipates generating are not appropriate sources for the sorts of evidence needed to explore the research question she is puzzling about. Research based on interviews with physicians or on medical records review, for example, cannot generate data useful for a research question concerning the mutually influencing interactions between patients and doctors; some form of observation of such interactions would be needed so that the design generates data relevant to the research focus. Or, for research that seeks to understand from their perspectives why impoverished citizens continually re-elect wealthy representatives, for instance, data from expert interviews with pollsters and political psychologists will not help. Experts’ views, while important for certain research purposes, are still removed from the firsthand sense-makings of patients or citizens; and so if the research question focuses on the latter, there is a poor fit between it and the proposed data sources on which these two research designs rest.

Although there is no simple, general test in interpretive research for assessing ahead of time what evidence will be generated, the character of different kinds of data sources—documents, participant observation, interviews, material artifacts, audio-visual materials—suggests whether these will yield the “right” kinds of evidence: evidence that will be appropriate to and adequate for the question specified. Interpretive researchers need to think through the broad array of evidentiary possibilities that might be available to them in their chosen settings and determine which are appropriate to their research topic. In the design itself they might explain how their choices of data sources and forms, and the methods for generating these, connect logically to their research question.

Mapping for Exposure and Intertextuality

Once the specific field setting or archive or set of actors has been chosen and initial access granted, where within it is a researcher to begin? Interpretive “mapping” in research design means anticipating “the lay of the land” in a particular research site for the purposes of “exposure” and “intertextuality.”

Interpretive researchers believe in the possibility of multiple interpretations of the social and political events and worlds they study. The concept of exposure rests on the notion that the researcher wants to encounter, or to be exposed to, the wide variety of meanings that research-relevant participants make of their experiences, whether in face-to-face encounters or through written records. It rests further on the idea that occupants of various positions within a research setting might be expected to have different views on the subject under study. Interpretive researchers anticipate that experiences and views will vary according to participants’ locations, literally and metaphorically, in the field of study: a neighborhood within a community (especially where these reflect class, race-ethnic, or other demographic factors that might be of interest to the researcher because they impact on experience and shape sense-making); a hierarchical position within an organization (because what one sees of and knows about organizational activities changes with exposure to different aspects of its work practices); a status and/or power position within a political configuration (e.g., party, legislature; for similar reasons). Participants, in other words, have their own “positionalities” analogous to researchers’.

The goal of mapping is to maximize research-relevant variety in the researcher's exposure to different understandings of what is being studied—particular events, policies, organizations, ways of life, and so forth. Such exposure to ideas and interpretations can require meeting and engaging different actors, in different roles, at different levels of responsibility, in different locations in the field. These can be different departments, horizontally, across a corporate hierarchy or different levels of a bureaucracy (e.g., Pachirat's, 2009a, view of activities on the floor from the catwalk of the slaughterhouse); different neighborhoods within a community; different contending parties within a social movement (à la Zirakzadeh's intention to engage different groups of ETA activists); different members of a social group. Interpretive research can also entail shadowing a single actor (e.g., a public figure in a leadership position; a CEO) which gains exposure of a different sort—e.g., to that key actor's network, way of life, organizational or community “map.” Analogously, in a document study, a researcher might select documents that reflect the many different viewpoints actors had expressed concerning historical events.

The interpretive researcher maps these positional differences across research participants in the research setting. Mapping in this metaphoric sense means identifying the different “kinds” of people or roles (e.g., shop floor workers, agency directors, community leaders, paraprofessionals), the various locations, and the different kinds and sources of documents and other artifacts that may be available in the community, polity, organization, or other setting under study. Activities and people may vary not only by location in the field but also by time of day, day of week, and season—in sync with customs, standard operating procedures, and other rhythms that characterize the lived experience of research participants. The differences of interpretation and meaning that can emerge from exposure of this sort provide, depending on the research question, precisely the type of material that is of interest to interpretive researchers. Exposure supports interpretation.

Mapping in this way also points to the diverse forms of evidence that might be available, from interviews to objects to written reports or speeches. These can be “read” across each other in intertextual fashion for what they reveal about different interpretations of particular events, persons, disputes, and so forth. Its place in social science credited to Julia Kristeva's mid-twentieth-century writings, drawing in this case on Mikhail Bakhtin, the concept of intertextuality has a long history in Biblical hermeneutics and literary analyses of poetry and fiction, referring to the ways in which one text invokes another through the repetition of a key phrase, thereby drawing the other text's meaning into the understanding of the focal one. To speak in English, for example, of a serious storm as “raining for forty days and forty nights” invokes the story of Noah told in Genesis. As L. Hansen (2006: 8) writes, “Texts build their arguments and authority through references to other texts: by making direct quotes or by adopting key concepts and catch phrases” (see also Weldes 2006).

We extend the term in metaphoric fashion beyond texts alone to the ways in which different types of data draw on (“cite”) material from other kinds of data, such that the researcher can “read across” them in interpreting meaning. Here, it is not just the appearance in one text or text-analogue of another; it is the active sense-making of the researcher, seeing “intertextual” links across data sources in ways that contribute to the interpretation of those data. As Brandwein (2006: 243, n. 24) observes, “Terms gain their meaning from their place within an extensive network, and in order to understand these terms, [researchers] must fully trace the entire network.” Interpretive researchers “read” evidence analytically from a variety of sources “across” the experienced reality of the situation under study (whether rendered in literal texts or, analogously, in acts and/or physical artifacts, historical or current), to assess meaning-making around a particular idea, concept, or controversy. Prior knowledge of terms and concepts and theories that may usefully inform that reading is key. A researcher analyzing the US National Aeronautics and Space Administration, for instance, would miss something of significance if he did not know that it named its prototype space shuttle Enterprise, at many Americans’ request, after the flagship of the science fiction series Star Trek (Weldes 2003). Intertextual readings of this sort look for the dimensionality, ambiguity, and possible contradictions that might arise from broad examination of evidence, the researcher remaining open to the possibility of consensus and agreement without presuming or privileging it. It is seeing this intertextuality, and drawing on it in analysis, that leads to the “thickness” of interpretation—hearing in the Jewish trader’s story (in Geertz’s field research example) echoes of the Berber tribesmen’s logic, and so on, in ways that enable analytic sense-making.8

Mapping for exposure and intertextuality is closely tied, in other words, to epistemological presuppositions and knowledge claims: the wider the map, the more varied the exposure, and the more transparent the account of these, the clearer the researcher's knowledge base and the more trustworthy the claims. In discussing the construction of memory, Wood (2009: 126) notes her skepticism concerning respondents’ memories of having heard a radio broadcast of a tape in which a pilot is heard asking headquarters if he should really bomb streets where he sees civilians. Finding the same report, later, in two written sources, one academic, the other journalistic—what we are here calling intertextuality—leads her to modify her skepticism. This transparency enables a reader to follow her thinking and enhances our trust in her analysis and knowledge claims.9

These three concepts—mapping, exposure, and intertextuality—hold for archival or other documentary research, as well as for interview and participant observational studies. Here, exposure represents the often circuitous process of locating documents that will enable the researcher to map different, perhaps contentious, views in the historical account. It also may point the researcher toward other documents or archives elsewhere than what were planned for in the research design (C. Lynch 2006). L. Hansen's use of the concept of intertextuality (2006; see, esp., Chapter 4) in reference to following citations from document to document likewise builds on a researcher's exposure to initial texts that lead to yet other texts in a hermeneutic spiral fashion.

The concept of exposure can be contrasted with the idea of “sampling” as used in qualitative methods—whether purposive (the intentional selection of persons, settings, or documents thought to have something to contribute to the study); snowball (in which one person, typically, or document leads to the next); or theoretical (the intentional selection of persons, settings, or documents based on analytic grounds, as suggested by the developing theoretical argument; Glaser and Strauss 1967). The language of sampling originates in the probability requirements of inferential statistical science: it is a technical term that refers to the scientific possibility of generalizing from a sample of a population to the population as a whole, within some degree of certainty. The term signals researcher control over the selection process, an implication that often does not hold for interpretive research settings.

We see problems of methodological logic in adopting the term into interpretive methods. Even if initial access to a research site is gained, multiple obstacles may preclude a researcher's control—e.g., the ability to examine particular documents or interview key actors, however these were initially chosen (based on purpose, theory, or snowballing in the field). Moreover, snowballing risks enmeshing the researcher in the network of the initial participant interviewed, something of which researchers are not always cognizant, leading to or reinforcing the silencing of other voices. Although many qualitative researchers now recognize that random sampling is not sensible for initial case or site selection (e.g., Gerring 2007: 87), these various forms of selection that use the “sampling” term seem to do so strategically: the language derives from the positivist paradigm and seeks to show or to argue, by rhetorical means, that a non-random selection of individuals to interview, documents to assess, sites to observe, or cases to explore can be, and is, as scientific as quantitative social science.

Given that the language of sampling retains a sense of researcher control that commonly does not fit field realities, and in the spirit of recognizing the political character of science (that is, its use of rhetoric to persuade other researchers of the quality of a project’s knowledge claims), we would like to see interpretive methodologists and researchers stop trying to force-fit their own research into that mold. We would like to see them give up the rhetoric of the sampling term, which can never mask the fact that these selections are non-randomized, albeit systematic in their own particular fashion, and accept the exposure rationale for selection as scientific in its own right. In our view, “exposure” is a useful replacement for non-random forms of “sampling” as it captures what we think the latter is striving to achieve, without trying to ground it in randomized actions, which qualitative forms of such choice-making do not, and cannot, enact. To speak of choosing cases, persons, settings, etc., focuses more on the dynamic, processual character of research, by contrast with the more stable character oriented toward pre-established criteria suggested by “sampling” (Lee Ann Fujii, personal communication, 3 July 2011).

In positivist–qualitative approaches, the practice of using multiple sources of evidence analogous to intertextuality is often termed “triangulation”—a word taken from the seafaring technique of locating a third, hitherto unknown point using two points of data already known to the sailor. When used by interpretive researchers, it does not convey the expectation that “convergence” across the multiple points of evidence will reveal what is “true” (Mathison 1988). Given the multiple ways in which humans can make sense of the same event, document, artifact, etc., convergence is in fact expected to occur less often than inconsistency or even contradiction (Hammersley and Atkinson 1983, Schwartz-Shea 2006). Here, too, we think it more appropriate in interpretive research to relinquish the language of triangulation, with its realist implications, for terminology that captures the intent of the idea but which is closer to its methodological presuppositions. Intertextuality is such a term. Analyzing intertextually across evidentiary sources is a long-standing interpretive practice; it is a marker of research quality in interpretive studies.

Initial “maps” for purposes of exposure and intertextuality are informed by the researcher's prior knowledge and are likely to be revised by encounters in the field. In addition, access to the varied data sources identified by mapping cannot be guaranteed. But in the research design, interpretive researchers should try to think broadly across these matters. Mapping across distinctive programs that serve the same general population (e.g., payments for people with disabilities or for impoverished children) led Soss (2005) to understand differences in the approachability and responsiveness of government administrators. Mapping what Rwandan genocide perpetrators said in interviews by contrast with what was noted in the official letters of confession led Fujii (2008) to understand how the several spoken representations of the same events revealed coping mechanisms and rationalizations. Mapping a woman's claim to be a victim against testimony from others enabled Fujii (2010) further to understand how dominant, contemporary discourses of the genocide worked to occlude past governmental abuses. Mapping across news media enabled Oren and Kaufman (2006) to see how an event in Azerbaijan was reported differently.10

Mapping potential data sources for intertextual readings can also be a check on the extent to which research participants might be purposely “performing” for the investigator, presenting an intentionally partial or skewed version of events, motives, etc. Participants’ narratives can be made to speak to each other, whether in analytic deskwork and textwork or in actual field-based engagements (e.g., in follow-up interviews, contrasting documentary records), as well as in comparisons with the researcher's own experiences of the same events (see, e.g., Agar 1986: 67–8, Allina-Pisano 2009: 66–70). This is the work that exposure seeks to achieve, mapping not only across persons but across their physical locations, as relevant to a research question, and the different experiences and interests that are assumed to derive from these. In addition, exposure across time, so to speak, also serves to contextualize what a researcher sees and hears: archival research, in particular, often draws on a time dimension in mapping across sources.

Fieldnote Practices

Fieldnotes are another longstanding field research practice (see, e.g., Sanjek 1990, Emerson et al. 1995), one which crosses methodological approaches. These practices have rarely been connected, however, to the concept of research design, despite the fact that they are a major way in which scientific systematicity is enacted in the field. Because interpretive researchers anticipate a voyage whose endpoint is not self-evident, documentation of the research process, including what transpires in the field, is essential.

The fieldnote record enables researchers to be transparent about how they conducted their research. In a diary-like fashion, fieldnotes record day-to-day activities, events, and interviews, plus researcher sense-making of these, especially in light of initial expectations. It is in fieldnotes that the “thick descriptions” of the research site, events, conversations, observed interactions, and so forth are recorded. There, the researcher also reflects on her positionality (see Chapter 4; we take up reflexivity in Chapter 6) and includes other contextualizing comments that will be a reminder later on, especially during deskwork analytic activities, of thoughts, feelings, the texture of interactions, seeds of analysis, and the like. Fieldnotes are also used to track changes made to the initial research plan as a result of field realities, such as unrealizable access to particular documents, field locations, or interviewees.11

The combination of fieldnotes, researcher memory, and embodied experience (and other types of evidence) together provide the material for researcher sense-making. These materials provide the empirical grounding for claims about tacit assumptions, patterns of interaction, and language usage in the field site. But those claims do not rest on the notes alone: as Van Maanen (1988: 117) cautions, the “working out of understandings may be symbolized by fieldnotes, but the intellectual activities that support such understandings are unlikely to be found in the daily records.” Analytic sense-making, done during fieldwork and later in deskwork and textwork, is not, in other words, contained solely in the fieldnotes themselves. And fieldnote practices do not necessarily, and cannot feasibly, entail making those notes intelligible to outsiders.12

Attending to fieldnote practices at the design stage means anticipating issues that might arise in a particular site. For example, extensive on-site note-taking might be infeasible for one reason or another—no time between “job” obligations, no place to sit quietly and concentrate—such that fieldnotes need to be completed in the evenings (Pachirat 2009a), and then one struggles with exhaustion or the desire to let one's hair down and escape fieldwork's strains and burdens for a while, or one is caught up in research-related activities and defers note-taking to the morning or the weekend. Also, note-taking during a research conversation or interview might be disruptive of interpersonal exchanges, leading a researcher to opt for less conspicuous practices. On the other hand, note-taking might be expected by research participants as a commonplace part of research practice, such that it is ignored (Fujii 2010). Participants might even feel slighted if the researcher does not take out a recording device—a notebook or a tape—and even doubt the authenticity of the researcher and the scientific character of the research. Planning for such circumstances is important because of the centrality of fieldnotes to research practices: they record the meaning-making and contexts that enable claims of constitutive causality, that is, claims about why humans act as they do in light of their own understandings of their worlds.
