7

DESIGN IN CONTEXT

From the Human Side of Research to Writing Research Manuscripts

Thinking about research design does not end with access and other such issues, or with the design's production on paper. There is still more to think about: planning beyond the research itself, in both space and time. Field research, of whatever sort, has its own physical and emotional entailments, little talked about in the research design literature; to one extent or another, a researcher can anticipate and plan for these. These days, researchers need to anticipate ethics reviews; but particular issues arise when interpretive methodologies confront protections for human participants. Moreover, renewed demands for data archiving loom on the horizon, posing challenges for interpretive research: archiving invokes the matter of replicability (discussed briefly in the previous chapter), which raises ethical and methodological concerns of its own. Lastly, research designs lay the groundwork for the research manuscript: how might a researcher anticipate that in thinking about the parts of a proposal?

The Body in the Field: Emotions, Sexuality, Wheelchairedness, and Other Human Realities

Much as Weberian bureaucracy theory carries many unspoken assumptions about sex and gender, class, and so forth (Ferguson 1984), so, too, are there a lot of unspoken assumptions embedded in ideas about doing field research. One set of these concerns its “Western” dispositions, regarding, for example, openness with respect to scientific inquiry, along with an implied impartiality and accuracy of governmental and other sorts of data, something touched on in Chapter 4.1 A second concerns emotional, sexual, and physical entailments. Methodological treatments, including the research design literature, irrespective of the methodological approach followed, have not yet taken on board the vast variety of researchers engaged in field and archival research. Where are the explicit engagements with race-ethnic issues in the field? Gender? Sexuality? Physical ability? Class? It is as if the researcher body is (still) male, middle class, Caucasian-European, capable of unfettered physical mobility, and a-sexual.2

We bring these topics in here not only for the benefit of researchers who do not fit this outdated stereotype, many of whom likely already know quite well what arrangements they would need to make in order to live their lives while conducting research, but for advisors and methods instructors, who, like us until recent years, have not been fully cognizant of the assumptions of ablebodiedness, in particular, built into field research methods and their discussions. Here, it is we—instructors, advisors—who need to think more fully in contemplating research designs! This is also why we have set this discussion aside as a separate section, rather than integrating it into the “regular” discussion of design issues in earlier chapters: until it becomes a more normalized feature of the research methods community, such that all methods discussions engage emotions, sexuality, wheelchairedness and other bodily dimensions as a matter of course, it needs to be flagged for attention. Several of these matters need to be anticipated and planned for, in ways that can involve advisors, too (and perhaps even department heads). Although we speak here specifically to those engaged in field research, some of what we say pertains to working in archives.

One common taboo in methods texts and design discussions concerns speaking openly of the emotional roller-coaster that, if unanticipated, can catch the field researcher unawares. Even knowing that it can affect some researchers does not necessarily prepare one for it oneself. Far from home, in a strange place, without a support network of family, friends, and well-worn weekend newspaper and cappuccino routines, loneliness and homesickness can hit at odd moments of the day. Even discussions of “culture shock,” much present in anthropological texts, at least, typically do not focus on these sorts of “ordinary emotions,” dealing instead with the initial anger at and later acceptance of differences in the organization of life—different shopping hours, different foodstuffs, different work habits, and so on. The initial emotional response to that is often: Why can't they do things the way we do them back home?

That is rather different from attacks of fear, or loss, from missing one's fiancé (Reinhardt 2009), one's spouse, parents, friends, and so on (see Ortbals and Rincker 2009, Henderson 2009). Modern technologies—the internet, email, VoIP (Voice over Internet Protocol) set-ups such as Skype, video links, less expensive telephone connections, etc.—have diminished the sense of detachment relative to what it was in even the recent past (well into the 1990s, depending on location), when one might wait for the post for days or weeks on end. Particular research topics, too, may pose their own challenges (see, e.g., Whiteman 2010) and require even greater self-monitoring and self-care. Field research on various forms of violence—domestic, institutional, or political, such as insurgency-related events—may expose researchers to physical and/or emotional brutalities that they have not anticipated. As Soss advises (2006: 143, emphasis added), researchers need to recognize that “the researcher role is a human role”; they need to learn and know their personal limits. For those heading into field settings marked by violence or its potential, planning for their own protection and safety—along with that of research participants, which is the concern of US IRBs and other states’ boards and policies (see discussion below)—must be carefully undertaken.

The extent to which these issues are made an explicit part of a research design is up to the researcher (and perhaps advisors), but they are well worth thinking through. Planning for time out of the field, if and where feasible, or for regular, ongoing support from others may be key to the ongoing conduct of physically or emotionally taxing research. Fieldnotes provide a place to reflect on such issues; reflexivity calls for their discussion in research manuscripts, although a kind of tough field researcher identity seems often to preclude such narratives. Sometimes, just knowing that research has an emotional side, even if this is not commonly written or talked about (other than in informal conference settings, such as the corridor chat or the bar, long after the fact), is enough to lessen the strength of the feelings when they do hit.3

Methodologists have for some time discussed the ethnocentric, even racist, and class-biased character of ethnography and participant observer research: field researchers from the Northern hemisphere studying inhabitants of (former) colonies in the Southern hemisphere (Harrell 2006), as well as American Indians on reservations, a clear, if unspoken, analogue to more explicitly colonial situations (see Bruyneel 2007); and participant observers from wealthier classes studying the poor, the deviant, the outcasts in marginal domestic neighborhoods and communities. It is only in recent years that anthropologists have begun to speak openly of heterosexual relations, sometimes leading to marriage, between themselves and their “informants” in the field (Lewin and Leap 1996); yet Walby (2010: 641) remarks on ongoing silence among sociologists with respect to “the sexual politics of research” in general. That there is something worth thinking about in methods talk with respect to gay, lesbian, bisexual, transsexual, and queer research identities—whether “out” or closeted—is only beginning to be admitted as worthy of consideration and to be discussed explicitly (Lewin and Leap 1996, Wilkinson 2008). The question of whether to become involved in emotional or sexual relationships with research participants while in the field rehearses similar issues regardless of sexual identity, some of them echoing discussions in US universities about professor–student relations with respect to uneven power dimensions (see, e.g., Paludi 1996).

Whether to be out about non-heterosexual identity in field settings adds other dimensions to the discussion. As with other aspects of researcher identity, there is no single answer: at times it might aid access (see Wilkinson 2008), at others, hinder it (see also Walby 2010). In situations in which being out might endanger the researcher, or research participants, other layers of concern kick in. (See criticisms of Humphreys’ research, 1970, in which he took the automobile license plate numbers of gay men frequenting public bathrooms, pickup spots for casual sexual encounters, and followed them home; e.g., Humphreys and others 1976.) Some exploration of these parameters in advance of entering the field setting might be possible; certainly, thinking them through in advance is advised. Whether one includes these thoughts explicitly in a research proposal (and later, in the research manuscript, most likely in the methods section) is a matter of individual judgment, as it will depend on imagined and anticipated readers and local situations, as well as the researcher's own proclivities toward self-disclosure.

As silent as research design treatments, methods textbooks, and other discussions have been about emotions and sexuality, they have been even more so about assumptions of ablebodiedness built into the conduct of research, especially field research. Entire discussions of research design, including this one to this point, do not engage the sorts of considerations required by “wheelchairedness,” as Mike Duijn puts it (personal communication, Fall 2008), the aging body in the field, so to speak, or other forms of physical limitation. Moreover, even when disability is engaged as a topic in field research, it has usually been with respect to studying wheelchaired, learning disabled, autistic, and other “impaired” people (e.g., Casper and Talley 2005), not the challenges posed to researchers.

Whether one is wheelchair-bound, for reasons of accident, genetics, illness, or age, or ambulatory but constrained by blindness (Krieger 2005a, b), rheumatoid arthritis (Felstiner 2000), multiple sclerosis, cerebral palsy, or some other sense impairment or movement disorder (Howe 2009, Mogendorff 2010, Robillard 1999), one may need to give additional consideration to the research settings in which one wants to position oneself and to the role(s) one wants to assume there. At a very basic level, are the buildings and rooms in which one will conduct interviews or read archived materials accessible? Although some nations’ laws now require that buildings, within certain constraints, be made disability-accessible, these stipulations and their implementation are by no means universal. How does one handle toilets that are not designed to accommodate the wheelchaired? If one is sight-limited, how will observation and note-taking be conducted? If one's hearing capacity is limited, does that suggest the use of recording devices that other researchers might disparage (on the argument that they interfere with participant openness and rapport)? If one's speech is impacted, how will one conduct interviews? If one no longer has the agility of a 28-year-old, how will one negotiate seven flights of stairs or hillier, rockier climbs? And so forth.

None of these is ipso facto prohibitive for conducting field research, and we know a handful of researchers who have successfully completed field research projects under such constraints. But as they and others are aware, it takes forethought and planning, and incorporating the outcomes of both into the research design. It may require educating one's advisor(s), if one is a graduate student, to the constraints under which one works and the additional plans one needs to undertake. It may require additional line items in a research grant: much as those limited by language draw on translators, who need to be paid, some of the wheelchaired and others may draw on aides, who also need to be accommodated in a research budget. Physical access to and within archives—Are the shelves reachable? Are the study tables usable?—can be equally challenging, requiring planning, various sorts of accommodations, and budgeting for the same.

There is an even more fundamental, methodological as well as material, question: Does one “pass,” a possibility for those with physical limitations that are not (immediately) visible?4 Or does one let research participants know ahead of time that one needs some form of accommodation? As with other sorts of researcher demographic attributes, which enable access in some situations and block it in others, this question has no universal answer. “Common sense” might suggest advance notice as part of planning, e.g., for an interview: making sure ahead of time that the participant knows what one's access, seating, drinking, toilet, and other needs are. But at least one action research ethnographer we know at times intentionally does not apprise prospective interviewees of his wheelchairedness, feeling that the surprise factor—and their ultimate need to arrange to carry him physically up the stairs where there is no (functioning) elevator—can work to his advantage in the subsequent interview (Mike Duijn, personal communication, October 2008). Shah (2006: 216, emphasis added) comments on both methods and methodological issues when she writes:

Although I carried out the interviews, a non-disabled support worker was present to facilitate access to fieldwork settings, ensure the data collection tools (i.e., mini disc recorder) were working, and assist with any problems that emerged. She could also reflect on the visual dynamics that were shaping the discussions between the interviewer and participant, and take additional field notes when required. . . . On the few occasions where I could not make myself understood to the participant, the support worker would amplify my voice and repeat the question for the participant, thus changing the dynamics between the three people and enriching the interview situation. However, from the outset it was agreed that the support worker should have her own strategies to avoid being drawn into the formal discussion between the researcher and the young person. She did this by positioning herself out of the young person's visual range.

It is worth underscoring her comment that the aide's interventions, far from harming (biasing?) the interview, enriched it! Although Shah presents her comment in the context of “the methodological privileges available to a disabled researcher doing disability research” (2006: 217), we see no reason that these advantages cannot apply also to research on other topics. We look forward to a day when these issues are a central, yet unremarkable aspect of research design thinking.5

Interpretive Research and Human Subjects Protections Review

It is increasingly required that scholars formalize their research ideas as soon as they conceive of them as potential research projects and submit them for some form of ethics review. Unlike journalists who, in the US, enjoy First Amendment protections that allow them to follow the trail of a story, interviewing whoever will agree to it, social science researchers today must submit research proposals for prior review to Institutional Review Boards (IRBs) in the US or to similar committees elsewhere that bear the responsibility of assessing whether the researcher has taken adequate measures to protect the rights of human research participants (and animals, in other arenas of the research world). In this section, we address the specific concerns that arise when an interpretive project involving human participants is reviewed by an IRB in the US. We are aware of parallel policies pending in EU member states, as they develop their own “code of researcher conduct” review committees, largely modeled on their image of US policies and procedures. Other states—Australia, Canada, and the UK among them—have their own boards and policies, which may or may not raise similar concerns. A full comparative analysis is beyond the scope of this volume, but as this is a major concern in a significant part of the research world, we outline the issues here, with US IRBs as our case, in the thought that it may be enlightening for others submitting research for review to ethics and other committees elsewhere.

We begin with a brief summary of the historical background of US policy, as the EU member states’ policies we have seen, which claim to be modeled on the US approach, appear to be ignorant of the ways in which this history, much of which they do not share, has shaped that policy. And the specific privacy laws of the EU and its member states, which drive their data protection policies, are different from US IRB preoccupations. IRB policy is potentially of concern to non-US scholars, too, who collaborate with US researchers. As we have noted elsewhere, US institutions are increasingly requiring non-US research partners to provide documentation of equivalent review at their home institutions (Yanow and Schwartz-Shea 2008). We anticipate that this will influence EU and other non-US policymaking in the near future.

Institutional Review Boards were created in the US as part of federal policymaking that developed between the 1970s and the late 1990s in response to perceived violations of research ethics.6 The “pre-history” of this legislation started with international response to the experimentation on human subjects conducted by Dr. Josef Mengele, in particular, and others during the Nazi regime in Europe. Three international resolutions—the 1947 Nuremberg Code, the 1948 Declaration of Geneva, and the 1964 Declaration of Helsinki (the latter two from the World Medical Association)—sought to define and protect the rights of human subjects by articulating general ethical principles to guide research (respect for persons, beneficence, and justice). The more immediate antecedents to US legislation were specific to the US: medical and psychological experiments conducted by scientists in US institutions, often with federal funding, which came to light or drew attention in the 1970s–1980s. These included the 1932–1972 Tuskegee syphilis experiments (conducted by the US Government Public Health Service);7 the 1951–1974 Holmesburg pharmaceutical tests (funded by the CIA, US Army, Dow Chemical, Johnson & Johnson, and over 30 other federal agencies and commercial companies); Stanley Milgram's 1961 and later psychological experiments (on subjects’ compliance with orders); and Philip Zimbardo's 1971 Stanford prison experiment studying abuse of authority (funded by the US Office of Naval Research). Some would also include Humphreys’ 1965–1968 sociological observation of gay male bathhouse behaviors.

This history led to a series of legislative acts and policy documents: the 1974 National Research Act, which created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research; the latter's 1979 Belmont Report (Ethical Principles and Guidelines for the Protection of Human Subjects of Research); the 1991 Federal Policy for the Protection of Human Subjects, the so-called Common Rule; and the 2001 National Bioethics Advisory Commission (NBAC) Ethical and Policy Issues in Research Involving Human Participants (Volume I) report. Although created at the federal level, these US policies rest on their implementation at local levels, through university-based or private “institutional review boards” (from which the policy takes its name). This includes determining the content of the consent form that participants are required to sign indicating awareness of the possible harms they might incur from participating in the research, as well as deciding whether or not the researcher must administer such a form.

Initially designed on the basis of an underlying, albeit unarticulated experimental research design, the Common Rule extended IRB oversight, beginning in 1991, to other research designs, including field research and even, in some cases, oral history and other forms of humanities research. As part of IRB assessments, board members in some places, at some times, also evaluate the scientific merits of the proposal—despite the fact that this is not part of the federal policy mandate that created these boards. Research designs can figure prominently in such judgments.8 This has become a problem for those conducting interpretive research (and, in some places and times, qualitative field research), when board members are familiar with and expect an experimental research design (with its formal hypotheses, specified variables, and testing, validity, and reliability specifications) but, instead, are confronted with a research design with a very different character.

To recap what we have laid out in previous chapters: experimentalists have, on the whole, more control over their laboratory settings and research subjects than field researchers have over their field settings and research participants. This, plus the specifics of their kinds of research and types of subjects or participants, means that experimentalists typically have more power in and control of their research settings than field researchers do of theirs, especially when the latter are conducting research among societal, political, and organizational leaders, experts, and other elites. The open dynamism and requisite flexibility of interpretive research designs, the fact that participants may choose a different form of participation than that initially envisioned by the researcher, the potential risks faced in some projects by researchers, the fact that interpretive research at times begins, in effect, long before the researcher even envisions doing research on that topic, the lack of researcher controls over settings, events, and persons—none of these conforms to expectations of designs that approximate experimental research or to IRB procedures that in effect control for both researcher and participant agency. This does not mean that interpretive (and qualitative) researchers should be let off the hook with respect to protecting research participants; on the contrary! But it does mean that their concerns, and the kinds of permissions and release forms they need to use, are different.

For example, field researchers working in contested terrains, whether among insurgents or among perpetrators and survivors, or among risk-seeking populations and/or practices, such as drug users and graffiti taggers (Librett and Perrone 2010: 739), need to take precautions against inadvertently having their participants tagged as collaborators, traitors, and the like, endangering their well-being and their lives. Requiring signed consent forms under such circumstances would likely achieve the opposite of IRB protection purposes. Oscar Salemink (2003: 4) relates three such incidents, among them the story of Georges Condominas, a French anthropologist working in Vietnam who described a man's marriage in a subsequent publication. Some two decades later, Condominas learned that a subsequent, illegal translation of that book by the US Department of Commerce, distributed to the Green Berets (a US Special Forces unit), had led one of its officers to identify and torture the man. As Salemink suggests, such outcomes can be the result of researcher naiveté as much as of oversight. This account highlights the extent to which publication may pose a far greater risk to participants than researcher “interventions,” even when participants have signed consent forms—or perhaps even because of them.

US IRB procedures carry a strong ethnocentric bias. They assume a population of literate, research-savvy, English-speaking, well-off participants, with access to modern technologies regardless of where in the world they are located. One undergraduate, working on her BA honors thesis, was required by her university board to write her consent form in English, despite the fact that the people she was interviewing were not English-speaking.9 The form had to include a US telephone number which participants could phone should they have concerns about the research—despite the dearth of telephones in their homes and town, the unaffordable expense of a trans-oceanic telephone call had they been able to access a phone, and the fact that no one at the US end of that telephone number spoke their language and they had no access to translators or the ability to pay them.

One feature of interpretive research poses a particular challenge to IRB policies: the fact that research projects often originate in aspects of the researcher's non-academic life, turning into formal research only after the researcher has already gained access, established relationships (although non-research in character), and become familiar with the setting and its “inhabitants” (discussed in Chapter 2). As soon as scientists conceive of their ideas as potential research projects, IRB policy would require them to formalize these ideas as research designs in order to submit them to human subjects protection review. How a board would handle the prior contact common in some interpretive research is unknown, indicative of the ways in which these policies are out of synch with interpretive research practices.10

A second, procedural matter arises out of the flexible openness of interpretive research designs requiring on-the-spot response to what might transpire in the field, but it is a potential issue shared with other forms of research: the extent to which design implementation varies from design plans, and the stated intent of some IRBs to begin to require the equivalent of an “exit license” (Schwartz-Shea, fieldnotes, 19 September 2006). This would explore the extent to which researchers in the field carried out what they said they were going to do in their proposals, in particular (we imagine) with respect to human participant protections. Until now, such oversight has not been enacted, as far as we know, and the threat appears to go far beyond federal policy. Moreover, the extent to which actual research conforms to protocols is at issue in laboratory research itself (one of several problems in pharmaceutical trials reported in Hill et al. 2000, for example), where the image is that conformance is not only the ideal but the reality. If local IRBs begin to institute these sorts of post-review reviews, the ripple effects of this cast stone will spread far beyond the ponds of interpretive and qualitative social science.11

Facing IRB practices, there is little that interpretive (and qualitative) researchers can do at this point other than to be knowledgeable about federal policy, to be aware of the kinds of challenges they might face, and to prepare themselves to respond. There has been little uniformity in local IRB policies and practices from one university to another—federal policy rests on local implementation—and no set of case law and precedent, although this may be changing with the advent of accrediting associations for IRBs.12 Various social science associations, as well as individual scholars, have begun to pay attention to these matters and to try to educate the oversight agencies at the federal level to the needs of non-experimental social science research designs.13 In Librett and Perrone's words concerning ethnographers, “If [researchers] are to return from the pale of academic deviance, a better effort must be made to engage and explain the relevance of interpretive research in an academic milieu obsessed with prediction” (2010: 744).

We hope that this brief discussion might assist researchers engaging in these conversations at the local level. Understanding the differences between what federal policy mandates and what is left to local interpretation and implementation might help as they respond to issues that might arise when boards examine interpretive research with an experimental design in mind. We also hope that this might alert researchers in other parts of the world to the potential complications that may arise in their own locations from research regulation policies that are built on experimental research designs alone, as well as to possible difficulties arising from collaborations with US scholars operating under present research regulation regimes.14

Data Archiving and Replicability

Replicability—the ability of another researcher to repeat a research project, reproducing the process through which data were initially generated, with the same results—has become a central feature of certain kinds of science. It is increasingly being heralded in the social sciences, along with—and perhaps influenced by—the development of large databases. In service of the positivist ideal of research replicability, and because it is costly in both money and time to collect large amounts of quantitative data, pressure is increasingly being brought to bear on researchers who have built databases out of their own collected data to archive these, in order to enable other researchers to develop their own research by replicating the archived research or by reusing archived data to address different research questions. The archived data become, for all intents and purposes, self-standing, context-independent databases. Indeed, some journals, such as the American Journal of Political Science, have recently announced editorial policies limiting publication of accepted empirical research manuscripts to those for which authors make the data available to other researchers.

This practice raises all sorts of concerns for both qualitative–positivist and qualitative–interpretivist researchers, given the ethical considerations raised by their having promised confidentiality in the process of acquiring and generating information. Some sources of data are impossible to disguise: known figures (e.g., the Minister of Immigration during a particular regime or era); unique organizations (e.g., the only major interstate bridge-building mega-project connecting two countries); and unique sociopolitical or other group features (e.g., the Black Panthers; Davenport 2010), especially when understanding what has gone on in the research setting requires knowing cultural or sociopolitical information about it. If confidentiality has been offered and accepted and disguise is not possible, archiving runs the risk of violating that promise as other researchers—and, potentially, not only researchers—have access to the data.15

With respect to fieldnotes, aside from questions of the confidentiality of materials contained in them, archiving in order to make them available to other researchers makes little sense. For one, they are typically, literally, notes: scratches of ideas and thoughts and records of conversations, observations, and so forth, made by a researcher—often in a hurry, under fieldwork, rather than deskwork, conditions—as an aide-mémoire to jog recollections later when, under calmer, quieter, and more reasonable working conditions, she can sit down to work them out in more narrative form in the research manuscript.16 Those notes are not likely to be meaningful to a researcher who did not experience what the notes summarize. Julian Orr (personal communication, 13 November 2010) draws a useful contrast with “the records of an archaeological excavation, in which the point is to record the exact location in three dimensions of every artifact, while also detailing the changing soils.”17 But the differences between studying unmoving, nonreactive potsherds and moving, reacting people are clear. Moreover, some IRBs require researchers to destroy their fieldnotes after a time, as further protection of participants, a common requirement in The Netherlands and other EU member states under “personal data protection” or other privacy laws. This would prohibit archiving altogether (and pose problems for cross-continental collaborations with clashing US and EU institutional rules).

Furthermore, there are important questions to be asked about the quality of the databases that are made so readily available to other researchers. McHenry (2006), for instance, analyzes the entries for India in the Cross-National Time-Series Data Archive (CNTS) developed by Arthur S. Banks, specifically the three categories that represent domestic conflict: general strikes, riots, and antigovernment demonstrations. These three do not reflect lived experience in India itself: living and working there, McHenry found at least nine different kinds of disturbance, a far more nuanced picture than that suggested by the database—leading one to think that research using CNTS might be seriously flawed.18 As Becker (2009: 549) writes, “[R]esearchers can use statistics others have gathered, but only when they have independently investigated their adequacy for a theoretically defined purpose, something that can never be taken for granted.” We suggest this holds not only for data in statistical form.19

For interpretive researchers, aside from the ethical and data quality concerns posed by data archiving, other research process features make replicability itself—the reason for data archiving—less appropriate and less thinkable. For one, it assumes a cut-and-dried, fixed research process, rather than a dynamic, flexible design: the former promises clear, specified steps which are, at least in principle, capable of replication, whereas the latter, given its variability in response to local conditions and specific persons, is much less replicable. Moreover, even if research processes were, in principle, replicable based on the researcher's fieldnotes and other tracking records kept for the purposes of reflexivity and transparency, interpretive researchers assume that competent researchers must respond to field contingencies. These are not likely to be replicable, and they may well reflect the identity and persona of the researcher, as well as that of participants, who cannot be counted on to reappear—or even to articulate the same views in the same words or tone of voice. Another researcher, different research circumstances—quite aside from what might be called, turning the tables, “participant effects,” “setting effects,” “event effects,” and so forth (and not just “interviewer effects” or “researcher effects”)—all limit the extent to which field experiences can be replicated. As with other matters, this difficulty is caused not by researchers who are not “objective,” but by the dynamic character of social life. Unfortunately, the willingness to archive for the purposes of replication has been conflated with the research value of transparency (e.g., Lamont and White 2009: 85; Albright and Lyle 2010: 17). This means that interpretive researchers may need to clarify that they are not opposed to transparency even as they contest the norm of replicability as applicable to all forms of research and, in particular, to their logic of inquiry.

Many scholars who archive their data are likely to see such actions as a service to the research community because their data are then available to other scholars. Moreover, independent archiving by non-state actors and the enhanced availability of some data sets may also be important to transparency in democratic systems because, as Sadiq (2009: 37) argues, “every state, democratic or authoritarian, suppresses information about certain groups or phenomena.”20 We concur with Sadiq and Monroe (2010: 35) that the scholarly community needs to pay more attention to the “politics of data collection” and, by implication, archiving. Where archiving is voluntary and doable conceptually, ethically, and methodologically, we have no quarrel with it. To the extent that the matter is coming into greater play in the context of publishing practices, it is worth thinking through in a research design (as well as more widely in methodological circles), even if it is not taken up there.

Writing Research Designs and Manuscripts

So much time and effort is put into preparing a research design that new researchers might well wonder whether it is “wasted” effort—work done only for the proposal and then forgotten once that has been accepted and the research project launched. As more experienced researchers know, that is far from the case! And so we provide a brief guide for newer researchers as to the “recyclability” of sections of their research designs. We begin with the general structure of a research manuscript as that is produced in many social science fields and show how the sections of the design develop into chapters.

Although in some subfields of some disciplines, experimentation in writing is accepted (e.g., in those fields that draw on more performative methods, such as play-acting, painting, and autoethnography, as used in some areas within educational and allied health studies), many other disciplines—sociology, geography, public policy, international relations among them—still expect fairly traditional written work, even when the methods used are “less traditional,” interpretive forms. Within these fields, the “plotlines” of much empirical written work—conference papers, journal articles, dissertations, book manuscripts—often follow a common logic. Moving from a broader focus in the “literature review” to a more narrow one in the “data presentation” back out to a broader engagement in the concluding section or chapter, the shape resembles an hourglass (see Figure 7.1).21


FIGURE 7.1 The hourglass shape of a traditional research manuscript as it relates to a research design. Sections I and II are common in content across research designs and manuscripts; below the dotted line, design contents are different, as indicated from the perspective of the design. Title page, table of contents, acknowledgments, notes, bibliography are not indicated. Original graphic design, Dhoya Snijders, Ph.D. candidate, VU University, Amsterdam; revision, Akiko Kurata, Ph.D. candidate, University of Utah.

The data section (III) is the narrowest part of the hourglass in the sense that it is the most detailed, the most grounded, in its focus. The “literature review” (I) is, by comparison, broader in that it sketches out the domain within which the conversation concerning the research question is taking place. The methodology/ methods section (II) focuses down from that, in presenting the knowledge-making rationale underlying the data that follow and legitimating the knowledge that will be claimed on their basis. The analysis section (IV) broadens out from the data, explaining to the reader how the data just presented (in III) bear on the specific research question (developed in I) and make sense of its puzzle. And the concluding section (V) explains the significance of the analysis (IV) in terms of the broader context of the research question laid out in section I.

The literature review in the research design, which explains and justifies the research question (the puzzle) and its significance, typically becomes section or chapter 1 of the research manuscript. The proposal's methods section, which presents the actual design of the research project—its “where,” “when,” “who,” “what,” and “how”—along with whatever methodological explanation and/or justification is called for, typically becomes section or chapter 2, although what is presented in the proposal with a future “I will . . .” orientation becomes the past “I did . . .” in the writing up.22 (That section or chapter would also include key decisions related to unexpected turns in the field, which have been recorded in the researcher's fieldnotes.) The remainder of the research proposal—the timetable, plans for disseminating the findings, etc.—is clearly future-directed planning and drops away in writing the research manuscript. Section III is not included in the research design, given that data can only be presented once they have been generated, and the discussion of their intended sources is typically included in section II. But sections IV and V do have their counterparts in the design, at the level of anticipation: what forms of analysis might the researcher use, and what might be the analytic importance, theoretical significance, or other contributions of that analysis?

Some researchers feel that the methods discussion, focusing as it does on the nuts and bolts of the research project, is a logical misfit in this second position, interrupting the flow of exposition that seemingly should run directly from its theoretical argumentation and situating to the data presentation. (And indeed, one sometimes finds the methods section in an appendix, rather than in the main body of a manuscript.) When one considers, however, that the argumentation in the methods chapter serves to legitimate, to authorize, the evidence presented in the subsequent chapter and the knowledge claims advanced in the analysis and concluding chapters, its position in second place makes logical sense. Its placement in this position contributes at the level of logical exposition to research transparency in that this chapter explains the knowledge-generating assumptions on which the presented data rest.

While the general construction of a research manuscript may follow the traditional hourglass model depicted in Figure 7.1, the “feel” characteristic of an interpretive research manuscript is quite different from that of a positivist–quantitative (and perhaps even –qualitative) manuscript, built as it commonly is around a table or tables containing findings in statistical form. The table(s) signal(s) to reviewers and other readers that the researcher has followed the expected steps and processes characteristic of positivist research—the initial threshold after which detailed assessment of the manuscript's quality begins. Interpretive manuscripts also communicate that their authors have followed the criteria and standards appropriate to their methodological approach, but do so in different ways. Some aspects of this signaling begin already in the research design. For instance, a researcher signals the intent to map for exposure and intertextuality, to be open to revising research plans as the need arises (including what some of these circumstances might be and how they might be handled), and to be reflexive about positionality in the plans for various methods of data generation (and perhaps analysis) discussed in the research design. The enactment of reflexivity, though, along with other standards (notably, thick description), is woven into the writing of the manuscript itself, along with the manifestations of other criteria or standards that have been followed (Schwartz-Shea and Yanow 2009, Yanow 2009).23

Might there be other ways of writing interpretive manuscripts? Scholars writing autoethnographic research have broken a fair amount of ground on this front (see, e.g., Ellis and Bochner 2000); others have explored performative styles of writing. Laurel Richardson (2000) has called for treating writing itself as a method of inquiry. One of us argued for a different dissertation structure with her advisor. She wanted to begin with the story of the organization being analyzed—its history, development, structure, actors, key events, and so on. As her own sense-making and theorizing about the case had emerged from that evidence, how would a reader be able to make sense of the theorizing without having the data presented first? Her advisor argued to the contrary: how could a reader make sense of the case material without a sense of the theoretical hooks through which its presentation was constructed? Being a dutiful graduate student, she complied, coming eventually to see the wisdom in that argumentation. In recent years, however, she has sought to begin papers and articles with a brief narrative of key empirical events, to set the stage for the theorizing. After all, story-telling captures an audience's attention, as we know from the relevant research literature (e.g., Boje 2008, Czarniawska 2004, Gabriel 2000, Hummel 1991). As (co-)authors, the two of us now often use carefully chosen epigraphs to the same desired end. As more and more social scientists experiment with breaking the frame of traditional, realist–objectivist writing, including greater use of the authorial “I,” we may see changes in this hourglass model.
