Geert Jacobs

26 Verbal communication quality in institutional contexts

Abstract: This chapter presents a selective outline of methods for evaluating the quality of verbal communication in the context of institutions. It is based on classroom experience with an interactive master’s course, where teams of students were invited to produce and evaluate a business plan (to attract funding), a job posting (to recruit new staff) and a press release (to contain a corporate crisis). The chapter draws attention to how important it is to take the communicative context into account through qualitative, ethnographic inquiry. It is suggested that it pays off to go behind the scenes, backstage in Goffman’s terminology, to monitor the complex production and reception processes underlying most verbal communication, including the notion of reuse.

Keywords: quality, text-focused evaluation, expert-focused evaluation, reader-focused evaluation

1 Introduction

To address the issue of quality in the study of verbal communication is no straightforward matter, at least not from the viewpoint of the language sciences, which have a long tradition of detailed and systematic description of the system and structure both of language in general and of individual languages. The linguistic interest in language use, and hence in issues related to verbal communication quality, is a relatively recent phenomenon: it was not until what the editors of this volume call the “pragmatic turn” in the language sciences that communicative action was put centre stage (e.g., Levinson 1983). In comparison, note that in a 1989 article, which I draw on more extensively later in this chapter, Schriver says that “a variety of document-evaluation methods” have been around since the 1930s. She is obviously referring to work outside linguistics; note in this respect the use of ‘document’ rather than, say, ‘text’. Also, not all linguistic pragmatic work looks at the quality of communication. On the contrary, in the 353 pages of the book of abstracts of the latest conference of the International Pragmatics Association, the world’s premier organization in the field of linguistic pragmatics (New Delhi 2013), ‘quality’ is mentioned just seven times. Interestingly, all of these mentions relate to research on verbal communication in institutional contexts.

In contrast, look at the unambiguous focus on quality in the opening lines of the editorial introduction to the first issue of the journal Document Design at its launch back in 1999:

World War II is still an important topic of research, not only for historians, but for document designers as well. Studies have shown that many of the serious and fatal accidents that occurred during the war were the result of misunderstanding instructions. The documents proved too difficult for the soldiers in emergency situations.

Clearly, some scholars in the broad field of verbal communication studies have moved more quickly to embrace quality than others. And so including a chapter on verbal communication quality in the present volume is not at all straightforward.

In addition to the fact that the linguistic interest in quality is relatively new, here are some more elements that complicate our endeavours to deal with verbal communication quality in this chapter:

– by its very nature, the study of verbal communication quality draws on interdisciplinary partnerships with such neighbouring disciplines as ethnography (mapping the social dimensions of communication) and the psychology of problem-solving (with a view to its cognitive dimensions). Of course it has also led to collaboration with the communication sciences. While such academic encounters across disciplinary boundaries are exciting, they are – as is well known – not without their challenges, both theoretical and practical.

– as no form of verbal communication is entirely disconnected from the nonverbal (ranging from gestures in oral interaction to typography in writing), examining quality involves multimodality. Some of the pioneering work in this area has been done in the field of website design and usability (see de Jong & Lentz 2006; van den Haak et al. 2009; Donker-Kuijer et al. 2010 and Elling et al. 2011 for recent work on e-government).

– with linguists venturing beyond description to evaluation and, if not prescription, then certainly practical implications, comes a flurry of questions around methodology (including a suspicion of subjectivity related to issues of validity and reliability – de Jong & Schellens (2000) suggest some of the early evaluations are built on “quicksand”) and even ethics (can and should we give recommendations to, say, advertisers on how to be more persuasive?).

– certainly, even if we are not sure about selling out to professionals, we can and should at least help our students to become better communicators? With this pedagogical perspective come the notions of skills, needs analysis, literacy and self-efficacy (Bandura 1977), to name but a few.

Taking into account the limitations of a single chapter like this in addressing such a complex topic, I propose to present an outline of methods for evaluating the quality of verbal communication in the context of institutions. The outline is bound to be selective and the selection will no doubt be arbitrary. In line with the pedagogic concerns that underlie a lot of the interest in communication quality, the outline presented here draws on my classroom experience with an interactive master’s course I have taught at a small Central European university, where students were introduced to different methods for written text evaluation. Crucially, they were also invited to try them out in the context of a team-based assignment in which they set up their own internet-related company. The teams had to produce and evaluate the following text genres: a business plan (to attract funding), a job posting (to recruit new staff) and a press release (to contain a corporate crisis).

In contrast with most of the key publications in this area that will be surveyed in the course of this chapter, we will be less concerned with the typical issues of quantitative analysis, including validity (whether we measure what we want to measure) and reliability (related to sample size). Instead, we will draw attention to how important it is to take the communicative context into account through qualitative, ethnographic inquiry. In doing so, we will suggest it pays off to go behind the scenes, backstage in Goffman’s terminology, to monitor the complex production and reception processes underlying most verbal communication, including the notion of reuse.

In the next sections we will turn to the various methods for evaluating text quality, but not before we have clarified the central concepts in this chapter. ‘Verbal communication’ has been defined elaborately in the introduction to this volume and so we do not deem it necessary to go into any more detail here. Suffice it to say that we consider it to include written and oral communication, as well as the use of written and spoken language in various forms of digital communication. The concepts of ‘institutions’ and ‘quality’ require some more attention, though.

First, briefly, institutions. De Jong & Schellens (1997) say it is the professionalization in the field of technical communication that “has engendered a growing interest in practical design-supporting research” and hence in issues of quality and evaluation. They refer to all kinds of functional texts, including manuals, leaflets and forms (402). From a somewhat different perspective, Schriver (1989) makes more or less the same point when she says that work-type reading and writing differs dramatically from school-type reading and writing. Although she does not go into the specific reasons why the two are different, it is implied that they have to do with the different ways in which school teachers and, say, business professionals interpret ‘quality’. Obviously, the notion of ‘institutions’ goes well beyond the technical, engineering setting to include news media, health care, law, marketing and politics, to name but a few domains. This broad scope is already characteristic of early work in this area, like Drew & Heritage’s (1992) collection of studies of language and social interaction called Talk at Work. While the volume is not about quality, it does characterize institutional discourse as “basically task-related” (3), with specific constraints determining what counts as an allowable contribution to the activity at hand and what doesn’t (22). A similar orientation can be found in most other work in this area, including contributions on professional discourse by Bazerman & Paradis (1991), Gunnarsson, Linell & Nordberg (1997), and Roberts & Sarangi (1999) as well as on organizational discourse, like Iedema (2007). For more recent work in this area see Candlin & Crichton’s (2012) and Pelsmaekers, Rollo & Jacobs’ (2014) studies of trust and Östman & Solin’s (2015) analysis of responsibility.

So what is communication quality in institutions? It is generally agreed that low-quality (or bad) communication fails to consider the audience’s needs. Zooming in on written texts, Schriver (1989) mentions a number of typical problems: forgetting to provide the necessary context, not including examples, obscuring the purpose of the text, leaving out critical information and writing too abstractly. In contrast, high-quality (or good) communication does consider the audience’s needs. Note that this central concern with audience ties in with standard linguistic pragmatic concerns with (mis)understanding (including notions like presupposition, implicature and audience design) and co-operation (Gricean maxims) as well as more specific categories like relevance, empathy and coherence.

It should be mentioned here that, just like linguistic pragmatics in general, a lot of research on verbal communication in institutional contexts is not concerned with quality. To give just one example that was used in the corporate crisis module of the master’s course: in their case study of a faculty strike at Eastern Michigan University, Vielhaber & Waltman (2008) are only interested in describing how the various stakeholders communicated, not in finding out how successful they were in doing so. To examine crisis communication strategies and messages, the researchers collected press releases, Web site postings, and e-mails sent by the university’s leadership and by the faculty union in the period before and during the strike. Based on the model developed by Coombs (1999), these documents were then examined to identify the crisis response strategies and the technology used to communicate those responses. There is not a single reference to quality in this paper.

2 Classifying methods for evaluating the quality of verbal communication

There are various ways in which to organize this presentation of methods for evaluating the quality of verbal communication. I will zoom in on four here: the timing of the evaluation, the specific topics on which the texts are evaluated, the objectives of the evaluation and who is involved in evaluating.

The first perspective on evaluation methods that I would like to present here is to look at the timing of the evaluation. In particular, I propose distinguishing, as de Jong & Schellens (1997) do in their work on the quality of written texts, between so-called prewriting research and what they label ‘formative text evaluation’. The former category includes audience analysis, where the writer pro-actively tries to get a good idea of the readership’s needs and expectations even before he or she starts writing. The latter is evaluation proper and includes different forms of so-called usability testing: in this case a preliminary version of the text has already been written and the evaluation is set up in order to guide the redesign of the text. Clearly, both types of approaches can be combined. In this chapter, however, our focus is on the latter, more purely evaluative type of intervention. The idea is to survey different methods for evaluating communication quality on the basis of some kind of draft.

Another way to look at different evaluation methods is to zoom in on the specific topics on which the texts are evaluated. Some of the traditional topics include content, organization, visual design, style and illustrations. De Jong & Schellens (1997) spell out the following: selection, comprehension, application, acceptance, appreciation, relevance and completeness, all of which involve the reader. Other, reader-oriented topics that are frequently mentioned include accessibility and reuse. Note that de Jong & Schellens (1997) propose a number of specific suggestions on which methods are more suitable for which topics (for example, they recommend using the so-called cloze test, see below, if you want to evaluate issues of comprehension).

A third way is to look at the objectives of the evaluation, i.e. why the evaluation is done. De Jong & Schellens (1997) distinguish between evaluation methods with a so-called verifying function (aimed at obtaining overall impressions about document quality, which requires a quantitative research design, as in readability formulas), methods with a ‘troubleshooting’ function (aimed at detecting and diagnosing possible reader problems, which requires a qualitative and exploratory approach) and methods that facilitate a choice between alternatives. Note that all three approaches can lead to various degrees of redesigning, ranging from a simple copyedit or proofread (does the writer stick to accepted language standards in terms of grammar, spelling, punctuation and sentence structure?) to a comprehensive edit (a thorough rewrite of the text). As for the why of the evaluation process, I propose a broad perspective, including more general data analyses (even experimental studies) that are not aimed at evaluating (and, potentially, enhancing) the quality of one specific text but that promise to expand our knowledge of a whole genre and so, indirectly, contribute to better future texts.

A final way of looking at the different methods, and the one that I propose to follow here, has to do with the question of who is involved in evaluating. Following Schriver (1989) I suggest three groups: text-focused evaluation, which could be conducted by anyone (including the writer) or even by means of computer software, the expert-focused approach (involving anyone who can be seen as an expert in the domain of the text) and, of course, a reader-focused approach.

3 Text-focused evaluation

The text-focused evaluation means that the text is examined on the basis of one or more principles that have been developed from ideas about how readers will probably respond (Schriver 1989). This could be done through, for example, readability formulas, which analyse word frequency and sentence length, drawing on the assumption that shorter sentences with more frequently used words are easier to understand. Since such formulas are easy to automate (think of style checkers in word-processing software), Schriver (1989) comments, they involve little or no effort and so they are cheap to implement. For a recent example, see Franck et al. (2011) who developed and tested a semi-automated leaflet optimizer to improve the readability of Dutch-language patient information leaflets. From the beginning, these formulas have been criticized: for one thing, since they operate at the word and sentence levels only, a text will get the same readability score whether its words are arranged in normal or reverse order (Schriver 1989). It should be noted, though, that readability formulas remain very popular in many areas of research outside linguistics, including specialized business and institutional settings like education and finance.
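Because such formulas operate on surface counts only, they are easy to script. The following sketch is merely an illustration (it uses the widely known Flesch Reading Ease formula rather than any of the specific instruments discussed by Schriver or Franck et al., and its syllable counter is a deliberately crude heuristic); it also reproduces the criticism just mentioned, since reversing the word order of a sentence leaves the score unchanged.

import re

def flesch_reading_ease(text):
    """Rough Flesch Reading Ease score: higher scores mean easier text.

    Illustrative only: the syllable count is approximated by counting
    vowel groups, which real style checkers do far more carefully.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return None
    syllables = sum(max(1, len(re.findall(r"[aeiouyAEIOUY]+", w))) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

# Word- and sentence-level counts are all the formula sees: a sentence
# and its word-by-word reversal receive exactly the same score.
original = "The documents proved too difficult for the soldiers in emergency situations."
scrambled = " ".join(original.rstrip(".").split()[::-1]) + "."
print(flesch_reading_ease(original), flesch_reading_ease(scrambled))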

Another type of text-focused evaluation is the use of guidelines and maxims: undoubtedly well-meant do’s and don’ts that are typically so vague, and that take so little account of the specificity of the reader, that a redesign based on them may well make the original text worse (cf. Schriver 1989). One such (very popular) guideline is to “omit needless words”. An example of a simplistic maxim is the idea of “You attitude” that is so common in advice on business prose. Its limitations are discussed in Jameson (2004).

A recent and more sophisticated newcomer to this category is King’s (2012) reverse outlining method, which starts from the idea that most text-focused evaluations are done by the writers themselves. Since writers typically find it more difficult to evaluate their own texts than those written by other writers, King’s method is aimed at helping them step back and derive an outline, which should allow them to diagnose potential organizational problems, for example by identifying missing or misplaced content. By drawing on the writer’s metalinguistic awareness, the approach is primarily a cognitive one and it recycles what is typically a prewriting move (writers normally make outlines before writing, if they make them at all) as a resource for evaluating the finished text itself.

In the master’s course I asked the students to use the reverse outlining method to evaluate their business plans, but the results were disappointing: most of the students found it odd and counterintuitive to reach towards a higher level of abstraction for their own texts, and they felt a distinct need to involve a third party in the evaluation process, someone who was in one way or another better placed to judge the quality of their work. This brings us to the other two evaluation approaches listed by Schriver (1989): expert-focused evaluation and reader-focused evaluation.

4 Expert-focused evaluation

In an expert-focused evaluation professionals with relevant expertise are asked to evaluate the text. Their expertise may be on the subject matter, the medium, or the target audience (de Jong & Lentz 2006). Expert-focused evaluation is commonly used in various institutional contexts. Think of the peer review procedure for the evaluation of manuscripts submitted for publication in academic journals. As most researchers know, this often leads to the frustration of widely divergent opinions. De Jong & Lentz (2006) refer to research on unguided expert evaluation, where the evaluation was entirely open. Typically, the results are disappointing. More recently, two basic strategies for guiding expert evaluation have been developed. The first strategy is heuristic evaluation, which provides the experts with evaluation criteria (for example in the form of checklists) that are likely to represent critical success factors for readers. These criteria can be related to wide-ranging aspects including layout and color use, but also usability, accessibility, and information quality (see Donker-Kuijer et al. 2010 on heuristic evaluation of government websites). The other strategy is scenario evaluation, where the experts are placed in a “surrogate-reader” role to help overcome the so-called curse of expertise (Hinds 1999, quoted in de Jong & Lentz 2006): they are given realistic usage scenarios, which help them to judge a text through the eyes of the target user.

A much-publicized form of heuristic evaluation is Renkema’s (2009) CCC model, where three general quality criteria (correspondence, consistency, and correctness) are applied to five textual levels (document type, content, structure, wording and presentation), yielding a total of fifteen evaluation points. There is a strong hierarchy in the system: correspondence is more important than consistency, which in turn is more important than correctness. Likewise, document type is more important than content, and so on. ‘Correspondence’ means that the sender achieves a goal and the document fills a need for the receiver (Renkema 2009).

In the master’s course the text to be evaluated by means of Renkema’s CCC model was a job advertisement, announcing a vacancy for a young university graduate at the start-up for which the students had previously tried to get funding through the business plan. Hence, the goals were clear on both sides: the writers were trying to encourage potentially suitable new members of staff to apply for the job (in the end they were hoping to hire the perfect applicant), while the readers were looking for the right vacancy (and, next, working hard to be hired for it).

When searching for the balance between writer and reader, the writer is in the driving seat, with many different choices. If he or she can maintain those choices throughout the text and at various levels (structure, words, layout, etc.), he or she will score well on the second of the criteria: consistency. The third and final criterion is correctness, which simply means that there should be no mistakes.
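To make the fifteen evaluation points concrete, the sketch below simply crosses the three criteria with the five text levels, in the hierarchical order just described. The criterion and level labels follow Renkema (2009) as summarized above; presenting them as a generated checklist for a heuristic evaluation session is merely my own illustrative assumption, not part of the model itself.

# Renkema's CCC model: three quality criteria crossed with five text
# levels yields fifteen evaluation points, ordered by importance.
criteria = ["correspondence", "consistency", "correctness"]
levels = ["document type", "content", "structure", "wording", "presentation"]

checklist = [(c, l) for c in criteria for l in levels]

for number, (criterion, level) in enumerate(checklist, start=1):
    print(f"{number:2d}. {criterion} at the level of {level}")
# An expert evaluator works through the list from top to bottom,
# since correspondence problems outweigh consistency problems,
# which in turn outweigh correctness problems.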

5 Reader-focused evaluation

In reader-focused evaluation research, finally, a text is evaluated by one or more potential members of the target audience. Schriver (1989) distinguishes two categories: concurrent (or real time) evaluation, including the so-called cloze test (where every 5th word is deleted from the text), eye tracking (with the position of the eye presumably corresponding to what is being processed, cf. Elling et al. 2011) and think-aloud protocols (see van den Haak et al. 2009 for an investigation into three variants), and retrospective evaluation, for example by means of interviewing, focus groups or surveys.
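The mechanics of the cloze procedure, at least, are simple enough to sketch. The fragment below deletes every fifth word, as described above; how the blanks are scored against readers’ guesses is left open, and the function name and placeholder are my own.

def make_cloze(text, n=5, blank="_____"):
    """Delete every n-th word to turn a text into a cloze test.

    A minimal sketch of the deletion step only; an actual cloze test
    also involves scoring how many deleted words readers can restore.
    """
    words = text.split()
    answers = {}
    for i in range(n - 1, len(words), n):   # every n-th word, counting from 1
        answers[i] = words[i]
        words[i] = blank
    return " ".join(words), answers

cloze_text, answers = make_cloze(
    "The documents proved too difficult for the soldiers in emergency situations."
)
print(cloze_text)   # readers fill in the blanks; 'answers' holds the deleted words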

Schriver (1989) argues that concurrent testing (or what de Jong and Schellens (1997) call ‘pretesting’) provides the more reliable data, with less (or no) dependence on memory and a greater focus on specific text features (although readers may forget to speak during think-alouds).

De Jong & Schellens (2000) have classified the various methods for reader-focused evaluation differently, distinguishing between methods using task outcome (e.g., comprehension tests), methods using behavioral observation (e.g., think-aloud user protocols), methods using verbal self-reports (e.g., plus–minus method) and methods using a combination of the three (so-called one-to-one evaluation).

More recently, de Jong & Lentz (2005) have distinguished reader-focused evaluation approaches that ask potential readers to read a text and, for example, think aloud from approaches that ask them to judge a text, like the plus-minus method. It should be clear that the former approach is the more natural one, with readers reading a text (or customers using a website, for that matter) rather than adopting an evaluative stance that they are not really familiar with.

As elaborately documented and researched by de Jong (1998), the standard procedure for the plus-minus method is to invite one or more members of the target audience to read the entire text and jot down pluses and minuses in the margin whenever they feel that a part of the text (ranging from a single word to a whole paragraph, including pictures and graphic elements) is good/positive (well written, funny, clear, ...) or bad/negative (uninteresting, confusing, ugly, ...). The next step is for the researcher to interview the reader and ask him or her to elaborate on the various pluses and minuses.

The plus-minus method may reveal problems at various levels of the text and the topics covered typically include graphic design, correctness, structure, comprehension, acceptance, appreciation, relevance and completeness. The general advice is to take the reader’s feedback seriously. It has been argued that not all of the test reader’s problems are real reader problems, and they are certainly not all readers’ problems, so the decision whether or not to revise the text depends on questions of likelihood (how likely is it that more readers will have this problem?), impact (does the problem affect the effectiveness of the text?) and revisability (is there an adequate solution for the problem, one that does not create new problems?) (see de Jong 1998 for a more detailed exposé). Conversely, some reader problems are not the test reader’s problems, so in theory as many pretests as possible should be conducted, although it is now generally accepted that a mere five test readers can typically detect 80% of the problems, including the most serious ones (see for example http://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/).
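The 80% figure behind this five-reader rule of thumb is usually motivated not by the evaluation literature cited above but by a simple probabilistic argument popularized through Nielsen’s usability work (the source of the nngroup.com article linked above): if every test reader independently detects a given problem with probability L, then n readers jointly detect it with probability 1 − (1 − L)^n, and with the empirically reported L of roughly 0.31 five readers already reach about 84%.

# Expected proportion of problems found with n test readers, assuming each
# reader independently detects a given problem with probability L
# (L ≈ 0.31 is the figure popularized in Nielsen's usability research).
def proportion_found(n, L=0.31):
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} readers: {proportion_found(n):.0%}")
# With L = 0.31, one reader finds about 31% of the problems,
# five readers about 84%, and fifteen readers well over 99%.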

6 Comparing methods for evaluating the quality of verbal communication

A lot has been said about the comparison of the three approaches listed above: text-focused, expert-focused and reader-focused. The consensus is that reader-focused evaluation is the most powerful of the three. De Jong & Lentz (2006) say that both expert-focused and reader-focused types of evaluation make their own contribution to the quality of texts, but that the real proof of quality can best be given by reader-focused evaluation research. They note that some researchers have proposed a sequential order: first conduct an expert-focused evaluation to gather the ‘low-hanging fruit’, i.e. the quality problems that are easily detectable, and then proceed with a reader-focused evaluation aimed at detecting the really hard-to-find problems. De Jong & Lentz (2006) conclude that expert-focused evaluations are generally more popular than reader-focused evaluations, mainly because they are less time-consuming and less expensive (192).

But the three are definitely complementary, as my master’s students experienced when they were asked to read up on three research projects in the domain of job ads. The first one was Askehave (2010), who examines the quality of a Danish bank’s main written recruitment genre, viz. the bank manager job ad, by combining a systemic functional linguistic analysis of the ads with semi-structured focus group interviews in which a number of respondents, all employed as managers at various levels within the bank, were asked to comment on the ad. In particular, the aim was to explore the match (or mismatch) between the bank’s recruitment needs, its communication strategy and the effect that the recruitment message may have on readers of the ad. While the focus group was actually an expert-focused evaluation (since the respondents were some of the bank’s managers and not real job seekers), it did generate a number of interesting new insights that the purely linguistic (text-focused) evaluation didn’t.

Earlier, Roberson et al. (2005) had used an experimental design and data from 171 college-level job seekers to show that detailed recruitment messages lead to enhanced perceptions of organization attributes and person-organization (P-O) fit, which in turn affect intention to apply. The practical implications are clear, not just for the design of recruitment advertisements and recruitment brochures but throughout the recruitment process: recruiters had better provide detailed information on what potential employees can expect to receive from the organization (including information about promotion and development opportunities, compensation and benefits, and organizational policies). This may help generate larger pools of applicants who are more likely to accept an offer if extended to them. Clearly, such a reader-focused evaluation is potentially far more powerful than the text-focused and expert-focused evaluations set up by Askehave (2010).

Around the same time as Roberson et al. (2005), and tying in with Petrick & Furr’s (1995) notion of “total quality in managing human resources”, Blackman (2006) set up a similar reader-focused evaluation of job postings by conducting a quasi-experiment with final-year commerce students to see how a number of specific variables (including the use of the word ‘graduate’ in the heading, the use of pictures, and the mention of a career path or opportunities for development and promotion) influence attention to recruitment ads. Again, the benefits of a reader-focused evaluation are clear. At the same time, Blackman does point to the restriction of working with potential readers, in this case students all serving as applicants at the same, pre-experience, career stage and all based at a regional Australian university (i.e. one with fewer employment options compared to the capital cities). This comment echoes de Jong & Schellens’s (2000) concern about sample composition in general, and about the fact that participants’ background characteristics may affect the feedback collected.

In other words, of the many variables that quality depends on, one extremely important variable is the reader. Put simply: what is a good text for one reader may be a bad one for another. In an experiment which can be considered a reader-focused evaluation, Jones et al. (2006) have shown that individual job seekers’ decisions about responding to job advertisements are affected by how deeply they process recruitment messages. Drawing on Petty and Cacioppo’s (1986) elaboration likelihood model, they found that those who tend to process messages less carefully choose more ads containing cues unrelated to the job (e.g., bolded font) and fewer ads with job-related arguments, leading to recommendations for recruiters who wish to increase the size of their applicant pool. Note that Jones et al. (2006) use quality in a narrower sense: “when EL is high, people’s attraction to job ads will be influenced primarily by the quality of the recruitment message” (168) and “Low EL among some job seekers may help explain findings showing that job ads are effective when they include features that have little to do with the quality of the recruitment message, such as by outlining ads in black boxes, including more white space and graphics, using bold or colorful fonts, or making the ads larger” (169).

7 Beyond the reader

At this stage it is high time to point out that there is more to communication quality than just serving the audience’s needs. So far, all of the methods surveyed have zoomed in on reader understanding. Perhaps Renkema, in the CCC model, with its focus on the match between writer ambitions and reader expectations, was the only one to indicate that communication quality cannot be defined only from the perspective of the receiver (Renkema 1999). In particular, drawing on research into the quality of Dutch tax forms, he argues that research into communication quality should not be confined to what he calls ‘assessment research’. It should also incorporate economic research. Put simply, if producing better tax forms costs the government more time and money, then the possible benefits for the public may well be neutralized. Conversely, if the redesigned texts result in fewer phone calls in which citizens ask for more information, the evaluation process will easily pay for itself.

What is more, what is special about many institutional contexts is that the communication is persuasive in addition to (or rather than) informative. In pragmatic terms, the co-operative principle can then be deliberately violated; the maxims may be flouted. Even if persuasive strategies are not necessarily misleading (see Jacobs 2006 as well as chapters 13 and 25 in this volume), it should be clear that there is much more to quality than just understanding. The genre of the so-called ‘business plan’, which is at the heart of the master’s course referred to above, is a case in point: its prime objective is to convince the reader to invest money in the proposed project. Let me therefore briefly turn to work on financial communication to sketch the wider context. While I have so far deliberately abstained from any in-depth discussion of specific institutional settings, I feel it is necessary to briefly dip into the context of finance and accounting to make my final point.

There was a time, Crawford Camiciottoli (2010) argues, when financial filings were just meant to fulfill legal obligations. They were aimed at so-called disclosure, the annual public release of financial results as prescribed by law. Typically, the writers of these mandatory documents didn’t even have to worry about being understood. Quality was not an issue. Recently, however, it has dawned on the management of financial institutions that they do have an interest in being understood, since this can contribute to and consolidate a positive corporate image with key stakeholders (including shareholders, investors and customers of course, but also employees and management as well as the news media, all sorts of special interest groups and the general public). So this is where the methods for evaluating verbal communication quality that we have discussed in this chapter may well prove their worth. But this is also where persuasion trickles in: comprehension equals transparency and hence it may promote goodwill, convincing – as Crawford Camiciottoli (2010) argues – potential investors of the good standing and future worth of the company. She goes on to show how this has led to the emergence of voluntary forms of pro-active disclosure, especially in the wake of high-profile financial scandals as well as the recent financial crisis with the subsequent loss of confidence in financial markets. As a result, financial communication has become a field of academic interest in its own right, putting out language-oriented research on annual general meetings of shareholders, live earnings announcements and earnings presentations via teleconferencing, CEOs’ letters to shareholders, annual reports, shareholder circulars and press releases. In this respect, Bhatia (2008) distinguishes between so-called ‘accounting discourse’, which is backward-looking, standard and legally required, and ‘public relations discourse’, which also tends to look forward and which goes well beyond what the company has to say. Hyland (1998) presents a relatively early example of a study of how CEOs attempt to influence readers by projecting a positive personal and corporate image in company annual reports. While he suggests a descriptive framework (i.e. one that refrains from coming up with recommendations for practitioners), Hyland’s analysis of metadiscourse (including the use of linking words like ‘therefore’ and ‘nevertheless’) does point to the persuasive potential of certain specific language choices.

And so we have reached the limits of our methods for evaluating the quality of communication, since what may be good for the writer can be bad for the reader (and the other way round). We may also have found a reason why many linguistic pragmatic researchers have steered clear of evaluating the quality of verbal communication in institutional contexts: since much of it is a non-collaborative negotiation, a struggle for power, quality becomes a multi-faceted, ambivalent notion.

References

Askehave, Inger. 2010. Communicating Leadership: A Discourse Analytical Perspective on the Job Advertisement. Journal of Business Communication 47. 313–345.

Bandura, Albert. 1977. Self-efficacy: Toward a Unifying Theory of Behavioral Change. Psychological Review 84(2). 191–215.

Bazerman, Charles & James Paradis (eds.). 1991. Textual Dynamics of the Professions: Historical and Contemporary Studies of Writing in Professional Communities. Madison, WI: University of Wisconsin Press.

Bhatia, Vijay K. 2008. Genre analysis, ESP and professional practice. English for Specific Purposes 27(2). 161–174.

Blackman, Anna. 2006. Graduating Students’ Responses to Recruitment Advertisements. Journal of Business Communication 43. 367–388.

Candlin, Christopher & Jonathan Crichton (eds.). 2012. Discourses of Trust. Basingstoke: Palgrave Macmillan.

Coombs, W. Timothy. 1999. Ongoing crisis communication: Planning, managing, and responding. Thousand Oaks, CA: Sage.

Crawford Camiciottoli, Belinda. 2010. Discourse connectives in genres of financial disclosure: Earnings presentations vs. earnings releases. Journal of Pragmatics 42. 650–663.

De Jong, Menno D. T. 1998. Reader feedback in text design. Validity of the plus–minus method for the pretesting of public information brochures. Amsterdam: Rodopi.

De Jong, Menno D. T. & Leo Lentz. 2006. Scenario evaluation of municipal Web sites: Development and use of an expert-focused evaluation tool. Government Information Quarterly 23. 191–206.

De Jong, Menno D. T. & Peter J. Schellens. 1997. Reader-focused text evaluation: An overview of goals and methods. Journal of Business and Technical Communication 11. 402–432.

De Jong, Menno D. T. & Peter J. Schellens. 2000. Toward a Document Evaluation Methodology: What Does Research Tell Us About the Validity and Reliability of Evaluation Methods?. IEEE Transactions on Professional Communication 43(3). 242–260.

Donker-Kuijer, Marieke Welle, Menno D. T. de Jong & Leo Lentz. 2010. Usable guidelines for usable websites? An analysis of five e-government heuristics. Government Information Quarterly 27. 254–263.

Drew, Paul & John Heritage (eds.). 1992. Talk at Work: Interaction in Institutional Settings. Cambridge: Cambridge University Press.

Elling, Sanne, Lentz, Leo & Menno D. T. de Jong. 2011. Retrospective Think-Aloud Method: Using Eye Movements as an Extra Cue for Participants’ Verbalizations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2011), Vancouver, Canada. New York: ACM.

Franck, Maarten Charles J., Foulon, Veerle & Leona Van Vaerenbergh. 2011. ABOP, the automatic patient information leaflet optimizer: Evaluation of a tool in development. Patient Education and Counseling 83. 411–416.

Gunnarsson, Britt-Louise, Per Linell & Bengt Nordberg (eds.). 1997. The Construction of Professional Discourse. London: Longman.

Iedema, Rick (ed.). 2007. The Discourse of Hospital Communication: Tracing Complexities in Contemporary Health Organizations. Basingstoke: Palgrave Macmillan.

Jacobs, Scott. 2006. Nonfallacious Rhetorical Strategies: Lyndon Johnson’s Daisy Ad. Argumentation 20. 421–442.

Jameson, Daphne. 2004. Conceptualizing the writer-reader relationship in business prose. Journal of Business Communication 41. 227–264.

Jones, David A., Shultz, Jonas W. & Derek S. Chapman. 2006. Recruiting Through Job Advertisements: The Effects of Cognitive Elaboration on Decision Making. International Journal of Selection and Assessment 14(2). 167–179.

King, Cynthia L. 2012. Reverse Outlining: A Method for Effective Revision of Document Structure. IEEE Transactions on Professional Communication 55(3). 254–261.

Levinson, Stephen C. 1983. Pragmatics. Cambridge: Cambridge University Press.

Pelsmaekers, Katja, Rollo, Craig & Geert Jacobs (eds.). 2014. Trust and Discourse. Amsterdam/Philadelphia: Benjamins.

Petrick, Joe & Diana Furr. 1995. Total Quality in Managing Human Resources. Delray Beach, FL: St. Lucie.

Petty, Richard E. & John Cacioppo. 1986. Communication and Persuasion: Central and Peripheral Routes to Attitude Change. New York: Springer-Verlag.

Renkema, Jan. 2009. Improving the quality of governmental documents: A combined academic and professional approach. In Winnie Cheng & Kenneth C. C. Kong (eds.), Professional communication: collaboration between academics and practitioners, 173–190. Hong Kong: Hong Kong University Press.

Roberson, Quinetta M., Collins, Christopher J. & Shaul Oreg. 2005. Effects of Recruitment Message Specificity on Applicant Attraction to Organizations. Journal of Business and Psychology 19(3). 319–339.

Roberts, Celia & Srikant Sarangi (eds.). 1999. Talk, Work and Institutional Order: Discourse in Medical, Mediation and Management Settings. Berlin: Mouton de Gruyter.

Schriver, Karen A. 1989. Evaluating text quality: the continuum from text-focused to reader-focused methods. IEEE Transactions on Professional Communication 32(4). 238–255.

Östman, Jan-Ola & Anna Solin (eds.). 2015. Discourse and Responsibility in Professional Settings. London: Equinox.

Van den Haak, Maaike J., de Jong, Menno D. T. & Peter Jan Schellens. 2009. Evaluating municipal websites: A methodological comparison of three think-aloud variants. Government Information Quarterly 26. 193–202.

Vielhaber, Mary E. & John L. Waltman. 2008. Changing Uses of Technology: Crisis Communication Responses in a Faculty Strike. Journal of Business Communication 45. 308–330.
