Chapter 10
Measurement and evaluation

Mairead McCoy

Chapter Aims

The chapter provides an overview of the main theoretical and practical issues involved in evaluating public relations. It begins by introducing the area of evaluation and discussing the importance of objectives. Models of PR evaluation are then reviewed and the key research findings regarding the practice of PR evaluation and the main barriers to implementation are summarised. Some of the initiatives undertaken by the PR industry are then outlined before three particular areas of debate – media evaluation, return on investment and online evaluation – are highlighted. Finally, the chapter concludes by presenting a case study of a social return on investment (SROI) evaluation.

Introduction to Evaluation

Evaluation has been described as a ‘transdiscipline’ (Scriven 1996: 402) that can be applied in many areas where efficiency, effectiveness and impact are important concerns (Rossi and Freeman 1989). Some years ago, Weiss (1972) defined evaluation as measuring the effects of a programme against its goals. However, while still recognising the element of comparing results with objectives, other definitions broadened the concept of evaluation to emphasise the importance of evaluation before and during the programme and stressed that evaluation is not solely a retrospective analysis conducted after the programme is over. For example, Berk and Rossi (1990: 8) suggested that, ‘evaluation research includes the design of … programs, the on-going monitoring of how well programs are functioning, the assessment of program impact and the analysis of the program benefits relative to their costs.’ Thus, evaluation can be both formative and summative and can involve assessment of needs, programme theory, implementation, impact and efficiency (Rossi et al. 2004). From this perspective, key issues are the formation of goals and objectives, identification of measurement indicators, the specification of the programme and cost–benefit analysis (McCoy and Hargie 2001).

In terms of public relations, Lindenmann (2005: 8) pointed out that measurement and evaluation 'has been widely discussed, actually carried out, and grown and evolved over a 60-year period of time'. Indeed, Lamme and Russell (2010) suggested that informal research and evaluation may have occurred as early as the late eighteenth century, while Watson (2012) offered an overview of the evolution of PR measurement and evaluation over the course of the last century. Additionally, Hon (1997) stipulated that PR can be evaluated at the individual, programme, organisational or societal level, although programme effectiveness is the most common focus of evaluation activity. It is argued that the ability to evaluate programme effectiveness is a key skill in strategic communication planning (Smith 2013). Indeed, evaluation routinely appears in many models of strategic PR management (e.g. Marston 1963; Kelly 2001; Cutlip et al. 2006; Hendrix 2006). Although often depicted as the final stage in the process, as outlined above, evaluation contributes to all phases of PR programmes. Supporting the idea that measurement and evaluation is a broader process than merely a post hoc performance appraisal, Macnamara (2014) developed the MAIE model, which distinguishes between 'measurement' and 'evaluation': the former involves taking measures and analysing results (essentially data collection and analysis), while the latter is about identifying value. Crucially, the model also emphasises the importance of in-depth analysis occurring after measurement and before the evaluation stage in order to arrive at 'forward-looking' 'insights' (p. 9) that inform future strategy, particularly through qualitative data and the collection and analysis of exogenous 'big data' such as published research literature, case studies and market/competitor analyses.

According to Anderson et al. (2009: 6), proving the value of public relations is one of the profession’s most ‘vexing challenges’. In a business and social environment that is increasingly competitive, PR practitioners need to understand how to manage research and evaluation practices that contribute to success and accountability and allow them to demonstrate in a measurable way how PR programmes are of benefit to their organisation (Austin and Pinkleton 2006). With the advent of digital and social media and the growing integration with other functions such as marketing, the new ‘business landscape’ is increasingly complex and diverse, making PR measurement and evaluation more difficult (Jain 2014). In addition, evaluation is seen as a fundamental component of professional practice (Stufflebeam and Shinkfield 1985). As L’Etang (2008) contended, evaluation has the potential to increase credibility of the PR industry and help it to gain professional status.

Moreover, the topic of PR evaluation consistently emerges as one of the top research priorities for practitioners, academics and researchers (Watson 2008).

As mentioned above, objectives play an important role in evaluation. This relationship will now be examined in more detail.

Objectives and Evaluation

The most prevalent approach to evaluation is Tyler’s (1942) objective-based model that proposes that goals and objectives must be defined and specified as a prerequisite to evaluation. Therefore, at the outset, practitioners should be able to identify exactly what they want to achieve with their public relations programme. According to Watson and Noble (2007), setting appropriate objectives is the bedrock of effective evaluation. However, writing PR objectives has been described as one of the most difficult tasks that practitioners face (Broom and Dozier 1990; Kerr 1999). A common source of confusion regarding objectives is the tendency for practitioners to describe their tactics or activities rather than their intended consequences (Cutlip et al. 2006). Objectives can be cognitive, affective or conative (Gregory 2000) or set at output, outcome, outgrowth or outflow levels (see evaluation models later in this chapter for more information). While most will be familiar with the advice to define objectives that are ‘SMARRTT’ – specific; measurable; achievable; realistic; relevant; targeted; and timed (Watson and Noble 2007) – many authors have outlined key guidelines for formulating measurable objectives (e.g. Gregory 2000; Rossi et al. 2004; Austin and Pinkleton 2006; Anderson et al. 2009).

Their main recommendations can be distilled as illustrated in Figure 10.1.

Figure 10.1 Key recommendations for writing measurable objectives

Source: Adapted from Gregory (2000b) and Rossi et al. (2004)

In particular, calculating a realistic magnitude of expected change can be especially problematic and is usually determined via a combination of research and practitioner judgement. It also requires some baseline measurements to provide a comparison for later figures in order to identify any progress made. This is one valuable function of formative evaluation.

Likewise, the seemingly common sense suggestion that objectives should be ‘achievable’ also causes difficulty in practice. Familiarity with theories of mass communication effects and persuasion can help practitioners to understand what communication can achieve in order to set realistic objectives and to avoid misguided and exaggerated expectations of the effects that a PR campaign may have (McCoy and Hargie 2003; Macnamara 2006). It is important to bear this in mind, not only when setting objectives, but also when interpreting evaluation results. In addition, some effects may develop over time and only become apparent in the longer term. Therefore it is essential that evaluation activity is appropriately scheduled.

In essence, objectives identify the criteria by which the success of PR programmes can be evaluated (Fill 2005). However, in practice, the links between specific campaign objectives and the corresponding evaluation methods are often weak. For example, several researchers have found that objectives expressed in terms of impact among target publics were rarely measured by impact techniques (Pieczka 2000; Gregory 2001). To illustrate with a case study, Veil et al. (2009) described a situation where local emergency planners were tasked with increasing community members' use of household emergency plans and kits. However, the campaign evaluation focused on the transmission of messages via the media, and did not attempt any research to discover whether it had achieved increased awareness or behaviour change among its targets.

In recognition of the problems associated with objective setting, some authors have proposed an alternative evaluation approach. For example, Scriven (1996), questioning the efficacy of criteria-based evaluation and relegating goals/objectives to minimal importance, introduced the concept of ‘goal-free’ evaluation. In his view, evaluations exist to make value judgements on whether the programme was of use to its stakeholders. This is a concept that may be applicable in the PR context.

Evaluation Models and Methods

Several models of PR evaluation have been developed. This chapter focuses on the following three main models: Cutlip et al.'s (2006) Preparation, Implementation and Impact (PII) Model; Macnamara's (1992, 2002) Pyramid Model of PR Research; and Watson's (1997) Short Term and Continuing Models.

Preparation, Implementation and Impact Model

Cutlip et al.’s (2006) ‘PII’ Model, originally conceived in the late 1970s, depicts the possibility of evaluating PR at three different levels of preparation, implementation and impact. Evaluation undertaken at the preparation level assesses strategic planning in terms of the adequacy of background information gathered to design the programme, as well as the appropriateness and quality of message content and presentation. At the implementation level, evaluation examines the adequacy of the tactics and efforts applied to the PR programme. During this phase the number of PR materials produced and distributed is documented, opportunities for exposure are determined from the number of messages placed in the media and the number of people who received and attended to programme messages is also measured. The final level of the PII Model involves assessment of programme impact where the extent to which programme goals and objectives have been achieved is investigated. Accordingly, changes in targets’ knowledge, opinion, attitude and behaviour become the focus of evaluation efforts. Additionally, the determination of PR’s contribution to social and cultural change is proposed as the ultimate summative evaluation. This may involve parallels with the social return on investment approach discussed later in the chapter.

Cutlip et al. (2006) argued that each step within the model increases understanding and accumulates knowledge, so that an evaluation is not complete without addressing criteria at each level. However, they cautioned against substituting measures from different levels, for example, using measures of column inches (implementation level) to infer changes in target publics' knowledge, attitude or behaviour (impact level). While the PII Model does not propose evaluation methodologies, its focus on separating the various levels at which PR can be evaluated is valuable, and it serves as a useful checklist for evaluation planners (Watson and Noble 2007). That said, it has been argued that models that do not provide methodological guidance remain purely theoretical frameworks offering little practical assistance for practitioners (Macnamara 2006). Arguably, however, the PII Model's main contribution lies in clarifying the parameters surrounding each evaluation level.

Pyramid Model of PR Research

Building on the PII Model, Australian author Jim Macnamara developed a ‘Pyramid Model of PR Research’ (Macnamara 2002). As illustrated in Figure 10.2, this model conceptualises PR programmes in the form of a pyramid that rises from a broad base of inputs, narrowing through outputs and outcomes until reaching a peak where objectives are achieved.

The Pyramid Model differs from the other models discussed in this chapter in that it proposes, alongside each stage, a menu of appropriate evaluation methodologies. While the list is not exhaustive, it serves as a practical illustration of the wide range of informal and formal research and evaluation methods and tools available to practitioners. In particular, it highlights a number of no-cost or low-cost avenues, including secondary data (both internal and external). This is an important point for PR practitioners to note, as cost is frequently advanced as the reason for not undertaking evaluation. The inclusion of indicative methodologies in the model also underscores the point that different methods are required at different stages and measure different things. In other words, it is important for PR practitioners to understand which research methodologies to use and when.

Figure 10.2 Pyramid Model of PR Research

Source: © Jim Macnamara 1992 and 2002. Used by permission of Jim Macnamara

Although the Pyramid Model has been criticised for its seemingly summative approach and lack of a dynamic feedback element (Watson and Noble 2007), Macnamara (2005) argued that while not overtly acknowledged, nonetheless the model implicitly proposes that research findings from each stage are continually incorporated into planning. Thus, if initial pre-testing at the input stage finds that a chosen medium is inappropriate, no practitioner would continue to the output stage and use that medium to distribute information.

In comparing the PII and Pyramid Models, it is evident that each uses varied terminology to describe what are essentially similar stages. Moreover, these models have been criticised because they appear too complex, static and lacking in dynamic feedback (Watson 1997; Watson and Noble 2007). However, this conclusion has been refuted by Macnamara (2006: 21) who argued that while the models depict a chronological illustration of the order of activity, ‘in reality, input, output and outcome research is part of a dynamic and continuous process and these models should be read that way’.

Short Term and Continuing Models

In response to the perceived need for accessible and dynamic models, Watson (1997) developed the Short Term and Continuing Models of PR evaluation. After empirically investigating evaluation practice via four case studies, he concluded that PR actions operated according to two broad structures of short-term media relations campaigns and longer-term programmes that utilised a variety of strategies and tactics to create effects among target groups. Thus, ‘two different evaluation models are needed to judge two very different scenarios’ (Watson and Noble 2007: 95). Watson’s Short Term Model follows a single track, linear process wherein awareness objectives are implemented through media relations and evaluated by way of media or target response analysis. To meet the needs of longer-term PR programmes, Watson designed a Continuing Model with an iterative loop to depict a dynamic and continuous evaluation process. The model initially begins with a research stage from which objectives are set. Following this phase, strategies are selected and tactics chosen. Multiple formal and informal analyses are then applied and the information is fed back to each programme element. Thus, throughout a programme, the objectives, strategies and tactics are continually adjusted.

Watson claimed that his models are simple, accessible and do not require rigid adherence to evaluation methodologies. In addition, the models recognise that information can be used to adjust programmes, so that summative evaluation data can actually be used in a formative manner. Furthermore, with seeming reluctance to recommend appropriate evaluation techniques, Watson and Noble (2007: 101) maintained:

[T]hese models are not detailed prescriptions for undertaking evaluation of public relations programmes. This is a complex problem that does not lend itself to simple, straightforward solutions; nor is a long list of potential evaluation techniques useful for similar reasons.

In summary, the reviewed models offer four principal insights into the evaluation of PR. In broad terms they:

  • depict PR as a multi-step process;
  • clarify that different methods are appropriate at different stages;
  • underscore the importance of avoiding level substitution;
  • offer debate about the usefulness of prescribing evaluation methodologies.

Having said this, however, the practical application of these models is in doubt. It has been claimed that most practitioners have not adopted these approaches due to a lack of knowledge, too narrow and academic a dissemination base, or problems with their practical and universal appeal (Watson and Noble 2007).

In addition, others have proposed that in recognition of the paradigm shift towards relationship management (Ledingham and Bruning, 2000), there is also a need for a specific level of evaluation that focuses on how measurable relationships with stakeholders could be linked to business outcomes. This has been dubbed ‘outflows’ (Thellusson 2003). Zerfass (2010) also defined this as the value created by communication processes in terms of the impact on strategic/financial targets (value adding) and tangible/intangible resources (capital-building). The empirical measurement of relationship indicators in PR is a developing area of research. In an early study of organisation–public relationships, Hon and Grunig (1999) proposed six indicators of a successful public relationship as: control mutuality; trust; satisfaction; commitment; exchange relationship; and communal relationship. In addition, Huang (2001) has developed and validated a cross-cultural OPRA (organisation–public relationship assessment) instrument. Other researchers have focused on quantifying the link between relationship indicators and organisational outcomes. For instance, Bruning (2002) found a quantitative link between relationship attitudes and outcome behaviours, while later research also revealed that mutual benefit was a specific and measurable outcome that provided competitive advantage (Bruning et al. 2006).

Having outlined the theory of evaluation, the chapter now moves on to explore how evaluation is carried out in practice.

PR Evaluation in Practice

Nature and extent of evaluation practice

A number of empirical research studies have investigated the nature and extent of PR evaluation practice around the world, including in the United States, Canada, Australia, the United Kingdom and continental Europe. Results show remarkable consistency across these geographically diverse regions. The main findings can be synthesised as follows.

Practitioners generally seem to support the idea of PR evaluation, recognising its professional benefits (Chapman 1982) as well as its importance to the credibility of PR (PRCA 2009). However, there is also a feeling that PR is difficult to measure in precise terms (Walker 1997) with one fifth of respondents to a PR Week survey maintaining that PR could not be measured (Fairchild 1999). Indeed, it has been found that practitioners recognise that research is talked about more than it is actually performed (Lindenmann 1990; Walker 1997).

The majority of practitioners appear to rely on techniques involving the measurement of media coverage as well as experience and informal judgement. For instance, a recent European study found that 82.4 per cent of respondents monitored clips and media response (Zerfass et al. 2015). However, there is evidence to suggest that, despite the ubiquitous nature of media evaluation, practitioners are not fully satisfied with this approach (Pinkleton et al. 1999). For example, Baskin et al.’s (2010) research revealed that although 89 per cent of respondents regularly utilised ‘clip reports/press cuttings books,’ only 65 per cent regarded them as an effective evaluation method. Similarly, 93 per cent identified ‘pre- and post-surveys’ as an effective means of measuring impact, yet only 50 per cent made frequent use of the technique. Nonetheless, the overall evidence suggests that most PR evaluation takes place at the output level with low levels of input, outcome and outflow measurement (Gregory 2001; Xavier et al. 2006; Zerfass et al. 2015). Typically, a variety of evaluation techniques are used representing a ‘continuum from virtually no evaluation to formal and ongoing efforts’ (Hon 1998: 123) with averages of 3–6 methods per campaign being reported (Pieczka 2000; Xavier et al. 2005). However, Pieczka (2000) attributed this to ‘fluke’ or luck rather than a deliberate intent to employ systematic triangulation.

In his research among American practitioners, Dozier (1984) identified three styles of PR evaluation: scientific impact evaluation; seat-of-the-pants evaluation; and scientific dissemination evaluation. He found that practitioners adopted these styles simultaneously, with scientific evaluation supplementing rather than replacing more informal approaches.

There are also suggestions that evaluation activity varies across the four specific programme content areas of: problem definition; planning and preparation; implementation/dissemination; and impact assessment. Piekos and Einsiedel (1990) found that, overall, intuitive techniques such as discussions with top management/colleagues, informal meetings with media personnel and reactions of contacts, significant publics and top management, were used significantly more often than scientific methods, with the exception of the implementation/dissemination phase. During this particular phase, scientific procedures to monitor the distribution, placement, potential exposure and audience attention were most common.

Despite this picture of evaluation as rather unsystematic and irregular, it is worth noting that there are many excellent best practice examples of PR campaigns that have been evaluated effectively. A number of them are referenced within this chapter and also include, for example, Spencer and Jahansoozi (2008) who described how research and evaluation was used throughout London’s bid to host the 2012 Olympic Games. Nonetheless, on the whole, the previously reviewed studies, which span over three decades, demonstrate little in the way of substantial improvement regarding the widespread execution of systematic and formal evaluation approaches. It is therefore important to examine the barriers that stand in the way of developing and using evaluation in PR.

Barriers to evaluation practice

A number of reasons have been put forward to explain the limited progress in implementing PR evaluation in practice. Universally, the most common barriers have centred on limited resources. Time and again, the dual themes of insufficient budget and lack of time have emerged from surveys of practitioners as the principal constraints to conducting PR evaluation (Hauss 1993; Walker 1997; Kerr 1999; Xavier et al. 2005, 2006; Baskin et al. 2010). Indeed, in terms of financial support, expenditure on evaluation is low with generally less than 5 per cent of the PR budget allocated for evaluation (Lindenmann 1990; PRCA 2009). In addition, there has been widespread acknowledgement that PR practitioners lack knowledge of, and training in, evaluation methods and processes (Walker 1997; Xavier et al. 2005; Baskin et al. 2010). This is particularly concerning given that research suggests that most PR research and evaluation is conducted by PR-trained personnel rather than by research specialists (Lindenmann 1990; Walker 1997). Moreover, many practitioners have claimed that clients lack understanding of PR and evaluation (Walker 1997) so that the emphasis on media-oriented evaluation stems from pressure from clients who want a tangible, easily understood, standardised evaluation approach (Xavier et al. 2005).

The role of senior management in influencing evaluation activity has also been raised. For instance, a more structured, scientific approach to evaluation often ensues when senior management support or demand it (Piekos and Einsiedel 1990; Hauss 1993). This issue seems to stem from perceptions about PR itself with CEOs claiming that they instinctively knew when PR was effective (Campbell 1993). In White and Murray’s (2004: 18) study of UK CEOs there was also an acknowledgement that while the effects of PR occurred over time, few managers were willing to wait for longer-term results and that:

value is recognised in the quality of the advisors at work in the practice. The CEOs interviewed who have as advisors practitioners that they know, respect and trust, and in whose abilities they have confidence, are quite prepared to rely on their judgement that they are receiving valued support from public relations.

Nonetheless, Macnamara (2007: 3) proposed that lack of budget, time and demand were ‘excuses’ rather than valid reasons for not evaluating PR. He argued that other, underlying reasons came into play, namely that practitioners do not see evaluation as relevant because they mainly operate in a technical role that is primarily output oriented. Indeed, it is suggested that PR practitioners’ own attitudes towards research and evaluation may also exert an important influence on their evaluation activity. For example, a significant positive relationship between the belief that research and evaluation was important and actual use of formal research and evaluation approaches has been found (Judd 1990). However, most data on practitioner attitudes suggests a more negative viewpoint. Feelings that evaluation could be perceived as ‘checking up’ on staff, that criticism would be taken personally rather than professionally, apprehension about the implications of negative evaluation results and a preference for informal evaluations have been recorded (Kerr 1999).

The tools of evaluation are another concern for practitioners (Xavier et al. 2005). Kerr (1999) found that practitioners were hesitant about employing available measures because of perceived problems and called for the development of a quick and easily administered measurement instrument. However, many commentators disagree with this desire to reinvent the wheel (Cline 1984), instead maintaining that the industry already has access to existing research tools and techniques that are ready and waiting to be used (Phillips 2001; McCoy and Hargie 2003; CIPR 2005; Lindenmann 2005).

Industry response

Against this backdrop, PR professional associations and related organisations around the world have developed a range of initiatives to support and encourage the development of PR evaluation, with varying degrees of success. Over the years, they have held numerous seminars, workshops and conferences and issued a number of white papers and guidelines for various aspects of evaluation, many of which are referred to throughout this chapter. However, perhaps the most significant event in recent times occurred at the second European Summit on Measurement in Barcelona held in June 2010. Here, leaders of five international professional measurement and evaluation bodies (AMEC, Global Alliance, IPR Measurement Commission, PRSA and ICCO) and 200 delegates from the world’s top measurement companies and PR agencies agreed the first ever global standard for PR measurement. Named the ‘Barcelona Declaration of Measurement Principles’, the agreement comprises seven key principles, which were updated in 2015 as shown in Figure 10.3.

Figure 10.3 Barcelona Principles 2.0

Source: AMEC (2015) How the Barcelona Principles Have Been Updated. Available from: http://amecorg.com/how-the-barcelona-principles-have-been-updated/ (accessed 3 September 2015)

Early reaction to the ‘Barcelona Principles’ was mixed. While professional associations generally saw them as an important framework for future development that provided a clear position on measurement fundamentals, some practitioners criticised them for being too pedestrian and not going far enough (Magee 2010; Magee and O’Reilly 2010). Nevertheless, the Declaration seemed to stimulate renewed impetus within the industry. The CIPR has updated its CIPR Measurement and Evaluation Toolkit (2011) to reflect best practice principles in evaluation and has also set clear guidelines for its awards entrants and judges regarding acceptable measurement practice. Similarly, the PRCA announced that it would include a compulsory evaluation section on its awards entry form, and create a specialist evaluation module within its Consultancy Management Standard accreditation process (PRCA 2010). One of the key products of the task forces established after the Barcelona Principles was the development of the Valid Metrics Framework (VMF) that aimed to provide alternatives to Advertising Value Equivalency (AVE) (AMEC 2011). The VMF comprises two main components. First, the horizontal axis displays the stages of a communication/marketing funnel as awareness, knowledge, interest, support and action. Second, the vertical axis focuses on three PR phases covering PR activity, intermediary effect (e.g. third party dissemination) and target audience effect which essentially represents a continuum from simple production to outputs and outcomes. Grids were designed pertaining to a range of PR functions and can be populated and customised with suggested metrics appropriate to a particular programme.
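Because the VMF is essentially a grid to be populated, it can be useful to treat it as a simple data structure when planning an evaluation. The short Python sketch below is a hypothetical illustration of this idea – the metrics in each cell are invented examples, not AMEC's official grid contents – and shows how unpopulated cells can be flagged as measurement gaps.

```python
# Illustrative sketch of a Valid Metrics Framework grid as a nested
# mapping: PR phase (rows) x communication funnel stage (columns),
# populated with candidate metrics. Cell contents are invented
# examples, not AMEC's official grids.

FUNNEL = ["awareness", "knowledge", "interest", "support", "action"]

vmf_grid = {
    "PR activity": {
        "awareness": ["releases issued", "events held"],
        "knowledge": ["briefings delivered"],
    },
    "intermediary effect": {
        "awareness": ["items of coverage", "OTS"],
        "interest": ["share of voice", "tone of coverage"],
    },
    "target audience effect": {
        "knowledge": ["message recall (survey)"],
        "support": ["attitude shift (pre/post surveys)"],
        "action": ["enquiries received", "behaviour change"],
    },
}

# Flag funnel stages that the measurement plan leaves unpopulated.
for phase, cells in vmf_grid.items():
    gaps = [stage for stage in FUNNEL if stage not in cells]
    print(f"{phase}: unmeasured stages -> {gaps or 'none'}")
```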

However, ironically, one of the challenges to emerge from this burgeoning evaluation activity is a confusing array of measures and inconsistencies in definitions and vocabulary, giving rise to a growing movement to find and agree measurement and evaluation standards (Michaelson and Stacks 2011). Although individualised factors will come into play, Macnamara (2014) argued that standard procedures allow for more effective comparisons both internally (e.g. before and after a campaign) and externally (e.g. with competitors). Two international consortiums comprising professional associations, communication bodies and research organisations have been leading the way in the search for standards. The Coalition for Public Relations Research Standards and the Social Media Measurement Standards Conclave have published various guidance papers focusing on developing standards in the areas of: content and sourcing; reach and impressions; engagement and conversation; influence; opinion and advocacy; and impact and value. For resources, please visit the www.instituteforpr.org/public-relations-research-standards and www.smmstandards.org websites. Research suggests that the relatively recent standardisation movement is already making an impact: in a survey of 347 US senior-level communication practitioners, one quarter reported that they had adopted standardised measurements (Thorson et al. 2015). Moreover, the researchers found that an innovative and proactive organisational culture was an important variable in embracing standardisation.

It appears that there is significant forward momentum in the industry towards encouraging and supporting the widespread use of PR evaluation. It remains to be seen whether this can translate into practice and overcome the many barriers to evaluation.

Having provided an overview of evaluation theory and practice, the chapter will now highlight three specific areas that often generate particular debate. These are media evaluation, which is discussed first, followed by a brief discussion of return on investment before concluding with an overview of online evaluation.

The Media Evaluation Debate

As highlighted in the previous section, PR is commonly evaluated at the output level using content analysis of media coverage. This can range from basic to sophisticated, be quantitative or qualitative and may be performed manually or via computer software systems. The simplest form of content analysis is the counting of press clippings and/or radio/TV segments that mention an organisation or its products and services or those of its competitors. Articles that contain other key words or issues that an organisation identifies as relevant can also be gathered. In addition, the press/broadcast coverage can then be measured in terms of column inches/centimetres or seconds/minutes of airtime. Circulation or readership analysis may also be carried out using Opportunities To See (OTS). These indicate audience reach and are calculated from circulation or ratings figures of the medium in which the item appeared. Advertising Value Equivalency (AVE) is also a common, though controversial, measure of PR. This is assessed by multiplying the column inches/centimetres or seconds/minutes of air time gained by the corresponding media advertising rates. In essence, AVE aims to determine what the print/broadcast coverage generated by a PR campaign would have cost if equivalent advertising space had been purchased. In particular, AVE has been the subject of considerable debate within the industry and consequently is worthy of further discussion.
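To make the arithmetic behind these output measures concrete, the following Python sketch computes column inches, OTS and AVE for a handful of hypothetical clippings (all outlet names, circulations and advertising rates are invented for illustration). It also shows where an arbitrary 'multiplier' would be applied – a practice discussed, and criticised, below.

```python
# Hypothetical illustration of output-level media metrics.
# All outlets, circulations and rate-card figures are invented.

clippings = [
    # (outlet, column_inches, circulation, ad_rate_per_inch_gbp)
    ("Daily Gazette", 12.0, 180_000, 45.0),
    ("Business Weekly", 6.5, 40_000, 60.0),
    ("Regional Post", 9.0, 95_000, 30.0),
]

# Total volume of coverage.
total_inches = sum(inches for _, inches, _, _ in clippings)

# Opportunities To See: a crude reach figure from circulation data.
ots = sum(circ for _, _, circ, _ in clippings)

# AVE: cost of equivalent advertising space at rate-card prices.
ave = sum(inches * rate for _, inches, _, rate in clippings)

# Some practitioners apply an arbitrary 'PR value' multiplier (e.g. 3x)
# on the assumption that editorial is more credible than advertising --
# a weighting the research literature does not support.
pr_value = ave * 3

print(f"Column inches: {total_inches:.1f}")   # 27.5
print(f"OTS: {ots:,}")                        # 315,000
print(f"AVE: GBP {ave:,.2f}")                 # 1,200.00
print(f"'PR value' (x3): GBP {pr_value:,.2f}")
```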

Over the past two decades, AVE has been widely condemned as an evaluation metric (Jeffrey et al. 2010). Jeffries-Fox (2003) argued that the method suffers from a number of conceptual and practical problems. First, the relationship between news stories and advertising is too complex to simply assume that a news story of a particular size has the same impact as an advertisement of equal size. Second, AVE cannot reflect the value of keeping stories out of the media. Third, while both can have an effect on consumers' awareness, perceptions, attitudes and behaviour, public relations and advertising are different disciplines. For instance, any given advertisement will repeatedly appear in the media in exactly the same way, but public relations news stories about an issue or an event can be highly variable with regards to placement and presentation (Macnamara 2006). Thus AVE has been likened to comparing a boat and a car – they are both means of transport but operate in different environments, and although passengers can end up at the same destination, they get there in different ways (Fairchild 1999).

In terms of practical difficulties with AVE, Jeffries-Fox pointed out that it is impossible to determine AVE for media outlets that do not permit advertising (e.g. BBC). Second, calculating AVE for the total amount of coverage gained does not take into account the possibility that some or all of the publicity could have been negative or neutral. Finally, each piece of media coverage may not focus exclusively on one issue or organisation and can include favourable references to competitors. From a wider perspective, Macnamara (2006) argued that AVE is deficient because it cannot measure the range of PR activities that do not have media publicity as their goal (e.g. events, community relations, etc.).

Criticism of AVE is further compounded by the fact that, in some cases, 'multipliers' are applied to basic AVE calculations on the assumption that news coverage is more credible than advertising, and therefore more persuasive. The resulting figure is referred to as 'PR value'. Multipliers can vary widely, ranging from 2 to 13. However, there is little empirical evidence for such weightings (Michaelson and Stacks 2007) and they are generally applied in an arbitrary manner. In fact, some practitioners decrease the AVE figure because of PR's lack of control over message, audience and publication schedule (Austin and Pinkleton 2006). Overall, multipliers have been described as 'unethical, dishonest, and not at all supported by the research literature' (Lindenmann 2003: 10).

In addition, many professional bodies and associations have been long-term critics of AVE. This was most recently reflected in the Barcelona Principles outlined earlier. For instance, the Institute for Public Relations published a position paper rejecting the ‘term, concept and practice of Advertising Value Equivalency’ (Rawlins 2010: 1). Other professional associations are focusing on industry award schemes as a way to wean practitioners away from AVE. For instance, in its awards criteria, the CIPR states that if AVEs are used, a mark of zero will be awarded for the entire measurement and evaluation section.

However, despite its many critics, AVE remains a popular evaluation method among practitioners. As AMEC executive director Barry Leggetter acknowledged, ‘people use AVE because it is an easy thing to figure out. The metric is flawed, but it provides a number. That’s what a CMO or CEO demands’ (Magee 2010: para. 7). Simon Warr, Board Director of Communications and Public Affairs at Jaguar Land Rover agreed, ‘in the absence of anything that is more relevant, we do use AVE … internally, they have a degree of recognition and are something people can easily understand’ (Magee 2010: para. 28). AVE amounts can also be impressive as Claire O’Sullivan, director of media measurement company Metrica pointed out, ‘often AVE figures returned are much higher than any PR budget and they make PR people look good’ (Wallace 2009: para. 4). Demand from clients/managers for AVE is often cited as a reason for their continued popularity. As Emma Cohen, MD of Skywrite put it, ‘like it or not, if clients want to use AVE, you use it’ (Wallace 2009: para. 8).

It is too early to tell what impact the Barcelona Principles and the VMF as well as other initiatives will have on the use of AVE. Discouragingly, research suggests that the metric continues to be employed (Watson 2013; Dahlborg et al. 2014; Thorson et al. 2015). Although client demand for AVE may remain strong, Macnamara (2006) urged PR professionals to resist such pressures, arguing that they have a duty to advise, counsel and educate their clients on its limitations.

Rather than relying on somewhat blunt measures of the volume of coverage gained, or on financial metrics such as AVE, it is also recommended that practitioners assess the quality of their publicity. According to Macnamara (2006: 42), in-depth media content analysis takes into account: media type; prominence; positioning; size of articles or length of a radio/TV segment; share of voice of quoted sources; and the position/credibility of key sources. In addition, 'tone' is a key variable in media content analysis, in terms of whether coverage is negative, positive or neutral (Watson and Noble 2007). Moreover, Michaelson and Griffin (2005) recommended that the accuracy of overall coverage, as well as of specific messages, should be assessed by determining the presence of: correct information; incorrect information; misleading information; and omitted information. Media evaluation can also be a source of valuable intelligence, not only about an organisation's own coverage but also about competitors and wider societal issues and trends (Watson and Noble 2007).

According to Austin and Pinkleton (2006), media content analysis is a five-step process involving: establishing objectives; selecting sample of texts; determining units of analysis; identifying categories of analysis; and coding content. Technological developments offer the potential for automatic analysis of text (Fekl 2010) although some are sceptical of the ability of software to fully appreciate and interpret meanings from text (Macnamara 2006). Nevertheless, several software programmes for media content analysis are available.
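As a rough sketch of the coding stage in this five-step process, the Python fragment below assigns a tone and checks for a key campaign message using simple keyword matching. It is a deliberately naive illustration – the word lists, message and articles are invented – and real coding, whether by human coders or commercial software, must handle context, sarcasm and nuance that keyword matching cannot.

```python
# Minimal sketch of the coding step in media content analysis.
# Keyword-based tone coding is crude; human coders and commercial
# systems handle nuance and context far better.
from collections import Counter

POSITIVE = {"award", "praised", "innovative", "success"}
NEGATIVE = {"criticised", "failure", "scandal", "recall"}
KEY_MESSAGE = "community investment"  # a campaign message being tracked

articles = [  # hypothetical clippings, reduced to plain text
    "The firm was praised for its community investment programme.",
    "Critics called the product recall a failure of oversight.",
    "The annual results were announced on Tuesday.",
]

def code_article(text: str) -> tuple[str, bool]:
    """Return (tone, key_message_present) for one unit of analysis."""
    words = set(text.lower().replace(".", "").split())
    if words & POSITIVE and not words & NEGATIVE:
        tone = "positive"
    elif words & NEGATIVE and not words & POSITIVE:
        tone = "negative"
    else:
        tone = "neutral"
    return tone, KEY_MESSAGE in text.lower()

tones = Counter()
message_hits = 0
for text in articles:
    tone, has_message = code_article(text)
    tones[tone] += 1
    message_hits += has_message

print(dict(tones))  # {'positive': 1, 'negative': 1, 'neutral': 1}
print(f"Key message present in {message_hits}/{len(articles)} items")
```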

However, one of the main limitations of media evaluation is that it cannot measure actual impact or results among target publics. One of the dangers inherent in the popularity of media evaluation is that practitioners may infer such results. Cutlip et al. (2006) cautioned against this type of 'level substitution' in their PII Model. Nevertheless, Neuendorf (2002) claimed that media content analysis, if conducted rigorously and scientifically, could be useful for 'facilitating' inference and helping to predict likely effects on publics. Similarly, it has been argued that because news media summarise ongoing social debates, analysis of the news media is an efficient way to indirectly measure public attitudes (Bengston and Fan 1999). On the whole, Supa (2014: 2) concluded:

[T]here seems to be consensus among scholars that media relations does hold value to an organization, though the magnitude of that value is not clear, and may be dependent on the goals of the organization with regard to media relations and exposure.

Return on Investment/Social Return on Investment

The renewed attention surrounding PR evaluation measures and standards has led to rekindled discussions of the concept of Return on Investment (ROI). Practitioners have long struggled to employ this measure to explain the value of public relations. Although a strict definition of ROI focuses on outcome measures of financial returns in relation to costs incurred, Watson (2011) found that ROI is used in a ‘looser’ sense in the UK PR industry and is often interpreted as AVE measurements.

Macnamara (2014) offered an overview of the variations in ROI methods and concluded that the lack of consensus is unlikely to move practice forward. Moreover, the focus on monetary value is often problematic in PR where 'the complexity of communication processes and their role in business interactions means it is not possible to calculate Return on Investment in financial terms' (Watson and Zerfass 2011: 11). Although practitioners believe that PR creates and maintains value, its specific worth remains difficult to monetise (Grunig 2006). Indeed, the 2015 European Communication Monitor (Zerfass et al. 2015) found that communication professionals saw their contribution to organisation/client goals as building immaterial assets (brands, reputation, culture) and facilitating business processes (influencing customer preferences, motivating employees and generating public attention). Additionally, the majority (80 per cent) argued for the relevance of communication by pointing to the positive effects of good reputation, organisational culture and brands. In contrast, only one-third of respondents measured their impact on intangible/tangible resources, revealing a discrepancy between how they articulated the value of PR and what they actually measured. Nevertheless, not giving up on the application of ROI in PR, several authors (e.g. Likely 2012; Watson and Likely 2013; Macnamara 2014) have pointed to the availability of a number of alternative processes, such as 'Benefit Cost Ratio', logic models and 'communication performance management', that could be utilised in PR.

An alternative perspective, demonstrating value created in non-financial forms, is that of 'Social Return on Investment' (SROI). SROI is a method of understanding and measuring a broader concept of value that considers the 'blended' nature of economic, social and environmental outcomes (Bhatt and Hebb 2013). Increasingly prominent in the non-profit literature, SROI could be appropriated in the PR context to help express 'the value that public relations creates for organisations through building social capital; managing key relationships and realising organisational advantage' (Watson 2008: 115). The most common SROI approach follows a logic model (Onyx 2014), recommending that a 'theory of change' should be specified for each activity, explaining how the programme facilitates the achievement of its objectives/mission in terms of inputs (resources); activities (programme implementation); outputs (countable products of the programme); outcomes (benefits/effects); and impacts (significant, usually long-term changes in effects in the wider environment). Financial proxies are then used to assign values to non-monetary outcomes. A key feature of the SROI process is the identification and involvement of stakeholders, both to explore the full extent of the impact of the activity under scrutiny and to estimate the financial proxies (Ecorys 2014; SROI Network 2013). As such, it is an example of a participatory evaluation approach (Suarez-Balcazar and Harper 2003). However, it is recognised that SROI can be difficult and subjective (Bhatt and Hebb 2013). Macnamara (2014) observed that there has been surprisingly little discussion of SROI in the PR literature despite the opportunities it offers for facilitating a sociocultural research perspective for PR. As such, it could provide a new avenue for PR evaluation, particularly as the logic model bears striking similarity to the nomenclature and concepts of the PR evaluation models. The case study at the end of the chapter provides an overview of a SROI evaluation of a community-based domestic violence intervention designed to raise awareness of the support available to victims.
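To illustrate how the final SROI calculation works in principle, the sketch below follows the logic-model steps with entirely invented outcomes, quantities and financial proxies. A real SROI study would derive its proxies with stakeholders and would typically also adjust for attribution, displacement and drop-off, which are omitted here for brevity.

```python
# Hypothetical SROI calculation. All quantities and financial proxies
# are invented for illustration only.

investment = 25_000.0  # value of programme inputs (staff time, materials)

# Outcome -> (quantity achieved, financial proxy per unit, GBP)
outcomes = {
    "staff absence days avoided": (120, 110.0),
    "crisis-support referrals avoided": (35, 600.0),
    "volunteer hours mobilised": (900, 12.0),
}

gross_value = sum(qty * proxy for qty, proxy in outcomes.values())

# Deadweight: the share of each outcome that would have occurred anyway.
deadweight = 0.25
net_value = gross_value * (1 - deadweight)

sroi_ratio = net_value / investment
print(f"Gross social value: GBP {gross_value:,.0f}")  # 45,000
print(f"Net social value:   GBP {net_value:,.0f}")    # 33,750
print(f"SROI ratio: {sroi_ratio:.2f} : 1")            # 1.35 : 1
```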

The final section of this chapter now focuses on a developing area of research and practice – that of online evaluation. It is not possible to provide a comprehensive discussion of this topic within one chapter, however, an overview of the main issues is presented.

Online Evaluation

European communication professionals have identified 'coping with the digital evolution and the social web' as one of the most important challenges of the next three years (Zerfass et al. 2015: 40). As with all aspects of PR, there is a need to monitor, measure and evaluate digital and social media communication (Phillips and Young 2009). However, online monitoring and evaluation has been described as a 'black hole' in evaluation (Watson and Noble 2007: 208). For example, DiStaso et al. (2011) interviewed PR/communication executives and discovered that the practitioners had more questions than answers about social media measurement. Wright and Hinson's (2013) longitudinal analysis of PR practice found that little progress had been made over the previous seven years in measuring and evaluating social or emerging media: while 61 per cent of their participants conducted simple output measurement, only 22 per cent measured message impact. Thus, Jeffrey (2013: 2) cautioned that 'the majority of PR practitioners (and marketers) have no real idea of what is working and what is not in their social and digital programs'.

In an attempt to tackle this issue, AMEC has adapted the Valid Metrics Framework outlined earlier for the social media context to reflect differences between social and traditional media. Realising that the marketing sales funnel may not be the most appropriate process in social media situations, the AMEC Social Media Valid Framework (Bartholomew 2013) replaced it with a model of exposure, engagement, influence, impact and advocacy. Additionally, the ‘intermediary phase’ was removed since social media is often characterised by direct interaction. Two alternative perspectives were proposed. The first focuses on channel, business and programme metrics, while the second revolves around paid, owned and earned media. The Framework can be populated with metrics emerging from the Social Media Measurement Standards Conclave. AMEC advise that social media outcomes and goals should be defined in advance and that while quantitative data is easy to measure, an increasing emphasis is needed on quality and context. Content sourcing and transparency are also important considerations. Although acknowledging that no model fits all situations (CIPR 2011), the adapted framework is useful to help plan evaluations or to identify gaps in current approaches. An illustration of the framework applied by the Department for Environment, Food and Rural Affairs’ ‘Chip My Dog’ campaign is available via AMEC’s website (http://amecorg.com/social-media-measurement).

Lindenmann (2003) suggested that cyberspace analysis of PR outputs should comprise an examination of a) website traffic patterns and b) online discussions. Duncan (2010) outlined how web analytics tools can be used to identify where website visitors are coming from and how they interact with an organisation’s website. More advanced statistical techniques that draw upon demographic and message content of referring sources can also provide further insights into the messages that are most effective at driving traffic and how messages and specific outcomes could be matched to optimal effect. Some of the advanced analyses require the expertise of specialist firms, but basic types of analysis can be accessed by practitioners from readily available free sources. For example, the CIPR has published a useful guide on Google Analytics for PR (Smith 2014).
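As a minimal sketch of the kind of referrer analysis Duncan describes, the fragment below aggregates a hypothetical CSV export from a web analytics tool. The filename and column names are invented for illustration; tools such as Google Analytics provide this reporting directly, and richer exports would carry the demographic and content fields mentioned above.

```python
# Minimal referrer analysis over a hypothetical analytics export.
# 'analytics_export.csv' and its columns (referrer, landing_page,
# sessions) are invented for illustration.
import csv
from collections import defaultdict

sessions_by_referrer = defaultdict(int)
sessions_by_page = defaultdict(int)

with open("analytics_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        n = int(row["sessions"])
        sessions_by_referrer[row["referrer"]] += n
        sessions_by_page[row["landing_page"]] += n

# Which referring sources drive the most traffic to campaign pages?
top = sorted(sessions_by_referrer.items(), key=lambda kv: kv[1],
             reverse=True)[:5]
for referrer, n in top:
    print(f"{referrer}: {n} sessions")
```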

A number of other approaches to evaluating websites have also been developed. For instance, Hallahan (2001) advocated that usability research should be conducted in order to understand how users navigate an organisation’s website. Similarly, Ingenhoff and Koelling (2009) carried out a content analysis of websites using a framework created by Taylor et al. (2001) where they examined the five principles of: ease of use; usefulness of information; conservation of visitors; generation of return visits; and dialogic loop.

In terms of examining online discussions, Lindenmann (2003) advised that the criteria applied in analysing offline editorial could also be used with internet postings. Paine (2007) detailed a number of specific measurement techniques for analysing blogs at output, outtake and outcome levels. She also pointed out that as well as quantitative metrics, it was also important to examine the quality of the content. Blogs can also be analysed from the perspective of their ability to build and maintain relationships online. Using Hon and Grunig’s (1999) PR Relationship Outcome Scale, Kelleher and Miller (2006) found that blogs were perceived as conveying a ‘conversational human voice’ and as such, correlated positively with the key relationship outcomes of trust, satisfaction, control mutuality and commitment. Similarly, Saffer et al. (2013) found that greater levels of organisational Twitter interactivity positively affected the quality of the relationship. Moreover, the CIPR’s (2010a: 30) Toolkit proposed that the measurement of social media should focus on ‘identifying what conversations the organisation should participate in (or initiate) and understanding how all of these interactions and mentions (the “outputs”) impact the organisation. In other words, what impact (outcomes) do these outputs have on the organisation’s goals?’

From a broader perspective, Jeffrey (2013: 4) detailed an eight-step social media measurement process of: identifying organisational/departmental goals; researching stakeholders; setting objectives for each stakeholder group; determining social media Key Performance Indicators (KPIs); choosing tools and benchmarking (using the AMEC Matrix); analysing results and comparing them to costs; presenting to management; and measuring continuously and improving performance. In this regard, the underlying principles mirror those that apply in offline PR. However, perhaps one significant difference is the pace of development in the social media world (DiStaso et al. 2011). As Jeffrey (2013: 17) highlighted, 'social media analysis methods change at light speed so be ever-vigilant to seek out new thinking, standards and resources.'
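By way of a worked example of the KPI and cost-comparison steps in such a process, the sketch below computes an engagement rate and a cost per engagement from invented post-level figures. These particular KPI definitions are common practitioner choices rather than ones prescribed by Jeffrey.

```python
# Hypothetical social media KPI calculation. All figures are invented;
# real data would come from platform analytics exports.

posts = [
    # (impressions, likes, comments, shares)
    (12_000, 310, 25, 40),
    (8_500, 190, 12, 22),
    (15_200, 480, 60, 95),
]
campaign_cost = 2_400.0  # staff time and content production (GBP)

total_engagements = sum(l + c + s for _, l, c, s in posts)
total_impressions = sum(imp for imp, _, _, _ in posts)

engagement_rate = total_engagements / total_impressions
cost_per_engagement = campaign_cost / total_engagements

print(f"Engagement rate: {engagement_rate:.2%}")              # ~3.46%
print(f"Cost per engagement: GBP {cost_per_engagement:.2f}")  # ~1.94
```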

Conclusion

Gregory and White (2008: 307) likened the evaluation debate to:

a car, stuck in mud or snow, trying to move forward. The engine revs, the wheels spin, exhaust fumes and friction smoke clouds the scene, but – in the end – the car remains stuck. So, too, the evaluation debate: a great deal of discussion but no forward movement.

As outlined in this chapter, evaluation of PR has a strong theoretical underpinning; there is a plethora of frameworks offering advice as to how PR can be measured and evaluated, a vast array of methods and tools exist for doing so and a growing number of industry initiatives are being developed. Despite this, the uptake of systematic and formal evaluation among practitioners has remained disappointingly low. It is important to garner a greater understanding of why the implementation of evaluation appears to be so problematic in practice and to devise strategies to overcome these barriers. In particular, Macnamara (2014: 21) pointed to a ‘generalised gap between theory and practice in PR’, suggesting that greater participation, integration and collaboration between industry and academia may be one route for evaluation to regain some traction and continue on its forward journey.

The following case study outlines a SROI evaluation of a community-led long-term awareness and educational programme that aims to highlight the wide range of services available for victims of domestic abuse through workplace engagement. It is not a traditional media-led organisational approach but incorporates elements of social responsibility and community relations and thus is useful as a SROI illustration.

Case Study

Workplace Charter: ‘Pathways for Participation’

It is estimated that one in five women and one in nine men will experience domestic violence in their lifetime (NI Crime Survey 2003/4). The Workplace Charter on Domestic Violence is a long-term awareness and educational programme developed by Onus (NI) Ltd. in order to provide recognition for organisations and communities that support individuals suffering from domestic violence. Onus believe that an effective response to the issue starts with all employers across the public, business, voluntary and community sectors recognising that domestic violence is a problem that impacts on all of us as a society, and playing their part in supporting victims and sending a clear message to perpetrators that domestic violence is intolerable. Colette Stewart, Onus Business Manager, explains the importance of the initiative:

For the majority of people, the place they feel most safe is their home. But for an estimated 1 in 5 women and 1 in 9 men living with domestic abuse, home is the least safe place for them. Domestic abuse is very damaging, and it impacts well beyond the home. We need to ensure that our response to victims of domestic abuse is widespread and easily accessed. That’s why we asked workplaces to help us in getting the message out that support is available for victims of domestic abuse.

The Workplace Charter initiative was designed to provide support to organisations and communities to help them understand the context and impact of domestic violence within their environment and how to respond effectively to disclosures. It offers various 'Pathways for Participation' (see Figure 10.4). All participating organisations support the Safe Place Campaign Pledge: 'never to commit, condone or stay silent about domestic violence', and display the Safe Place logo to indicate that information on services for victims of domestic violence is available on the premises. 'Safe Place' organisations commit to raising awareness of the range of local support services and distributing public relations collateral, including window stickers, posters, white ribbons, business cards, lip balms, etc. 'Safe Employer' organisations agree to support any employee affected by domestic violence through an agreed Domestic Violence Workplace Strategy. The 'Safe Town' pathway recognises locations that undertake a united, multi-partner approach to supporting victims of domestic violence. The Onus outreach initiative is currently supported by over 600 organisations including the PSNI, Northern Health and Social Care Trust, Newtownabbey, Antrim and Ballymena Borough Councils, Northern Regional College, churches, libraries, constituency offices, solicitors, shops, gyms, dentists, florists, hairdressers and many others. New 'Pathways' are also being developed. Onus provide focused training and resources to the wide range of organisations participating in the scheme and host an Annual Awards Ceremony to acknowledge the work of employers in demonstrating their commitment to enabling victims of domestic abuse to access support in the workplace.

In February 2015, Onus commissioned an independent evaluation1 of the 'Pathways for Participation' programme, adopting a SROI approach to understand and measure the value created as a result of the initiative for both the organisations involved and wider society. The project followed the logic model of evaluation (Nicholls et al. 2012) of: establishing scope and identifying stakeholders; mapping outcomes; establishing impact; calculating the SROI; and reporting, using and embedding. An initial scoping meeting was held with Onus staff in order to explore the reasons for undertaking the evaluation and to gain a deeper understanding of the programme. A desk review of Onus' materials provided further context for the initiative. Onus had a variety of reasons for undertaking the evaluation and, although they continually monitor their activities, they had not previously had the capacity to carry out a formal and independent evaluation of their services. The programme had been running for five years and they felt it was an important juncture to evidence its impact and value to current and future funders. Onus believes that domestic violence thrives on being hidden. The aim of the 'Pathways for Participation' initiative is to bring it into the open, to challenge society's view of domestic abuse and to remove barriers to disclosure such as stigma, fear and judgement. The key objective of the SROI was to create a framework that would allow Onus to further communicate their value to beneficiaries and identify areas for further improvement as the programme continued to evolve and develop. In order to provide a manageable timeframe for the project, the SROI evaluation focused on 'Pathways for Participation' activity in the previous financial year (1 April 2014 to 31 March 2015).

Stakeholders were identified as employers, victims, local councils, human resources departments, partner organisations (e.g. Police Service of Northern Ireland, Women's Aid), funders, schools and health trusts. Additional primary qualitative data was gathered through workshops with Onus staff and telephone interviews with representatives from stakeholder groups and organisations participating in the programme. Internal secondary data was also obtained from Onus' records. Following Onyx's (2014) definitions, inputs were identified as staff time, salaries, office utilities, marketing literature and expenses. Outputs included the number of organisations participating in the programme, the number of delegates trained, the number of expressions of interest, the volume of 'Safe Place' PR collateral and the awards ceremony. Outcomes and impact focused on three main areas. First, outcomes for the participating organisations included a healthier workplace, improved staff awareness, networking opportunities and positive reputation. In terms of the state, outcomes were identified as increased reporting of domestic abuse incidents. Although this may involve additional costs for the state in the criminal justice process, these could be offset by reduced long-term health costs. Finally, the programme also facilitated impact on society by contributing to building safer communities and reducing social stigma. Financial proxies were assigned for each outcome/impact.

Figure 10.4 Pathways for participation

Questions for Discussion

  • 1 Is evaluation essential in helping PR to gain credibility as a profession?
  • 2 Could ‘goal-free’ evaluation be applied in the PR context?
  • 3 How can senior management/clients be persuaded to invest in evaluation?
  • 4 Will AVE ever be replaced as an evaluation method?
  • 5 How could ‘Social Return on Investment’ be applied in the PR context?
  • 6 What is the main reason for low levels of impact evaluation in PR?
  • 7 What can be done to encourage more evaluation among practitioners?
  • 8 To what extent do the Barcelona Principles represent best practice in PR evaluation?
  • 9 Does online evaluation differ from evaluating offline PR?
  • 10 What financial proxies could be applied to the outcomes and impact identified in the ‘Workplace Charter’ case study?

Note

1 The evaluation was carried out by Dr Mairead McCoy and Anne Durkan, Ulster University and funded by a Social Enterprise Development Award grant from Santander.

Further Reading

CIPR (2011) Research, Planning and Measurement Toolkit (3rd edn), March. Available from: www.cipr.co.uk/sites/default/files/Measurement%20March%202011.pdf (accessed 30 June 2015).

Gregory, A. (2001) ‘Public relations and evaluation: Does the reality match the rhetoric?’ Journal of Marketing Communications, 7(3): 171–189.

Lindenmann, W. K. (2003) Guidelines for Measuring the Effectiveness of PR Programs and Activities. Available from: www.instituteforpr.org/wp-content/uploads/2002_MeasuringPrograms.pdf (accessed 30 June 2015).

Macnamara, J. (2005) Jim Macnamara’s Public Relations Handbook, Sydney: Archipelago Press.

Watson, T. and Noble, P. (2007) Evaluating Public Relations (2nd edn), London: Kogan Page.
