Chapter 3

Field, Analyze and Report, and Evaluate Listening Research

Now that we know how to set up listening initiatives (Chapter 1) and choose from among the many and various solution options (Chapter 2), it's time to run the listening initiative, learn, share, and evaluate the results.

Field the Research

Before putting the research into “production,” employ pretest procedures to make sure that the data is top quality and that essential procedures are working properly. The following four tactics will help verify data quality so that you derive the highest-value results and insights:

Ensure Data Quality: Four Tactics

  • Test connections to listening sources you want to harvest to make certain they are working: Sometimes sources will change URLs, site organization, access schemes, or even cease operations altogether. Any of these can prevent you from making successful connections. Check periodically if you're running your listening effort over a period of time.
  • Test that content is being harvested correctly: Test the queries that locate and retrieve content from sites to ensure that they return what you want; and keep iterating and testing until the results meet your needs. Writing queries is an art, one that requires knowledge of the business, insights into customers, and some technical skill in search logic. If you are listening in a regulated industry like pharma or financial services, consult with legal and compliance colleagues to ascertain that your listening harvesting conforms to requirements.
  • Filter spam out and deal with vernacular: Spam is a common nuisance on social media sites, and much of it is auto-generated. This extraneous material hurts quality and gives leading vendors a black eye. Large companies that purchased listening services expressed disappointment in the levels of spam (and irrelevant information) in the datasets their vendors provided (Forrester 2010).

    Texting and instant messaging have brought about all sorts of language innovation; you've undoubtedly seen “4” used for the word “for” or “cul8r” for “see you later.” Solutions for handling these types of substitutions should be used, such as dictionaries for slang, new words, text abbreviations, or common misspellings.

  • Determine that basic processing routines perform properly and accurately: When software reads posts, tweets, or reviews, it breaks them down into elements and assigns them to content categories. These categories can be such things as brand names, people names, or geographies; product types like “hair care”; topics such as customer service; or attributes like color. Categorization errors lower quality because the data inside the categories is not the same. When software classifies both “Earl Grey tea” and “Charlie Brown” as colors instead of the correct classes of “beverage” and “name,” respectively, computations will be thrown off and the insight will be invalid (Pettit 2010). Test and modify categorization until the content is classified satisfactorily; and don't assume that errors will cancel themselves out. If you are using workflow and collaboration functions, test these, too.
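The normalization and categorization checks described above can be sketched in a few lines. This is a rough illustration, not tied to any particular listening platform; the abbreviation dictionary, category labels, and function names are invented for the example.

```python
# Hypothetical sketch: normalizing text-speak and spot-checking categorization.
# The dictionary entries and category labels are invented for illustration.

ABBREVIATIONS = {
    "4": "for",
    "cul8r": "see you later",
    "gr8": "great",
}

def normalize(text: str) -> str:
    """Replace known text abbreviations with their full-word expansions."""
    return " ".join(ABBREVIATIONS.get(token, token) for token in text.lower().split())

def categorization_accuracy(machine_labels, hand_labels):
    """Share of posts where machine categorization matches a hand-coded sample."""
    matches = sum(1 for m, h in zip(machine_labels, hand_labels) if m == h)
    return matches / len(hand_labels)

# Example: "Earl Grey tea" should be classed as a beverage, not a color.
machine = ["color", "name", "beverage"]
hand = ["beverage", "name", "beverage"]
print(normalize("thanks 4 the tip, cul8r"))    # → "thanks for the tip, see you later"
print(categorization_accuracy(machine, hand))  # 2 of 3 match
```

In practice you would iterate: inspect the mismatches, adjust the dictionaries or category rules, and rerun the check until the accuracy on the hand-coded sample is acceptable.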

Go Live

Once you've addressed pretests and quality concerns, it's time to throw the switch and take an active stance toward listening. Ensure quality by spot-checking harvesting and processing, and then make necessary adjustments. The social world is dynamic, so it's critical to keep in step to obtain the highest-quality results. Keep in mind that going live can also mean “backcasting,” or conducting historical analysis on a content collection.
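Spot-checking can be as simple as drawing a random sample of harvested items and flagging obvious quality problems for human review. The sketch below is a minimal, assumed example; the specific checks (empty text, exact duplicates) stand in for whatever quality criteria your effort defines.

```python
import random

def spot_check(posts, sample_size=5, seed=42):
    """Draw a random sample of harvested posts and flag basic quality problems."""
    rng = random.Random(seed)
    sample = rng.sample(posts, min(sample_size, len(posts)))
    seen = set()
    flagged = []
    for post in sample:
        text = post.strip()
        if not text:
            flagged.append((post, "empty"))
        elif text in seen:
            flagged.append((post, "duplicate"))
        seen.add(text)
    return flagged
```

Run a check like this on a recurring schedule; because the social world is dynamic, a harvest that was clean last month may be full of duplicates or dead content today.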

Analyze and Report

It's best to have a plan when it comes to analyzing, and reporting should land with the impact of an event. In this section, we recommend an analytic strategy and raise two related concerns: using multiple data sources to confirm findings, and increasing confidence in sentiment scoring. Following those discussions, we take on the topic of reporting.

Analytic Strategy

It's not uncommon for people to get a little anxious shortly after a listening initiative starts generating data: “There's so much data: What does it mean? And what do I do with it?” That feeling can apply whether you're getting machine-processed streams of Web data or dealing with reams of qualitative verbatim statements from communities.

Developing and following a plan to guide the analysis not only alleviates the anxiety; more importantly, it sets the path to developing insights. The pragmatic approach to analyzing listening data is to consider the nature of the data and connect it to the business goals the listening effort supports. Because listening data comes from conversations, a question-based analytic approach makes sense; then the data can provide answers. Questions provide a context that directs data interpretation. From a reporting standpoint, the results are presented in another important context, that of the decisions your organization needs to make to drive business forward.

The standard “4Ps” marketing framework—product, price, promotion, and place—will serve to illustrate. For each, we note the type of research solution used to show some of the ways they are applied.

  • What improvements would people like to see in our product? NASCAR drew upon its private community to learn what would make its racing events more exciting to fans. Through its listening, the organization made two improvements, including changes to its racing rules (Chapter 13).

    Research technique: private community

  • How do people feel about the price? Gillette looked into this question through traditional survey and listening research among lapsed users for its Fusion razor (Chapter 12). The survey findings showed that a “too high” price drove shavers to drop the Fusion, and the listening research put that into context and explained why: The price was too high for the expected quality of the shave. The shave was better, but not enough to justify the price premium. That more nuanced interpretation led Gillette to better align price with shave quality and then communicate that to shavers.

    Research technique: text analytics

  • How should we position our product in advertising? Listening research reveals the “people's positioning” that can be used to evaluate current positioning strategy or suggest changes that will make it resonate more with customers and prospects. Although Tylenol and Advil are popular analgesics, results showed that people positioned each product differently in their minds. Tylenol was seen as safer for children, while Advil was seen as an effective pain reliever (see Chapter 18).

    Research technique: text analytics (content analysis)

  • Where should our product be available to help increase sales? Looking to boost sales for its Tassimo pod coffee maker, Kraft sought to find where a competitive advantage would lie. After sizing up Tassimo in comparison with category leader Keurig, Kraft discovered that people considered the two brands to be parity products: They didn't see differences in features or likes or dislikes. Still looking for an edge, Kraft turned to search, which provided the “aha” moment: Tassimo interest concentrated on the East Coast, with hot spots in the Middle Atlantic and New England states. Kraft had its insight and used it to recommend improvements to distribution and retail strategy.

    Research technique: social media monitoring and search analysis

As you see, questions about listening data can be asked across the marketing spectrum and, by extension, within any framework that you choose. For example, you might have a framework for comparing competitor strategy, or one for innovation; either one will work. The trick is to have one and repurpose it to frame research questions for listening.

Bring as many different voices to bear as appropriate when analyzing. Conversations are always open to interpretation, and a variety of perspectives can strengthen both insight and its eventual contribution.

Triangulate to Increase Confidence

Triangulation means using multiple and different sources to confirm a finding or insight. In effect, it is a way of “building the case.” This method is vital for proper listening, for two reasons. First, the process of listening is still unfamiliar; since many people don't fully trust it yet, it helps to be able to show supporting evidence. Second, no single data source contains the answer, and any single data source is limited in its perspective (Grimes 2008a).

Let's take a case of a quick-service restaurant to see triangulation in action. Looking to reach a younger audience, a popular quick-service restaurant chain decided to enlarge its menu and develop new advertising; it offered new items and launched a humorous campaign. Franchisees wanted to know: Would the campaign raise awareness of the new products and increase same-store sales? The brand needed to know: Should we continue to invest in this effort?

The brand engaged full-service social media listening vendor J.D. Power and Associates Web Intelligence, which monitored the blogosphere for discussions about the brand and the new items for the first three months of the new advertising run.

People's online conversations are influenced by offline conversations and advertising. J.D. Power's analysis looked at the relationships between the ad spend, the media plan and commercial schedule, and the buzz. It determined that the new ad campaign “had an immediate and long-lasting impact on total blog postings about the brand, as well as dramatically increasing the brand's percentage of positive versus negative postings. Analyzing verbatims, bloggers proactively played back the storyline of the various commercials, accurately identifying the new menu items and responding positively to the humor used in the campaign.” Same-store sales increased shortly after the ads hit. The chain reported that the franchisees were satisfied with the campaign and performance of new items; based on these results, the brand invested more money in the national media to sustain the momentum (J.D. Power and Associates 2009).

Increasing Confidence in Sentiment

Sentiment is a computation about people's feelings—positive, negative, or neutral—toward people, products, companies, and topics discussed in social media conversations. The accepted wisdom is that positive sentiment has positive effects and negative sentiment has negative impacts. While that's an oversimplification—there are times when negative sentiment can have positive impacts (Berger et al. 2009)—we are concerned with gaining confidence that the scoring is accurate: that positive is positive and negative is negative. We face a conundrum, however: Sentiment scoring is important yet imperfect. As a practical matter, machines need to score sentiment because the volume of content is so great. Listening efforts do not deal with 500 posts; they deal with thousands, tens of thousands, or hundreds of thousands. It is, therefore, out of the question to have humans analyze the complete data collection.

Here are five tactics for improving confidence in sentiment scoring:

  • Clean the data to a reasonable extent to make sure that categorization is accurate.
  • Use machine-scored sentiment to identify trends in a data collection.
  • Use human analysis of sentiment when context and pinpoint accuracy for individual posts are required.
  • If the listening solution used allows for its sentiment engine to be trained, use a hybrid approach: take two passes. First use the machine, and then manually code a sample of the machine-scored text for confirmation, augmentation, and correction. As accuracy improves, the need for the human pass will fade (Grimes 2008b).
  • Consider using additional information about sentiment, such as star ratings or like/dislike ratios.
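The hybrid two-pass idea can be sketched as follows. This is a minimal illustration, assuming a toy word-list sentiment scorer; real sentiment engines are far more sophisticated, and the word lists and sample posts here are invented for the example.

```python
# Hypothetical sketch of the two-pass hybrid: a toy lexicon stands in for the
# machine's sentiment engine; the word lists are invented for illustration.
POSITIVE = {"love", "great", "excellent"}
NEGATIVE = {"hate", "awful", "disappointing"}

def machine_score(post: str) -> str:
    """Pass 1: machine-score a post as positive, negative, or neutral."""
    words = set(post.lower().split())
    balance = len(words & POSITIVE) - len(words & NEGATIVE)
    if balance > 0:
        return "positive"
    if balance < 0:
        return "negative"
    return "neutral"

def agreement_rate(posts, human_labels):
    """Pass 2: compare machine scores against a hand-coded sample."""
    machine_labels = [machine_score(p) for p in posts]
    hits = sum(1 for m, h in zip(machine_labels, human_labels) if m == h)
    return hits / len(human_labels)

sample = ["love this razor", "awful shave", "it is a razor"]
print(agreement_rate(sample, ["positive", "negative", "neutral"]))  # → 1.0
```

Disagreements surfaced by the second pass become training or correction input; as the agreement rate rises, the hand-coded sample can shrink.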

Not merely knowing sentiment but having confidence in it provides marketers and advertisers with better guidance for engaging in conversation with customers and prospects; recalibrating the brand to consumer mood; or taking marketing or public relations actions that aim to maximize brand advantages and minimize risk or harm. That said, the guidance is only as good as the quality of the sentiment scoring.

Reporting

Reporting takes two forms: real-time dashboards, and presenting results and insights to colleagues through presentations or conventional documents with text and graphics. Real-time dashboards are better suited to listening efforts where monitoring and responding are central to the effort. Gatorade offers a stellar example (see Chapter 12), and such dashboards also provide statistical reports on activity.

Though some organizations report listening findings and insights using conventional formats, those reports are usually boring: they don't capture or reflect the tenor or excitement of the conversations from which the data emerges. A number of companies recognize this problem and see solving it as necessary if listening research is to make an impact on the company as a whole, not just on the insights department. As Coke's Stan Sthanunathan writes in Chapter 21, research must be “inspiring”; if it isn't, it's failing.

Companies are training staff in the art of storytelling to help make this trend more prevalent. Stories provide context, connection among ideas through narrative, and—most importantly—an emotional connection with the audience. The best stories meet the needs and interests of the people listening. When you tell stories, you must first and foremost understand what's important to your audience. If you're talking to product development or the CFO, craft your tale to leave them with the information most relevant to their purposes. Avoid the trap of preparing one standard story about the research that you tell to everybody; instead, tailor the information to the people to whom you're presenting it.

Evaluate the Listening Initiative

Evaluate listening initiatives according to their capability to meet the key performance indicators (KPIs) cited in Chapter 2. The point we stressed earlier is that listening's contribution comes from the application of findings and insights to other business processes, and that listening should be judged on the value of its contribution to them. That is what giants like Comcast, JetBlue, and AT&T, and smaller companies like True Citrus, do (see Chapters 14 and 8). Don't just grab metrics from popular lists circulating on the Web; develop ones that have meaning in your business and for your business.

Summary

The contribution listening makes to business advantage comes from doing solid research. This chapter builds on Chapters 1 and 2, stressing the importance of data quality, ongoing checking of data and procedures, and instilling confidence in the analysis. More engaging reporting—that which conveys the dynamics of conversations and makes emotional connections with colleagues—increases the value and contribution of listening efforts. Last, listening needs to be evaluated in the terms of its contribution to the business processes it supports.
