image

Chapter 5

The single biggest problem in communication is the illusion that it has taken place.¹

Strategies and tools to effectively communicate research results by using reports, presentations, and more cool stuff

This chapter discusses strategies and tactics for better communication of research results to stakeholders. The chapter goes deep into writing reports and discusses other forms of communicating results including presentations, videos, posters, and so on. The chapter ends with a discussion about soft communication skills, such as ways to bring bad news to your team.

image image Arnie Lund

Ok, the deck is done. It’s beautiful. An insight on every page. It had better kick influence butt!

Introduction

Reports are important. I don’t like them.

Deep down, I understand and appreciate the importance of writing research reports. Written reports add value and have many benefits. They help crystallize the most important things that come up during a study. When you write a report, it makes you think about priorities. It helps you focus on the important things rather than on what very articulate, vocal participants say. When you write a report, you go through a thought process that prepares you for your pitch to stakeholders. Here are some of the benefits of writing a report:

They help create a presentation. After creating a report, it is much easier to come up with a presentation as opposed to starting a presentation from scratch. You just pick the really important things and put them in a slide deck in a coherent, appealing way.

They are useful in the future. Your reports might become useful for others as a resource. Imagine a new product manager joining the team, or a new designer, or the researcher who will take your place one day. Having a written report about past studies is extremely helpful when learning about a product and picking up on a team’s knowledge.

You can refer to them professionally. When you are in a meeting, or when you are presenting, or even when you write a different report, it looks professional to refer to a written report. It looks more professional compared to saying something such as, “Remember the study we ran about a year ago? I think we found that people had a problem with the registration work flow.”

They contain lots of data. A report is a great format for including large amounts of data, should that be appropriate. For example, a report can always have an appendix that includes the entire study script or tables with raw data. Personally, I prefer to write short reports in most cases, but I do realize that in some situations, it’s necessary to include lengthy materials.

They are expected. Like it or not, reports are probably the most popular way to share UX study results with colleagues and clients. This is what people expect, and when you deviate from this path without setting expectations, you will eventually be asked to prepare a report.

On the other hand, the fact that everybody does something doesn't mean it's the right or most effective thing to do. I always try to think of more compelling ways to communicate findings than a written report. Here is why:

Writing takes time. Although I have seen people who meet the definition of report-writing machines, I cannot complete a report in 24 hours. I need time. After the last user has left the building, I need a couple of days to recover by doing something else. Only then do I begin writing the report.

Nobody reads reports. This is a complaint I hear from many UX practitioners who conduct research. I am extremely familiar with it. I consider myself lucky if three people read any of my reports cover to cover. Most people don’t read them; some glance over them quickly. I accept this as a fact of life. I can’t make people read my reports.

Reports are static, passive, and silent. Reports are not a format that allows for dynamic, active discussion. They are pieces of paper that someone either reads or doesn't. They might encourage thinking, but they are not a medium that allows for a conversation about their content.

Reports have a short shelf life. What happens to a report after it has been read? If the report is a hard copy, it is either shredded or stored in a drawer or cabinet and then thrown away. Its contents are forgotten very quickly. If the report is digital, it is either sent as an email attachment or stored in a library on an intranet site. This fate is a bit better than that of the hard-copy report, in that a digital report can be searched and found, but it is still not ideal.

Readers get lost in the details. It’s harder to write short reports than long ones. When you don’t limit yourself, you pretty much dump everything you have in the report. You need to make a special effort to come up with a short report that includes only the necessary, important things. Thus, in most cases, people write long reports with lots of details that make it hard for stakeholders to read and sift through.

Reports aren’t sexy. Face it, nobody comes to work saying, “Woo hoo, what a great day I am going to have today! I am going to read a report!” (Or do you?)

image Watch my interview with Jared Spool, CEO and Founding Principal at UIE (User Interface Engineering). Jared argues that teams don’t need your deliverables: they develop their own. He urges you to ask your stakeholders what they need from you rather than automatically writing a report. Use QR code 116 to access the video, a quick summary of the interview, and Jared’s biography.

I realize that many people want to stop writing reports but just can’t, for different legitimate reasons. This chapter highlights communication techniques for getting stakeholder buy-in. It provides tips and techniques for improving your reporting and presentation skills, and discusses alternative communication tools and soft skills.

image Watch my interview with Leah Buley, Staff Interaction Designer at Intuit (with Adaptive Path at the time of the interview). Leah says that she has not written a report in a long time. Instead, she produces short video clips and runs end-of-research generative workshops. Use QR code 117 to access the video, a quick summary of the interview, and Leah’s biography.

Reports

Avoid the report-which-is-actually-a-presentation

I have seen this many times and I cannot figure it out. Consultants who have many clients have to deliver reports in various ways and at various times during a project, depending on the client's wishes. Some clients request a final report in PowerPoint. I have seen this in my career as well. Other researchers I have worked with sometimes choose to prepare PowerPoint reports with slides that contain a lot of information in bullet points. These slides are not meant to be used in actual presentations in front of other people. Rather, they are meant to be flipped through privately.

To me – and you might disagree – a report is something you read and a presentation is something that accompanies a speaker. The format of a presentation does not work well as a report. It doesn’t have a table of contents, it makes you work twice as hard on formatting, and most important, it makes people who actually attend a presentation suffer. Human beings are not good at reading and listening at the same time (Csikszentmihalyi 2004). They first focus on reading from the slide. When they are done reading, the speaker is usually halfway through his or her slide, and then people get bored because they have already read what the speaker is talking about (even the progressive disclosure feature doesn’t make things better). At that point, the speaker has lost the audience. Exactly what you want when you talk about research results and action items, right?

When people hear me make these claims, some respond, “Well then, what do you provide to people who can’t attend the presentation?” My answer is simple: a report. If you are in a situation where a report and a presentation are required, don’t cut corners and create a report-which-is-actually-a-presentation (or a presentation-which-is-actually-a-report, I’m not really sure). Write a report, then prepare a presentation. They are very different ways of communicating research results.

Having said that, and even though I don't like writing reports, they are an important aspect of what we do. So let's discuss several report strategies and tactics in detail.

Share key findings before your report is ready

On one hand, stakeholders need quick results. On the other hand, it usually takes more than 24 hours to complete a study analysis and report. Let's imagine a product manager who needs to make a decision based on the results of a study you are conducting. You planned the study together; you agreed on the goals, questions, and participants; and the product manager even observed a couple of participant sessions. The product manager has probably decided what should be done even before the study is completed. Sometimes she waits for a report. Other times, she decides not to wait for the report and makes the decision without research input. If that decision contradicts the conclusions of the study you have just completed, you will probably be frustrated. You might even confront the product manager to try to persuade her to wait or to change her mind.

There are two effective, appreciated things you can do to avoid these situations altogether. I highly recommend you use both:

Agree on a deadline in advance. In Chapter 2, I mentioned several questions that you should ask your stakeholders when you kick off a research project. One of them was, “When do you need the results?” Assuming that you got an answer or negotiated one, remind your product manager of the agreed-upon deadline throughout the project (during planning, execution, and analysis).

Share findings quickly after data collection. Whether you do that in a meeting or in an email that you send 24 hours after the last participant has left the building, it is extremely effective and important to communicate what you found fast. Maybe you don’t have any recommendations and conclusions, but you can collect your main findings and share them with your stakeholders. If they make a decision before the report is ready, at least they can do it based on the findings. I have seen practitioners who tend to share chunks of the yet-to-be-published report with their stakeholders. This approach has two goals: obtaining feedback from the stakeholders and showing them that progress is being made.

One technique that will help you if you are a bit slow to complete reports is writing down one summary paragraph after every participant session. As soon as the session is over, take five minutes and summarize the main findings and participant characteristics. It is hard to do because all you want to do is get your cup of coffee or tea, rest, and get ready for the next session, but the effort is definitely worth it. These paragraph-long summaries have several benefits:

• They will tremendously help you remember what happened during each session.

• With proper editing, they can serve as participant descriptions in your final report.

• They can be the things you share with your stakeholders immediately after you complete participant sessions. And by "immediately," I mean five minutes after the last participant session.

One way I use eye-tracking results as a cool communication tool is during the stage between data collection with participants and the time the report is ready. I generate visual eye-tracking deliverables such as bee-swarm videos, bouncing-ball videos, and heat maps and show them to key stakeholders, describing what they can teach us about our users. As soon as I have something ready, I get up, walk to the stakeholders' desks (or IM them), and ask if they want to see something cool. If they do – and in most cases they do – I invite them to my desk and show them the eye-tracking deliverable. I do that with three to five stakeholders, and not necessarily with the team's leaders. Sometimes I'll pick a software engineer who I know will share what he or she just saw with the rest of the team. By working from the ground up in this way, I am able to raise interest in the report even before it is ready.

Report structures

You have probably written many standard introduction/method/findings/recommendations reports. This section discusses report organization techniques and the contents of the results and recommendations sections. I have been futzing with my report structures throughout my entire career. I always learn, improve, and adapt. Throughout the years, I have learned that certain structures work for certain people and that these same structures might not be what others expect or are comfortable with.

Organize your report by product areas, problem types, questions, priorities, or tasks

There are different ways to organize the findings and recommendations section in the report. My preferred way to do so is by research questions, but sometimes other organization schemes are more appropriate.

Organize by product areas. Let's assume that you have just finished a study of a content management system that includes the following main areas:

• Creation. Where content items such as articles and videos are created in the system.

• Approval. Where content items go through a process of editorial, legal, and executive approvals.

• Publishing. Where content items are published on the website after being approved.

• Reports. Where various data analytics and metrics about website traffic are generated in a visual way.

• Admin. Where access control, approval chains, users, and roles are managed.

Two situations in which it is appropriate to organize your findings and recommendations section by product areas are when the findings for different areas are very distinct and when different development teams work on each area. This way, you create a report that is easy for different stakeholders to digest. For example, if there is a dedicated reporting product manager and development team, it is very easy for them to skip all of the other sections and jump right into the report section that is relevant to them. With a different organization scheme, these stakeholders would have to put more time and effort into sifting through findings that are not relevant to them. And because you are in the business of making things easier for people, you might want to prevent this from happening.

Organize by problem types or areas. Sometimes findings logically “belong” to very distinct topics, for instance:

• Help (four issues)

• Quality (three issues)

• Lack of system feedback (six issues)

• Confusing information architecture (four issues)

In such a case, it makes a lot of sense to organize a report by these topics.

Organize by priority or severity. When it is extremely important to communicate the hierarchy of issues and the priority of fixing them, reports can be organized so that the most severe problems appear first, then the medium-severity issues, followed by the least severe ones. The disadvantage of this structure is that issues that appear next to one another might be somewhat disconnected, which can make it hard for stakeholders to focus on a problem area.

Organize by study tasks. When tasks are linear or very distinct, it might make sense to organize a report by task. For example, it makes a lot of sense to have report sections for the following study tasks for a content management system:

• Sign up and sign in

• Create a content item

• Approve a content item

• Publish a content item

• View a report about a content item's performance

Organize by research questions. What I have found to be most engaging and effective with my stakeholders throughout the years is organizing a report by the research questions we all agreed upon in advance. With all of the previously mentioned organization techniques, stakeholders might find themselves puzzled with regard to the research questions. In Chapter 3, I discussed the importance of defining a clear set of research questions. When you choose any of the previously mentioned organizational schemes for the report without clearly answering the research questions, you are leaving your stakeholders hanging. Report chapters with titles that are identical to the research questions make a lot of sense to stakeholders. After each research question title, list both the positive findings and the opportunities for improvement.

Slice reports into digestible nuggets

Some report components are read more than others, and some components are of interest to certain stakeholders but not others. One effective technique for dealing with these facts of life is slicing reports into digestible nuggets. For example, software developers might get a report that includes a short background, a list of things that are working well and are not to be changed, and a checklist of things to be fixed. UX researchers who work with you might get a broader report that includes the full problem analysis, recommendations, and detailed methodology and participants sections.

If you are thinking that I’m being unreasonable here and that coming up with seven versions of one report is not going to happen, ever, Figure 5.1 shows a little twist to this suggestion.

image

Figure 5.1 Sample report sections.

Personalizing the research results for the various stakeholders is key to communicating effectively. When you send a report to stakeholders, indicate the report sections that you think they will find most relevant. For example (using sections from Figure 5.1),

• 3: what we did

• 5: positive findings

• 6: opportunities for improvement

• 11: videos and documents (watch the first three videos)

If necessary, you might prepare seven different report announcement emails. Stakeholders will appreciate that you’ve thought about what interests them.

Methodology first or last?

Another dilemma for UX practitioners is whether to put the methodology section in the first or last part of the report. A good reason to put it first is that this is customary in many other research reports and that it is a reasonable, logical place for it. After all, it makes sense to explain what was done before discussing the results. The thing is that methodology sections tend to get long, and by the time your readers finish them, they are exhausted, and they still haven't read about the study findings and recommendations. That is why many report authors tend to put the methodology last. Their line of thinking is that this is not a section stakeholders are interested in, and if they really want to read it, they'll find it in the appendix.

Both arguments make sense. What I do in my reports is a compromise between the two approaches. My reports open with a “bottom line” section (the executive summary), followed by a very short “what we did” section (the methodology section). I keep it shorter than 100 words, similar to the length of this paragraph. I pretty much copy the methodology section from the study plan, with some edits (see the section “Selecting a methodology and describing it” in Chapter 3).

Report only the most severe findings

Imagine that you conduct a usability test and come up with 10 positive findings and 50 opportunities for improvement. After analyzing the problems’ impact on users, you classify 10 as high-severity opportunities, 15 as medium-severity opportunities, and 25 as low-severity opportunities. It is now time to report your findings. What do you do?

a) Report all 50 opportunities for improvement.

b) Report only the high- and medium-severity opportunities.

c) Report only the high-severity opportunities.

d) It depends.

The correct answer depends on your self-confidence. If you always report lots of findings, it could mean that you aren't confident enough about your results to emphasize the select few that are most important. Every time I see a long list of reported problems, I wish the researcher had shown a bit more confidence. Ask yourself why you would want to report so many problems. Yes, in some situations, reporting all the problems you find is unavoidable. Imagine working in a consultancy for a paying client. The client is hiring your company to find usability problems, and the team that hired you expects you to report everything you find. Other report-all-you-find scenarios happen in regulated industries, where you sometimes just have to do it. If you feel an obligation (perhaps even an ethical one) to document every problem that was found so that someone can dig into them later, put them in an appendix or an auxiliary source and say so in the report.

Let’s discuss other situations in which you don’t have to report everything. I argue that the less you report, the better the chances that problems are fixed. I report only high-severity problems. The literature offers several severity-rating scales, and it is a popular topic for repeated debate in the UX research community (Wilson & Pernice Coyne 2001; Wilson 1999; Nielsen 1994). I use the following severity scale:

• High: Users will not be able to complete the task if the problem is not fixed.

• Medium: Users are able to complete the task but are having serious problems.

• Low: Users are able to complete the task but are annoyed while doing so.

I try to report only issues that prevent users from completing tasks that they (or the people behind the product they are using) intended to complete. Here is an interesting example: imagine that you've found 12 issues that you classify as low severity. All of them fall under the definition of user interface annoyances, such as a form deleting data that users entered when they return to complete it, or long pages that force users to scroll just to reach the button they need to click. Imagine that four out of six study participants commented that these annoyances are so irritating that they make them want to stop using the product. Can these still be classified as low-severity issues? I don't think so. I think these 12 low-severity issues are actually one high-severity problem with 12 examples. Semantics? Maybe. I believe that if people actually stop using the product, it is a critical problem for the business that's developing it.

In many situations, I have reported many problems with different severities. The thing with low-severity problems is that they are usually easy to fix, whereas high-severity ones are sometimes hard problems that require many resources. I've seen teams fix 40 low-severity problems, claiming that this work greatly improved a product, while avoiding five critical problems that were hard to fix.

When research and usability are an integral, frequent part of the development process, I highly suggest that you report no more than 10 high-severity problems per study. This way, you and your stakeholders focus on fixing the things that have the most positive impact on the user experience.

The executive summary

The executive summary is a very important tool for getting stakeholder buy-in. Normally, an executive summary is used by stakeholders – sometimes by important stakeholders – to get the gist of what was found and what should be done about it. If the report doesn't include an executive summary, or if it includes one that is poorly written, you risk losing the attention of your stakeholders.

To grab your stakeholders’ attention, prepare a half-page executive summary (complex studies might require longer summaries) with the following information:

• A short opening paragraph with details about what was done, when, where, by whom, and why. If the paragraph is shorter than two lines or longer than five, you are doing something wrong.

• A list of three positive findings. Choose the three most important positive things that study participants did or said and briefly describe them. Choose only significant positive findings. Otherwise, you are delivering a message that it was hard to find good things to say about the product.

• A list of three opportunities for improvement. Choose the three most important things that need to improve, and briefly describe what should be done and why. To help you pick the most significant opportunities, ask yourself which three changes would have the highest positive impact on product users.

For a sample executive summary, read Molich’s sample usability test report for Tower Records (2010).

There’s always the question of how long a report should be. What’s too long? What’s too short? What’s the threshold? Though there is no definitive answer to these questions, the next section provides some insights.

How long should a report be?

The different components included in reports directly affect their length and the chances that your stakeholders will read them. The components you choose to include will probably differ if you are writing a report in a consultancy for a paying client or if you are an in-house practitioner trying to engage the people you work with on a day-to-day basis.

A standard report for most research studies should be about five to ten pages long. That count includes any images you choose to add, but not the title page. My experience has shown me that longer reports are rarely read and are considered a burden on stakeholders. I tend to write reports of five to ten pages; I never plan the length in advance, but I know I should stay within this range.

image

Figure 5.2 Suggested page count for short and long reports.

Here are a couple of exceptions to the rule. If I worked in a consultancy, I would probably align my strong beliefs with what clients expect and write longer reports. I would, however, try to convince everyone involved, including the client, that long does not mean better or more comprehensive or smarter. I would argue that longer reports are much easier to write than shorter ones. When you write a short report, you work hard to communicate only what’s important. If you, the client, have well-crafted, succinct answers to your research questions, what else do you need? If you have straightforward, actionable recommendations that you can do something about tomorrow morning, why would you need 10,000 more words to back them up? Disclaimer: I refer to product research in a corporate setting. In this context, using lots of words is often a waste. In other organizations, it might be different.

There are situations when you need to write long reports for good reasons. When the FDA asks for a usability study to approve a certain medical device, it needs all the details. In other corporate environments, long reports are less likely to be read.

When the report is ready, don’t send it immediately. Be careful not to start unnecessary fires.

Don’t start fires

Here’s why I recommend not sending your report as soon as it’s ready. Sometimes a few of your findings might make people feel uncomfortable. Some will feel that they are being attacked or that their work is being attacked. You might not be aware of the effect your report will have. Thus, it might be a good idea to first share the report with a small number of stakeholders before you make it available to the entire team or to a larger audience. These recipients could be your manager, a colleague you trust, and the immediate stakeholders such as the product manager or the software developer. If there is one specific stakeholder who always pokes holes in your reports, maybe he or she is the one with whom you should share your report first. Let this person find these holes and help you come up with a better report. He or she, in return, will feel appreciated. You both win.

If you find out during these feedback cycles that you have offended someone with something you wrote or included in the report, I suggest doing the following: if the report is not done yet, share the sensitive point with the person who might be offended. Explain that the last thing you want to do is hurt somebody with your reports, and apologize. Offer to rephrase or completely remove the points you are about to make public. Offer to work with that person to solve the usability problem confidentially. Consider it an opportunity to introduce your expertise, up close, to a person who might not be familiar with the UX field. If the person resists your attempts to solve the issue, respect his or her wishes. Do not include this issue in the report. Personally, I prefer to leave a usability problem unsolved rather than solve it and burn bridges in the process. Good things happen when you build bridges rather than destroy them.

Use the report as a live communication tool

What usually happens after a report is sent? In most cases, the researcher gives a presentation of the report highlights and often follows up with stakeholders to make sure they act upon findings. And what about the report? Usually it is long gone and forgotten deep in physical or mental drawers a couple of days after it was sent. If you have decided that a report is what’s needed (or if nobody really says whether you should write one), it is up to you to make the most out of it. Turn the report into a live communication tool by taking advantage of cloud computing. Use an online word processor (such as Google Docs) and share the report with stakeholders. By adding two simple words (‘PM’ and ‘Eng’) after each opportunity for improvement, you turn the report and the negotiation phase that follows it into a transparent process. The findings in my reports include the following:

• A number for easy referencing later on.

• The description of the finding, including a very short analysis explaining why it happened.

• A numbered opportunity for improvement that describes in one or two short sentences what is recommended to fix the problem.

• A placeholder for the product manager's response to the finding and opportunity. I just put "PM:" and color it red.

• A placeholder for the lead software engineer's response to the finding and opportunity. I just put "Eng:" and color it red.

It looks something like this:

2. Confusing information architecture, vague terminology

None of the participants could find what they were looking for using the navigation bar. The reason was that the terminology used for the navigation elements is different from what participants associate with this type of website. For example, they did not associate the term "Mortgage," which they were looking for, with the term "Loans," which appeared in the navigation bar.

Opportunity 4: change the term “Loans” to “Loans & Mortgages” or consider separating “Mortgages” from “Loans,” making it a higher-level navigation element.

PM:

Eng:

Before I share the report with the entire team, I share it with my immediate stakeholders, the product manager and the lead software engineer. I ask them to read it and briefly respond to the findings and opportunities. After they do so, I share the report with the entire team. I then schedule a follow-up meeting with my stakeholders and discuss next steps based on everyone’s input.

The next section deals with a popular tool for communicating research results – the presentation. A lot has been said about the art of presenting. This next section describes several techniques that will help you engage your stakeholders with presentations.

Presentations

Learn the art of presenting

Tell a story

Stories are a tremendous tool for delivering UX research findings. In the very few minutes it takes to tell a story, powerful messages can be transferred. Stories have a way of turning the vague and sometimes boring details of a user experience study into vivid images of situations and of what a development team can and should do about them. I highly recommend the excellent book Storytelling for User Experience (Quesenbery & Brooks 2010) to get an even better idea of how to collect, create, and use stories effectively in the user experience design process. I'd like to share with you two ways I use stories that I've found to be extremely effective in engaging stakeholders, especially in a presentation setting:

Lightweight storytelling means that during study sessions (a traditional lab study, a field visit, or a phone interview), I listen for pieces of information that hint at an interesting story behind them and ask participants to tell me these stories. Sometimes I directly ask participants to tell me a story. For example, "Tell me about the last time you were really late to an important event or meeting." This question generates tons of rich data and very interesting stories. Notice that it is not focused on a product or its features; nevertheless, it can be extremely helpful in shaping a list of characteristics of a product, such as a reminder system of some sort. I use these short, sometimes anecdotal, stories during presentations of study results to spice things up. They make findings more real and believable, and they make your stakeholders think about real people and their needs instead of features of a product.

Heavyweight storytelling is not for everyone, although I highly encourage you to try it out. Heavyweight storytelling usually involves a field study or a more strategic study, definitely not a study whose goal is to find usability problems with a certain design. After completing the study sessions (usually visits or interviews), you and the team come up with the themes that emerged from the study. At this point, if I think a story is appropriate, I try to write one based on these themes and on what we heard from study participants. Sometimes I write a story per theme if the themes are big and meaty. I use the composite story (or composite scenario) technique, in which I create a story that combines other told stories. I base the story on what participants said or did, using exaggeration so that everything happens to a couple of characters within a couple of days (or less), to better illustrate the findings. Because a composite story might have credibility issues, I tell my audience that the organizations and characters mentioned in the story are fictional and that the situations and things the characters do and say are based on what participants shared during the study.

When I present with a heavyweight story, I usually start with it. I just read it in front of the team. I must say that they usually are very surprised when I do that. But they get the idea and the reasoning behind it as the story progresses. By the time I am done with the story, they are open and ready to hear what I suggest. I usually quickly go over the main messages that the story delivered, then I move on to what we should do about it. And then I open things up for discussion.

Stories work like magic. They make people listen, they are fun, and they are effective. If you can’t write stories, the least you can do is collect them from your study participants and share them in creative ways with your stakeholders.

image Watch my interview with Donna Tedesco, a staff usability specialist in the user experience research team at a large organization located in Boston. Donna suggests telling stories using quotes because stakeholders respond well to words that come straight from users’ mouths. Use QR code 114 to access the video, a quick summary of the interview, and Donna’s biography.

Another technique for engaging stakeholders is using more than words to communicate results.

Make them see, hear, and touch findings

Words on a slide – especially lots of them on one slide – might be shutting your audience out. To open them up and catch their attention with research findings and recommendations, you first need to use fewer words than your intuition tells you to use. Have a good reason for every word you put on a slide, and after you do that, cut the number of words even further. To make people engage with your message and want to act upon what you are suggesting, also use visuals, printouts, posters, study artifacts, videos, and audio recordings.

image Watch my interview with Donna Spencer, a freelance information architect, interaction designer, and writer from Australia. Donna says that her primary deliverable is a whiteboard and a marker, which she uses to tell stories to her stakeholders. Use QR code 120 to access the video, a quick summary of the interview, and Donna’s biography.

Here are some examples of things that have worked for me in the past:

• Print out an interesting spreadsheet a participant created to make sense of how certain data works in a certain application. Make only a couple of printouts and pass them around during your presentation; the scarcity creates suspense among stakeholders.

• Hang a couple of posters to demonstrate main points. People will explore them before the presentation starts, when you point them out during the presentation, and when they are leaving the room.

• Show a short video of a participant interacting with the evaluated product or being asked a key question. Stop the video at key points and run a short quiz on what people think the participant will do or say next. Then roll the video and compare the results.

• Play an audio recording of an interview segment. I have found that people concentrate even more when they listen to an audio recording than when they watch a video.

Sometimes I think there’s no wonder we UX people “get it” more than others. We have the advantage of directly interacting with users or potential users of products. Many people in our teams don’t have this luxury. The least we can do is bring users to them in any way we can so they can see what users do, hear them out, and experience what we UX people have learned. Words on slides can’t create that effect. This is why good true stories are important.

image Watch my interview with Ido Mor, Director of Strategic Research at Cheskin Added Value in California. Ido acknowledges that they make PowerPoint presentations for their clients, but they try to make them as experiential as possible. He says that Cheskin is constantly trying to find ways to avoid creating PowerPoint presentations. Use QR code 123 to access the video, a quick summary of the interview, and Ido’s biography.

The following story, from Japan, is an example of a reporting method that helps stakeholders get a better feel for study results.

Why a Book Matters
Takashi Sasaki, Partner and Consultant, Infield Design, Japan

In today's ever more challenging design practices, a great place to start a project still is – and always will be – observing users in their real lives. But simply observing users is not enough for good design. Worse, we tend to jump from one isolated episode that we've observed to a direct design solution. In order to make the most of our observations, we need to carefully synthesize the dots we observed and envision lines, patterns, and shapes that represent an abstract framework for desirable user experiences. Then comes another challenge: sharing our thoughts and ideas with project stakeholders not directly involved, including key decision makers. Vivid impressions from fieldwork and subsequent ideation processes are inherently difficult to share. One way to get our stakeholders equally excited about our achievements is creative documentation, and this is where a book really shines.

A physical book might appear obsolete. We at Infield Design, however, have produced more than ten project books in the past six years, each of which has been greatly appreciated by its client (see Figure 5.3). We learned through those experiences that the following two reasons, among others, are why our clients particularly like producing project books:

• Photos, text, illustrations, and diagrams all come in handy. Even a cutting-edge display cannot beat the crisp realization of printed ink on paper, at least for now. Books also have a much friendlier posture than a deck of PowerPoint slides; they can be easily browsed, skimmed, or passed around at a coffee shop.

• It's a great opportunity for all team members to reflect on a project. Producing a book is a lot of work, including prioritizing information, writing and editing text, preparing photos and sketches, and integrating all materials into actual page layouts. Those activities are indispensable exercises for all team members to reflect on what the team has achieved. We especially encourage client members to write about their fresh impressions, their thoughts, and their ideas. Doing so helps them internalize their project experience and enhances their sense of ownership. When we see our writing appear in a project book, we strongly feel that we are part of a collective achievement. We might go back to the book from time to time. We might passionately talk about the book with others. Even a colleague from a different department might browse the book and become interested. The book will stay alive and will tell its own story.

image

Figure 5.3 Books prepared by Infield Design, Japan (printed with permission).

Many researchers wonder how they should divide the time within a presentation they are giving. The next section makes some recommendations.

Focus on findings, principles, guidelines, and action items

When preparing a presentation of study results for stakeholder teams, UX practitioners have to balance the types of content that interest them against those that interest their stakeholders. I am always torn over this, but I have come to realize and accept that I am not the client here. In a standard study results presentation, stakeholders are mostly interested in what was found and what they should do about it. Everything else is noise to most of them. The two things that are important for them to know before you start with findings and recommendations are the goals of the study (why it was conducted) and the methodology (what you did). As for the latter, I'd like to emphasize that it should be very short: one or two sentences at most. Give the goal and the methodology one slide each. As a rule of thumb, the time you allocate to findings and recommendations should be about 10 times more than what you allocate to other topics. For example, if you give a 60-minute presentation, dedicate about 5 minutes to your introduction and about 55 minutes to findings and recommendations, including discussion.

To make your presentation engaging and relevant to stakeholders, carefully develop and discuss findings, design principles, guidelines your audience can follow, and action items. You might think this approach is too direct and even “bossy” – and you might be right. My experience has taught me that telling your stakeholders what you want them to do and why has better results than circling around the problem and hoping they will figure out the obvious thing to do. Your audience is not stupid; they are smart professionals. Be as direct as possible with them while keeping in mind that nothing’s personal. You are all there to be successful. You are there to make users successful in achieving their goals with products. Being direct in the way you report findings, recommendations, principles, guidelines, and action items is one of your most important tools. Use it.

image Watch my interview with Aza Raskin, cofounder of Massive Health, who was until recently Creative Lead for Firefox. Previously, he was a founding member of Mozilla Labs. Aza says it is the role of researchers and designers to cross the empathy bridge to stakeholders and to present their findings in a way that is useful for others. Use QR code 113 to access the video, a quick summary of the interview, and Aza’s biography.

One proven way of engaging people with the content of a presentation is using pictures. Large pictures. Many pictures. The next section provides some ideas for how to go about that.

Present with pictures

People use slides because they are afraid of forgetting what they need to say during the presentation. I am not going to give you the entire Presentation Zen spiel. I highly recommend you read these works (Reynolds 2008; Tufte 2005). That is not to say that you should follow everything they say, but the general idea is as follows. Human beings are not capable of digesting two channels of communication at the same time. We cannot read and listen at the same time. What sadly happens in the vast majority of presentations is that the presenter uses slides with long sentences organized in bullet lists. When a slide comes up, the presenter turns his or her back to the audience and starts reading it aloud. Because people read faster than the presenter speaks, by the time he or she is halfway through, they are done with the slide. They are now bored because they already know what the speaker is going to talk about, so they tune him or her out. After a couple of slides, they just give up and don't even bother listening or reading. Their thoughts start drifting off, they do other things (smartphones and laptops are great saviors here), or (if they don't have a smartphone) they just fall asleep.

The way I fight this problem is by presenting with pictures and very few words on the slides. Others use charts and pictograms. I use associative pictures to remind me of what I need to say and to entertain my audience so that they are kept interested. I have found that using pictures in slides makes people listen to what I have to say and makes it easier for them to remember what they hear for a long time after the presentation is over. Having said that, not everything can be boiled down to pictures. For example, presenting results from a competitive benchmark might require you to use textual slides.

The following story from eBay features a slide deck that people remembered.

The Ice Cream Presentation
Beverly Freeman, Senior User Experience Researcher, eBay, United States

eBay has had two main buying formats (auction-style and Buy It Now) for many years. The idea of dividing the search results page into two columns (one for auction-style and the other for Buy It Now) has been considered. After conducting research that revealed some disadvantages to this approach, the challenge was how to convey the insights in a compelling manner. Inspired by those in the design team who use sketching as a communication tool, I bought myself a digital pen tablet.

The first few slides of my Quest for Ice Cream presentation (see Figure 5.4, A through D) consisted of sketches of a stick figure at "eBay Grocers" seeking a particular flavor of ice cream but instead finding the frozen food aisles organized not by food type but by calories in one scenario, by price in another scenario, and so on. The analogy helped stakeholders understand that this layout might be useful in some situations but may not be the best default view. "The ice cream deck" kind of took on a life of its own after that, and to this day people refer to it as such.

image image image image

Figure 5.4 Slides from the Quest for Ice Cream presentation (printed with permission).

Practice, practice, practice

I’m pretty sure it’s not breaking news when I say you should always practice giving your presentation. If you can practice more than once, that’s great. If you can practice with an audience whom you trust to give you honest feedback, that’s awesome. I know all this means you need to step out of your comfort zone, but hey, being brilliant in a presentation causes stakeholders to listen to you. Think about the result you want. Practicing in front of others is a tool for you to get to that result.

What doesn’t count as practicing:

• Silently walking through your presentation

• Constantly flipping through your slides

• Closing yourself in a room and mumbling your talk to yourself

• Telling a friend or colleague what you will be talking about

Practicing means you stand up, present out loud, and say exactly what you will say during the actual presentation. It also means you advance your slides yourself as you speak. You can do it alone or with others as a pretend audience.

Here are the ten steps I go through to practice my presentations. This approach may or may not work for you. I’m including the steps to prepare the presentation because I see them as a part of practicing it:

1. Sketch. I brainstorm while sketching ideas on a piece of paper or a whiteboard.

2. Slides. I create slides with almost no words – mostly pictures.

3. Script. I write down a script for my presentation. I pretty much write down word for word what I want to say during the presentation.

4. Edit. I read the script, edit it, and make necessary changes to the slide deck.

5. Read. I read the script several times.

6. Practice 1. I start by giving the presentation alone and trying not to look at the script. It doesn’t really work the first two or three times, so I peek at the script whenever I forget what to say.

7. Cards. I grab cards and write down the highlights of my script.

8. Practice 2. I practice a few times with the cards, trying not to look at the script. At this point, I usually peek once or twice at the script.

9. Practice 3. I practice a few times without the script or the cards.

10. Relax. I try not to go over the cards or the script or the deck at least two hours before the actual presentation.

I’m sure you noticed that I violate my own recommendation to give the presentation to a trusted colleague. To be honest, when I do this step, it is highly beneficial. But I find myself avoiding it more than doing it. It makes me feel uncomfortable. You can make your own choice, of course. Again, if you are more comfortable with this step than I am, you will find it extremely helpful.

The art of presenting is also about choosing whom to present to, not just how to present. The next story describes a technique for eliminating stakeholder objections to results during presentations.

Present to Your Biggest Critic

It happened when I was first experimenting with measuring usability metrics in lab studies to support qualitative findings. I was extremely excited about this, and I used to generate many charts that looked very cool (in my humble opinion). When I presented study results to my team, there was always this one software developer who constantly poked holes in my presentation, especially in my cool charts! He did it in front of the entire team and made his points quietly, but sharply. His primary concern was that I presented results in a way that dramatized the actual findings. He claimed that the details of my findings and charts were good enough to present the case for fixing issues. The most annoying thing was that he was always right. And that he said all that in front of everybody I worked with every day. My initial response was to avoid him. I did not want to present anything when he was attending and I did not want to work with him. How dare he humiliate me like that? Today, I laugh at my initial response. I was clearly a very young, inexperienced, unconfident practitioner. I did not have a good working relationship with this guy for a couple of months and it probably affected the user experience of the application that he and his team were working on.

But that’s not how the story ended.

At a certain point, I decided to take a step back and reflect on my behavior. Being insulted is a choice, I believe. You choose to take it or not take it to heart. And I initially chose the wrong path. I tried to think of a way to change things for the better … and I found it. When the results of my next study were ready and I was about to present them, I asked that developer if he was willing to be the first person to see the presentation and help me make it better. He was happy to do it. We met for an hour and I gave him the presentation as if he were the team and he gave me his feedback. This was great for several reasons:

• He got to provide his feedback and be heard.

• I got to receive feedback and improve the presentation.

• He didn't feel the need to poke holes in my logic and delivery during the team presentation.

• He felt that his opinion counted.

• He was first to see the results.

This preview became a habit for both of us. Each time I was about to give a presentation with research results, I let my biggest critic poke holes in it, privately. But it did not stop there. At a certain point, I thought he could be a real champion for UX research in the team and company. I wanted him to experience more than what usability testing has to offer. When the right opportunity presented itself, I invited him to join me in a large-scale international field study. During one of the preparation meetings for this study, he suddenly looked at me and said, “Wow, I didn’t realize that UX research involves so much thought, methodology, and detail!” That moment was when I knew he had become a UX research champion.

Present to multiple teams

Usually, there is one team or a couple of people who are (or should be) interested in the results of a UX study. These are the immediate stakeholders. They are the most important people to get buy-in from because they can change the design based on research results. But there are many other peripheral teams that might be interested. Some should be extremely interested. It is your job to share the results of UX studies with these teams – especially if you’d like to make a positive impact with your research. Here are two examples.

When I try to answer research questions related to a variety of aspects of a product, it is a great opportunity to share the results with multiple teams. I once studied the experience of getting help through contextual help "bubbles" and through the main help area of a product that I was supporting. I found many interesting things and had a bunch of recommendations for improving our help approach. I shared the results with my immediate stakeholders. I also shared them with product managers who oversaw other products our company developed. I shared them with our technical writing group and with software developers who sometimes wrote help-related content on their own. I believe that this approach helped create better products and increased awareness of what UX research brings to the table.

My first eye-tracking study was also the first one conducted in my team, for a product that had been planned and developed over a long time by many people worldwide. The study had very specific goals and was intended to fine-tune a design by learning which elements users fixate on and how they scan certain pages. The immediate stakeholders were the designers of the product. While preparing this study, I realized that there was a lot of interest from the team. Product managers, software developers, salespeople, and technical support staff were all interested in the study and were eager to see the results. When the analysis was done, I first wrote a report and shared it with the design team. We had follow-up meetings during which we agreed on next steps. Next, I crafted a presentation that was ten minutes long. It included two slides with the study background and about ten slides with the eye-tracking results. Each results slide included a brief text (not more than ten words) that described a certain conclusion and a large visual, primarily a heat map or an area-of-interest chart. I gave this presentation during the weekly team meetings of the product management team, the software engineering team, technical support, services, and sales. I talked for ten minutes and took questions for about ten more minutes. It was amazing to see how interested different people were in these results, each for different reasons. Salespeople and product managers began using one of the heat maps in their pitches to new and existing clients to demonstrate how seriously our company takes user experience design.

One more reason for giving a results presentation to multiple teams is the opportunity it gives you to practice. If there is one specific team that is more important for you to persuade with your findings and recommendations, first present to teams that are more peripheral and in less immediate need of the results. This way you get to sharpen your slide deck and presentation based on your experience and on people's responses and comments.

Although reports and presentations are probably the most popular communication methods for UX research results, other tools and techniques are also extremely effective. The next section introduces and discusses their characteristics and usage.

Other communication tools and techniques

Videos

image Video highlight clips can be powerful and highly effective. They work especially well when you deal with stakeholders who have no time or patience and ones who have what I call a “SQUIRREL disorder.” People with a SQUIRREL disorder have a hard time focusing on one thing at a time. Use QR code 141 to access a YouTube video that demonstrates the SQUIRREL disorder. If you have a stakeholder who suffers from a SQUIRREL disorder, I recommend communicating findings with short highlight clips. These videos will grab their attention. They will be their squirrels.

I asked Chris Hass, an esteemed colleague, to share his experience and knowledge of best practices for creating a highlight clip. If you are thinking of creating a highlight clip or have been creating them for a while, Chris has great advice for you.

The Most Powerful Two to Ten Minutes of Your Research Findings Presentation
Chris Hass, Senior Vice President Experience Design, Mad*Pow, United States

Nothing convinces a recalcitrant team (or CEO) that change must happen like video clips of study participants. Two minutes of the right video clips can instantly realign team politics, philosophies, and agendas into harmonies so sweet you could sing them. Video is the nuclear bomb of change making. But why?

If you’ve planned your research with the client and their stakeholders; if your research plan is unbiased and repeatable; if your recruitment screener and moderator’s guide were well conceived and everyone who matters signed off on them; if your research execution was unbiased and professional; if your data and conclusions can withstand unflinching scrutiny – and they certainly should be able to – then your findings should be objective. Truthful. Unassailable.

And nothing illustrates unassailability like video. Why? Your clients and colleagues might trust you. They might trust the process. They might trust the findings. They will always trust their eyes. However, there’s a catch: video is a lie. Once you turn 8 (or 80) hours of video into 10 minutes of salient, representative clips, you are reporting, editorializing, or both. Clip selection defines the impact your findings will have. What moments best support your conclusions? (Reporting.) What will move clients towards change? (Editorializing.) It’s your responsibility to ensure that clips fairly represent what happened during the research and why it’s important.

Here are some tips for making clips that convince:

• Pick key moments: Which best represent key findings? Which solve team struggles? Are they witty, trenchant, or funny?

• Recognize the limitations: Presentations often happen in conference rooms with no sound systems or speakers. Choose representative moments that can be quickly understood and easily heard. Bring speakers.

• Give viewers time: It can take 5 to 10 seconds – and sometimes longer – to attune to a video clip. Insert appropriate pauses before and after clips.

• Caption it and recap it: Before showing a video, describe what the clips contain and their relevance. Add a summary caption under the video window or use captioning software. Afterward, briefly restate their importance.

• Begin and end on a positive note: The best final clip is a user saying, “There’s much work to be done, but with some effort it could really be great. I’d be proud to use it.”

Don’t underestimate the power of video. Present your study findings, support them with video clips, and watch consensus build.

image Watch my interview with Caroline Jarrett, an independent usability consultant from the United Kingdom. Caroline indicates that what works best for her is finding the stakeholder whose job is to align what they do with UX and getting that stakeholder to actually have the experience in some way, whether through video or role-playing. Use QR code 122 to access the video, a quick summary of the interview, and Caroline’s biography.

Visualize and design posters

Most people are more attracted to exploring information that is presented to them graphically rather than in written words. I have seen this time and time again. People are more engaged by a chart than by a two-page condensed document that describes the information in the same chart. Visualizations of research findings or recommendations are especially useful in the following activities:

• Brainstorming sessions: Visualizations help participants come up with more ideas.

• Interaction design and participatory design: The visibility of posters with research findings makes them helpful and relevant when designing products with and for users.

• Use cases: Nothing boosts the clarity of a wordy use case better than a picture, drawing, or sketch.

• Work flows, decision trees, rich maps, and storyboards: These deliverables all benefit greatly from a visual aspect.

When communicating research findings, I found that posters have a special power. A1-size posters are especially effective (33×23 inches, 84×59 cm). If done right (i.e., professionally designed), they are a great tool for communicating results, encouraging discussion, and persuading stakeholders to fix things.

image Watch my interview with Paul Adams, a product manager at Facebook and former UX researcher. Paul says that he is a big fan of physical artifacts as research deliverables. He designs huge posters with many details and puts them in places where stakeholders hang out. Use QR code 112 to access the video, a quick summary of the interview, and Paul’s biography.

I have asked a few of my colleagues to share examples of visualizations they created to engage their stakeholders. I specifically asked them to share those that have been most successful at achieving this goal. The following examples show you the results.

image Watch the video contributed by Amberlight Partners from the United Kingdom. One of the ideas they use to make sure they communicate their ideas and concepts as clearly as possible is illustration. The video features the work of a UX illustrator before, during, and after the research process. Use QR code 137 to access the video, a summary of it, and a short description of Amberlight Partners.

Visual Thinking and Communication
Filip Healy, Director of Consulting, and Roland Stahel, User Experience Illustrator, Amberlight Partners, United Kingdom

Hiring a professional illustrator or visual thinker can help communicate concepts, ideas, scenarios, and user requirements more effectively throughout a research project. Figures 5.5 through 5.10 showcase how we use illustration as an integral part of UX processes.

image

User Experience Mapping
Filip Healy, Director of Consulting, and Roland Stahel, User Experience Illustrator, Amberlight Partners, United Kingdom

In order to enhance our ability to communicate our research insights, we often use visual materials to illustrate ideas, processes, or concepts. One way we do this is to quickly create a compelling, high-level overview or “map” of the user experience we are analyzing. This can be a user journey through a retailer website (as in the example in Figure 5.11), the journey a voicemail message makes between the sender and the receiver, or simply a set of areas or themes shown visually. The idea is to clearly communicate what is important, efficiently and with impact. The UX map:

• Shows the end-to-end user journey of what the research focused on or even beyond

• Provides context and identifies where pain points occur as well as other useful information

• Helps clients visualize what is happening and engage with the key messages

• Provides a framework that helps the organization communicate the user experience

• Can show different user types or journeys and requirements across different channels

• Provides a flexible story (rather than starting on slide 1, we can decide where our interest lies)

• Can be printed large to create compelling posters

• Can be interactive to allow interested parties to click through (on digital versions)

image

Figure 5.11 A UX map by Amberlight Partners (printed with permission).

Why a map?

We use the term “map” deliberately, as the aim is to help clients navigate the UX terrain. Maps have been around for centuries, and we can use some of their principles to aid our own communication:

• Maps have landmarks; similarly, we have clearly defined stages or areas in our UX maps.

• Direction, distance, and communication routes relate places on maps to each other; similarly, we try to clearly show important relationships such as sequence or causality.

Maps allow their users to see the entire picture of where they are going before selecting the most appropriate route (you may know the feeling you sometimes get from in-car navigation that just gives one instruction at a time with no overall context!). Similarly, we “map” all of the research findings onto the UX map, providing the big picture, which helps users understand the details better.

Visual Survey Results
Bob Thomas, Manager of User Experience, Liberty Mutual, United States

We’re often asked in our jobs as UX professionals to find all the usability issues with a particular application. But I prefer simpler solutions, concentrating on the biggest issues. If our stakeholders can walk away from a presentation on usability findings with the top five issues, and still be talking about them a week later, then I consider our job a success. And because I work for a data-driven company, our key stakeholders are persuaded by data and, more specifically, by data that tells a story.

I recently ran a usability test of three new home page designs. We created three well-designed concepts for the test, each with different layouts and each encouraging the user to interact with it in different ways. The order in which the designs were presented to test participants was counterbalanced. After we gathered qualitative feedback on each design, we asked participants to complete a survey of 25 questions, constructed on a Likert scale from 1 (strongly disagree) to 5 (strongly agree). These questions were focused on such areas as visual design, navigation, content, and efficiency. Our participants completed a survey for each of the three designs.

In a technique I learned from Chris Hass while at the Bentley University Design and Usability Center, I used Microsoft Excel to enter the survey data, conditionally formatting cells so that survey results displayed red for negative results (a Likert score of 1 or 2), yellow for neutral results (a Likert score of 3), and green for positive results (a Likert score of 4 or 5). In other words, green is good and red is bad.

The tabulated Excel spreadsheet can paint an instant picture for stakeholders – in this case, users’ reactions to the three designs we were considering for our home page. In my usability presentation to stakeholders, I simply opened with three slides showing the survey results from the three home page designs, as shown in Figures 5.12 through 5.14. Although I had another 20 slides in my deck, I didn’t really need them to persuade management about which design we should go with. The data told the story and sold the solution.
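To make the coloring technique concrete, here is a minimal sketch in Python of the same red/yellow/green rule applied in code rather than by hand in Excel. The data, column names, and file name are invented for illustration; Bob’s original approach used Excel’s built-in conditional formatting, so treat this only as one possible way to reproduce it.

# Hypothetical data: rows = participants, columns = survey questions,
# values = Likert scores from 1 (strongly disagree) to 5 (strongly agree).
import pandas as pd

results = pd.DataFrame({
    "Visual design": [5, 4, 2, 5],
    "Navigation":    [3, 2, 1, 2],
    "Content":       [4, 5, 4, 3],
})

def likert_color(score):
    # Green for positive (4-5), yellow for neutral (3), red for negative (1-2).
    if score >= 4:
        return "background-color: #c6efce"  # green
    if score == 3:
        return "background-color: #ffeb9c"  # yellow
    return "background-color: #ffc7ce"      # red

# Apply the rule cell by cell and export, so stakeholders get the same
# at-a-glance picture as in Figures 5.12 through 5.14 (requires openpyxl).
results.style.applymap(likert_color).to_excel("survey_heatmap.xlsx")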

image

Figure 5.12 Survey results for design A (home page 1) (printed with permission).

image

Figure 5.13 Survey results for design B (home page 2) (printed with permission).

image

Figure 5.14 Survey results for design C (home page 3) (printed with permission).

Product Concept Brochure
Sauli Laitinen, Design Manager, Vaisala, Finland

At Vaisala, we have found product concepts to be an efficient and effective way to communicate the results of user research. Experience has taught us that it is often better to show a sample solution rather than list the product issues.

Most often, we present the product concepts as five- to fifteen-page marketing brochures. On the cover of the brochure, we have a picture of the product in use. This is followed by a one-page summary of what the product is all about. The bulk of the brochure is dedicated to the storyboard that illustrates the key features of the product and how it is used (see Figure 5.15). After that, the relevant technology, architecture, and business information is presented in visual format. The brochures are printed out professionally and handed over in person to the project stakeholders.

image

Figure 5.15 A sample page from a product concept brochure project (printed with permission).

Based on our experience, product concepts are not only useful when communicating the results of user research in the beginning of the project, but also help keep key findings fresh in mind at later stages of the development process.

Visualizing Key Differences in Findings
Michael Hawley, Chief Design Officer, Mad*Pow, United States

Frequently, researchers need to communicate findings that show the differences between several entities – personas, competitor sites, effectiveness of design concepts, and so on. Describing these differences in words is tedious to write and a challenge for audiences to read. Polar displays (a.k.a. radar or spider diagrams) overcome this challenge by visually communicating differences across several dimensions in an easy-to-scan diagram.

Start by defining the spokes as the key dimensions for comparison. Then label each spoke with the opposite ends of the spectrum for that criterion (see Figure 5.16).

image

Figure 5.16 Labeled polar display (printed with permission).

Then score each entity (persona, competitor site, etc.) along the various dimensions based on the research and plot the scores visually. The resulting diagrams help audiences see differences with minimal reading (see Figure 5.17).

image

Figure 5.17 Polar displays of personas (printed with permission).
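If you want to produce a similar polar display yourself, here is a minimal sketch using Python and matplotlib. The dimensions, personas, and scores are invented for illustration and are not taken from Mad*Pow’s research.

# Hypothetical personas scored 1-5 along four made-up dimensions.
import numpy as np
import matplotlib.pyplot as plt

dimensions = ["Tech savvy", "Price sensitive", "Brand loyal", "Researches before buying"]
personas = {"Persona A": [4, 2, 5, 3], "Persona B": [2, 5, 1, 4]}

# One angle per spoke; repeat the first point so each polygon closes.
angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for name, scores in personas.items():
    values = scores + scores[:1]
    ax.plot(angles, values, label=name)
    ax.fill(angles, values, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
ax.set_ylim(0, 5)
ax.legend(loc="upper right")
plt.savefig("polar_display.png", dpi=150)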

Finally, here is my own example of a visualization I used and found to work very well with stakeholders.

The Top Ten Themes Poster

After a large-scale field study, a team of two researchers and a designer analyzed the themes that emerged and summed them up in a single poster. Each theme included the following components:

• A short title

• An illustration

• A description of the theme

• The design implications

The poster (see Figure 5.18) was used to spark discussion and debate, and copies of it were printed and given to relevant stakeholders. Two years after the study, the poster still hangs proudly in the team’s working area, and it is still referenced in design meetings.

image

Figure 5.18 Top ten field study themes poster (some of the text was blurred for confidentiality) (printed with permission).

Visualizations and posters are key elements of another effective communication tool: the expo.

Run a research expo

A research expo is a full-day event during which stakeholders experience research instead of reading about it. I came up with the idea of having an expo as a communication tool after concluding a field study with the goals of defining strategy around a certain product, learning more about its users, and testing preliminary design concepts. Data collection took place in several countries and involved contextual observation, interviews, and an artifact walk-through. Participants were also asked to keep a diary to share their experiences on a daily basis. The study team included two UX researchers, an interaction designer, a product manager, and the lead software engineer.

The field study produced a large number of insightful findings that we needed to communicate to our stakeholders. However, because we had not previously conducted fieldwork for these particular stakeholders, there was uncertainty surrounding the value and substance of the research. Additionally, we were unsure how the stakeholders processed information or utilized research findings. We also wished to promote and demonstrate the value of field studies to other organizations such as support, sales, product management, and engineering.

Fearful that we might end up writing a report that would get passed over, we decided to try the new idea of holding an expo at which stakeholders could “experience” the research instead of reading about it. We imagined a large meeting room with a self-guided exhibition of posters, artifacts, and videos inviting stakeholders to learn about our results. We planned to conduct this expo for a full day, during which the study team was available to discuss research findings and recommendations with the extended team.

Preparing the expo. During the preparation phase, we brainstormed the contents of the expo and developed a “mind map” of findings. We recruited a designer, who created visual representations of our findings and helped us design several posters for the expo. Posters included:

• Study background

• What is a field study?

• Methodology

• Participant map

• Task work flow

• Participant quotes

• Themes with product implications (see Figure 5.18)

Representative artifacts gathered from participants were selected for display. Insightful entries from the incident diaries were also included. In addition, edited video clips from the study sessions were set up in viewing stations around the expo room. A slideshow was produced that included the following topics:

• Research questions

• Process work flow

• Tools and systems users use

• Task matrix

• Top ten issues that mess up a certain process

• Who is the product for?

• Pictures of participants

• Artifacts

Holding the expo. We created a multimedia experience and set up the room like a gallery exhibit, including video viewing stations (to watch select user clips), posters illustrating key findings and product implications, printed blog posts (participants’ diary entries), collected artifacts that people could pick up and discuss, and a slideshow that ran in a continuous loop in the room (see Figure 5.19). During the expo, we (the researchers), the product manager, and the lead engineer answered questions about findings, encouraged discussions about the meaning of the findings, and shared our field study experience.

image

Figure 5.19 The expo room. Projected presentation, posters, video stations (laptops), and artifacts (arranged on table) (printed with permission).

After the expo, we provided copies of the posters to the product manager, engineering director, and product management director. The following week, we gave presentations to those stakeholders who were unable to attend the expo.

The website. Using all the content we had prepared for the expo, we created an internal website that was launched on the morning of the expo. The site served as a repository of artifacts, diary entries, videos, and notes from the study. This interactive “report” pretty much wrote itself, thanks to all the expo preparation. The website was easily discoverable through an intranet search, provided an engaging presentation format, and directly linked the report to the project site.

Outcomes and lessons learned. The results of the expo exceeded our expectations. Approximately 50 people attended the expo, and more than 100 visited the expo website. It’s highly doubtful that this many people would have taken the time to read a standard research report. Product managers, engineers, sales representatives, support staff, and UX researchers and designers visited the room throughout the day, watching video clips, discussing the artifacts, and intensely debating the study findings and their implications (see Figure 5.20).

image

Figure 5.20 Stakeholders interact with expo materials (printed with permission).

The chief benefits of holding an expo included creating a high level of engagement, rendering study results more memorable, raising the profile and impact of UX research, and increasing acceptance for field studies. Reflecting on this effort, we noticed that attendees were still utilizing findings and recommendations from this study – even two years after it was conducted – and our team members ask for more studies with similar deliverables. The expo helped us to better appreciate the power of face-to-face interaction with our stakeholders. Facing so many tangible findings in an expo setting made our stakeholders engage with the study results and recommendations.

We found that presenting findings via an expo “democratized” the experience because attendees were more willing to ask questions and engage with the material. This participation is less likely during traditional report presentations, which are often dominated by lead product managers and one or two vocal participants. As a result, many more ideas were generated from a wider group of people. Additional lessons we learned included:

• Consider giving visitors something to take away (for example, a handout of key findings).

• Promotion and marketing are key to a good turnout.

• It pays to include stakeholders as part of the study team.

• Having a great designer is necessary to create strong posters, presentations, and an inviting overall expo experience.

• Including a multimedia component was very conducive to engagement.

image Watch my interview with Cennydd Bowles, interaction and UX designer from the United Kingdom and author of Undercover User Experience Design and Designing the Wider Web. Cennydd says that we, the UX people, rely too much on the big reports and on formal documentation of our work. He argues that disruptive research should be communicated disruptively. His number-one deliverable is not creating a deliverable. Use QR code 118 to access the video, a quick summary of the interview, and Cennydd’s biography.

The recipe for an expo, like any recipe, can be tweaked and adapted by the chef to match the nature of the research being reported and the stakeholders involved. The important point is that rich reporting is an improvement beyond the traditional written report, leading to a more meaningful engagement among a wider variety of stakeholders.

When presenting research results to stakeholders, whether as a part of an expo or not, one extremely effective tool is combining quantitative and qualitative data.

Combine quantitative and qualitative data

In Chapter 3, I discussed the magic of injecting quantitative data into qualitative findings. Communicating study results is an excellent context in which to demonstrate this practice. When I plan a traditional lab usability study involving tasks that participants are asked to complete, I come up with a set of usability metrics that add value to the qualitative findings. I usually – but not always – measure task success, number and type of errors, satisfaction (per task and posttest), and lostness.

Lostness

Lostness is a usability metric that indicates how lost people are when they attempt to complete a given task (Tullis & Albert 2008). It is calculated based on the following three parameters:

• The minimum number of pages that must be visited to complete the task (R)

• The number of different pages actually visited while completing the task (N)

• The number of pages visited while completing the task, counting revisits to the same page (S)

For example, imagine a task that can be completed by accessing a home page and then an inner page, so the minimum number of pages needed to complete it (R) is 2. When a user tries to complete the task, she accesses the home page, then the inner page, then goes back to the home page, and back to the inner page to complete the task. That gives us a value of 2 for the number of different pages visited (N) and a value of 4 when counting revisits to the same page (S). If you put all these values into the formula shown in Figure 5.21 for calculating lostness, you get a score of 0.5.

image

Figure 5.21 The lostness formula (Tullis & Albert 2008).

Lostness scores run from 0 to 1. The higher the score, the more lost the user is. If you sit next to a person attempting to complete a task, you can see with your own eyes that he or she is lost when the score is higher than 0.4.
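For readers who prefer to see the calculation spelled out, here is a minimal sketch in Python of the lostness formula described above; the function and the numbers simply reproduce the worked example from the text and are not code from Tullis and Albert.

# Lostness = sqrt((N/S - 1)^2 + (R/N - 1)^2), the formula shown in Figure 5.21.
from math import sqrt

def lostness(r, n, s):
    # r: minimum pages required, n: different pages visited,
    # s: total pages visited, counting revisits
    return sqrt((n / s - 1) ** 2 + (r / n - 1) ** 2)

# The worked example from the text: R = 2, N = 2, S = 4
print(lostness(r=2, n=2, s=4))  # prints 0.5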

I have found the lostness score to be one of the most engaging things I present to stakeholders. They just love it.

After I run the study and analyze the qualitative and quantitative results, there are two ways to present the data (in a report or presentation):

Separate quantitative metrics from qualitative findings. In the past, I used to have a section called “Findings and Recommendations” followed by a section called “User Experience Metrics,” in which I presented charts and analysis for the metrics I measured. If I had used the same metrics in previous studies, I added a section called “Comparative User Experience Metrics,” in which I compared results measured in this study with results from past studies to show trends.

Integrate qualitative and quantitative findings by telling compelling stories. After realizing that many stakeholders are not reading the one or two added sections with quantitative analysis (mentioned previously), I started communicating qualitative and quantitative results together in a more integrated way. For example:

When attempting to create a new user role, most participants had trouble choosing an appropriate user type. Many of them spent a considerable amount of time trying to figure out the different types, and still got it wrong. A few participants didn’t even hesitate when they picked wrong user types. They just moved on.

When integrating quantitative data to support this type of qualitative finding, I add the following:

When attempting to create a new user role, participants were unsuccessful, extremely lost, and thought it was hard to complete:

1. 19% (±13%) success rate

2. 0.9 lostness score

3. 4.4 (±0.6) out of 7 ease-of-use rating

I also add the charts shown in Figure 5.22 to demonstrate the numbers I use and how they compare to other tasks.

image

Figure 5.22 Data to support qualitative findings.
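The text does not say how the ±13 percent margin around the success rate was computed. One common way to put a confidence interval around a small-sample success rate is the adjusted-Wald (Agresti-Coull) method; here is a hedged sketch with a hypothetical number of participants, so the resulting margin will not necessarily match the figure above.

# Adjusted-Wald (Agresti-Coull) confidence interval for a task success rate.
# The counts below are hypothetical, for illustration only.
from statsmodels.stats.proportion import proportion_confint

successes, participants = 3, 16                    # 3 of 16 participants succeeded
rate = successes / participants                    # roughly a 19% success rate
low, high = proportion_confint(successes, participants,
                               alpha=0.05, method="agresti_coull")
print(f"{rate:.0%} success rate (95% CI {low:.0%} to {high:.0%})")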

My experience tells me that when I integrate quantitative findings with qualitative findings, my stakeholders pay more attention. To them, the numbers are “hard” facts and the stories and quotes are “soft.” Together, they make a compelling case for highlighting an issue and pitching its fix.

The next section introduces another effective communication tool you can implement with your stakeholders: a top ten list.

Develop top ten lists

Top ten lists are great for communicating research-based issues with a product. You can develop top ten lists of opportunities for improvement, positive findings, findings per quarter or year, or research-based ideas that require further exploration. Typically, a top ten list of opportunities for improvement includes the following, in spreadsheet format (see the sketch after this list):

1. Opportunity status: Use one of three values: No progress, In progress, and Done. I highlight the background of the status with red, green, and blue, respectively. I don’t use a red-orange-green color code because stakeholders would then probably see only red and orange most of the time. I’m trying to remain positive by using green for “In progress.”

2. Opportunity title: Short, specific, and actionable.

3. Details: Describe the opportunity in a couple of sentences and provide a suggested or agreed-upon solution.

4. Product management and engineering owners: These are the people who need to agree to act upon the opportunity. They are the primary stakeholders.

5. Affected users: Sometimes an opportunity is relevant only to a certain type of user, not to others.

6. Source: Indicate the source of the opportunity (which study, a link to the report, etc.).
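If you keep the list as a plain file rather than a native spreadsheet, here is a minimal sketch in Python that writes the columns above to a CSV that any spreadsheet tool can open; the single example row is invented, and the status colors would still be applied in the spreadsheet tool itself.

# Write a top ten list of opportunities to a CSV file.
# The example row is hypothetical, for illustration only.
import csv

columns = ["Status", "Opportunity", "Details", "Owners", "Affected users", "Source"]
rows = [
    ["In progress",
     "Simplify user-type selection when creating a new user role",
     "Participants picked wrong user types; agreed fix is clearer type descriptions",
     "PM: A. Example; Eng: B. Example",
     "Administrators",
     "Q2 usability study (link to report)"],
]

with open("top_ten_opportunities.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    writer.writerows(rows)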

Top ten lists are a great tool for communicating with stakeholders, especially direct ones (such as product managers) and executives. Their primary power is their dynamic nature. Keep them updated and visually show that things are being acted upon. Add new opportunities when new studies uncover new things. Keep discussing the items on the list with your stakeholders.

Some communication skills are not taught in any university program. The next section discusses a few of these “soft” communication skills, which can help you get stakeholder buy-in for UX research.

Soft communication skills

ALWAYS communicate what works well

Keeping a positive attitude is probably one of the most effective communication tools you have. It is easier to communicate bad news when you open with the good news first. After all, it is almost impossible to run a study and uncover only bad things. Another good reason to first communicate what works well with a product or a design is that you don’t want people to mistakenly “fix” things that work well. Stakeholders are sometimes so affected by studies bearing bad news that they want to change almost everything. It is important to communicate what works well to preserve these areas and to make sure they serve as examples or best practices for future efforts.

Another thing to remember when communicating what works well is that if you are listing things of minor importance, you are in fact communicating a message that the product is all bad. It will be perceived as if you have made an effort to find good aspects of a product but failed. You must find big, meaningful things that are working well. Plan to identify these things in advance. If, during a study session, you realize that nothing is working well for a participant, ask them to tell you what made them happy, what they value in the design they have just evaluated, or what things they want you to keep as is and not change at all.

Opportunities for improvement account for the vast majority of a study report. I try to devote 10 to 30 percent of the report to positive findings. In some cases, especially if you are running studies as part of an iterative design effort, you will find more positive things than usability issues.

The bottom line is that you should carefully craft a meaningful positive part to each and every one of your reports, presentations, and any other way you choose to communicate research results. Although it is key to report good news, it is the bad news that we UX practitioners need to communicate with great care, empathy, and attention to detail.

Become immersed in your team

The following techniques will help you become immersed in your team and become one of its own, even though your discipline is very different from everybody else’s:

1. Attend social events and team celebrations. Your team won a prize? Everybody goes to drink? A team New Year’s toast was scheduled? Try to attend all of these. It is important that you have face time with everybody in the team – not just when you communicate research results, but also at social and team occasions.

2. Be there during hard times. Showing your face also applies in tough times. Be there when a crisis is happening. Offer help when your team suffers through hard times. Be one of the people who try to solve problems and move forward while keeping a positive, optimistic attitude.

3. Walk the walk (Figure 5.23). This one works like magic. In short, when you arrive at the office in the morning, grab your coffee and walk a route that gives you a chance to say good morning and have some small talk with team members. Don’t go directly to your cubicle and hide behind your monitor. Again, face time!

image

Figure 5.23 Walk the walk.

Communicate bad news

Bad news delivers the message that something is not working for someone as intended. For example, “Users don’t understand what the company does after they spend two minutes on the home page.” Or, “Users become extremely frustrated and think things are inefficient when they try to sign up for auto payment services.” Or, “Users don’t add a photo to their profile, either because they can’t find where they can do it or because they don’t see any value in it.”

One of the biggest challenges for a UX researcher is telling someone that the product of their hard work is not good enough, or that they were just wrong, and then expecting them to fix it or change their opinion based on what “experts” say. The way you communicate bad news to people who have worked hard on something is critical. As much as it depends on you, it also depends on the receiving end of the communication. It matters a lot whether the software developers think they did an excellent job designing a product or acknowledge that they are not highly skilled designers. As you become immersed in your team, it is extremely important that you identify each person’s skills and personality type. It’ll help you in the way you communicate with them – especially when you communicate bad news. Immersion is key. If you are immersed within a team, they will not consider you to be an external consultant once they get to know you and trust you.

image Watch a fascinating interview with Chris St. Hilaire, author of 27 Powers of Persuasion, from Los Angeles. Chris suggests that UX researchers recognize their stakeholders’ predispositions before they try to persuade them of anything. It’s always easier to persuade someone about something they already believe. Use QR code 134 to access the video, a quick summary of the interview, and Chris’s biography.

When you communicate bad news, follow these guidelines:

1. Talk about what was found, not about who designed something. When you focus on discussing findings, you are carrying a message that users are what’s at stake here, not the good or bad job someone in the team did. For example, discuss why users could not find something and what can be done to improve it instead of trash-talking the information architecture. To be more specific, talk about things in the information architecture that caused users to get lost, such as labels that confused them, items that were placed under categories users did not expect to look for, and so on.

2. Explain why these issues are bad news and use the language of business. Designs and products usually have business goals, even if the organization is a nonprofit. People who use these products have goals, too. Hopefully, for the organization that develops the product, the business and user goals complement or somehow match each other. When you explain why something is a problem, do it with this attitude of business and user goals, and show why users cannot achieve them with the current design unless it is changed.

3. Never, ever make it personal. Although it may seem so from time to time, it’s not about anyone’s personal opinion. Never say “I think.” Instead, turn to higher authorities. Explain the principles you are using, provide supporting data from past studies you conducted, quote external resources if needed. Never point fingers. Never make it about someone doing a lousy job. In almost 100 percent of the cases, people want to do a better job with their designs. Most of them do their best. Making bad news personal is probably one of the biggest mistakes you can make.

Taking Advantage of the Theory of Psychology and Human Factors
Beverly Freeman, Senior User Experience Researcher, eBay, United States

I always point out basic principles from psychology or human factors when predicting or reporting on research results. As one of my professors always says, non–human factors people who do usability testing have only data from that lab study to draw from, but trained human factors professionals know how to interpret empirical data based on a rich foundation. The more we can couch what we have to say in terms of our training, the more we can establish the fact that we are experts, not just lab monkeys.

Never use an escalation mandate

If you are given a mandate by upper management to escalate when people in your team do not follow research results, use it with care. Actually, never use it at all. The day you use that mandate will be the last day people buy in to research. Nothing good can happen after you escalate to executives. I once worked as a researcher in a company with about 600 software engineers. I was the sole UX practitioner there. Shortly after I joined the company, I had a one-on-one meeting with the vice president of R&D. She told me that I had a full mandate to come and talk with her when my team gave me trouble with research – specifically, if they didn’t listen to my recommendations. At first I thought to myself, “Wow, this company really cares about the user experience.” But then I felt really bad. I imagined a situation in which I was not able to persuade stakeholders to follow one of my recommendations, and what would happen if I escalated. I knew that 600 engineers would find out about such a meeting shortly after it happened. Needless to say, I never even thought of using this mandate.

Proxy designers

When you face a strong, opinionated team of engineers and product managers, carefully consider trying a technique called “proxy designers.” What it basically means is that research findings and recommendations are pitched not to the development team, but to designers. Instead of negotiating with software engineers and product managers, the UX researcher works closely and solely with a designer. The designer designs the product based on their negotiations with the researcher and delivers it to the development team. The team is not aware that research was done, but the design encompasses the research results.

This technique has many disadvantages and very few advantages. It is great because it eliminates furious arguments and clashes between opinionated individuals and researchers. It is not so good because it indicates a very unhealthy environment, especially if a researcher needs to implement this technique on an ongoing basis. There’s nothing wrong with it if you use it here and there, but if you do that all the time, it means that research is not respected in your organization and that something else needs to be done to defuse the situation.

image Watch my interview with Paul Adams, a product manager at Facebook and former UX researcher. Paul tells about a time when he had weaker relationships with his team and used designers as proxies for his research. Use QR code 112 to access the video, a quick summary of the interview, and Paul’s biography.

References

Csikszentmihalyi, M., 2004. Mihaly Csikszentmihalyi on flow. TED.com. <http://www.ted.com/talks/lang/eng/mihaly_csikszentmihalyi_on_flow.html> (accessed 02.16.11).

Molich, R., 2010. Usability test of <www.towerrecords.com>. <http://www.dialogdesign.dk/tekster/Tower_Test_Report.pdf> (accessed 09.08.11).

Nielsen, J., 1994. Usability Engineering. Morgan Kaufmann, San Francisco.

Quesenbery, W. and Brooks, K., 2010. Storytelling for User Experience. Rosenfeld Media, New York.

Reynolds, G., 2008. Presentation Zen. New Riders, Berkeley, CA.

Tufte, E., 2005. The cognitive style of PowerPoint: Pitching out corrupts within. <http://www.edwardtufte.com/tufte/books_pp> (accessed 01.03.11).

Tullis, T. and Albert, B., 2008. Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics. Morgan Kaufmann, Burlington, MA.

Wilson, C. and Pernice Coyne, K., 2001. The whiteboard: Tracking usability issues: To bug or not to bug? Interactions 8(3), 15–19.

Wilson, C., 1999. Reader’s questions: Severity scale for classifying usability problems. <http://www.stcsig.org/usability/newsletter/9904-severity-scale.html> (accessed 09.06.11).

Takeaways

This chapter is long. Therefore, it is important to highlight key takeaways of tactics and strategies for communicating via reports, presentations, videos, posters, and visualizations and by demonstrating effective soft communication skills:

1. Look deep down and think: do your stakeholders really need reports? Reports help crystallize key findings; they help create a presentation; they are a resource for the future; they make good references and include lots of data; and they are expected. On the other hand, they are slow to produce, static, passive, and silent; they have short shelf lives; they are not really sexy; and, like it or not, nobody reads them.

2. Avoid the report-which-is-actually-a-presentation. Decide on a form factor that is most suitable and go with it.

3. Share parts of the report with selected stakeholders and get their feedback before the final report is ready.

4. Always open a report with an executive summary that includes an opening paragraph with details of what was done, when, where, by whom, and why; a list of three positive findings; and a list of three opportunities for improvement.

5. Organize the report by research questions.

6. Don’t report more than ten high-severity opportunities for improvement. Don’t report medium- and low-severity opportunities if you don’t have to.

7. Write short reports of up to five pages and long reports of up to ten pages.

8. Allow your key stakeholders to respond to the findings and recommendations in the report before you make it available to the entire team. Then share it with the team. Be transparent.

9. When you present research results, use stories, videos, pictures, and artifacts.

10. Try presenting with pictures.

11. When it comes to presentations, practice, practice, practice.

12. Present to your biggest critic privately, letting him or her poke holes in your presentation.

13. Run an expo.

14. Integrate qualitative and quantitative findings and tell compelling stories.

15. Communicate good news first and slowly, and bad news last and quickly.

16. Become immersed in your team. Walk the morning walk.

1 George Bernard Shaw
