WHEN SHOULD YOU CONDUCT A WANTS AND NEEDS ANALYSIS?
THINGS TO BE AWARE OF WHEN CONDUCTING A WANTS AND NEEDS ANALYSIS
PREPARING FOR A WANTS AND NEEDS ANALYSIS
CONDUCTING A WANTS AND NEEDS ANALYSIS
A wants and needs (W&N) analysis is a quick, relatively inexpensive brainstorming method for gathering data about user needs from multiple users simultaneously. It provides you with a structured methodology for obtaining a prioritized list of initial user requirements. In its simplest form, user requirements gathering involves asking users what they want or need. Brainstorming is a tool that has been used successfully for many years to collect a wealth of ideas from a variety of sources. Brainstorming with users is the key component of the W&N analysis, but the analysis has an added benefit over brainstorming alone: it incorporates a prioritization step that allows you to identify the most important wants and needs from the entire pool of ideas generated.
This method is ideal when you are trying to scope the features or information that will be included in the next (or first) release of the product. It enables you to find out what your users want and need in your product. Finally, by adding features based on the prioritized list, product teams can prevent feature-creep (i.e., the tendency to add in more and more features over time).
In this chapter, we discuss when to use this method, how to prepare for and conduct a wants and needs analysis session, as well as how to analyze and present your findings. We also share some of the lessons we have learned while using this method over the years. Finally, we provide a case study from industry to illustrate the benefits of this method in the real world.
A wants and needs analysis is a quick way to gather or confirm basic user requirements. It can be used at the beginning of the product lifecycle to guide development, or in the middle of the lifecycle to determine what changes to make to the product. It is always best to do this gathering early in the product development lifecycle, but sometimes development proceeds without a clear understanding of requirements. You will quickly realize that a team needs this information when conversations about which features to include keep cropping up and no resolution is reached. This is a good sign that you ought to suggest the team “confirm” its understanding of the user requirements. The W&N is a perfect tool for this because it is quick and it shows the team what users really need.
IBM’s Ease of Use website (www-306.ibm.com/easy) provides a series of brainstorming questions you can ask users to help design a website. Some of the questions are: “What activities would you like to perform?”, “What information would you like to get from this site?”, and “How would you like to accomplish a particular task?” However, one should be aware that:
People don’t always know what they really would like and are not good at estimating how much they will like a single option
There are always variables that people do not take into consideration
What people say they do and what they actually do may be different.
That is why you cannot simply ask users to speculate about products or features they have never experienced. In the wants and needs analysis you are asking users what they want or need, but the questions are well defined and concern things users already have some amount of experience with.
It is critical to note that a W&N analysis is only the beginning, not the end. It should be used as a jumping off point and not as your sole source of information. You should also use other methods detailed in this book, as well as supplementary information from product teams and marketing to tell a complete story (refer to Chapter 1, Introduction to User Requirements, “A Variety of Requirements” section, page 8). By beginning your requirements gathering process with an activity such as the W&N, you can assess what is really important to the users and then plan future activities to drill into these things.
One of the biggest benefits to conducting a W&N is the speed with which you can prepare for, conduct, and analyze the results. In this section, we discuss the preparation required for the activity.
Compared to some of the other user requirements activities, there is not an overwhelming amount of preparation required to conduct a W&N; a session can be prepared for in just a couple of weeks. The most time-intensive element is recruiting users. Table 9.1 is a preparation table listing the actions you must complete in order to prepare for a successful session. The timeline contains approximate times based on our personal experience and should be used only as a guide; each step could take more or less time depending on a variety of factors, such as the responsiveness of the product team, access to users, and the resources available.
You should work with the product team to determine the type of information you need and then carefully craft your question. An effective brainstorm session starts with a well thought-out question that summarizes the problem. A well-defined question will enable you to keep your brainstorm session focused.
The question you ask will determine the success of your activity, so it is critical that it be neither too broad nor too specific. Let’s say you are interested in building a program that allows users to book airline travel. You do not want to make your question so specific that you build in assumptions or narrow your participants’ thinking. For example, you don’t want to ask your participants “How do you want and need to add flights to your shopping cart?” Right away you are assuming that people want and need a shopping cart, and by inserting the shopping cart metaphor you limit their ideas. Likewise, you would not want to ask them to tell you “the features they want in an ideal system that allows them to book travel” if you are interested only in airline travel. People purchase tickets for modes of travel other than airplanes (e.g., buses, trains), so you would get a lot of information that is irrelevant to you. The question “What features do you want and need in an ideal system that allows you to purchase airline tickets?” would be effective for your purposes. Ask yourself what information you need, then word your question to fulfill that need.
Your goal is to gain an understanding of what the users want and need in the product. Rather than allowing the participants to brainstorm about anything they would like, it is more effective to ask the question so that it targets content, tasks, or characteristics of your product. Based on this assumption, the W&N question can be asked in three different forms:
Information. You can ask a question that will tell you the information that users want and need to be found in or provided by the system. A typical content question might be: “What kind of information do you need from an ideal online travel website?” You might receive answers like: hotels available in a given area, hotel prices, airline departure and arrival time, etc.
Task. You can ask a question that will tell you about the types of activities or actions that users expect to be performed or supported by the system. A typical task-based question might be: “What tasks would you like to perform with an ideal hotel reservation system?” Some of the answers you receive might be: book a hotel, compare accommodations between hotels, create a travel profile, etc.
Characteristic. You can ask questions that will provide you with traits users want or need the system to have. For example: “What are the characteristics of an ideal system that lets you book travel online?” Some responses you might receive are: reliable, fast, and secure.
The question you ask should mention “the ideal system” because you do not want participants limited by what they think technology can do. You want participants to think about “blue sky.”
It is important to run a pilot or practice session to determine the types of answers you might receive (refer to Chapter 5, Preparing for Your User Requirements Activity, “Piloting Your Activity” section, page 193). You may think what you are asking is clear, but it is not until your question is posed to people that you will truly know whether you are asking the right question. If the types of answers you receive in your pilot session are not in line with the information you need, then you will need to rephrase your question. If you do not pilot and wait until the session to see whether your question is correct, you may end up wasting time and money. In the “Lessons Learned” section of this chapter (see page 408), there is a great example of just how important piloting is for a W&N session.
Of course you will need end users to take part in your session, but you will also require three people to conduct the session. In this section we discuss the details of all the individuals involved in a W&N session.
Since the group dynamic is an important component to this method, you should recruit 8–12 end users per session. Groups with more than 12 participants are difficult for a moderator to manage and participants may not have enough opportunity to speak. It is wise to run at least two groups of participants since group dynamics may vary. In addition, this provides you with a higher number of data points and more reliable results. Each group should consist of users with the same profile (refer to Chapter 2, Before You Choose an Activity, “Step 1: User Profile” section, page 43). For example, if you are gathering requirements for your travel website, you may like to speak with travel agents and customers who book travel online. These are two different user profiles. As a result, you would want to run two sessions with each profile, making a total of four sessions. The needs of each user type will likely be very different. If you include both user types in the same session, the results obtained will not accurately reflect either user type.
Speaking with especially effective or expert users can provide a wealth of ideas since “lead” users (the people who first use a product and become proficient with it) are often a source of innovation. However, do not limit yourself to experts, because their wants and needs can be different from those of novice or average users. Specifically, things they ask for may be completely unusable by less sophisticated users. As a result it is beneficial to have an understanding of the perspectives of both expert and novice users.
One moderator is needed per session to facilitate the brainstorming. The moderator will elicit responses from the group, examine each answer to ensure he/she understands what the user is really asking for, and then paraphrase the response for the scribe (see below). Moderating is not quite as simple as one might think: it takes practice to learn how to manage a large group of users and elicit information from them.
Figure 9.1 is a moderator’s checklist. For a detailed discussion of the art of moderating, refer to Chapter 6, During Your User Requirements Activity, “Moderating Your Activity” section, page 220.
A scribe is needed to help the moderator. The sole job of the scribe is to write down what the moderator paraphrases and to number these ideas. The scribe does not question the users or write down anything other than what the moderator states. Pick a co-worker who can write large and clearly enough for everyone to read and who will be able to hold their tongue (see “Lessons Learned”, page 407).
If videotaping is possible, you will need a co-worker to record the activity. The videotape can be very helpful after the session if you are referring to the list of brainstormed ideas and want to obtain more detail. You will find a detailed discussion of how to record, and of the benefits of videotaping, in Chapter 6, During Your User Requirements Activity, “Recording and Note-taking” section, page 226. Ideally you should have someone monitor the video equipment during the session in case something goes wrong; but if that is not possible, set up the shot, hit “Record,” and hope for the best.
If you have the facilities to allow stakeholders to view these sessions, you will find it highly beneficial to invite them. (Chapter 4, Setting Up Facilities for Your User Requirements Activities, page 106 covers how to create an observation area appropriate for observers.) Stakeholders can learn a lot about what users like and dislike about the current product or a competitor’s product, the difficulties they encounter, what they want, and why they think they want it. As with any usability activity, seeing it with their own eyes has a far greater impact than a report alone. Product teams often think they know the user requirements. They need to attend to see how little or how much they really do know. It is also wise to videotape the W&N sessions for any stakeholder who may not be able to attend, as well as for your own future reference.
The materials needed are simple and cheap:
Blue or black marker for the scribe
Self-adhesive flip chart pads for the scribe to write on and post for all to see
It is best to write on flip chart paper instead of a whiteboard because, if you begin to run out of space, you can simply add sheets and post them around the room. In addition, you can take the flip chart sheets back to your desk after the session and transcribe all the ideas.
In this section we detail the steps involved in the collection phase of the activity. This is the phase where all of the users are in the room and you now need to conduct the session.
The timeline in Table 9.2 gives the detail of the sequence and timing of events to conduct a W&N session. It is based on a one-hour session and will obviously need to be adjusted for shorter or longer sessions. These are approximate times based on our personal experiences and should be used only as a guide, but we have found them to be reliable, regardless of the question or user type.
Table 9.2
Timeline for conducting wants and needs session
Approximate duration | Procedure
5 minutes  | Welcome the participants: signing of forms (CDA, consent); creative exercise/participant introductions
5 minutes  | Rules for brainstorming and practice exercise
40 minutes | Brainstorming
10 minutes | Complete W&N booklets
Now that you have an outline of the steps involved in conducting a W&N session, we will discuss each step in detail.
This is the time during which you greet your participants, allow them to eat some snacks, ask them to fill out any paperwork, and get them warmed-up for your activity. The details of these stages are described in Chapter 6, During Your User Requirements Activity (refer to “Welcoming Your Participants” section, page 209).
After the warm-up, we jump into a brief overview of the goal and procedure of the activity. We say something along these lines:
“We are currently designing <product description> and we need to understand what <information, tasks, or characteristics> you want and need in this product. This will help us make sure that the product is designed to meet your wants and needs. This session will have two parts. In the first part we will brainstorm <information, tasks, or characteristics> of an ideal system; and then in the second part of the activity we will have you individually prioritize the items that you have brainstormed.”
After the brief overview, the rules that the participants must follow during the brainstorming session are then presented. We always write these on a flip chart and have them visible during the entire session. If anyone breaks one of the rules, the moderator can point to the rule as a polite reminder.
In the brainstorming phase, we want everyone thinking of an ideal system. Sometimes, users do not know what is possible. Encourage them to be creative and remind them that we are talking about the ideal. Because this is the ideal system, all ideas are correct. Something may be ideal for some users and not for others – and that is OK. Don’t worry about unrealistic ideas as they will be weeded out in the prioritization phase.
Some users are steeped in the latest technology and will want to spend the entire session designing the perfect product. Users do not make good graphical or navigational designers, so do not ask them to design.
For example, if a participant says he would like to look up the latest flight information on a Personal Digital Assistant (PDA), we would stop the user and probe for more information. Ask the user why this information must be accessible from a PDA. He may respond, “Well, because I have a PDA that I take everywhere and I want to be able to look up the latest flight information when I am not at my desk.” Ah! So the user really wants the information available from anywhere! You would then ask the scribe to write “Available from anywhere.” It is your job as the moderator to probe what the user is really asking for and paraphrase that accurately for the scribe. In other words, you are drilling down for the user’s ultimate goals or desired outcomes, rather than any particular way of achieving those goals. It is your job, along with the designers and product team, to determine what is technically feasible and to develop designs after the session.
For a discussion of “outcomes analysis” (a technique that focuses on understanding the user’s desired outcome); see Chapter 7, Interviews, “Outcomes Analysis” section, page 252.
Another job of the moderator is to check for duplicates. Sometimes users forget that someone made the exact same suggestion earlier. When you point it out to them, users will respond that they had forgotten about it or not seen it. However, there are times when the user isn’t asking for the exact same thing, it just sounds like it. This is where you must probe for more details and learn how these two suggestions are different from each other so that you can capture what the user is really asking for. It is important to mention this to the participants as a rule so that they do not think that you are challenging their idea – you are simply trying to understand how it differs from another idea.
This rule is important to set the participants’ expectations. Participants may not understand why the scribe isn’t writing down verbatim everything they are saying. It isn’t because the scribe is rude and doesn’t care what the participants are saying. It is because the moderator must understand what the participants really want with each suggestion. What the participant initially says may not be what he or she really wants. The scribe needs to give the moderator time to drill down and get at what the participant is asking for before committing it to paper. Once participants understand this, they will understand that the scribe isn’t being rude but is simply waiting for “the final answer.”
Once everyone understands the rules of brainstorming, we usually do a brief practice exercise. A favorite of ours is: “What tasks do you want and need to perform in an ideal bookstore?” Some answers the group may provide are “search for books,” “pay for books,” “find out what is new,” and “read reviews.” This practice exercise should last for only a couple of minutes.
When you believe everyone has a good grasp of the process, you can begin the official activity. One way to make this assessment is to ensure that everyone in the room has given at least one example that you feel answers the practice question appropriately. During the practice, if anyone gets off track (e.g., offering information, rather than a task), this is the time to refer back to the question and/or the rules. Inevitably, if you are asking about tasks people will give you information, and vice versa. It is your job to catch this and ask the user to rephrase a request. For example, using the question noted above, a participant might respond by saying that he/she wants “the books to have a rating given by other readers.” This is a great idea, but it is information rather than a task. Ask the participant what the task is that relates to this information. Ask: “Is it that you want to be able to find reviews written by other readers?” Work with the participant to make sure the task is elicited. This is also a good time to make sure that the scribe’s writing is large and clear enough for everyone to read.
In addition to the participation rules, the question itself should be posted for all to see. Before you begin generating ideas, make sure that everyone understands the question. If you have a complex or technical question, you may need a brief clarifying discussion. Often, because of the multiple meanings words can have, people say one thing but mean another. It is important to have everyone on the same page.
For instance, if you were doing a W&N to learn more about the manageability needs for your customers’ database environments, you would probably want to start off with a discussion of what is meant by “manageability” in a database environment. This discussion can take as little or as long as you feel necessary to get everyone thinking along the same lines.
It’s now time to jump into the brainstorming session. As the moderator, there is a lot for you to do (also see Chapter 6, During Your User Requirements Activity, “Moderating Your Activity” section, page 220):
Make sure everyone is participating.
If users digress, bring them back on track by pointing to the question under consideration.
Remember to encourage people to think about an ideal system. They shouldn’t be concerned about the technical constraints, what is available today, or what will be available tomorrow.
Enforce the ground rules for brainstorming.
Make sure everyone can read what is being written.
Verify that what is being written accurately summarizes what the participant is asking for.
Also, be sure to keep an eye on the scribe. Make sure the scribe is:
Accurately getting your summarizations on paper
Keeping up with the pace of the session (if not, slow things down)
Numbering each item will help you identify which idea each user desired when you analyze the Top 5 booklets. It is especially useful when the handwriting looks like a drunk chicken walked across the page!
Figure 9.2 illustrates a number of important elements of the W&N session.
After about 40 minutes of brainstorming, you will notice that the number and quality of ideas tend to decrease. When you ask for additional suggestions, you will probably be met with blank stares. At this point, ask everyone to read through the list of ideas and make sure nothing is missing. If you are still met with silence, the generation phase is over. It is now time for the prioritization phase.
In the prioritization phase, users spend about 15 minutes picking the most desired items from the pool of brainstormed ideas. They are asked: “If you could have only five items from the brainstormed list, what items would you pick?” We ask for five choices because we have found that this elicits the “cream of the crop.” We also like asking for five because we often ask two W&N questions during a two-hour usability session. For example, during the first hour we may ask about the information desired in an ideal system, and in the second hour we may ask about the tasks desired in that same system. Asking for the top five at the end of each brainstorming portion keeps the session to two hours and does not exhaust the participants. Choosing the top selections is quite a tiring procedure, so the more choices you ask for, the more time and effort it takes.
Participants fill in their answers in a “Top 5 booklet.” The booklet asks users to name the item they are choosing, describe it, and state why that item is so important to them (see Figure 9.3). We ask for this additional information to be sure that we are capturing what the users are really asking for. People may choose the same item from the brainstormed list, but have completely different interpretations of these items. The descriptions and “why is it important” paragraph will help you detect these differences in the data analysis phase. A brief set of instructions is provided to users:
Write only one item per page. If more than one answer is provided per sheet, the second answer will be discarded.
Indicate the number of the item from the brainstorming flip chart.
The five are not ranked and are of equal weight.
No duplicates are allowed. If anyone votes for the same item more than once, the second vote will be discarded.
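If you later tally the booklets in software, these rules reduce to a simple filter: ignore anything past the first answer on a page, and count each participant’s vote for an item only once. The sketch below is our own illustration, not part of the original method, and the item names are hypothetical:

```python
def clean_booklet(pages):
    """Apply the booklet rules to one participant's Top 5 pages:
    only the first answer on each page counts, and duplicate votes
    for the same item are discarded."""
    seen, votes = set(), []
    for answers_on_page in pages:
        first = answers_on_page[0]   # one item per page; extras are discarded
        if first not in seen:        # no duplicate votes per participant
            seen.add(first)
            votes.append(first)
    return votes

# Hypothetical pages from one participant's booklet:
pages = [
    ["Search by price", "Book a hotel"],  # second answer on the page ignored
    ["Read hotel reviews"],
    ["Search by price"],                  # duplicate vote ignored
]
print(clean_booklet(pages))  # ['Search by price', 'Read hotel reviews']
```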
Once participants have completed their booklets, you can send them home with thanks and an incentive. If there is another W&N question to answer, give participants a brief break. Typically, you can complete two questions in one two-hour usability session.
In this section, we describe in detail how to analyze the data from your wants and needs session. One of the great advantages of this method is that you can score the data as quickly as you collected it. In one to two hours you can have the results compiled and entered in an electronic format.
Begin by marking each sheet in your Top 5 booklet with an identifier. For example, write “1” on each sheet of the first booklet, “2” on all the sheets for the second booklet, and so forth. After all forms have been marked with a unique identifier, divide the booklets in half (half for you and half for the person assisting you with the analysis) and separate the forms (i.e., remove the staples from the booklets).
Sort the worksheets into groups based on verbatim content. If users provide more than one answer per sheet, ignore the second answer on the page. If you and the scribe call out each answer as you lay the sheets down, this helps you to locate stacks of identical worksheets. Figure 9.4 shows the worksheets being sorted by the moderator and the scribe. As you can see, you will end up with quite a number of piles, so make sure you have a large working space.
Once all worksheets have been sorted, you will want to combine groups that are similar. For example, you may have three sheets in one stack called “Search by cost” and another stack of two sheets called “Find the best price.” Upon closer examination (i.e., reading the selection descriptions and “why it is important”) you determine that these piles are both referring to conducting a search to find the best price. As a result these groups can be combined into a category called “Search by Price.”
The need to combine groups happens for a number of reasons. Firstly, items may have been duplicated during the brainstorming. As in the above example, the concept is the same but the wording is slightly different. Ideally, this should have been caught during the session, but the reality is that it is often difficult to recall everything that has been said during the session and as a result duplicates sometimes slip in. Also, if the items are phrased differently during the brainstorming you may think the items differ, but when you read the descriptions of the items on the Top 5 forms you realize that they are the same. Or perhaps, after discussion with the product team or domain experts, you realize that items you thought were unique are in fact the same.
Secondly, even though you ask all participants to indicate the number of the brainstormed items they choose, some will not do this – which may result in the creation of extra piles when you do your first sort. Alternatively, they might not write verbatim what was written on the flip chart. Believe it or not – it happens all the time.
This is when the identifiers from step 1 are used. When you are tallying the votes in the next step (“Determine the percentage of respondents”), you do not want to count multiple votes from the same user. Each user only gets one vote per item – you stated this rule during your session and you need to stick to it. To adhere to this you need to make sure that each pile does not have any repeating identifiers.
Continuing our example above, let’s say you have a pile with five worksheets in it. If participant #3 voted for “Search by price” and “Search by cost,” her vote would be counted only once in the next step. You know the same participant’s vote is in the pile twice because you can see the identifier “3” in the pile two times. Staple the double selection together. You may be tempted to throw one of the votes away, but sometimes each sheet contains different details in the “why is it important” section and you don’t want to lose this information. By stapling the sheets together you will remember to count this information only once in the next step.
Once you have determined that all the groups are in the highest-level groupings possible, determine the percentage of respondents per group (i.e., for each group, how many of the total participants chose this item?). To do this, count the number of unique votes and divide by the total number of participants.
For example, if there were 12 participants in your session and four worksheets with unique identifiers in a particular group, the percentage for that group would be calculated like this: 4 ÷ 12 = 0.33, or 33%.
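If you prefer to script the scoring, the unique-vote tally can be sketched in a few lines of Python. This is purely illustrative; the vote data, participant identifiers, and category names are hypothetical:

```python
from collections import Counter

# Each vote is a (participant_id, category) pair taken from the sorted
# worksheet piles. Participant 3's stapled duplicate appears twice here.
votes = [
    (1, "Search by Price"), (2, "Search by Price"),
    (3, "Search by Price"), (3, "Search by Price"),
    (4, "Search by Price"),
    (1, "Read hotel reviews"), (5, "Read hotel reviews"),
]
total_participants = 12

# set() collapses duplicate votes so each participant counts once per item.
counts = Counter(category for _, category in set(votes))
percentages = {cat: round(100 * n / total_participants)
               for cat, n in counts.items()}
print(percentages["Search by Price"])    # 33 (4 of 12 participants)
print(percentages["Read hotel reviews"]) # 17 (2 of 12 participants)
```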
When conducting multiple sessions with the same user profile and question, analyze the data from each session separately and compare the results. They should be relatively consistent across sessions, with only slight differences in percentages. When comparing the tables, we use the rule that any item chosen by at least half of the participants as one of their Top 5 selections in one table should also appear in the other table.
So, looking at the sample tables in Figure 9.5, we would identify the items in each table that at least half of the participants selected as part of their Top 5. These are highlighted in the example. We would then make sure that these items appear in both tables. If an item has half the vote in one table, but less than half the vote in another table, that is OK – we just want to make sure that it appears in the second table (e.g., “Read hotel reviews” in the example).
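This consistency rule is easy to check mechanically: flag any item that reached half the vote in one session but is entirely absent from the other session’s table. The sketch below is our own illustration with made-up percentage tables:

```python
def missing_items(session_a, session_b, threshold=50):
    """Items chosen by >= threshold% of participants in one session
    that are entirely absent from the other session's table."""
    missing = []
    for src, other in ((session_a, session_b), (session_b, session_a)):
        for item, pct in src.items():
            if pct >= threshold and item not in other:
                missing.append(item)
    return missing

# Hypothetical percentage tables from two sessions:
night1 = {"Search by price": 58, "Read hotel reviews": 50, "Book a hotel": 33}
night2 = {"Search by price": 60, "Read hotel reviews": 40, "Book a hotel": 55}

print(missing_items(night1, night2))  # [] -> the sessions are consistent
```

Note that “Read hotel reviews” passes the check even though it fell below half the vote in the second session; the rule only requires that the item appear in the other table.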
It is important to acknowledge that we have not empirically tested the reliability of this method so we do not have hard and fast rules to recommend. This is what we do in practice and we have found it to be effective for us. This is one of the challenges of taking a qualitative method and trying to apply quantitative measures to it.
Another thing we do is look at the complete list of items brainstormed each evening. Are the lists similar? They should be. If they are completely different, it is likely that you recruited different user types. Examine the users recruited and identify possible differences between the groups. Perhaps you were running sessions with travelers and, prior to the sessions, you did not think that being a frequent or infrequent traveler would matter; as it turned out, one session had a majority of frequent travelers while the other had a majority of infrequent travelers. You discover this when you analyze the results from each night’s session and see that the brainstormed ideas are dramatically different. You may have assumed that the wants and needs were the same for all travelers; the difference in your results indicates that this might not be the case. To confirm this suspicion, you would need to conduct one session with all infrequent travelers and another with all frequent travelers. Differences could also exist because of group dynamics (e.g., a domineering individual was present in one group, or there were simply more creative people one night than another), or perhaps you did not run enough participants to obtain consistency.
Careful research and preparation can usually prevent such errors (refer to Chapter 2, Before You Choose an Activity, “Learn About Your Users” section, page 41). Procedures during each session could also vary and explain differences in results. For example, perhaps there were two moderators and different instructions were given at each session. It can be helpful to review videotapes of the sessions to see where the differences may lie.
It is important for us to note that just because differences were found between sessions, it does not automatically mean that the moderators did something wrong. As we mentioned earlier, we have not empirically tested the reliability of this method, so you may find differences in the results between sessions but be unable to determine the cause. Although we have never found large differences in the results between sessions, if you do and you cannot figure out why, run another session and see whether the results are similar to either of the other sessions.
In the end, however, if the data across each night are similar (which they will be in almost all cases), combine the worksheets from all sessions and rescore. This provides you with a higher number of data points and more reliable results.
Keep in mind that some selections are so obvious that they may not appear in the brainstorming. In our travel example, if no one suggested “Room availability” as an information need, it does not mean that users don’t want or need that piece of information – they may simply have assumed that the information would be provided, or may not have thought about it. You must use your domain knowledge, your expertise as the user advocate, and plain common sense. The ideas you obtain through the W&N are a jumping-off point, but more research is still required on your part to verify those needs.
Compare the items that received priority attention from the users to the product’s functional specification. The items that received the highest percentage of votes should receive first attention by the product team. Perhaps the team had already planned to incorporate these items into the product. You have just validated their decision. Or you may have a case where the highest priority items were not even on the product team’s radar. Go back to the videotape and listen to the discussion around those items. Why did participants suggest those items? Why did they say they were important on the worksheets? This is fundamental information that needs to be shared with the product team.
The information from the W&N session can also be used to hold feature-creep in check. When a developer finds a cool new feature that he/she swears the user must have, the team should go back to the W&N results. Did the participants ever discuss this feature? Did it come up in anyone’s Top 5? If not, more research should be done before resources are spent including the feature. Too many products become unnecessarily complicated when features are added just because someone thought they would be cool.
Alternatively, feature-shedding can also be re-examined with the results of a W&N session. When the product development team starts identifying features to drop, there are many considerations: resources required to build it, business requirements, dependency of other features, etc. One of the considerations should also be user requirements. If a feature that a high percentage of participants selected is on the chopping block, shedding it should be reconsidered. Are there any other features that could be dropped instead?
We present the results of the W&N in a simple table for the product team to review (see Table 9.3). The key elements are:
Item or category (e.g., cost information)
Exemplars of the item or category (e.g., price of single room, price of suite, cost of gym access). The exemplars often come from specific examples that people gave during the brainstorm or from details that they provided on the worksheet.
Percentage of participants who selected that item as a Top 5.
Make the table easy to scan quickly. The table should be ordered from highest to lowest priority. Typically, we will include any item in the table that received at least one vote; however, some people like to include only the items that received at least two votes.
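If the tallies are kept in a spreadsheet or script, a table like this can be generated mechanically. In the following sketch, the item names, exemplars, and two-vote threshold are all illustrative; it orders items from highest to lowest percentage and drops anything below a minimum vote count:

```python
def summary_table(tally, n_participants, min_votes=1):
    """tally maps item -> (votes, exemplars); returns rows ordered by priority."""
    rows = [(item, exemplars, 100 * votes // n_participants)
            for item, (votes, exemplars) in tally.items()
            if votes >= min_votes]
    rows.sort(key=lambda row: row[2], reverse=True)  # highest percentage first
    return rows

# Hypothetical tallies from a 10-participant session
tally = {
    "Cost information": (7, "price of single room; cost of gym access"),
    "Room availability": (5, "dates free; number of rooms left"),
    "Pet policy": (1, "are dogs allowed?"),
}

for item, exemplars, pct in summary_table(tally, n_participants=10, min_votes=2):
    print(f"{pct:>3}%  {item:<20}  {exemplars}")
```

Raising min_votes to 2, as some people prefer, drops single-vote items such as the pet policy above.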
In addition to this table, we include the complete list of brainstormed ideas in our usability report. The brainstormed list can be a source of inspiration or additional features. Since most executives and developers do not read the usability report, the table is the key piece of information. We recommend keeping it to a couple of pages, if possible, and providing a few sentences of recommendation or explanation with the table. This can be posted to a website or it can be enlarged and posted in high-visibility areas for all to see. The most important place to include this information is in the product team’s documentation (e.g., functional specification, software requirements document, high-level design document). The W&N can provide a starting point for such documentation and is one piece of a very large puzzle, but it is a valuable piece. Ideally, additional user requirements techniques should be used along the way to capture new requirements and verify your current requirements.
Because the W&N analysis is so flexible, a number of modifications can be made to customize it to fit your needs. None of these modifications is better or worse than what we have presented in this chapter; they are simply different. Depending on your particular situation they may suit your specific needs better. Modifications can be made to both the brainstorming phase and the prioritization phase.
Below are examples of some of the modifications that can be made during the brainstorming stage of the activity.
There are a few ways, other than effective moderation, to address the issues that are a natural part of brainstorming. These issues include social loafing (i.e., the tendency for individuals to reduce the effort they make toward some task when working together with others) and evaluation apprehension (i.e., the fear of being evaluated by others). To reduce social loafing, participants can be given several minutes at the beginning of the session to silently write down as many ideas as possible. This can be followed by a round-robin in which each user must read one of the ideas from his/her list. Since each user is accountable for providing a new idea during his/her turn, the user cannot sit back and avoid participating. Users may generate new ideas while the written ideas are being shared, and therefore benefit from the group synergy (i.e., an idea from one participant positively influences another participant, resulting in an additional idea that would not have been generated without the initial idea). Be sure to follow up with users to understand why they are suggesting their ideas; otherwise, you may never know the motivation behind them.
Some studies have shown that people generate more ideas when working separately and then pooling their ideas than when working as a group (Mullen, Johnson, & Salas 1991; Paulus, Larey, & Ortega 1995). One study found that when college psychology students were allowed to work alone and/or were not held accountable for the number of ideas generated, they actually generated more ideas than those working face to face (Kass, Inzana, & Willis 1995). This method would be particularly effective for someone who does not feel comfortable with his or her moderation skills. It takes a skilled moderator to counteract social loafing and evaluation apprehension (refer to Chapter 6, During Your User Requirements Activity, “Moderating Your Activity” section, page 220), and this modification handles the issue for you. The main disadvantage of this modification is that it can be more time-consuming.
A group of individuals at the University of Arizona created one of the first commercially available electronic brainstorming tools, called GroupSystems. Although this system forces all the individuals in the group to be in the same room, current tools are available that allow remote participants to join in on the brainstorming via the web (e.g., Groove, MSN Messenger, Yahoo Chat, Lotus Notes). Such a computer aid can allow users to participate anonymously, thus avoiding evaluation apprehension. Since answers can be typed in simultaneously, no one has to wait to be recognized (i.e., production blocking) and there is no problem with overbearing or expert users jumping the queue. However, because most people type more slowly than they speak, communication speed is decreased and social loafing often increases.
One study showed that with groups of four or more people, electronic brainstorming produced more ideas than verbal brainstorming; and with groups of eight or more, electronic brainstorming produced more ideas than paper-and-pencil brainstorming (Dennis & Williams 2003). However, you should be aware of the difficulties of using electronic tools for brainstorming. If you wish to have everyone in the same room at the same time, you will need to have one computer per user. So, based on the research findings noted above, to see a benefit over verbal groups you would require four or more computers. To realize a benefit over paper-and-pencil brainstorms you would need eight or more computers. That is quite an expense to incur for brainstorming activities. Also, you will need to train the users how to use the software, and you will have to go in with the awareness that computer problems may arise.
If you choose to go the remote route, be keenly aware that you have no control over who is participating. You cannot be sure that your desired user is actually contributing the ideas on the screen. The security of your brainstorming session is also at risk.
Finally, you will lose the opportunity to ask users why they are suggesting the ideas on the screen. Since it is anonymous, you do not know whom to ask. It is much more difficult to control the quality of responses that get on to the list of brainstormed ideas. If participants begin designing their ideal system, you will not be able to probe deeper into their ideas or get at the core of what they are asking for. Considering the cost, security, lack of contact, and complexity of electronic brainstorming, we do not currently use it, either in a group or over the web; however, others swear by it.
The Usability Net website (www.usabilitynet.org/home.htm) suggests another modification to brainstorming techniques: Ask users to create an affinity diagram as part of the brainstorming. Users should write each idea on a separate sticky note.
Each note is then placed on a large wall or whiteboard. To avoid a chaotic free-for-all, you can ask one participant at a time to post their stickies. As the individual adds a new sticky note, he/she must announce it to the group. This can prevent duplicate sticky notes and can spark ideas for other participants. Similar ideas should be placed in close proximity to each other. Once everyone has posted their ideas, you may identify categories of ideas as a group and then discuss them. One of the advantages of this method is that the sorting portion of the data analysis is done for you during the session. For more information on affinity diagrams, see Appendix F, page 714.
Outcome analysis is a technique that focuses on the outcome that users want (i.e., what they want the product to do for them), rather than the features or information they desire (Ulwick 2002). This technique is particularly useful when your goal is innovation: because users can relate only to what they already have experience with, asking about features tends to anchor them to existing products, whereas focusing on desired outcomes frees them from those constraints.
Outcome analysis is typically used in an interview format, but the principles can be applied to a W&N. During the brainstorming phase, the moderator is responsible for translating each brainstormed item into an outcome. For example, let’s say you are discussing the ideal tasks that you want and need to perform in a system for booking travel. If a participant says “Search for flight information,” you need to translate this into an outcome. What is the ultimate outcome the participant wants from searching for flight information? A reasonable outcome might be to identify a flight that suits his/her schedule and budget. The moderator poses this outcome to the participant to see whether he/she agrees, ensuring there is no misinterpretation. The outcome is then written on the flip chart. The moderator often does the translation rather than the participant because the latter sometimes have a difficult time making this jump. After participants understand that you are looking for outcomes, they may begin brainstorming in terms of the outcomes, not just the tasks. At the end of the session the outcomes are then prioritized.
Below are examples of some of the modifications that can be made during the prioritization stage of the activity.
Some people who employ this method use an additional question on the worksheet: “How do you know when the ideal system has the characteristic/task/information?” This is intended to help the evaluator better understand what the user is really asking for and why he/she wants it. That is, you want to understand how a feature will be used or how it contributes value to the user’s end goals.
For example, if you are designing an ideal system to transport you to work and you chose “fast” as a characteristic of the system, how would you know that your system is really fast? The answer could be “When I can leave for work five minutes before I need to be there.” Unfortunately, no matter how many examples of this question and answer we give during a session, participants never seem to “get it.” They understand the examples but have a difficult time applying them to their actual brainstorm at hand. The typical response is “I know I have it when it is there.” We also found that participants try to design the system when answering this question and, as we discussed earlier, you shouldn’t rely on users to design your system. You may have better luck with this question than we have, so we offer it here for you to try. We find that we capture the desired information about why the participant wants this item in the response to the “Why is it important to you?” question. You may also add any other questions that you think will provide you with beneficial information to understand what the participant is truly looking for.
As mentioned earlier in the chapter, you do not have to use the “Top 5.” You may increase the selection to a participant’s top 10 or 15, or whatever number you desire. Because we want to extract the “cream of the crop” from the brainstorm, and because participants’ attention and energy levels flag in the evenings, we find that the “Top 5” works best; but we have increased the number when running a one-hour session with only one brainstorm question. If you are running only one session, it can be a good idea to collect more prioritization information.
You could ask the users to rank order their selections or rate the importance of each selection. This will give you a more detailed breakdown of the users’ priorities, but keep in mind that it will complicate the data analysis. You may choose to state how many people selected an item as their first choice, second choice, etc. Alternatively, you can assign each ranking a point value (e.g., first choice equals five points, second choice equals four points, etc.), add up the total number of points each item earned, and then create your prioritized list from there. In our experience, the top items rarely change with this variation.
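The point-value variation can be scored with a few lines of code. In this sketch (the rankings are hypothetical and shortened to three choices for brevity), each participant’s ordered list earns five points for a first choice, four for a second, and so on:

```python
def weighted_scores(rankings, top_n=5):
    """rankings: one ordered list of selections per participant.
    First choice earns top_n points, second earns top_n - 1, and so on."""
    scores = {}
    for ranked in rankings:
        for position, item in enumerate(ranked):
            scores[item] = scores.get(item, 0) + (top_n - position)
    # Highest total first; ties keep their first-seen order
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical ordered Top 5 worksheets, one list per participant
rankings = [["cost", "location", "amenities"],
            ["location", "cost", "gym access"]]
print(weighted_scores(rankings))
```

Either way you score it, the output is the same kind of prioritized list you would produce from a simple vote count.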
The evaluator can go through the list of ideas generated and remove any that are impractical to implement, duplicates that were not caught earlier, ideas that the product team has already included (or plans to include) in the product, or ideas that do not meet some preset criteria. You can also eliminate “obvious” ideas such as being able to purchase a book from an online bookstore. After the brainstorm is complete, the moderator does this by putting an X next to the items on the list that he/she does not want the participants to include in their Top 5 selections. By removing obvious, impractical, or already implemented ideas, participants will not “waste” their votes on ideas that aren’t useful. We recommend using this modification for products that already have a released version or have a detailed and well-researched functional specification.
Another way to handle “obvious” answers is by asking users to select their top five “wants,” followed by the selection of their top five “needs.” For example, a user may not need to read through all the hotel descriptions to find a suitable hotel, but may want the travel site to hold information about his/her profile and preferences and then automatically find hotels to match the needs.
We have been using the W&N methodology since August, 1999. Over the years, we have learned many things that have helped us improve our techniques. We thought it would be helpful to share these things with you so that you don’t make the same mistakes we did.
Probably the biggest lesson learned was that your scribe must be well trained. That may sound painfully obvious, or perhaps trivial. On the surface, all the scribe has to do is write what you say. How hard can it be? How wrong we were! The most common problem is a scribe who tries to capture every word the participants say verbatim, rather than writing only what the moderator paraphrases. We have also had occasions where scribes have written down their own ideas! In another case, a scribe stopped writing and began chatting with the participants. “Oh yeah! I hate it when that happens! You know what I hate is ….” Usually, it is purely innocent, but it can impact your results. It can be awkward to correct the scribe in front of a group of participants, as well as observers watching from another room.
A scribe must be able to write quickly and clearly, and be able to remember what the moderator paraphrases while the rest of the room is chatting. In addition, the scribe cannot be afraid to say “Slow down – I need to catch up” or “I did not catch that. Could you repeat it?” It isn’t an easy task and not everyone can do it. The scribe should take time to watch a W&N session, noting what the other scribe does well and not so well. The pilot or rehearsal session is another important training opportunity. If the scribe does anything incorrectly at that point, it is far less embarrassing to correct him/her there. You can also determine whether the scribe has the “write stuff!” If not, you may want to quickly seek out an alternative scribe.
As we have mentioned, running a pilot or rehearsal session is important. You would never run a usability test without conducting a pilot session to test your tasks, protocol, and timing. So why would you conduct a group activity without running a pilot session? We have found that wording a question for a W&N can be tricky. Is the question too broad or narrow? Does it make sense? Use this opportunity to work any bugs out of your protocol, train your scribe, and do a check of all materials. We have learned that a pilot is an investment well made!
In one particular instance, the following question was not piloted: “What information do you want and need in an ideal mobile device?” This question was posed to a group of field sales representatives, and the ideas they brainstormed were almost all focused on design. Participants kept referring to their Palm Pilots and cell phones. As you recall, during a W&N you do not want to focus on design. As hard as the moderator tried, she could not get the group to stop thinking about their current devices. The session was a bust! Had the question been piloted, this could have been avoided. It was determined after the session that the problem was the words “mobile device” in the question. The next night the question was rephrased to “What information do you want and need when you are out of your office?” This session was a great success! Another lesson learned the hard way.
In this chapter, we have illustrated the details of conducting a wants and needs analysis. You should now be able to determine when this activity is appropriate, prepare for the session, and collect and analyze your data. We hope you have as much success with this activity as we have.
When I worked for Oracle they wanted to learn more about how people select healthcare providers (i.e., physicians). This information would inform the design of their Provider Finder (the search, advanced search, and information page for each provider). We ran a series of wants and needs (W&N) sessions to examine three types of patients: healthy patients, frequent visitors, and patient agents. These patient types are fundamentally different, and we wanted to be sure we understood how those differences might affect the way each user type selected providers. Healthy patients do not need regular medical care and usually go to the doctor only once a year for checkups. Frequent visitors, on the other hand, visit a single physician or multiple physicians on a regular basis; they include pregnant women, chemotherapy patients, AIDS patients, and dialysis patients. Finally, a patient agent is an individual who acts on the medical behalf of another. This could be a parent, a person caring for a seriously ill partner, or an adult child caring for an elderly parent.
It was decided that a W&N analysis would be a great starting point to understand how these types of patients differed in terms of what they want and need in a product that would allow them to choose a healthcare provider. The majority of participants were recruited through an online community job board; the remaining participants were recruited through word of mouth in our department. It took a couple of weeks to recruit all the users, with frequent visitors being the most difficult group to recruit. The recruitment ads and the screener used to schedule participants had to be worded carefully. We had to respect people’s privacy and be considerate of their situation; we could be speaking to someone who is dealing with cancer or taking care of a dying parent. We also had to be mindful of people’s potential situation during the W&N session itself, avoiding probing questions that might embarrass a participant or ask for too much personal information in a group setting. Everything we created was proofread by the product team (many of whom were physicians and registered nurses themselves) as well as by other members of our group.
Each session lasted two hours. All three user types were in separate sessions but were asked the same two questions. In the first hour, participants were asked what information they wanted and needed to know in order to select a new provider. In the second hour, participants were asked what tasks they would perform with an ideal system that would help them manage their healthcare (or the healthcare of a loved one, in the case of patient agents). In this case study, I will discuss only the first question.
Because of the number of user types under investigation and the expense of recruiting so many participants, only one session of ten participants was conducted for each user type, for a total of 30 participants. We hoped we could follow up with additional sessions but were unable to.
The day after each session, the moderator and scribe met for an hour to sort the worksheets and discuss the results. Table 9.4 shows the results for frequent visitors. (For confidentiality reasons, the tables for healthy patients and patient agents are not shown in this study.)
After all three sessions were conducted, we compared the results across groups. We found that the majority of items appearing in the prioritized lists were the same, but the percentage of users requesting each item often differed between groups. This reflected the different priorities of each group.
The largest percentage of users, regardless of group affiliation, wanted to know the qualifications of the provider they were considering. However, the next highest priority need varied between groups. Healthy individuals were concerned about insurance acceptance (70%). Patient agents were equally concerned with knowing the insurance information and location/hospital affiliation (60% each). However, frequent visitors preferred to know the doctor/patient ratio rather than insurance acceptance (70% compared to 40%).
We determined that, because the core user requirements were the same across user types, the differences between user types were not significant enough to warrant different interfaces for each user type. This list of information and tasks wanted/needed was added into the functional specification of the product. In addition, these findings were used to determine the fields for the prototype’s advanced-search screen that enables users to find a provider. The results were also used to determine what details are displayed when a provider is found.
The W&N analysis was valuable for the product design because participants shared with us their concerns when selecting a healthcare provider. The development team was unaware of many of the concerns participants had and realized they could easily be addressed in the design of the product. This information was used by the team in the generation of their first prototype.
We knew it would be difficult to recruit frequent visitors, but we didn’t realize how difficult. In addition to the moderator, an intern spent several hours interviewing participants and asking both patient agents and healthy patients whether they could recommend any frequent visitors for the session. We now know that at least three weeks should be allowed to recruit difficult user profiles.
In addition, we would have preferred to break the groups of ten into two groups of five for each user type. That might have made the frequent visitors more comfortable when speaking in front of a smaller group. The added benefit would have been a second group of each type against which to validate the findings. It would have taken an additional three days, but we think it would have been worth it.