Chapter 10

Case Studies

This chapter presents five case studies showing how other UX researchers and practitioners have used metrics in their work. These case studies highlight the amazing breadth of products and UX metrics. We thank the authors of these case studies: Erin Bradner from Autodesk; Mary Theofanos, Yee-Yin Choong, and Brian Stanton from the National Institute of Standards and Technology (NIST); Tanya Payne, Grant Baldwin, and Tony Haverda from OpenText; Viki Stirling and Caroline Jarrett from the Open University; and Amanda Davis, Elizabeth Rosenzweig, and Fiona Tranquada from Bentley University.

10.1 Net Promoter Scores and the Value of a Good User Experience

Erin Bradner,    Autodesk

Net Promoter is a measure of customer satisfaction that grew out of the customer loyalty research of Frederick Reichheld (2003). Reichheld developed the Net Promoter Score (NPS) to simplify the characteristically long and cumbersome surveys that typified customer satisfaction research at the time. His research found a correlation between a company’s revenue growth and its customers’ willingness to recommend it. The procedure used to calculate the NPS is decidedly simple and is outlined here. In short, Reichheld argued that revenues grow as the percentage of customers willing to actively recommend a product or company increases relative to the percentage likely to recommend against it. (Note: Net Promoter is a registered trademark of Satmetrix, Bain, and Reichheld.)

At Autodesk we’ve been using the Net Promoter method to analyze user satisfaction with our products for 2 years (Bradner, 2010). We chose Net Promoter as a model for user satisfaction because we wanted more than an average satisfaction score. We wanted to understand how the overall ease of use and feature set of an established product factor into our customers’ total product experience (Sauro & Kindlund, 2005). Through multivariate analysis—used frequently in conjunction with Net Promoter—we identified the experience attributes that inspire customers to actively promote our product. These attributes include the user experience of the software (ease of use), the customer experience (phone calls to product support), and the purchase experience (value for the price).

This case study explains the specific steps we followed to build this model of user satisfaction and outlines how we used it to quantify the value of a good user experience.

10.1.1 Methods

In 2010, we launched a survey aimed at measuring user satisfaction with the discoverability, ease of use, and relevance of a feature of our software that we’ll refer to here as the L&T feature. Using an 11-point scale, we asked users about their satisfaction with the feature, along with their likelihood to recommend the product. The recommend question is the defining feature of the Net Promoter model. To calculate the NPS, we:

1. Asked customers if they’d recommend our product using a scale from 0 to 10, where 10 means extremely likely and 0 means extremely unlikely.

2. Segmented the responses into three buckets:

Promoters: Responses from 9 to 10

Passives: Responses from 7 to 8

Detractors: Responses from 0 to 6

3. Calculated the percentage of promoters and percentage of detractors.

4. Subtracted the percentage of detractors from the percentage of promoter responses to get the NPS.

This calculation gave us an NPS. Knowing that we had 40% more customers promoting our product than detracting from it does mean something. But it also raised the question: is 40% a good score?
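
For readers who want to apply the same bucketing to their own survey data, here is a minimal Python sketch of the NPS calculation described above (the ratings shown are hypothetical, not Autodesk data):

def net_promoter_score(ratings):
    """ratings: 0-10 responses to the 'likelihood to recommend' question."""
    promoters = sum(1 for r in ratings if r >= 9)    # responses of 9 or 10
    detractors = sum(1 for r in ratings if r <= 6)   # responses of 0 to 6
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical example: 50 promoters, 30 passives, 20 detractors -> NPS of 30
print(net_promoter_score([9] * 50 + [7] * 30 + [5] * 20))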

Industry benchmarks do exist for NPS. For example, the consumer software industry (Sauro, 2011) has an average NPS of 21%—meaning a score in the low twenties is about average for products such as Quicken, QuickBooks, Excel, Photoshop, and iTunes. Common practice at Autodesk is to place less stock in benchmarks and instead focus carefully on the aspects of the user experience that increase the promoters while reducing the detractors.

To isolate the “drivers” of a good user experience, we also included rating questions in our survey that asked about the overall product quality, product value, and product ease of use. We asked these questions on the same 11-point scale used for the recommendation question. We then calculated mean satisfaction scores for each experience variable. Satisfaction is plotted along the x axis shown in Figure 10.1.

image

Figure 10.1 Anatomy of a Key Driver Analysis. Note that some graph data are simulated.

Next we ran a multiple regression analysis with Net Promoter as the dependent variable and the experience attributes as independent variables. This analysis showed us which experience attributes were significant contributors to users’ likelihood to recommend the product. Because the analysis uses standardized beta coefficients, it takes into account the correlations among the variables. Those coefficients are plotted along the y axis in Figure 10.1, so the y axis represents the standardized beta coefficient. We call the y axis “Importance” because the strength of each variable’s relationship to the question “would you recommend this product?” is what tells us how important that experience variable is to our users. Plotting satisfaction against importance gives us insight into which experience attributes (interface, quality, or price) matter most to our users.
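
As an illustration only, a key driver analysis of this kind can be computed in a few lines of Python. This sketch assumes the survey responses sit in a pandas DataFrame with hypothetical column names, and it uses ordinary least squares on z-scored variables to obtain the standardized betas:

import numpy as np
import pandas as pd

def key_drivers(df, outcome, drivers):
    """Mean satisfaction (x axis) and standardized beta ('importance', y axis) per driver."""
    cols = [outcome] + drivers
    z = (df[cols] - df[cols].mean()) / df[cols].std()      # z-score everything
    X = np.column_stack([np.ones(len(z))] + [z[d].to_numpy() for d in drivers])
    betas, _, _, _ = np.linalg.lstsq(X, z[outcome].to_numpy(), rcond=None)
    return pd.DataFrame({"satisfaction": df[drivers].mean(),
                         "importance": betas[1:]},          # skip the intercept
                        index=drivers)

# Hypothetical usage:
# plot_data = key_drivers(survey, "recommend",
#                         ["product_quality", "product_value", "ease_of_use"])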

10.1.2 Results

According to Reichheld (2003), no one is going to recommend a product without really liking it. When we recommend something, especially in a professional setting, we put our reputations on the line. Recommending a product is admitting we are more than satisfied with the product. It signifies we are willing to do a little marketing and promotion on behalf of this product.

This altruistic, highly credible, and free promotion from enthusiastic customers is what makes the recommend question meaningful to measure. Promoters are going to actively encourage others to purchase our product and, according to Reichheld’s research, are more likely to repurchase.

We wanted to determine how a customer’s likelihood to recommend a given product was driven by specific features and by the overall ease of use of that product. A new feature (which we’ll call L&T) was included in the product we were studying. When we plotted users’ satisfaction with the L&T feature against their willingness to recommend the product containing it, we found that the L&T feature sat lower on the y axis than other aspects of the interface (as shown in Figure 10.1). Using the L&T feature (L&T Ease of Use) and locating it (L&T Discoverability) scored lower in satisfaction than Product Quality, Product Value, and Product Ease of Use, but they also scored lower in Importance. Users place less importance on this new feature relative to quality, value, and ease of use. The data show that users’ satisfaction with the L&T feature is not as strongly correlated with their likelihood to recommend the product as quality and ease of use are, and it is therefore not as important in driving growth of product sales.

The labels on the quadrants in Figure 10.1 tell us exactly which aspects of the user experience to improve next. Features that plot in the upper left quadrant, labeled FIX, are the highest priority because they have the highest importance and lowest satisfaction.

Data in Figure 10.1 indicate that if we were to redesign the L&T feature, we should invest in L&T Relevance as it plotted higher on the Importance axis than L&T Discoverability and Ease of Use. Discoverability and ease of use of the L&T feature are in the HOLD quadrant, indicating that these should be prioritized last.

10.1.3 Prioritizing Investments in Interface Design

So how much does the user interface of a software product contribute to users’ willingness to recommend the product? We had been told by our peers in the business intelligence department that the strongest predictors of a user’s willingness to recommend a product are:

1. Helpful and responsive customer support (Support)

2. Useful functionality at a good price (Value).

We ran a multiple regression on our survey data set (Figure 10.2) and found that variables for the software user experience contribute 36% to the likelihood to recommend (n = 2170). Product Value accounted for 13% and Support accounted for another 9%. To verify the contribution of software user experience to willingness to recommend, we ran another multiple regression on data from a second, similar survey (n = 1061) and found the contribution of user experience variables to be 40%.

image

Figure 10.2 Simulated analysis of aspects of the customer experience contributing to customers’ likelihood to recommend a product. Note that some graph data are simulated.

We then ran a third survey 1 year later. Regression formulas from the first survey and the third survey are shown at the end of this section, where LTR represents Likelihood to Recommend. In Year 1 we calculated the improvement targets shown in Figure 10.3 (left). We set a target of a 5% increase in users’ likelihood to recommend our product, and we knew how to achieve that increase from the regression formula: assuming that the other contributing factors remain constant, if we could increase the satisfaction scores for the overall product ease of use, for the usability of Feature 1, and for the usability of Feature 2, then we would see an increase in users’ Likelihood to Recommend of 5%.

image

Figure 10.3 Target Increase in Likelihood to Recommend (left) vs Actual Increase (right).

In Year 2, we reran the analysis. We found that the actual increase in Likelihood to Recommend was 3%. This 3% increase was driven by a 3% increase in ease of use, a 1% increase in Feature 1’s usability, and a 0% increase in Feature 2’s usability, as summarized by Figure 10.3 (right). Regression formulas for the product studied are shown here:

image

image
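
The specific coefficients in the formulas shown above are proprietary, so as a hedged sketch only (the symbols are ours, not Autodesk’s), the fitted models took the general linear form

\mathrm{LTR} = \beta_0 + \beta_1\,(\text{Product Ease of Use}) + \beta_2\,(\text{Feature 1 Usability}) + \beta_3\,(\text{Feature 2 Usability}) + \cdots + \varepsilon

where larger standardized betas indicate experience attributes with more leverage on Likelihood to Recommend.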

10.1.4 Discussion

The multivariate analysis showed that the user experience contributed 36% to increasing product recommendations. At Year 2, we hadn’t met our target of increasing the Likelihood to Recommend our product by 5%, but by investing in ease of use and in a few key features we were able to improve the Likelihood to Recommend by 3%. The Net Promoter model had provided us with a way to define and prioritize investment in user experience design and had given us a way to track the return on that investment year after year.

We wanted to test the Net Promoter model further. Could the model be used as a predictor of sales growth, as it was originally intended (Reichheld, 2003)? We know the average sales price of our products. We know, from the multivariate analysis, that interface design contributes 36% to motivating users to recommend our product. If we knew how many promoters refer the product actively, we could estimate the revenue gains associated with improved user experience of our software.

What we did next was determine whether there was a link between “promoters” and an increase in customer referrals. In our survey, we asked whether the respondents—all were existing customers—had referred the product to a friend in the last year (Owen & Brooks, 2008). From these data we derived the proportion of customers obtained through referrals and the proportion likely to refer others. This allowed us to approximate the number of referrals necessary to acquire one new customer (see Figure 10.4). The data used to derive this number are proprietary; for the purpose of this chapter, we use the number eight: we need eight referrals to acquire one new customer. In the NPS model, it is promoters who actively refer a product. But we didn’t want to assume that every respondent who answered 9 or 10 on the likelihood to recommend question, that is, every promoter, had actively referred our product. The actual percentage of promoters who had referred our product within the last year was 63%. From this, we derived that the total number of promoters needed to acquire one new customer was 13.

image

Figure 10.4 How many promoters are necessary to acquire one new customer?
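
Using the illustrative figures above (eight referrals per new customer, and 63% of promoters actively referring), the arithmetic behind Figure 10.4 works out roughly as follows; this is a sketch with the stand-in numbers, not the proprietary calculation:

\text{promoters per new customer} \approx \frac{\text{referrals per new customer}}{\text{fraction of promoters who refer}} = \frac{8}{0.63} \approx 12.7 \approx 13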

10.1.5 Conclusion

By calculating the number of promoters required to acquire a new customer, we were able to connect the proverbial dots in the software business: a good user experience design drives our users to recommend our products and product recommendations increase customer acquisition, which increases revenue growth. Through multivariate analysis, we have shown that experience design contributes 36% to motivating users to recommend our product. Since we know the average sales price of our product, we were able to estimate the revenue gains associated with improving the user experience of our software. We quantified the value of a good user experience. By tying user experience to customer acquisition, we are able to prioritize design investment in ease of use and in research to improve the user experience of our products.

In summary, this case study shows:

• Multivariate analysis of user experience attributes can be used to prioritize investment in user experience design and research.

• User experience attributes, such as ease of use, contribute significantly to customer loyalty.

• Knowing the average sales price of our products and the number of promoters needed to acquire one new customer, we can quantify the return on investment of a good user experience.

At Autodesk, we’ve found that calculating a Net Promoter Score isn’t as useful as graphing and using key driver charts. Key driver charts pinpoint the aspects of the user experience that most urgently need design improvements. By calculating drivers from year to year, we see how our investments in key areas pay off by increasing our users’ likelihood to recommend our products. We watch features move from the FIX quadrant safely into the LEVERAGE quadrant. Inspiring more customers to promote our product by designing excellent user experiences is what motivates us. It’s not about a score, or solely about acquiring new customers; it’s about designing software experiences that are so good that our users will actively promote them.

References

1. Bradner, E. (2010). Recommending net promoter. Retrieved on 23.10.2011 from DUX: Designing the User Experience at Autodesk <http://dux.typepad.com/dux/2010/11/recommending-net-promoter.html>.

2. Reichheld, F. (2003). The one number you need to grow. Harvard Business Review.

3. Owen, R., & Brooks, L. (2008). Answering the ultimate question. San Francisco: Jossey-Bass.

4. Sauro, J. (2011). Usability and net promoter benchmarks for consumer software. Retrieved on 23.10.2011 from Measuring Usability <http://www.measuringusability.com/software-benchmarks.php>.

5. Sauro, J., & Kindlund, E. (2005). Using a single usability metric (SUM) to compare the usability of competing products. In Proceedings of the Human Computer Interaction International Conference (HCII 2005), Las Vegas, NV.

Biography

Erin Bradner works for Autodesk, Inc.—makers of AutoCAD and a world leader in 3D design software for manufacturing, building, engineering, and entertainment. Erin manages user experience research across several of Autodesk’s engineering and design products. She actively researches topics ranging from the future of computer-aided design, to how best to integrate marking menus into AutoCAD, to the contribution of user experience to likelihood to recommend a product. Erin has a Ph.D. in Human–Computer Interaction and 15 years of experience using both quantitative and qualitative research methods. Prior to Autodesk, Erin consulted for IBM, Boeing, and AT&T.

10.2 Measuring the Effect of Feedback on Fingerprint Capture

Mary Theofanos, Yee-Yin Choong and Brian Stanton,    National Institute of Standards and Technology

The National Institute of Standards and Technology’s Biometrics Usability Group is studying how to provide real-time feedback to fingerprint users in order to improve biometric capture at U.S. ports of entry. Currently, the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program collects fingerprints from all foreign visitors entering the United States using an operator-assisted process. US-VISIT is considering unassisted biometric capture for specific applications. But ensuring acceptable quality images requires that users receive real-time feedback on performance. Many factors influence image quality, including fingerprint positioning, alignment, and pressure. The challenge is determining what form this informative feedback should take for an international audience.

To address this need, Guan and colleagues (2011) designed an innovative, cost-efficient, real-time algorithm for fingertip detection, slap/thumb rotation detection, and finger region intensity estimation that feeds rich information back to the user instantaneously during the acquisition process by measuring objective parameters of the image. This study investigates whether such rich, real-time feedback is enough to enable people to capture their own fingerprints without the assistance of an operator. A second objective is to investigate if providing an overlay guide will help people in better positioning their hands for fingerprint self-capture.

10.2.1 Methodology

Experimental design

We used a within-subject, single factor design with 80 participants who performed two fingerprint self-capture tasks: one with a fingerprint overlay displayed on a monitor to guide them on positioning their fingers during the capture process and the other task without the overlay. Order of receiving conditions was reversed for half the participants. The dependent variables are:

• Task completion rate—the proportion of participants who completed the self-capture task without assistance, versus those who did not complete it or completed it only with assistance

• Errors—number of hand-positioning corrections until an acceptable fingerprint image is recorded

• Quality of the fingerprint image—the NIST fingerprint imaging quality (NFIQ) scores of the fingerprint images

• Attempt time—time from the moment a participant presents her hand until the end of the capture

• Task completion time—total time it takes to complete a capture task

• User satisfaction—user’s ratings from the post-task questionnaire

Participants

Eighty adults (36 females and 44 males; ages ranging from 22 to 77, mean = 46.5) were recruited from the general population in the Washington, DC, area. Participants were distributed diversely across education, occupation, and ethnicity. Fifty-four participants indicated that they had been fingerprinted before: 18 had prior experience with inked and rolled fingerprinting, and 36 did not indicate the type of their fingerprint experience. All fingerprint experiences were assisted.

Materials1

We used a CrossMatch Guardian fingerprint scanner, as used by US-VISIT. Specifications include 500 ppi resolution; an effective scanning area of 3.2″ × 3.0″ (81 × 76 mm); and a single prism, single imager, uniform capture area. The system runs on a PC with an Intel Core 2 CPU 4300 @ 1.8 GHz, 3.23 GB RAM, and a 20-inch LCD monitor.

Figure 10.5 shows the experiment configuration: scanner on a height-adjustable table with height set to 39 inches (common counter height at US-VISIT facilities). The scanner was placed at the recommended 20° angle (Theofanos et al., 2008). A webcam was mounted on the ceiling above the scanner to record participants’ hand movements.

image

Figure 10.5 Experimental setup.

Procedure

Each participant was instructed to perform two self-capture tasks using on-screen instructions. Participants were informed verbally that both tasks required them to capture four fingerprint images following the same sequence: right slap (RS), right thumb (RT), left slap (LS), and left thumb (LT).

The test scenario is described in Figure 10.6: one task included the overlay and the other did not. Half of the participants were randomly assigned to start without the overlay, followed by a task with the overlay; the other half received the conditions in the reverse order.

image

Figure 10.6 Test scenario.

When the participant was ready, a generic fingerprint capture symbol as in Figure 10.7 was displayed, marking the start of the process.

image

Figure 10.7 Overlay condition—fingerprint self-capture process with examples.2

Participants filled out a post-task questionnaire and discussed their overall impressions with the test administrator.

Results

Applying the ISO (1998) definition of usability—“the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use”—we measured effectiveness, efficiency, and user satisfaction. The α of all tests for statistical significance was set to 0.05. Data were not distributed normally; thus, a nonparametric test of difference, the Wilcoxon matched-pairs signed-ranks test, was used on all statistical within-subject comparisons.
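
As a sketch of how such a paired comparison might be run (with made-up numbers, not the study’s data), SciPy provides the Wilcoxon matched-pairs signed-ranks test directly:

from scipy.stats import wilcoxon

# Hypothetical paired measurements for the same participants under the two
# conditions, e.g., attempt times in seconds with and without the overlay.
overlay = [28.1, 25.4, 30.2, 22.8, 27.5, 31.0, 24.3, 26.7]
no_overlay = [26.0, 24.9, 28.4, 23.1, 25.2, 29.8, 23.5, 25.1]

statistic, p_value = wilcoxon(overlay, no_overlay)
print(f"W = {statistic}, p = {p_value:.3f}")   # compare p to alpha = 0.05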

Effectiveness

Three dependent variables are related to the effectiveness of the self-capture system: number of participants that completed the tasks (task completion rate), errors, and quality of the fingerprint image.

Task completion rate

Overall, 37 out of 40 (92.5%) participants for Task 1 and 39 out of 39 (100%) participants for Task 2 completed the self-capture tasks successfully by following the on-screen instructions without assistance or prompts from the test administrator.

Errors

The average number of hand-positioning corrections (errors) for each fingerprint is shown in Table 10.1. As shown in Table 10.2, errors can be classified into four categories.

Table 10.1

Errors by condition

Image

Table 10.2

Error category, condition, and text

Image

Figure 10.8 shows the seven most common errors. For slaps (both right and left hands), not enough pressure applied was the most common error: RS: 214, LS: 101. There were many occurrences where not all four fingers were detected at the same time, indicating that participants had some difficulties placing their fingers evenly on the scanner: RS: 132, LS: 51.

image

Figure 10.8 Most common correction errors.

Quality of the fingerprint image

We used NIST fingerprint imaging software to compute the NFIQ (Tabassi et al., 2004) score for each finger. NFIQ scores range from 1 (highest quality) to 5 (lowest quality). The medians of individual NFIQ scores are shown in Figure 10.9.

image

Figure 10.9 Fingerprint image quality.

As there is not yet consensus in the biometrics community on how to determine the quality of a slap image, we used a proposed quality scoring method under consideration by US-VISIT to assess the overall quality of the images. A slap is accepted if the index finger and middle finger have an NFIQ value of 1 or 2 and the ring finger and little finger have an NFIQ score of 1, 2, or 3. A thumb is accepted if it has an NFIQ value of 1 or 2. The results of applying the criteria are also shown in Figure 10.9. The acceptance rates are RT, 78.8%; RS, 67.5%; LT, 76.3%; and LS, 68.4%.
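
The acceptance criteria described above are straightforward to express in code; the following Python sketch simply restates them (recall that NFIQ values run from 1, highest quality, to 5, lowest quality):

def slap_accepted(index, middle, ring, little):
    """Index and middle fingers must score 1-2; ring and little fingers 1-3."""
    return index <= 2 and middle <= 2 and ring <= 3 and little <= 3

def thumb_accepted(thumb):
    """A thumb must score 1 or 2."""
    return thumb <= 2

# Hypothetical examples
print(slap_accepted(2, 1, 3, 3))   # True: meets the slap criteria
print(thumb_accepted(3))           # False: a thumb needs an NFIQ of 1 or 2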

Efficiency

Two dependent variables are related to the efficiency of the self-capture system: attempt time and task completion time (Table 10.3).

Table 10.3

Time by conditions

Image

Attempt time

Attempt time is the time it takes from the moment a participant presents her hand until the end of capture of a fingerprint image. As expected, it took longer to capture slaps than thumbs. Means of the attempt time in seconds are RS, 28.66; RT, 5.87; LS, 17.14; and LT, 5.54.

Task completion time

Task completion time is the total time spent to complete a capture task of four fingerprint images. As shown in Table 10.3, on average it took participants approximately 1½ minutes to complete a self-capture task.

Satisfaction

Participants filled out a questionnaire regarding their experience with the self-capture task after each task. The questionnaire consisted of six questions with a five-point semantic distance scale. Mean ratings across all participants are summarized in Table 10.4. Overall, the participants responded very positively to the self-capture tasks.

Table 10.4

Post-task satisfaction questions and ratings

Image

During the discussion after the test, we asked each participant if the overlay assisted the self-capture process. Forty-six participants (57.5%) found the overlay helpful in guiding them to better position their hand on the scanner; six of those participants indicated that it would be more helpful if the overlay were directly on the scanner (rather than projected on the monitor). Twenty-eight participants (35%) did not find the overlay helpful; nine of those participants indicated that the overlay would be helpful if it were directly on the scanner. Six participants (7.5%) indicated that the overlay was not a factor, as the process is quite simple and straightforward; one participant from this group indicated that it would be more helpful if the overlay were on the scanner.

10.2.2 Discussion

In this study, we examined whether people can perform fingerprint self-captures successfully using the proposed real-time feedback system. We found that the participants were very effective and efficient in performing the self-capture tasks with great satisfaction.

The task completion rate was high (92.5% in Task 1, improving to 100% in Task 2), with fewer than 11 errors on average. When examining only positioning errors (errors related to pressure or angles were excluded), the average number of errors dropped (mean = 5.94). Fingerprint image quality was comparable to, if not better than, images taken in the attended situation. In a study with an attended setup, Stanton et al. (2012) reported that the acceptance rates of slaps ranged from 55 to 63%, based on the US-VISIT acceptance criteria. In this self-capture study, acceptance rates for slaps ranged from 67.5 to 68.4%, higher than those in Stanton et al. (2012).

Using the on-screen instructions, participants were able to position their hands accordingly, make adjustments when needed, and capture fingerprint images in approximately 1½ minutes. Ratings on the post-task questionnaire indicate that the participants felt comfortable and confident and interacted without much difficulty with the self-capture process. It was clear to the participants when the capture process began and ended. In debriefing, participants indicated that the self-capture process was easy, straightforward, and quick. They praised the experience of “do-it-yourself” as it gave them a sense of being in control and being trusted; as one participant put it: “The self capture process was very neat. It is easy enough that anybody can do it. It is elementary, easy to use, even children can do it.”

Our second research question was whether an overlay facilitates the fingerprint self-capture process. The overlay condition did not show consistent performance advantages or disadvantages (in time, errors, or image quality) over the nonoverlay condition. However, more participants (57.5%) perceived that the overlay helped them position their hands and provided visual feedback. One reason for the discrepancy between performance and preference is the experimental configuration. The setup required participants to place their hands on the scanner (often looking down) and look up at the LCD monitor for the fingerprint image and, if needed, feedback for corrections. The overlay guide was superimposed onto the screen, which added another level of hand–eye coordination. Participants realized corrections were needed from the visual feedback of their fingerprint image in relation to the overlay on the screen, but had to move their hands to make the actual corrections on the scanner. Participants were observed placing their hands on the scanner more carefully in the overlay condition, as if they wanted to make sure their hands were aligned properly with the overlay, whereas participants positioned their hands more freely in the nonoverlay condition. Sixteen participants indicated that the overlay guide would be very helpful if it were placed on the scanner, so that they could align their hand with the overlay as they placed it on the scanner instead of looking up and down trying to make a perfect alignment.

We observed that participants learned to use the system very quickly. Within-subject comparisons were performed to examine the ease of learning of the system. Comparing performance between Task 1 and Task 2, we found that participants performed significantly better in Task 2 on RS (attempt time and errors), RT (errors), task completion time, and total errors. Learning was evident even within a task.

10.2.3 Conclusion

The real-time feedback fingerprint system is highly usable and shows great potential for fingerprint self-capture. By following the on-screen, real-time instructions, participants quickly learned the system and felt comfortable and confident capturing their own fingerprints without any assistance. The next step is to determine whether users will benefit more in a language-free environment in which all instructions are presented in graphical format (symbols or icons, without any textual elements). With the findings from this study, we are planning future research to answer this question. Although the overlay guide did not show consistent advantages or disadvantages with respect to performance, it was perceived as helpful with hand positioning and provided visual feedback of where users’ hands were in relation to the scanner area. We recommend using the overlay guide in the fingerprint self-capture process; however, we would recommend that it be placed directly on the scanner.3

Acknowledgment

This work was funded by the Department of Homeland Security Science and Technology Directorate.

References

1. Guan, H., Theofanos, M., Choong, Y. Y., & Stanton, B. (2011). Real-time feedback for usable fingerprint systems. International Joint Conference on Biometrics (IJCB), pp. 1–8.

2. International Organization for Standardization (ISO) (1998). ISO 9241-11: Ergonomic requirements for office work with visual display terminals (VDTs). Part 11: Guidance on usability. Geneva, Switzerland.

3. Stanton, B., Theofanos, M., Steves, M., Chisnell, D., & Wald, H. (2012). Fingerprint scanner affordances, NISTIR (to be published), National Institute of Standards and Technology, Gaithersburg, MD.

4. Tabassi, E., Wilson, C., & Watson, C. (2004). Fingerprint image quality. NISTIR 7151, National Institute of Standards and Technology, Gaithersburg, MD <http://www.nist.gov/customcf/get_pdf.cfm?pub_id=905710>.

5. Theofanos, M., Stanton, B., Sheppard, C., Micheals, R., Zhang, N. F., Wydler, J., et al. (2008). Usability testing of height and angles of ten-print fingerprint capture, NISTIR 7504, National Institute of Standards and Technology, Gaithersburg, MD.

Biographies

Mary Frances Theofanos is a computer scientist at the National Institute of Standards and Technology where she is the program manager of the Common Industry Format Standards for usability and the principal architect of the Usability and Security Program evaluating human factors and usability of cyber security and biometric systems. She spent 15 years as a program manager for software technology at the Oak Ridge National Laboratory complex of the U.S. DOE. She received a Master’s in Computer Science from the University of Virginia.

Brian Stanton obtained his Master’s degree in Cognitive Psychology from Rensselaer Polytechnic Institute and is a cognitive scientist in the Visualization and Usability Group at the National Institute of Standards and Technology where he works on the Common Industry Format project developing usability standards and investigates usability and security issues ranging from password rules and analysis to privacy concerns. He has also worked on biometric projects for the Department of Homeland Security, Federal Bureau of Investigation’s hostage rescue team, and with latent fingerprint examiners. Previously he worked in private industry designing user interfaces for air traffic control systems and B2B web applications.

Yee-Yin Choong is a research scientist at the National Institute of Standards and Technology. Her research focuses on applying human factors and usability disciplines to technologies, including graphical user interface design, symbols and icons design, biometrics technology usability, and cyber security and usability. Yee-Yin holds graduate degrees in Industrial Engineering from Pennsylvania State University and Purdue University, respectively.

10.3 Redesign of a Web Experience Management System

Tanya Payne, Grant Baldwin and Tony Haverda,    OpenText

Web Experience Management is an enterprise software product designed to create, edit, and manage websites. Generally, the websites it manages are quite large and complex. For example, they use large databases to store the website assets and presentation rules to decide what dynamic content is displayed. The product has both a console and a preview (close to WYSIWYG) interface for interacting with the content. The console view is designed for managing lists of content and bulk actions, whereas the preview view is for editing content and the design.

The previous release of the Web Experience Management system in-context tools palette was widely criticized as being too large and “in the way” all the time. The User Experience team was tasked with redesigning the in-context tools so that they were easy to use as well as small and out of the way. The original “tools palette” can be seen in Figure 10.10.

image

Figure 10.10 The large tools palette can be seen on top of a web page. It is easy to see that the tools palette covers a great deal of the page.

The design team focused on creating a minimal sized “toolbar,” effectively reducing the existing large tools palette to the size of a menu bar. The full functionality and options in the existing tools palette could be accessed through slide-out expansion “drawers” as a user made selections from the primary icons on the new toolbar. This approach reduced the size and overall presence of the in-context tools dramatically.

10.3.1 Test Iterations

Using Axure-generated HTML prototypes, we performed a total of six rounds of usability testing on the toolbar approach during the design phase. We iterated on the design between rounds of testing, adding functionality as more requirements were incorporated. If a particular area of the product performed below our expectations, we often retested it in subsequent rounds. Because our focus was improving the design, if data integrity conflicted with the right thing to do for the design, we naturally always chose the design. Since we work in an Agile environment, we had to keep the usability testing cycles quite short, usually less than 2 weeks per round, often aiming for 1 week. Generally, we tried to keep one or two iterations ahead of the actual coding work that was going on.

For the purpose of this case study, we focused on four rounds of testing because they repeated the same workflow task with four different designs. We will refer to these as Rounds 1–4.

Employing a resource we often use to expand the coverage of our team, Round 1 of testing was performed in person by a graduate student from the School of Information at the University of Texas at Austin under the mentorship of Associate Professor Randolph Bias. We conducted Rounds 2, 3, and 4 of testing remotely via a conference call and WebEx. We shared our desktop via WebEx and gave the participant control of the mouse and keyboard.

There were a total of 25 participants across the four rounds of testing. We had 3 participants in Round 1, 9 participants in Round 2, 4 participants in Round 3, and 9 participants in Round 4. Participant groups varied for the rounds of testing, partially due to budget constraints, but also due to the fact that users of the Web Experience Management product can vary significantly. Users can range from being long-time, full-time users of the system to brand new, occasional users of the system. Rounds 2 and 3 involved current users of the system, as well as users of competitors’ systems recruited by a market research firm. Round 1 involved representative users from the University of Texas, and Round 4 involved current customers exclusively.

10.3.2 Data Collection

Even though all the usability tests were “formative” in nature, we collected usability metrics for each of the rounds of testing similar to the methodology reported by Bergstrom, Olmsted-Hawala, Chen, and Murphy (2011). From our perspective, usability metrics are just another way of communicating what happened during a usability test. Of course, we also collect qualitative data and that data still represent the bulk of our formative usability test results and recommendations. However, we have found metrics, being numbers, are concise and easy for management and developers to digest. Also, we find them quick and easy to collect and analyze.

We have standardized a set of metrics at OpenText, and these were reported and tracked across the rounds of tests to communicate improvements in the design to product owners. Following the ISO definition of usability as “effectiveness,” “efficiency,” and “satisfaction,” we collected task completion rate, time on task, the Single Ease Question (SEQ; Sauro & Dumas, 2009; Sauro & Lewis, 2012) after every task, and the System Usability Scale (SUS) at the end of the test. The SEQ is a single question asking participants to rate the difficulty of the task on a seven-point scale, where 1 is “very difficult” and 7 is “very easy.”
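
For readers unfamiliar with these instruments, the scoring can be summarized in a few lines of Python. This is a generic sketch of standard SEQ averaging and SUS scoring with hypothetical responses, not OpenText’s spreadsheet template:

def mean_seq(ratings):
    """Mean SEQ rating (1 = very difficult, 7 = very easy)."""
    return sum(ratings) / len(ratings)

def sus_score(responses):
    """Standard SUS scoring for the ten 1-5 item responses: odd items
    contribute (response - 1), even items (5 - response), and the sum
    is multiplied by 2.5 to give a 0-100 score."""
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))   # i = 0 is item 1 (odd)
    return total * 2.5

print(mean_seq([3, 5, 4, 6]))                      # 4.5
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))   # 85.0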

We collected the data using Excel spreadsheets based on a template we use for every usability test that includes formulas for calculating our metrics. As a result, we can report the quantitative results almost immediately after finishing a study. Since we had product release goals, the team was very interested in hearing quickly if we were getting closer to our goals.

The only downside with our spreadsheet methodology is that one of our experimenters had a difficult time with the spreadsheet’s keyboard shortcuts in the beginning of these studies. As a result, we lost many time-on-task measurements and do not have enough data to present those findings here. However, we often see interesting differences in time on task, even when testing mockups.

10.3.3 Workflow

For the purposes of this case study, we focused on one aspect of the project: workflow design and results. The workflow task in all four rounds of usability testing was essentially the same: accept a workflow task assigned to the participant, approve the page being edited, add a note to the workflow, and finish the workflow task assigned. Task instructions given to the participant were:

You’ve received an automatic workflow item! When you made changes to the homepage, a workflow automatically triggered and sent a workflow to you to approve the page. Please complete the workflow and add the note “fixed typo, changed image” so that your editor knows what you’ve changed.

Workflow was a difficult task for us to get right from a design perspective. In the new slim design, we had very little room to work with to communicate complex task requirements, and we had to work within the existing system limitations. There was neither the time nor the resources to allow for a complete rewrite of the code. Figures 10.11–10.15 show screenshots of the design and how it changed over the four testing iterations. We started out with a very modular approach, requiring the user to find each piece of the functionality on the toolbar, and ended with a more “wizard”-like approach where the user is guided through the process. In each version of the design, the toolbar is shown at the bottom of the screen in gray. The “task panel” (or later “Task Inbox”) is the blue panel just above the toolbar on the left-center portion of the screen. Early versions of the design (1 and 2) included some task functions (“Accept Task”) within the task panel, while later versions (3 and 4) moved those actions into a second “Task Editor” window (the panel on the right side of Design 3, and Screen 2 for Design 4). The designs, along with a brief description, appear in Figures 10.11–10.15.

image

Figure 10.11 An early version of the new “toolbar” design can be seen on top of a web page. The “unaccepted task” window slides out on selection of the “Task: Please approve….” Yellow boxes indicate required selections for accepting a workflow task.

image

Figure 10.12 Second iteration of the workflow functionality in a toolbar design. Yellow boxes indicate required selections for accepting a workflow task.

image

Figure 10.13 Third iteration of the workflow functionality in a toolbar design. Yellow boxes indicate required selections for accepting a workflow task.

image

Figure 10.14 Fourth iteration of the workflow functionality in a toolbar design. A more “modular” or “wizard” approach was taken, so two images have been used to describe the interaction. Yellow boxes indicate required selections for accepting a workflow task.

image

Figure 10.15 Fourth iteration of the workflow functionality in a toolbar design. A more “modular” or “wizard” approach was taken, so we have two images to describe the interaction. This page shows the actual task screen. Yellow boxes indicate required selections for accepting a workflow task.

Workflow Design 1

The first design was focused on trying to stay with the very modular design of the new toolbar. The toolbar can be seen at the bottom of Figure 10.11 with the word “tasks” to the left. Users had to perform individual steps: open the task, accept the task, approve the page and add a note, and then finish the task (the accept task became a finish task button upon selection) using different buttons. The buttons required are highlighted in yellow.

Workflow Design 2

The second design, as shown in Figure 10.12, attempted to streamline the experience of accepting a task and adding notes, while leaving the “approve page” button outside of the workflow. Again, participants needed to select the tasks, accept the task, add a note, approve the page, and finish the task. Buttons required to perform the tasks are highlighted in yellow.

Workflow Design 3

The third set of workflow designs removed the concept of a “tasks” area of the toolbar and moved all of the actions into the “editor” of the content. The concept of an automatic accept with a “flag” for rejection was also explored here, as we had seen this example at a customer site. In this design, participants were required to select tasks, select the correct task and then accept the task, reject the design, add a note, and finish the task (see Figure 10.13). Buttons required to do the tasks are highlighted in yellow.

Workflow 4 Screen 1

The fourth and final design took the idea of putting workflow into a single popup further, making the popup a bit more like a “wizard” experience. Participants needed to select “tasks” and select the correct task, after which a popup came up, represented by Figure 10.14.

Workflow 4 Screen 2

In screen 2 (see Figure 10.15), participants started at the top of the screen with a “start task” button. Then participants moved down the screen to approve the item by clicking directly on a green check box associated with the item and added a note. At the time the item was “approved,” the “next” button at the bottom of the screen was replaced with a “finish task” button.

10.3.4 Results

We were able to demonstrate an improvement in the design of workflow, as indicated by the SEQ and task completion rate (see Figures 10.16 and 10.17).

image

Figure 10.16 Mean SEQ ratings increased from Round 1 to Round 4.

image

Figure 10.17 Task completion rates were slightly higher in Round 4.

The SEQ yielded our most interesting results. The mean SEQ score for the workflow task increased in each round of testing, indicating that participants found the task easier with each design iteration. In the initial round of testing, the mean SEQ for workflow was a 3.0. In Round 2, the mean SEQ increased to 3.9, and in Round 3, to 4.5. By Round 4 of testing, the mean SEQ had increased to 5.9.

The task completion rate for workflow also increased from Round 1 to Round 4, but not in the same way as the SEQ scores. The task completion rate went from 33% in Round 1 to 44% in Round 2, but decreased to 25% in Round 3. Finally, the task completion rate was highest in Round 4, at 67%. The drop in the task completion rate in Round 3 was interesting; although fewer participants completed the task successfully, the SEQ score was higher than in Rounds 1 and 2. Participants rated the design in Round 3 as easier to use than the designs in Rounds 1 and 2, even though fewer of them were able to complete the task with it.

Because we were conducting formative testing, we were not overly concerned with statistically significant differences between rounds of testing. However, we still calculated 95% confidence intervals as a way of assessing the variability in our data. Even with the small and unequal sample sizes between rounds, we were able to resolve some differences in SEQ scores. The confidence intervals for SEQ (shown as error bars in Figure 10.16) suggest that participants thought the workflow task in Round 4 was substantially easier than in Round 1 or 2. In contrast, the confidence intervals for the task completion rate (error bars in Figure 10.17) were much larger, suggesting that we could not draw strong conclusions from the differences we saw in completion rates.
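
As an illustration of the kind of calculation involved, here is a Python sketch with hypothetical numbers. It assumes a t-based interval for the mean SEQ and an adjusted-Wald interval for the completion rate, two common choices for small samples; it is not the exact method behind Figures 10.16 and 10.17:

import math
import statistics
from scipy import stats

def seq_ci(ratings, confidence=0.95):
    """t-based confidence interval for the mean SEQ rating."""
    n = len(ratings)
    mean = statistics.mean(ratings)
    se = statistics.stdev(ratings) / math.sqrt(n)
    margin = stats.t.ppf(1 - (1 - confidence) / 2, n - 1) * se
    return mean - margin, mean + margin

def completion_ci(successes, trials, confidence=0.95):
    """Adjusted-Wald confidence interval for a task completion rate."""
    z = stats.norm.ppf(1 - (1 - confidence) / 2)
    n_adj = trials + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical examples
print(seq_ci([5, 6, 7, 5, 6, 6, 7, 5, 6]))   # e.g., nine SEQ ratings from one round
print(completion_ci(6, 9))                   # e.g., 6 of 9 participants succeeded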

Because the new toolbar design was such a large departure from previous designs, we also looked for any differences in data between current users of the system and users of competitive systems during Rounds 2 and 3. We did not see any differences between the different user groups.

10.3.5 Conclusions

Like Bergstrom and colleagues (2011), we found quantitative metrics to be useful in formative testing with rapid iteration cycles. For us, that included rounds of testing conducted both by us and by partners at the University of Texas at Austin. By using task-level measures such as task completion rate and SEQ, we could retest certain aspects of the design, such as workflow, across multiple design iterations independently from the rest of the product. That allowed us to track the progress we made in our design, while also allowing us to add or remove tasks to test other parts of the product.

Because our test changed from round to round, we found that using the SEQ was very useful for us. The SEQ provided us with a metric we could use at the task level to get participants’ subjective impressions of ease of use. We found that the SEQ could be sensitive enough to resolve differences between different designs, even with only three or four participants per round of testing.

References

1. Bergstrom, J., Olmsted-Hawala, E., Chen, J., & Murphy, E. (2011). Conducting iterative usability testing on a web site: Challenges and benefits. Journal of Usability Studies, 7(1), 9–30.

2. ISO 9241-124. Ergonomics of human-system interaction.

3. Sauro, J., & Dumas, J. S. (2009). Comparison of three one-question, post-task usability questionnaires. In Proceedings of the CHI 2009 Conference on Human Factors in Computing Systems. <http://www.measuringusability.com/papers/Sauro_Dumas_CHI2009.pdf>.

4. Sauro, J., & Lewis, J. R. (2012). Quantifying the user experience: Practical statistics for user research. Morgan Kaufmann.

Biographies

Tanya Payne has been working in the User Experience field for about 17 years. During that time she’s worked as a contractor, a consultant, and an in-house employee. She’s worked on a variety of consumer and enterprise products, including cell phones, printers, RISC600 servers, a children’s touch screen paint product, and content management applications. Tanya received her Ph.D. in Cognitive Psychology from the University of New Mexico. She currently works at OpenText as a Senior User Experience Designer.

Grant Baldwin has been a User Experience professional at OpenText for 2 years, working on a range of enterprise software applications. Grant has an M.A. in Cognitive Psychology from The University of Texas at Austin and a B.S. from Ohio State University. He currently works at OpenText as a User Experience Designer.

Tony Haverda has worked in the User Experience field for over 23 years. For the past 4 years he has been senior manager of User Experience Design at OpenText. Tony holds an M.S. degree in Industrial Engineering–Human Factors and a B.S. degree in Computer Science, both from Texas A&M University. He is currently senior manager of the User Experience Design Group at OpenText.

10.4 Using Metrics to Help Improve a University Prospectus

Viki Stirling and Caroline Jarrett,    Open University

The Open University is the U.K.’s largest university, with over 200,000 students, and the only one dedicated solely to distance learning. Its online prospectus receives approximately six million visitors each year. Ninety percent of the students register online, accounting for approximately £200 million (about US $300 million) of registrations each year.

The team with overall responsibility for development of the Open University’s web presence is led by Ian Roddis, head of Digital Engagement in the communications team. He co-ordinates the efforts of stakeholder groups, including developers, user experience consultants, academics, and many others. The team has been committed to user-centered design for many years now, involving users directly in usability tests, participatory design sessions, and other research, and indirectly through a variety of different data sets, including search logs and web tracking. But the real value comes from triangulation, using several different sets of data together—as illustrated in Figure 10.18, from Jarrett and Roddis (2002).

image

Figure 10.18 WOW: Results and value—sketch from 2002 presentation on the value of UX measurement and triangulation.

10.4.1 Example 1: Deciding on Actions after Usability Testing

One of our earliest examples of triangulation started with a usability test. The prospectus homepage consisted of a long list of subjects (Figure 10.19).

image

Figure 10.19 Original list of subjects on the prospectus homepage, as seen on a typical screen.

Most people who consider university study start by looking for the subject they are interested in. When we asked participants in a usability test to look for the subject they wanted, we observed that some of them struggled:

• When viewed on a typical screen at that time, some of the list was “below the fold” and not visible to the user (Figure 10.20).

image

Figure 10.20 Scrolling down revealed “missing” subjects, such as Information Technology, more sciences, Social Work, and Teacher Training.

• The list was presented in alphabetical order, which meant that some related subjects (e.g., Computing and Information Technology) were separated from each other.

We could have done more testing with more participants to measure exactly how much of a problem this was, but instead we decided to use web analytics to investigate the actual behavior of site visitors.

Web Analytics Tools at the Open University

The Open University uses commercial web analytics tools, reviewing the choice of tools from time to time. Our current tracking tool is Digital Analytix from comScore. We tag each web page that we want to track, and we ask web visitors to give us permission to use cookies to track their visits. The tool then logs each page visit and the path taken by each visitor through the website.

We can also distinguish between visits by logged-in visitors (students and staff) and by other visitors. We distinguish between a single visit—the path someone takes through our site in one continuous experience—and the experience of a visitor—the aggregation of multiple visits from a computer where someone has given us permission to use cookies.

It can be tricky to distinguish different types of visit and visitor, so we find that it’s best to try to focus on the big overall picture and not stress too much about finer details.

For example, we discovered that 37% of visits that involved Information Technology also involved Computing, but that only 27% of visits that involved Computing also involved Information Technology. In addition, we found that Computing was receiving 33% more visitors than Information Technology. This confirmed what we’d seen in usability testing: our participants were more likely to click on Computing (above the fold) than on Information Technology (below the fold).
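
The kind of overlap figure quoted above can be derived from a page-view export with a few lines of pandas. The following sketch assumes a hypothetical table with one row per subject page view, identified by a visit_id column (the column names are ours, not comScore’s):

import pandas as pd

def covisit_share(views, subject_a, subject_b):
    """Percentage of visits viewing subject_a that also viewed subject_b."""
    visits_a = set(views.loc[views["subject"] == subject_a, "visit_id"])
    visits_b = set(views.loc[views["subject"] == subject_b, "visit_id"])
    return 100.0 * len(visits_a & visits_b) / len(visits_a)

# Hypothetical usage:
# views = pd.read_csv("subject_page_views.csv")   # columns: visit_id, subject
# print(covisit_share(views, "Information Technology", "Computing"))
# print(covisit_share(views, "Computing", "Information Technology"))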

We looked at the content of these two subjects and discovered that prospective students should really think about both of them before choosing either. From this type of analysis, across the entire list of subjects, we recommended a new design with a much shorter list of subject areas based on actual user behavior, and the clusters of subjects they tended to view together (see Figure 10.21).

image

Figure 10.21 The prospectus homepage in 2012; a short but effective list of subjects.

The previous organization of subject areas reflected the internal structure of the university at that time; for example, the Mathematics and Computing faculty taught Computing, but the Technology faculty taught Information Technology. The revised organization aligns with visitor expectations and needs and has performed well (with a few tweaks) ever since.

10.4.2 Example 2: Site-Tracking Data

The usability test described in Section 10.4.1 was a major initiative that required a lot of data to persuade many different stakeholders—the type of thing you only want to do occasionally.

This second example is more typical of our everyday work. Some stakeholders came to Viki Stirling, who looks after analytics and optimization, with a problem: they weren’t getting the expected level of conversion from part of their website.

Viki took the site-tracking data and fed the appropriate portions into NodeXL, a visualization tool.

When she looked at the flows by visit, the problem jumped out at her immediately: a lot of visits arrive at a particular page, but few continue after that (highlighted in red in Figure 10.22). The big arrows from node to node should continue, getting only slightly smaller at each step. At the problematic page, however, the larger arrow suddenly flows backward up the chain, with only a small arrow moving on to the next step.

image

Figure 10.22 This view of flows by visit shows one page where plenty of visits arrive, but few move on to the next step.

When she investigated the problematic page, it was obvious how to revise it. But Viki was suspicious: although this task isn’t common, it’s important for the relatively few visitors who attempt it. She investigated further, looking at the flows by visitor (see Figure 10.23). This revealed that the previous step in the process was also causing problems: visitors were moving backward and forward from that step, clearly trying to make progress but failing. Once again, a look at the relevant web page quickly revealed the necessary changes.

image

Figure 10.23 Flows by visitor show that an earlier step in the process is also causing difficulty.

From the UX point of view, we might immediately ask: why didn’t the stakeholders do usability testing, which would probably have revealed these problems ahead of time? The answer is that, of course, the Open University does lots of usability testing, but they face a challenge familiar to any organization with a huge and complex website, which is one of prioritization. In this example, the problematic task is rather unusual and relevant only to a small number of users at a very specific point in their progression from enquirer to student.

10.4.3 Example 3: Triangulation for Iteration of Personas

The two previous examples demonstrate the use of measurement techniques for specific changes. Our third example illustrates the use of metrics for one of the UX tools we use all the time: personas.

We first started using personas after Caroline Jarrett learned about them from Whitney Quesenbery at the Society for Technical Communication Conference in 2002. They were based on our experience of usability test participants over a few years—by that point we had been usability testing since 1998—and Sarah Allen validated them against various internal data sources at the time. With Whitney’s help, we’ve been using, updating, and revalidating the personas ever since. Pruitt and Adlin (2006) include a short overview of our experience with personas.

For example, the Open University introduced Foundation Degrees: shorter degree programs, somewhat similar to the U.S. “Associate’s Degree,” that focus on training for particular jobs. To help with our design activities around Foundation Degrees, we added a persona, “Winston,” who was interested in the Foundation Degree in Materials Fabrication and Engineering. But we discovered that we weren’t meeting Winstons in usability tests. Viki Stirling had the idea of doing some visit tracking to see whether the routes through the site that we envisaged for the personas were actually sufficiently grounded in data. She discovered that most of them were, but Winston really wasn’t justified; the numbers just weren’t there. Winston became Win, interested in the Foundation Degree in Early Years (see Figure 10.24).

image

Figure 10.24 Persona “Win” at the start of her journey to becoming a student.

Lindsay’s reasons for studying are slightly different to Win’s, and she’s focusing slightly more on costs and fees—but overall, she’s close enough that we can be confident that a design intended for persona Win will also work for real aspiring students like Lindsay.

10.4.4 Summary

Most user experience techniques are valuable on their own, and we’re happy to use them individually, as illustrated by our everyday example, number 2 above.

We find that the real value comes from comparing what we learn from larger scale quantitative techniques with what we learn from small-scale, qualitative techniques—and continuing to do that over many years.

Acknowledgments

We thank our colleagues at the Open University: Sarah Allen and Ian Roddis, and at Whitney Interactive Design: Whitney Quesenbery.

References

1. Jarrett C, Roddis I. How to obtain maximum insight by cross-referring site statistics, focus groups and usability techniques. In: Web based surveys and usability testing. San Francisco, CA: Institute for International Research; 2002.

2. Pruitt J, Adlin T. The persona lifecycle: Keeping people in mind throughout product design. San Francisco: Morgan Kaufmann; 2006.

Biographies

Viki Stirling, as eBusiness Manager of Analytics and Optimization in the Digital Engagement team at the Open University, is responsible for leading the understanding of actual customer on-site and off-site behaviors. She manages the integration and implementation of online business analytics, both quantitative (web analytics) and qualitative (sentiment), to provide insight and recommendations that inform institutional strategy, address the university's business objectives, and improve e-business performance. Because she is particularly interested in the relationship between analytics and user experience, she regularly provides analytics insight to support usability testing, persona development, and optimization of the customer journey.

Caroline Jarrett, after 13 years as a project manager, started her business, Effortmark Limited. She became fascinated with the problem of getting accurate answers from users when she was consulting with HM Revenue and Customs (the U.K. tax authority) on how to deal with large volumes of tax forms. She became an expert in forms design and is coauthor of “Forms That Work: Designing Web Forms for Usability.” Along the way, she completed an MBA with the Open University, which led to coauthoring the textbook “User Interface Design and Evaluation” and to consulting on the user experience of their vast and complex website. Caroline is a Chartered Engineer, Fellow of the Society for Technical Communication, and the cofounder of the Design to Read project, which aims to bring together practitioners and researchers working on designing for people who do not read easily.

10.5 Measuring Usability Through Biometrics

Amanda Davis, Elizabeth Rosenzweig, and Fiona Tranquada,    Design and Usability Center, Bentley University

A group of Bentley University researchers from the Design and Usability Center (DUC) wanted to understand how the emotional experience of using a digital textbook compared to a printed textbook. In 2011, our team of graduate students (Amanda Davis, Vignesh Krubai, and Diego Mendes), supervised by DUC principal consultant Elizabeth Rosenzweig, explored this question using a unique combination of affective biometric measurement and qualitative user feedback. This case study describes how these techniques were combined to measure emotional stimulation and cognitive load.

10.5.1 Background

As user experience research achieves greater prominence in business organizations, we are often asked to help gauge the emotional experience of a product, as well as its usability. Usability professionals have a variety of tools and techniques available for understanding human behavior. However, the tools commonly used to measure participants' emotions while they attempt to complete tasks rely on either an observer's interpretation of how the participant is feeling or the participants' own description of their reactions (e.g., through a think-aloud protocol or post-task ratings). These interpretations are subject to phenomena such as the observer effect, the participants' inclination to please, and the time passed since they had the reaction. Other tools, such as the Microsoft Product Reaction Cards, show the direction of a participant's response (positive or negative) to a product, but not the magnitude of that emotion (Benedek & Miner, 2002).

Adding biometric measures to user research provides a way to measure users’ arousal as they use a product. Arousal describes the overall activation (emotional stimulation and cognitive load) experienced by a user, as measured by biometric measures such as electrodermal activity (EDA). These measures capture physiological changes that co-occur with emotional states (Picard, 2010). Because biometric measures are collected in real time during a user’s interaction with a product, they provide a direct measurement of arousal that is not affected by observer or participant interpretation.

This case study describes initial research to gauge the effectiveness of a new technique that assigns meaning to biometric measures. We hypothesized that by combining biometric measures with feedback from the Microsoft Product Reaction Cards, we could gain a detailed description of a user’s arousal, the interaction that activated a change in arousal, and assignment of emotion (positive or negative) to that interaction. For example, if we saw a user’s arousal level increase sharply while attempting a search task, the Microsoft Product Reaction Cards selected would indicate whether the arousal increased due to frustration or pleasure. This combination would let practitioners quickly identify areas of a product that participants found more or less engaging or frustrating, even if the participants do not articulate their reaction.

Our Bentley DUC team partnered with Pearson Education, a textbook publisher that had recently moved into the digital textbook space on the iPad. We focused this new technique on a usability study that would highlight any differences in arousal between digital and paper textbooks.

10.5.2 Methods

Participants

We recruited 10 undergraduates who owned and used iPads. Each 60-minute session was conducted one on one, with a moderator from the Bentley Design and Usability Center in the room with the participant.

Technology

To gather affective measurements and user feedback, we used two innovative tools. Affectiva's Q Sensor was used to identify moments of increased arousal. The words participants selected from the Microsoft Product Reaction Cards enabled us to understand the direction (positive or negative) of their emotions.

The Affectiva Q Sensor is a wearable, wireless biosensor that measures emotional arousal via skin conductance. The unit of measure is electrodermal activity, which increases when the user is in a state of excitement, attention, or anxiety and decreases when the user experiences boredom or relaxation. Because EDA captures both cognitive load and stress (Setz, Arnrich, Schumm, La Marca, Tröster, & Ehlert, 2010), we used this technology to accurately pinpoint moments of user engagement with the digital and printed textbooks. Affectiva's analysis software provides markers used to indicate areas of interest in the data; depending on the study, areas of interest may include task start and end times. These markers can be set during a study or post-test.

To better understand the emotions of the user, we used a toolkit developed by Microsoft: the Microsoft Product Reaction Cards. These cards are given to users to form the basis for discussion about a product (Benedek & Miner, 2002). The main advantage of this technique is that it does not rely on a questionnaire or rating scales, and users do not have to generate words themselves. The 118 product reaction cards target a 60% positive and 40% negative or neutral balance. A study out of Southern Polytechnic in Georgia found that the cards encourage users to give a richer and more revealing description of their experiences (Barnum & Palmer, 2010). This user feedback helped the DUC team assign specific emotions to the Q Sensor's readings. Without these cards, we would have needed to make inferences about the peaks and lulls found in the Q Sensor data.

Procedure

Each session was structured as follows:

1. When participants arrived, we attached a Q Sensor biosensor to each of their hands. Participants were asked to walk down the hallway and back so a small amount of electrolyte solution (sweat) would be generated. This sweat was necessary to establish a connection between the skin surface and the Q Sensor's electrodes.

2. Participants attempted seven tasks ("homework questions") on the digital textbook and the paper textbook. Half of the participants completed the tasks on the digital textbook first, while the other half used the printed textbook first. The first set included four tasks and the second set included three. Participants were asked to think aloud as they completed their tasks.

3. After they completed their tasks on either the digital textbook or the printed textbook, participants used the Product Reaction Cards to indicate their reaction to the experience.

4. After participants had used both textbooks, we asked a few open-ended questions about the comparative experience, what they liked best and least, and which version of the textbook they would prefer if they had to select one.

10.5.3 Biometric Findings

Q Sensor data results

For the Q Sensor analysis, we divided the data by task. Because each participant wore a Q Sensor on each hand, we were able to collect two sets of data for each task. Of the 140 task data points (10 participants, two sensors each, across seven tasks), 102 data points from 9 participants remained after removing poor-quality biometric data and missed tasks. Figure 10.25 provides an example of the Q Sensor analysis software.

image

Figure 10.25 Affectiva’s analysis software showing a single participant’s results from the Q Sensor. The bottom half screen shows the participant’s electrodermal activity during the session, while the top right zooms in to a particular shorter period of time.

We then compared the number of peaks per minute for each participant's tasks on the digital textbook to those on the printed textbook. The digital and paper textbooks had average peaks per minute of 6.2 and 7.6, respectively. However, using a paired samples t test, this difference was not statistically significant at the 95% confidence level (p = 0.23; see Figure 10.26). Figure 10.27 shows the average peaks per minute for each task, broken out by group. Comparing the peaks per minute across the different tasks, the paper textbook had higher peaks per minute than the iPad textbook on six of the seven tasks.

image

Figure 10.26 Average peaks per minute with 95% confidence limits for printed textbook and digital textbook tasks.

image

Figure 10.27 Average peaks per minute per task broken out by group. Group A used the digital textbook first; group B used the printed textbook first.
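
For readers who want to run this kind of comparison themselves, here is a minimal sketch of a paired samples t test in Python. The peaks-per-minute values are invented, not the study's data; only the structure of the test (one paired observation per participant per textbook condition) mirrors the analysis described above.

# A minimal sketch (invented numbers, not the study's data) of the paired
# comparison: each participant contributes one peaks-per-minute value per
# condition, and the two lists are compared with a paired samples t test.
from scipy import stats

paper   = [8.1, 5.0, 9.2, 6.5, 10.8, 8.4, 4.9, 7.3, 8.2]   # peaks/min, paper textbook
digital = [6.0, 7.8, 7.1, 8.9, 6.4, 6.8, 5.7, 6.3, 2.8]    # peaks/min, digital textbook

t_stat, p_value = stats.ttest_rel(paper, digital)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p value above 0.05, like the study's reported p = 0.23, means the observed
# difference in means is not statistically significant at the 95% confidence level.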

10.5.4 Qualitative Findings

Once we observed that average peaks per minute were trending higher for the paper textbook than the digital textbook, we compiled the qualitative feedback that we had collected from the Microsoft Product Reaction Cards, as well as from the poststudy questions. While participants described the digital version as "organized," "easy to use," and "efficient," they described the paper textbook as "slow," "time-consuming," and "old." Figures 10.28 and 10.29 show word clouds for the paper textbook and digital textbook, respectively. The larger the font, the more frequently the card was selected. The shade of the text does not have any meaning.

image

Figure 10.28 Word cloud from Microsoft Reaction Cards for paper textbook.

image

Figure 10.29 Word cloud from Microsoft Reaction Cards for digital textbook.
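
For practitioners who want to produce similar visuals, the short sketch below shows one possible way to turn reaction-card selections into a word cloud, using the open-source Python wordcloud package; this is an assumption about tooling, not how the figures above were necessarily generated. Font size is driven by how many participants selected each card, and the coloring carries no meaning, as in the figures.

# A minimal sketch (hypothetical card counts) of turning reaction-card
# selections into a word cloud where font size reflects selection frequency.
from collections import Counter
from wordcloud import WordCloud  # open-source package: pip install wordcloud

# One list entry per card selected, aggregated across participants (invented).
selected_cards = (["slow"] * 6 + ["time-consuming"] * 5 + ["old"] * 4 +
                  ["familiar"] * 2 + ["usable"] * 1)

frequencies = Counter(selected_cards)
cloud = WordCloud(width=600, height=400, background_color="white",
                  colormap="Greys")  # shading carries no meaning
cloud.generate_from_frequencies(frequencies)
cloud.to_file("paper_textbook_reaction_cards.png")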

The Affectiva Q Sensor data showed us that participants experienced higher arousal while using the printed textbook, but combining those results with the qualitative data from the Microsoft Product Reaction Cards, as well as the moderated discussion, revealed that the higher arousal was due to negative emotions stemming from difficulty performing search and comprehension tasks.

At the end of the session, the moderators asked participants to choose between digital and printed versions of the textbook; surprisingly, participants were split with five preferring the digital and five preferring the printed textbook. Although this was not the focus of our research, this split suggests that the decision between paper and digital textbooks relies on more than the difference in emotional experiences.

10.5.5 Conclusions and Practitioner Take-Aways

This study successfully tested the feasibility of integrating biometric measures with qualitative user feedback. Results showed that the benefits of this technique over standard usability testing include:

• Direct measurements of a participant’s arousal

• Triangulation of sources to explain and validate findings

Specifically, the additional information gained from the Q Sensor in this study redirected and clarified the impressions we had formed from observation and the participants' thinking aloud. Based on how the participants described and interacted with the digital textbook, we expected it to be the more stimulating of the two. However, the Q Sensor data revealed that participants had a higher arousal level while using the paper textbook, and the other qualitative data indicated a negative direction for those emotions, as participants struggled with their tasks. This negative emotion was stronger than the pleasurable emotions felt while using the digital textbook.

This approach would be ideal for projects whose goal is to understand participant emotional responses and the severity of those reactions throughout their interaction with a product. By associating metrics across data sets, researchers can pinpoint a participant’s exact emotional reaction, and what was causing that reaction, at any point during their session. This unique combination of metrics provides a new window into a participant’s emotional reactions above and beyond what is articulated during a standard think-aloud usability study.

However, these techniques require additional time to set up the study appropriately and to analyze the results. For example, researchers will want to plan on using time markers with the Q Sensor. We learned during our data analysis that we could have saved significant effort after the sessions by adding more markers to the Q Sensor data during the sessions. For this study, the team spent approximately 2 work weeks scrubbing, combining, and analyzing the data; a more recent project that used these same techniques took us only 3 work days because we used more Q Sensor markers during the sessions. This method probably won't make sense for a basic formative usability study, but we believe it would offer benefits for projects with a larger scope.

We are continuing to refine and build out these techniques through additional projects and are applying them to new domains.

Acknowledgments

Thanks to the Design and Usability Center at Bentley University for their support, to Affectiva for use of the Q Sensor and analysis support, and to Pearson Education for providing the digital and printed textbooks. Also, thanks to Vignesh Krubai, Diego Mendes, and Lydia Sankey for their contributions to this research.

References

1. Barnum C, Palmer L. More than a feeling: Understanding the desirability factor in user experience. CHI 2010;4703–4715.

2. Benedek J, Miner T. Measuring desirability: New methods for evaluating desirability in a usability lab setting. Proceedings of Usability Professionals Association 2002;8–12.

3. Picard R. Emotion research by the people, for the people. Emotion Review. 2010;2:250–254.

4. Setz C, Arnrich B, Schumm J, La Marca R, Tröster G, Ehlert U. Discriminating stress from cognitive load using a wearable EDA device. IEEE Transactions on Information Technology in Biomedicine. 2010;14(2):410–417.

Biographies

Amanda Davis is a research associate at the Design and Usability Center at Bentley University. Amanda specializes in applying eye-tracking and biometric measurement tools to usability. She holds a B.A. in Economics from Wellesley College, where she focused on behavioral economics. She is currently pursuing an M.S. in Human Factors in Information Design at Bentley University.

Elizabeth Rosenzweig is a principal usability consultant at the Design and Usability Center at Bentley University and founding director of World Usability Day. She holds four patents in intelligent user interface design. Her work includes design, research, and development in areas such as digital imaging, voting technology, mobile devices, and financial and health care systems.

Fiona Tranquada is a senior usability consultant at the Design and Usability Center at Bentley University. She leads user research projects for clients across many industries, including financial services, health care, and e-commerce. Fiona received an M.S. in Human Factors in Information Design from Bentley University.


1Specific products and/or technologies are identified solely to describe the experimental procedures accurately. In no case does such identification imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the products and equipment identified are necessarily the best available for the purpose.

2The fingerprint images were blurred to prevent possible identification of the participants.

3The material in this chapter was taken from Y.Y. Choong, M.F. Theofanos, and H. Guan, Fingerprint Self Capture: Usability of a Fingerprint System with Real-Time Feedback. IEEE Fifth International Conference on Biometrics: Theory, Applications and Systems (BTAS), September 23–26, 2012. Please refer to that paper for the complete report.
