References

1. Albert, W., & Dixon, E. (2003). Is this what you expected? The use of expectation measures in usability testing. Proceedings of Usability Professionals Association 2003 Conference, Scottsdale, AZ.

2. Albert, W., Gribbons, W., & Almadas, J. (2009). Pre-conscious assessment of trust: a case study of financial and health care web sites. Human factors and ergonomics society annual meeting proceedings, 53, 449–453. Also <http://www.measuringux.com/Albert_Gribbons_Preconsciousness.pdf>.

3. Albert W, Tedesco D. Reliability of self-reported awareness measures based on eye tracking. Journal of Usability Studies. 2010;5(2):50–64.

4. Aldenderfer M, Blashfield R. Cluster analysis (Quantitative Applications in the Social Sciences). Beverly Hills, CA: Sage Publications; 1984.

5. American Institutes for Research. (2001). Windows XP Home Edition vs. Windows Millennium Edition (ME) public report. New England Research Center, Concord, MA. Available at <http://download.microsoft.com/download/d/8/1/d810ce49-d481-4a55-ae63-3fe2800cbabd/ME_Public.doc>.

6. Andre, A. (2003). When every minute counts, all automatic external defibrillators are not created equal. Interface Analysis Associates, June 2003. <http://www.usernomics.com/iaa_aed_2003.pdf>.

7. Bangor A, Kortum P, Miller JA. Determining what individual SUS scores mean: adding an adjective rating scale. Journal of Usability Studies. 2009;4(3):114–123.

8. Bargas-Avila, J. A., & Hornbæk, K. (2011). Old wine in new bottles or novel challenges? A critical analysis of empirical studies of user experience. CHI ’11 Proceedings of the 2011 annual conference on human factors in computing systems, 2689–2698.

9. Barnum, C., Bevan, N., Cockton, G., Nielsen, J., Spool, J., & Wixon, D. (2003). The “magic number 5”: is it enough for web testing? CHI 2003, April 5–10, Ft. Lauderdale, FL.

10. Benedek, J., & Miner, T. (2002). Measuring desirability: new methods for evaluating desirability in a usability lab setting. Usability professionals association 2002 conference, Orlando, FL, July 8–12. Also available at <http://www.microsoft.com/usability/UEPostings/DesirabilityToolkit.doc>. Also see the appendix listing the Product Reaction Cards at <http://www.microsoft.com/usability/UEPostings/ProductReactionCards.doc>.

11. Bias R, Mayhew D. Cost-justifying usability, second edition: an update for the Internet age. San Francisco: Morgan Kaufmann; 2005.

12. Birns, J., Joffre, K., Leclerc, J., & Paulsen, C. A. (2002). Getting the whole picture: Collecting usability data using two methods – concurrent think aloud and retrospective probing. Proceedings of the 2002 Usability Professionals’ Association Conference, Orlando, FL. Available from <http://concordevaluation.com/papers/paulsen_thinkaloud_2002.pdf>.

13. Breyfogle F. Implementing six sigma: smarter solutions using statistical methods. New York: John Wiley and Sons; 1999.

14. Brooke J. SUS: a quick and dirty usability scale. In: Jordan PW, Thomas B, Weerdmeester BA, McClelland IL, eds. Usability evaluation in industry. London: Taylor & Francis; 1996.

15. Burby J, Atchison S. Actionable web analytics: using data to make smart business decisions. Indianapolis, IN: Sybex; 2007.

16. Card SK, Moran TP, Newell A. The psychology of human-computer interaction. London: Lawrence Erlbaum Associates; 1983.

17. Catani, M., & Biers, D. (1998). Usability evaluation and prototype fidelity. In Proceedings of the Human Factors and Ergonomics Society.

18. Chadwick-Dias, A., McNulty, M., & Tullis, T. (2003). Web usability and age: how design changes can improve performance. Proceedings of the 2003 ACM conference on universal usability, Vancouver, BC, Canada.

19. Chin, J. P., Diehl, V. A., & Norman, K. L. (1988). Development of an instrument measuring user satisfaction of the human-computer interface. ACM CHI’88 proceedings, 213–218.

20. Clifton B. Advanced web metrics with Google Analytics. Indianapolis, IN: Sybex; 2012.

21. Cockton, G., & Woolrych, A. (2001). Understanding inspection methods: lessons from an assessment of heuristic evaluation. Joint Proceedings of HCI and IHM: people and computers, XV.

22. Cox EP. The optimal number of response alternatives for a scale: a review. Journal of Marketing Research. 1980;17(4):407–422.

23. Cunningham K. The accessibility handbook. Sebastopol, CA: O’Reilly Media; 2012.

24. Dennerlein, J., Becker, T., Johnson, P., Reynolds, C. J., & Picard, R. W. (2003). Frustrating computer users increases exposure to physical factors. In Proceedings of the international ergonomics association, August 24–29, Seoul.

25. Dillman, D. A., Phelps, G., Tortora, R., Swift, K., Kohrell, J., Berck, J., et al. (2008). Response rate and measurement differences in mixed mode surveys using mail, telephone, interactive voice response, and the internet. Available at <http://www.sesrc.wsu.edu/dillman/papers/2008/ResponseRateandMeasurement.pdf>.

26. Ekman P, Friesen W. Unmasking the face. Englewood Cliffs, NJ: Prentice-Hall; 1975.

27. Everett, S. P., Byrne, M. D., & Greene, K. K. (2006). Measuring the usability of paper ballots: efficiency, effectiveness, and satisfaction. Proceedings of the human factors and ergonomics society 50th annual meeting. Santa Monica, CA: Human Factors and Ergonomics Society.

28. Few S. Information dashboard design: the effective visual communication of data. Sebastopol, CA: O’Reilly Media, Inc.; 2006.

29. Few S. Now you see it: simple visualization techniques for quantitative analysis. Oakland, CA: Analytics Press; 2009.

30. Few S. Show me the numbers: designing tables and graphs to enlighten. 2nd ed. Oakland, CA: Analytics Press; 2012.

31. Finstad K. Response interpolation and scale sensitivity: evidence against 5-point scales. Journal of Usability Studies. 2010;5(3):104–110.

32. Fogg, B. J., Marshall, J., Laraki, O., Osipovich, A., Varma, C., Fang, N., et al. (2001). What makes web sites credible? a report on a large quantitative study. Proceedings of CHI’01, human factors in computing systems, 61–68.

33. Foraker. (2010). Usability ROI case study: breastcancer.org discussion forums. Retrieved 4/18/2013 from <http://www.usabilityfirst.com/documents/U1st_BCO_CaseStudy.pdf>.

34. Foresee. (2012). ACSI e-government satisfaction index (Q4 2012). <http://www.foreseeresults.com/research-white-papers/_downloads/acsi-egov-q4-2012-foresee.pdf>.

35. Friedman HH, Friedman LW. On the danger of using too few points in a rating scale: a test of validity. Journal of Data Collection. 1986;26(2):60–63.

36. Garland R. The mid-point on a rating scale: is it desirable? Marketing Bulletin. 1991;2:66–70, Research Note 3.

37. Guan, Z., Lee, S., Cuddihy, E., & Ramey, J. (2006). The validity of the stimulated retrospective think-aloud method as measured by eye tracking. In Proceedings of the ACM SIGCHI conference on human factors in computing systems, 2006 (pp. 1253–1262). New York, NY: ACM Press. Available from <http://dub.washington.edu:2007/pubs/chi2006/paper285-guan.pdf>.

38. Gwizdka J, Spence I. Implicit measures of lostness and success in web navigation. Interacting with Computers. 2007;19(3):357–369.

39. Hart, T. (2004). Designing “senior friendly” websites: do guidelines help? Usability News, 6.1. <http://psychology.wichita.edu/surl/usabilitynews/61/older_adults-withexp.htm>.

40. Henry SL. Just ask: integrating accessibility throughout design. Raleigh, NC: Lulu.com; 2007.

41. Hertzum, M., Jacobsen, N., & Molich, R. (2002). Usability inspections by groups of specialists: perceived agreement in spite of disparate observations. CHI, Minneapolis.

42. Hewett TT. The role of iterative evaluation in designing systems for usability. In: Harrison MD, Monk AF, eds. People and computers: designing for usability. Cambridge: Cambridge University Press; 1986;196–214.

43. Holland, A. (2012a). Ecommerce button copy test: did ‘Personalize Now’ or ‘Customize It’ get 48% more revenue per visitor? Retrieved on 4/18/2013 from <http://whichtestwon.com/archives/14511>.

44. Holland, A. (2012b). Online newspaper layout test: should photos alternate sides or always appear to the right of stories? Retrieved on 4/18/2013 from <https://whichtestwon.com/archives/18744>.

45. Hornbæk K, Frøkjær E. A study of the evaluator effect in usability testing. Human-Computer Interaction. 2008;23(3):251–277.

46. Human Factors International. (2002). HFI helps staples.com boost repeat customers by 67%. Retrieved 4/18/2013 from <http://www.humanfactors.com/downloads/documents/staples.pdf>.

47. Hyman IE, Boss SM, Wise BM, McKenzie KE, Caggiano JM. Did you see the unicycling clown? Inattentional blindness while walking and talking on a cell phone. Applied Cognitive Psychology. 2010;24:597–607.

48. ISO/IEC 25062 (2006). Software engineering – Software product Quality Requirements and Evaluation (SQuaRE) – Common Industry Format (CIF) for usability test reports.

49. Jacobsen, N., Hertzum, M., & John, B. (1998). The evaluator effect in usability studies: problem detection and severity judgments. In Proceedings of the human factors and ergonomics society.

50. Kapoor, A., Mota, S., & Picard, R. (2001). Towards a learning companion that recognizes affect. AAAI Fall Symposium, November, North Falmouth, MA.

51. Kaushik A. Web analytics 2.0: the art of online accountability and science of customer centricity. Indianapolis, IN: Sybex; 2009.

52. Kirkpatrick A, Rutter R, Heilmann C, Thatcher J, Waddell C. Web accessibility: web standards and regulatory compliance. New York, NY: Apress Media; 2006.

53. Kohavi, R., Crook, T., & Longbotham, R. (2009). Online experimentation at Microsoft, Third workshop on Data Mining Case Studies and Practice. Retrieved on 4/18/2013 from <http://robotics.stanford.edu/~ronnyk/ExP_DMCaseStudies.pdf>.

54. Kohavi, R., Deng, A., Frasca, B., Longbotham, R., Walker, T., & Xu, Y. (2012). Trustworthy online controlled experiments: five puzzling outcomes explained. In Proceedings of the 18th ACM SIGKDD international conference on knowledge discovery and data mining (KDD ’12). ACM, New York, NY, USA, 786–794.

55. Kohavi, R., & Round, M. (2004). Front line internet analytics at Amazon.com. Presentation at Emetrics Summit 2004. Retrieved on 4/18/2013 from <http://ai.stanford.edu/~ronnyk/emetricsAmazon.pdf>.

56. Kohn LT, Corrigan JM, Donaldson MS, eds. To err is human: building a safer health system. Committee on Quality of Health Care in America, Institute of Medicine. Washington, DC: National Academies Press; 2000.

57. Kruskal J, Wish M. Multidimensional scaling (Quantitative Applications in the Social Sciences). Beverly Hills, CA: Sage Publications; 2006.

58. Kuniavsky M. Observing the user experience: a practitioner’s guide to user research. San Francisco: Morgan Kaufmann; 2003.

59. LeDoux, L., Mangan, E., & Tullis, T. (2005). Extreme makeover: UI edition. Presentation at Usability Professionals Association (UPA) 2005 Annual Conference, Montreal, QUE, Canada. Available from <http://www.upassoc.org/usability_resources/conference/2005/ledoux-UPA2005-Extreme.pdf>.

60. Lewis J. Sample sizes for usability studies: additional considerations. Human Factors. 1994;36:368–378.

61. Lewis JR. Psychometric evaluation of an after-scenario questionnaire for computer usability studies: the ASQ. SIGCHI Bulletin. 1991;23(1):78–81. Also see <http://www.acm.org/~perlman/question.cgi?form=ASQ>.

62. Lewis JR. IBM computer usability satisfaction questionnaires: psychometric evaluation and instructions for use. International Journal of Human-Computer Interaction. 1995;7(1):57–78. Also see <http://www.acm.org/~perlman/question.cgi?form=CSUQ>.

63. Lewis, J. R., & Sauro, J. (2009). The factor structure of the system usability scale. Proceedings of the human computer interaction international conference (HCII 2009), San Diego, CA, USA.

64. Likert R. A technique for the measurement of attitudes. Archives of Psychology. 1932;140:55.

65. Lin, T., Hu, W., Omata, M., & Imamiya, A. (2005). Do physiological data relate to traditional usability indexes? In Proceedings of OZCHI2005, November 23–25, Canberra, Australia.

66. Lindgaard, G., & Chattratichart, J. (2007). Usability testing: what have we overlooked? In Proceedings of ACM CHI conference on human factors in computing systems.

67. Lindgaard G, Fernandes G, Dudek C, Brown J. Attention web designers: you have 50 milliseconds to make a good first impression! Behaviour & Information Technology. 2006;25:115–126.

68. Lund, A. (2001). Measuring usability with the USE questionnaire. Usability and user experience newsletter of the STC Usability SIG. See <http://www.stcsig.org/usability/newsletter/0110_measuring_with_use.html>.

69. Martin P, Bateson P. Measuring behaviour. 2nd ed. Cambridge, UK, and New York: Cambridge University Press; 1993.

70. Maurer, D., & Warfel, T. (2004). Card sorting: a definitive guide. Boxes and Arrows, April 2004. Retrieved on 4/18/2013 from <http://boxesandarrows.com/card-sorting-a-definitive-guide/>.

71. Mayhew D, Bias R. Cost-justifying usability. San Francisco: Morgan Kaufmann; 1994.

72. McGee, M. (2003). Usability magnitude estimation. Proceedings of human factors and ergonomics society annual meeting, Denver, CO.

73. McLellan S, Muddimer A, Peres SC. The effect of experience on system usability scale ratings. Journal of Usability Studies. 2012;7(2):56–67. <http://www.upassoc.org/upa_publications/jus/2012february/JUS_McLellan_February_2012.pdf>.

74. Miner G, Elder J, Hill T, Nisbet R, Delen D, Fast A. Practical text mining and statistical analysis for non-structured text data applications. Elsevier Academic Press; 2012. ISBN 978-0-12-386979-1.

75. Molich, R. (2011). CUE-9: The evaluator effect. <http://www.dialogdesign.dk/CUE-9.html>.

76. Molich, R., Bevan, N., Butler, S., Curson, I., Kindlund, E., Kirakowski, J., et al. (1998). Comparative evaluation of usability tests. Usability Professionals Association 1998 conference, June 22–26, Washington, DC: Usability Professionals Association, pp. 189–200.

77. Molich R, Dumas J. Comparative usability evaluation (CUE-4). Behaviour & Information Technology. 2008;27:263–281.

78. Molich R, Ede MR, Kaasgaard K, Karyukin B. Comparative usability evaluation. Behaviour & Information Technology. 2004;23(1):65–74.

79. Molich R, Jeffries R, Dumas J. Making usability recommendations useful and usable. Journal of Usability Studies. 2007;2(4):162–179. Available at <http://www.upassoc.org/upa_publications/jus/2007august/useful-usable.pdf>.

80. Mueller J. Accessibility for everybody: understanding the Section 508 accessibility requirements. New York, NY: Apress Media; 2003.

81. Nancarrow C, Brace I. Saying the “right thing”: coping with social desirability bias in marketing research. Bristol Business School Teaching and Research Review 2000;(Summer):3.

82. Nielsen J. Usability engineering. San Francisco: Morgan Kaufmann; 1993.

83. Nielsen, J. (2000). Why you only need to test with 5 users. AlertBox, March 19. Available at <http://www.useit.com/alertbox/20000319.html>.

84. Nielsen, J. (2001). Beyond accessibility: treating users with disabilities as people. AlertBox, November 11, 2001. Retrieved on 4/18/2013, from <http://www.nngroup.com/articles/beyond-accessibility-treating-users-with-disabilities-as-people/>.

85. Nielsen, J. (2005). Medical usability: how to kill patients through bad design, Alertbox, April 11, 2005 <http://www.nngroup.com/articles/medical-usability/>.

86. Nielsen J, Berger J, Gilutz S, Whitenton K. Return on Investment (ROI) for usability. 4th ed. Fremont, CA: Nielsen Norman Group; 2008.

87. Nielsen, J., & Landauer, T. (1993). A mathematical model of the finding of usability problems. ACM proceedings, Interchi 93, Amsterdam.

88. Nørgaard, M., & Hornbæk, K. (2006). What do usability evaluators do in practice? An explorative study of think-aloud testing. In Proceedings of designing interactive systems, pp. 209–218. University Park, PA.

89. Osgood CE, Suci G, Tannenbaum P. The measurement of meaning. Urbana, IL: University of Illinois Press; 1957.

90. Otter M, Johnson H. Lost in hyperspace: metrics and mental models. Interacting with Computers. 2000;13:1–40.

91. Petrie, H., & Precious, J. (2010). Measuring user experience of websites: think aloud protocols and an emotion word prompt list. In Proceedings of ACM CHI 2010 conference on human factors in computing systems, pp. 3673–3678.

92. Reichheld, F. F. (2003). One number you need to grow. Harvard Business Review, December 2003.

93. Reynolds, C. (2005). Adversarial Uses of Affective Computing and Ethical Implications. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge. Available at <http://affect.media.mit.edu/pdfs/05.reynolds-phd.pdf>.

94. Sangster, R. L., Willits, F. K., Saltiel, J., Lorenz, F. O., & Rockwood, T. H. (2001). The effects of Numerical Labels on Response Scales. Retrieved on 3/30/2013 from <http://www.bls.gov/osmr/pdf/st010120.pdf>.

95. Sauro, J. (2009). Composite operators for keystroke level modeling. Proceedings of the human computer interaction international conference (HCII 2009), San Diego, CA, USA.

96. Sauro, J. (2010). Does better usability increase customer loyalty? The net promoter score and the system usability scale (SUS). Retrieved on 4/1/2013 from <http://www.measuringusability.com/usability-loyalty.php>.

97. Sauro, J. & Dumas J. (2009). Comparison of three one-question, post-task usability questionnaires, Proceedings of the conference on human factors in computing systems (CHI 2009), Boston, MA.

98. Sauro, J., & Kindlund, E. (2005). A method to standardize usability metrics into a single score. Proceedings of the conference on human factors in computing systems (CHI 2005), Portland, OR.

99. Sauro, J., & Lewis, J. (2005). Estimating completion rates from small samples using binomial confidence intervals: comparisons and recommendations. Proceedings of the human factors and ergonomics society annual meeting, Orlando, FL.

100. Sauro, J., & Lewis, J. R. (2011). When designing usability questionnaires, does it hurt to be positive?, Proceedings of the conference on human factors in computing systems (CHI 2011), Vancouver, BC, Canada.

101. Schwarz N, Knäuper B, Hippler HJ, Noelle-Neumann E, Clark F. Rating scales: numeric values may change the meaning of scale labels. Public Opinion Quarterly. 1991;55:570–582.

102. Section 508. (1998). Workforce Investment Act of 1998, Pub. L. No. 105–220, 112 Stat. 936 (August 7). Codified at 29 U.S.C. § 794d.

103. Shaikh, A., Baker, J., & Russell, M. (2004). What’s the skinny on weight loss websites? Usability News, 6.1, 2004. Available at <http://psychology.wichita.edu/surl/usabilitynews/61/diet_domain.htm>.

104. Smith PA. Towards a practical measure of hypertext usability. Interacting with Computers. 1996;8(4):365–381.

105. Snyder, C. (2006). Bias in usability testing. Boston Mini-UPA Conference, March 3, Natick, MA.

106. Sostre P, LeClaire J. Web analytics for dummies. Hoboken, NJ: Wiley; 2007.

107. Spencer D. Card sorting: designing usable categories. Brooklyn, NY: Rosenfeld Media; 2009.

108. Spool, J., & Schroeder, W. (2001). Testing web sites: five users is nowhere near enough. CHI 2001, Seattle.

109. Stover, A., Coyne, K., & Nielsen, J. (2002). Designing usable site maps for Websites. Available from <http://www.nngroup.com/reports/sitemaps/>.

110. Tang, D., Agarwal, A., O’Brien, D., & Meyer, M. (2010). Overlapping experiment infrastructure: more, better, faster experimentation. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD ’10). ACM, New York, NY, USA, 17–26.

111. Teague, R., De Jesus, K., & Nunes-Ueno, M. (2001). Concurrent vs. post-task usability test ratings. CHI 2001 extended abstracts on human factors in computing systems, pp. 289–290.


113. Tedesco, D., & Tullis, T. (2006). A comparison of methods for eliciting post-task subjective ratings in usability testing. Usability Professionals Association (UPA) 2006 annual conference, Broomfield, CO, June 12–16.

114. Trimmel M, Meixner-Pendleton M, Haring S. Stress response caused by system response time when searching for information on the internet: psychophysiology in ergonomics. Human Factors. 2003;45(4):615–621.

115. Tufte ER. Envisioning information. Cheshire, CT: Graphics Press; 1990.

116. Tufte ER. Visual explanations: images and quantities, evidence and narrative. Cheshire, CT: Graphics Press; 1997.

117. Tufte ER. The visual display of quantitative information. 2nd ed. Cheshire, CT: Graphics Press; 2001.

118. Tufte ER. Beautiful evidence. Cheshire, CT: Graphics Press; 2006.

119. Tullis, T. S. (1985). Designing a menu-based interface to an operating system. Proceedings of the CHI ’85 conference on human factors in computing systems, San Francisco.

120. Tullis, T. S. (1998). A method for evaluating Web page design concepts. Proceedings of CHI ’98 conference on computer-human interaction, Los Angeles, CA.

121. Tullis, T. S. (2007). Using closed card-sorting to evaluate information architectures. Usability Professionals Association (UPA) 2007 Conference, Austin, TX. Retrieved on 4/18/2013 from <http://www.eastonmass.net/tullis/presentations/ClosedCardSorting.pdf>.

122. Tullis, T. S. (2008a). SUS scores from 129 conditions in 50 studies. Retrieved on 3/30/2013 from <http://www.measuringux.com/SUS-scores.xls>.

123. Tullis, T. S. (2008b). Results of online usability study of Apollo program websites. <http://www.measuringux.com/apollo/>.

124. Tullis, T. S. (2011). Worst usability issue. Posted July 4, 2011. <http://www.measuringux.com/WorstUsabilityIssue/>.

125. Tullis, T. S., Mangan, E. C., & Rosenbaum, R. (2007). An empirical comparison of on-screen keyboards. Human factors and ergonomics society 51st annual meeting, October 1–5, Baltimore. Available from <http://www.measuringux.com/OnScreenKeyboards/index.htm>.

126. Tullis, T. S., & Stetson, J. (2004). A comparison of questionnaires for assessing Website usability. Usability Professionals Association (UPA) 2004 conference, June 7–11, Minneapolis, MN. Paper available from <http://home.comcast.net/~tomtullis/publications/UPA2004TullisStetson.pdf>. Slides: <http://www.upassoc.org/usability_resources/conference/2004/UPA-2004-TullisStetson.pdf>.

127. Tullis, T. S., & Tullis, C. (2007). Statistical analyses of e-commerce websites: can a site be usable and beautiful? Proceedings of HCI international 2007 conference, Beijing, China.

128. Tullis, T. S., & Wood, L. (2004). How many users are enough for a card-sorting study? Proceedings of Usability Professionals Association Conference, June 7–11, Minneapolis, MN. Available from <http://home.comcast.net/~tomtullis/publications/UPA2004CardSorting.pdf>.

129. Van den Haak MJ, de Jong MDT, Schellens PJ. Employing think-aloud protocols and constructive interaction to test the usability of online library catalogues: a methodological comparison. Interacting with Computers. 2004;16:1153–1170.

130. Vermeeren, A., van Kesteren, I., & Bekker, M. (2003). Measuring the evaluator effect in user testing. In M. Rauterberg et al. (Eds.), Human-computer interaction – INTERACT ’03, pp. 647–654. Published by IOS Press, © IFIP.

131. Virzi R. Refining the test phase of the usability evaluation: how many subjects is enough? Human Factors. 1992;34(4):457–468.

132. Vividence Corp. (2001). Moving on up: move.com improves customer experience. Retrieved October 15, 2001, from <http://www.vividence.com/public/solutions/our+clients/success+stories/movecom.htm>.

133. Ward R, Marsden P. Physiological responses to different WEB page designs. International Journal of Human-Computer Studies. 2003;59:199–212.

134. Wilson, C., & Coyne, K. P. (2001). Tracking usability issues: to bug or not to bug? Interactions, May–June.

135. Withrow, J., Brinck, T., & Speredelozzi, A. (2000). Comparative usability evaluation for an e-government portal. Diamond Bullet Design Report #U1-00-2, Ann Arbor, MI, December. Available at <http://www.simplytom.com/research/U1-00-2-egovportal.pdf>.

136. Wixon, D., & Jones, S. (1992). Usability for fun and profit: a case study of the design of DEC RALLY, Version 2. Digital Equipment Corporation.

137. Wong D. The Wall Street Journal guide to information graphics: the do’s and don’ts of presenting data, facts, and figures. New York, NY: W. W. Norton & Company; 2010.

138. Woolrych, A., & Cockton, G. (2001). Why and when five test users aren’t enough. In Proceedings of IHM-HCI 2001, 2, pp. 105–108. Toulouse, France: Cépaduès-Éditions.
