Index
Note: Page numbers followed by “b” refer to boxes.
A
automated accessibility-checking tools, 231b
After-Scenario Questionnaire (ASQ), 132
Alternative design comparisons, 52
American Customer Satisfaction Index (ACSI), 148–149
American Institutes for Research, 147
Analyzing and reporting metrics
  efficiency metrics, 88–90
  frequency of issues per participant, 109
  frequency of unique issues, 108–109
  time-on-task metrics, 78–81
Andre, Anthony
Apple IIe Design Guidelines, 102
Apple Presents Apple, 102
Attribute assessments, in self-reported metrics, 154–156
Automatic external defibrillators (AED)
Awareness
B
Behavioral and physiological metrics
  heart rate variability (HRV), 182–183
  skin conductance and heart rate, 183
Bias
  in collecting self-reported data, 126
  in identifying usability issues, 113–115
  moderator, in eye-tracking study, 115b
analyzing and presenting, 68–69
confidence intervals, 69–70
Biometrics, measuring usability through, case study, 271
C
Calculating probability of detection, 116b–117b
hierarchical cluster analysis, 221–222
Chi-square (χ²) test, 31–32
Cognitive effort, measuring, 87b
levels of successes, 71–73
Combined metrics
Comparisons
Completed transaction metrics, 45–47
Computer System Usability Questionnaire (CSUQ), 140–141
Confidence intervals
  descriptive statistics, 22–24
Consistency
Costs
Critical product studies, 50
D
Descriptive statistics
  confidence intervals, 22–24
  confidence intervals as error bars, 24–25
  measures of central tendency, 19–20
  measures of variability, 21–22
Dwell time, in eye tracking metrics, 173
E
Efficiency metrics
  analyzing and presenting, 88–90
  collecting and measuring, 87–88
  as combination of task success and time, 90–92
Element assessments, in self-reported metrics, 156–158
Emotion, measuring
Errors
  analyzing and presenting, 84–86
  collecting and measuring, 84
Evaluation methods, in studies, 52–57
Excel tips
  central tendency, measuring, 20
  combination chart in, 202b
  comparing more than two samples, 29
  confidence intervals, calculating, 23
  confidence intervals as error bars, 24–25
  descriptive statistics tool, 22
  relationship between two variables, 30–31
  t test on independent samples, 27
  variability, measures of, 21
Exit rate (for a page), 210b
Expert performance, comparison to, 206–207
Eye-tracking study, moderator bias in, 115b
F
Feedback on fingerprint capture, measuring effect of, case study, 244–245
  quality of the fingerprint image, 248
  task completion rate, 248
  task completion time, 250
Fixation, in eye tracking metrics
Focus groups
  vs. usability tests, 53b
Frequency of issues
Frequent use, of product studies, 47–48
G
Goals
  combining metrics based on, 188–189
GOMS (Goals, Operators, Methods, and Selection rules), 88
Graphs
H
Heart rate variability (HRV), 182–183
  and skin conductance research, 183
Hierarchical cluster analysis, of card-sorting data, 221–222
Hit ratio, in eye tracking metrics, 174
I
Impact of subtle changes, evaluating, 51–52
Independent variables, 16
Information architecture studies, 48
Interval data
Issues-based metrics
  bias in identifying issues, 113–115
  real vs. false issues, 101
K
Keys to success
  language of business, 285
Keystroke-level model, 88b
L
Language of business, 285
Learnability
  analyzing and presenting, 94–96
  collecting and measuring, 94
Levels of successes, 70–73
  analyzing and presenting, 73
  collecting and measuring, 71–73
terms used in web analytics, 210b
M
Management appreciation, myths about, 14
independent samples, 26–27
more than two samples, 29–30
Metrics overview
Moderator bias, in eye-tracking study, 115b
Multidimensional scaling, of card-sorting data, 222–224
Myths about metrics, 11–14
N
Net Promoter Score (NPS), 146
Net Promoter Scores and value of a good user experience, case study, 238
  prioritizing investments in interface design, 241–242
New products, myths about, 13
Noisy data, myths about, 12–13
Nominal data
Nonparametric tests
Number of fixations, in eye tracking metrics, 173
Number of scale values, 127b
O
Online services
  interaction with design, 56b
Ordinal data
P
Participants
Percentages
  combining metrics based on, 189–196
Performance
Performance metrics
Positive user experiences, 51
Postsession ratings
  Computer System Usability Questionnaire (CSUQ), 140–141
  product reaction cards, 144
  Questionnaire for User Interface Satisfaction (QUIS), 141–142
  System Usability Scale (SUS), 137–140
  Usefulness, Satisfaction, and Ease of Use (USE), 142–144
Post-task ratings
  After-Scenario Questionnaire (ASQ), 132
Posture Analysis Seat measures, 184
Probability of detection, calculating, 116b–117b
Product reaction cards, 144
Q
Questionnaire for User Interface Satisfaction (QUIS), 141–142
R
semantic differential, 124
Ratios
  of positive to negative comments, 164
Retrospective think aloud (RTA), 82b
Return-on-investment (ROI) data, 232–236
Revisits, in eye tracking metrics, 174
S
Samples
Satisfaction
Self-reported metrics
  awareness and comprehension, 159–160
  awareness and usefulness gaps, 160–161
Semantic differential scales, 124
Sequence, in eye tracking metrics, 173
Single Usability Metric (SUM), 198–200
Single usability scores (SUS), 187–200
Skin conductance research, heart rate and, 183
Small improvements, myths about, 12
Software Usability Measurement Inventory (SUMI), 148
Studies, types of, overview
  alternative design comparisons, 52
  budgets and timelines, 57–58
  completing transaction, 45–47
  evaluation methods, 52–57
  frequent use of products, 47–48
  impact of subtle changes, 51–52
  navigation and information architecture, 48
  positive user experience, 51
Successes
Summative usability testing, 43, 43b
System Usability Scale (SUS)
  for comparing different designs, 147
T
Target goals, combining metrics based on, 188–189
Task failure, types of, 68b
Tasks
Time and time data
Timelines, for studies, 57–58
Time-on-task metrics
  analyzing and presenting, 78–81
  automated tools for measuring, 75b–76b
  collecting and measuring, 75–78
  importance of measuring, 75
U
University prospectus, case study, 263
  deciding on actions after usability testing, 264, 266–267
  triangulation for iteration of personas, 269–270
Usability tests, focus groups vs., 53b
Usefulness, Satisfaction, and Ease of Use (USE), 142–144
User experience
V
Value of usability metrics, 8–9
Variables
  independent and dependent, 16
  relationships between, 30–31
Verbal expressions
  observing and coding unprompted, 164–165
  ratio of positive to negative comments, 164
W
Webcam-based eye tracking, 167
Web experience management system, case study, 254–255
Web session duration, vs. time-on-task metrics, 74b–75b
Website Analysis and Measurement Inventory (WAMMI), 148
X
Z
Z scores
  combining metrics based on, 196–198