Index

Note: Page numbers followed by “b” refer to boxes.

A

A/B tests, 216–218
Accessibility data, 228–232
automated accessibility-checking tools, 231b
Adjusted Wald Method, 70
Affective Computing Research Group, 176–177, 183–184
After-Scenario Questionnaire (ASQ), 132
Alternative design comparisons, 52
American Customer Satisfaction Index (ACSI), 148–149
American Institutes for Research, 147
Analyzing and reporting metrics
binary successes, 68–69
efficiency metrics, 88–90
errors, 84–86
frequency of issues per participant, 109
frequency of participants, 109–110
frequency of unique issues, 108–109
issues by category, 110
issues by task, 111
learnability, 94–96
levels of successes, 73
time-on-task metrics, 78–81
for usability issues, 107–111
Andre, Anthony, 5
Apple, 102
Apple IIe Design Guidelines, 102
Apple Presents Apple, 102
Areas of interest (AOI), 170–172
dwell time, 173
hit ratio, 174
number of fixations, 173
revisits, 174
sequence, 173
Attribute assessments, in self-reported metrics, 154–156
Automated studies, 103
Automated external defibrillators (AED), 5
Awareness
and comprehension, 159–160
increasing, 48–49
and usefulness gaps, 160–161

B

Backtracking metric, 90b
Bar graphs, 33–35
line graphs vs., 36b
Behavioral and physiological metrics
emotion, measuring, 176–182
eye tracking, 165–176
heart rate variability (HRV), 182–183
pupillary response, 175–176
skin conductance and heart rate, 183
verbal expressions, 164–165
Benchmarking, 283–284
Bender, Daniel, 176
Bias
in collecting self-reported data, 126
in identifying usability issues, 113–115
moderator, in eye-tracking study, 115b
Binary successes, 66–70
analyzing and presenting, 68–69
confidence intervals, 69–70
Biometrics, measuring usability through, case study, 271
background, 271–272
participants, 272
procedure, 272–273
Q Sensor data results, 273–274
qualitative findings, 274–275
technology, 272
Blink test, 56
Blue Bubble Lab, 179–180
Bounce rate, 210b
Budgets, 57–58
Byrne, Michael, 147

C

Calculating probability of detection, 116b–117b
Card-sorting data, 218–228
analyzing, 219–224
closed, 224–227
Excel spreadsheet, 220b
hierarchical cluster analysis, 221–222
multidimensional scaling, 222–224
number of participants, 223b–224b
tools, 219b
tree testing, 227–228
Categorical data, See Nominal data
Category, issues by, 110
Central tendency, 19–20
Chi-square (χ2) test, 31–32
Click-through rates, 213–214
Closed card-sorting data, 224–227
Cognitive effort, measuring, 87b
Collecting data, 60
efficiency, 87–88
errors, 84
learnability, 94
levels of successes, 71–73
self-reported, 125
studies, 60
time-on-task, 75–78
Column graphs, 33–35
Combination chart, 202b
Combined metrics
based on percentages, 189–196
based on target goals, 188–189
based on z scores, 196–198
overview, 187
scorecards, 200–203
single usability scores, 187–200
Comparisons
alternative designs, 52
to expert performance, 206–207
to goals, 204–205
product, 47
Completed transaction metrics, 45–47
Computer System Usability Questionnaire (CSUQ), 140–141
Confidence, 285–286
Confidence intervals
binary successes, 69–70
descriptive statistics, 22–24
as error bars, 24–25
Consistency
in identifying issues, 102–103, 111–113
Conversion rate, 210b
Costs
myths about, 12
Critical product studies, 50

D

Data cleanup, 60–61
Data collection, See Collecting data
Data exploration, 284–285
Data types, 16–19
interval, 18–19
nominal, 16–17
ordinal, 17–18
ratio, 19
Dependent variables, 16
Descriptive statistics
confidence intervals, 22–24
confidence intervals as error bars, 24–25
measures of central tendency, 19–20
measures of variability, 21–22
overview, 19–25
Drop-off rates, 215–216
Dwell time, in eye tracking metrics, 173

E

Early planning, 282–283
Efficiency metrics
analyzing and presenting, 88–90
collecting and measuring, 87–88
as combination of task success and time, 90–92
overview, 86–92
Element assessments, in self-reported metrics, 156–158
El Kaliouby, Rana, 176–177
Emotion, measuring
Emovision, 179–180
overview, 176–182
Q Sensor, 177–178
Seren, 180–182
Emotiv, 180–182
Emovision, 179–180
Entrance page, 210b
Errors
analyzing and presenting, 84–86
collecting and measuring, 84
issues, 86
measuring, 82–83
overview, 82–86
Evaluation methods, in studies, 52–57
Evaluator Effect, 118b
Everett, Sarah, 147
Exact Method, 70
Excel tips
central tendency, measuring, 20
chi-square test, 31–32
combination chart in, 202b
comparing more than two samples, 29
confidence intervals, calculating, 23
confidence intervals as error bars, 24–25
descriptive statistics tool, 22
median calculation, 79
relationship between two variables, 30–31
transforming time data in, 190–191
t test on independent samples, 27
variability, measures of, 21
working with time data, 77b–78b
z scores, 197b
Exit page, 210b
Exit rate (for a page), 210b
Expert performance, comparison to, 206–207
Exploring data, 284–285
Eye tracking, 5, 165–176
analysis tips, 174–175
areas of interest (AOI), 170–172
common metrics, 172–174
dwell time, 173
first fixation, 173–174
fixation duration, 173
hit ratio, 174
number of fixations, 173
pupillary response, 175–176
revisits, 174
sequence, 173
visualizations, 167–170
webcam-based, 163
Eye-tracking study, moderator bias in, 115b
EyeTrackShop, 167, 282

F

Feedback on fingerprint capture, measuring effect of, case study, 244–245
attempt time, 249
discussion, 252–253
effectiveness, 248
efficiency, 249
errors, 248
experimental design, 245
materials, 246
participants, 245
procedure, 246
quality of the fingerprint image, 248
results, 246
satisfaction, 251
task completion rate, 248
task completion time, 250
Finstad, Kraig, 127b
Fixation, in eye tracking metrics
duration, 173
first, 173–174
number of, 173
Focus groups vs. usability tests, 53b
Focus map, 170
Formative usability testing, 42–43, 43b
Frequency of issues
per participants, 109
unique, 108–109
Frequency of participants, 109–110
Frequent use, of product studies, 47–48

G

Geometric mean, 79b
Goals
combining metrics based on, 188–189
comparison to, 204–205
study, 42–44
user, 44–45
GOMS (Goals, Operators, Methods, and Selection rules), 88
Graphs
column and bar, 33–35
line, 35–36
overview, 32–40
pie charts, 38
scatterplots, 36–38
tips, 33b
Greene, Kristen, 147

H

Hart, Traci, 147
Harvey Balls, 203b
Heart rate variability (HRV), 182–183
and skin conductance research, 183
Heat map, 170
Hierarchical cluster analysis, of card-sorting data, 221–222
Hit ratio, in eye tracking metrics, 174
Holland, Anne, 218

I

Impact of subtle changes, evaluating, 51–52
Independent variables, 16
Information architecture studies, 48
In-person studies, 102–103
Interval data
overview, 18–19
Issues-based metrics
analyzing and reporting, 107–111
automated studies, 103
bias in identifying issues, 113–115
concept of, 100–102
identifying issues, 102–103, 111–113
in-person studies, 102–103
number of participants, 115–119
overview, 99–100
real vs. false issues, 101
severity ratings, 103–107

K

Keys to success
benchmarking, 283–284
confidence, 285–286
data exploration, 284–285
effective presentation, 287–288
language of business, 285
making data live, 279–280
planning, 282–283
proper use of metrics, 286–287
tools, 282
Keystroke-level model, 88b

L

Lab tests, 53
Landing page, 210b
Language of business, 285
Learnability
analyzing and presenting, 94–96
collecting and measuring, 94
issues, 96
overview, 92–96
and self-service, 93b
Levels of successes, 70–73
analyzing and presenting, 73
collecting and measuring, 71–73
Likert scale, 123–124
Line graphs, 35–36
vs. bar graphs, 36b
Live-site survey issues, 152–154
Live website data, 209–218
A/B tests, 216–218
basic web analytics, 210–213
click-through rates, 213–214
drop-off rates, 215–216
terms used in web analytics, 210b
Loop11, 282
Lund, Arnie, 142–144

M

Management appreciation, myths about, 14
Maurer, Donna, 220
Mean, 19–20
Means, comparing, 25–30
independent samples, 26–27
more than two samples, 29–30
paired samples, 27–28
Median, 20
Metrics overview
defined, 6–8
myths, 11–14
new technologies, 10–11
value, 8–9
Misuse of metrics, 286–287
Mode, 20
Moderator bias, in eye-tracking study, 115b
Multidimensional scaling, of card-sorting data, 222–224
Myths about metrics, 11–14

N

Navigation studies, 48
Net Promoter Score (NPS), 146
Net Promoter Scores and value of a good user experience, case study, 238
discussion, 242–243
methods, 239–240
prioritizing investments in interface design, 241–242
results, 240–241
New products, myths about, 13
Noisy data, myths about, 12–13
Nominal data
coding, 17b
overview, 16–17
Nonparametric tests
χ2 test, 31–32
overview, 31–32
Number of fixations, in eye tracking metrics, 173
Number of participants, 115–119
Number of scale values, 127b

O

Online services
ACSI, 148–149
live-site survey issues, 152–154
OpinionLab, 149–152
overview, 147–154
WAMMI, 148
Online studies, 54–56
Online surveys, 56
interaction with design, 56b
tools, 125b
Open-ended questions, 158–159
OpinionLab, 149–152
Optimal Workshop, 282
Ordinal data
overview, 17–18
Osgood, Charles E., 124
Outliers, 20, 195b
time-on-task data, 80–81

P

Page views, 210b
Paired samples, 27–28
Participants
studies, 58–59
Percentages
combining metrics based on, 189–196
Performance
expert, comparisons to, 206–207
vs. satisfaction, 44b–45b
as user goal, 44
Performance metrics
efficiency, 86–92
errors, See Errors
learnability, 92–96
overview, 63, 65
task success, See Successes
time-on-task, See Time-on-task metrics
types of, 65
Perlman, Gary, 142b
Picard, Rosalind, 176–177
Pie charts, 38
Planning, 282–283
Poppel, Harvey, 203b
Positive user experiences, 51
Postsession ratings
aggregating, 137
comparison, 145–146
Computer System Usability Questionnaire (CSUQ), 140–141
overview, 137–147
product reaction cards, 144
Questionnaire for User Interface Satisfaction (QUIS), 141–142
System Usability Scale (SUS), 137–140
Usefulness, Satisfaction, and Ease of Use (USE), 142–144
Post-task ratings
After-Scenario Questionnaire (ASQ), 132
ease of use, 131
expectation measure, 132–133
overview, 131–137
task comparisons, 133–136
Posture Analysis Seat measures, 184
Presentation, 287–288
PressureMouse, 184
Probability of detection, calculating, 116b–117b
Problem discovery, 49–50
Product reaction cards, 144
Pupillary response, 175–176

Q

Q Sensor, 177–178
Questionnaire for User Interface Satisfaction (QUIS), 141–142

R

Radar chart, 143b–144b
Rating scales, 123–131
analyzing data, 127–131
guidelines for, 126–127
Likert scale, 123–124
semantic differential, 124
Ratios
overview, 19
of positive to negative comments, 164
Retrospective think aloud (RTA), 82b
Return-on-investment (ROI) data, 232–236
case studies, 235b–236b
Revisits, in eye tracking metrics, 174
Rice, Mike, 220

S

Samples
myths about, 14
Satisfaction
performance vs., 44b–45b
as user goal, 44
Sauro, Jeff, 70
Scatterplots, 36–38
Scorecards, usability, 200–203
Section 508, 232b
Self-reported data, 123
collecting, 125
Self-reported metrics
attribute assessments, 154–156
awareness and comprehension, 159–160
awareness and usefulness gaps, 160–161
element assessments, 156–158
importance, 123
online services, See Online services
open-ended questions, 158–159
overview, 122
postsession ratings, See Postsession ratings
post-task ratings, See Post-task ratings
rating scales for, 123–131
Semantic differential scales, 124
Sequence, in eye tracking metrics, 173
Seren, 180–182
Severity ratings, 103–107
caveats, 107
combination of factors, 105–106
example, 105b
overview, 103–104
user experience, 104
using, 106–107
Single Usability Metric (SUM), 198–200
Single usability scores, 187–200
based on percentages, 189–196
based on target goals, 188–189
based on z scores, 196–198
Skin conductance research, heart rate and, 183
Small improvements, myths about, 12
Software Usability Measurement Inventory (SUMI), 148
Studies, types of, overview
alternative design comparisons, 52
awareness, 48–49
budgets and timelines, 57–58
completing transaction, 45–47
critical products, 50
data cleanup, 60–61
data collection, 60
evaluation methods, 52–57
frequent use of products, 47–48
goals, 42–44
impact of subtle changes, 51–52
navigation and information architecture, 48
participants, 58–59
positive user experience, 51
problem discovery, 49–50
product comparisons, 47
Successes
binary, 66–70
factual, 67b
issues in, 73–74
levels of, 70–73
overview, 65–74
Summative usability testing, 43, 43b
System Usability Scale (SUS)
for comparing different designs, 147
overview, 137–140

T

Target goals, combining metrics based on, 188–189
Task failure, types of, 68b
Tasks
issues by, 111
Time and time data
collection myths, 11
Timelines, for studies, 57–58
Time-on-task metrics
analyzing and presenting, 78–81
automated tools for measuring, 75b–76b
collecting and measuring, 75–78
importance of measuring, 75
issues, 81–82
overview, 74
vs. web session duration, 74b–75b
Treejack, 90
Tree testing, 227–228

U

University prospectus, case study, 263
deciding on actions after usability testing, 264, 266–267
site-tracking data, 267–269
triangulation for iteration of personas, 269–270
Usability tests, focus groups vs., 53b
Usefulness, Satisfaction, and Ease of Use (USE), 142–144
User experience
concept of, 4–6
metrics, See Metrics overview
User goals, 44–45
UserZoom, 282

V

Value of usability metrics, 8–9
Van Dongen, Ben, 179
Variables
independent and dependent, 16
relationships between, 30–31
Verbal expressions
observing and coding unprompted, 164–165
ratio of positive to negative comments, 164
Visitors, 210b
Visits, 210b

W

Wald Method, 70. See also Adjusted Wald Method
Web analytics, 210–213
terms used in, 210b
Webcam-based eye tracking, 167
Web experience management system, case study, 254–255
data collection, 256
results, 261–262
test iterations, 255–256
workflow, 257, 261
Web session duration, vs. time-on-task metrics, 74b–75b
Website Analysis and Measurement Inventory (WAMMI), 148
Which Test Won, 218

X

X/Y plots, See Scatterplots

Z

Z scores
combining metrics based on, 196–198