Chapter 3. Quantitative Research Methods

Above all else, show the data.

Edward Tufte

Now that we’ve introduced the various types of questions you’ll attempt to answer through your research efforts, we’ll discuss the two types of research found in product design: quantitative and qualitative. This chapter and the next introduce the history, variations, and application of these two approaches. All successful products rely on the integration of both quantitative and qualitative methods; there is no silver bullet. And because different methods require different skills, teams benefit from having a variety of skill sets.

Quantitative Research by the Numbers

Quantitative research is simply defined as the study of what can be measured and observed. More specifically, quantitative measurements produce results that are consistent and generally agreed on by all parties involved. Your height and weight are quantifiable measurements, as they can be counted and measured against a standard scale. Your personality, on the other hand, is a qualitative matter, as “nice,” “kind,” and “mean” are all subjective judgments. We will talk more about qualitative research in Chapter 4.

On the Web and in the product space, quantitative measurements may include:

  • Bounce rates

  • Time on task

  • Conversion rates

  • Order size (number of items or their value)

  • Number of visitors to a site (physical or digital)

  • Average size of group

Where Do I Find Quantitative Research?

We have all encountered quantitative data, even if only in grade school. The most familiar quantitative measures are the mean (the average value), the median (the middle value), the mode (the most common value), and the range (the difference between the highest and lowest values). While helpful for a high-level understanding of averages and trends, these numbers only scratch the surface of what quantitative research can provide.
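As a quick, hands-on illustration, the following sketch uses Python’s built-in statistics module to compute these four values for a made-up set of time-on-task measurements (the numbers are invented for the example):

```python
from statistics import mean, median, mode

# Hypothetical time-on-task measurements, in seconds, for a single checkout flow.
times_on_task = [42, 38, 51, 38, 47, 120, 44, 38, 55, 49]

print("mean:  ", mean(times_on_task))                      # average value
print("median:", median(times_on_task))                    # middle value
print("mode:  ", mode(times_on_task))                      # most common value
print("range: ", max(times_on_task) - min(times_on_task))  # highest minus lowest
```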

Quantitative research informs designers where customers are accessing their information as well as what devices they are using. In a world where “mobile first” is touted as scripture, quantitative analytics support that claim with actual numbers. In April 2015, Pew Research estimated that 64% of American adults own a smartphone of some kind, and that 10% of Americans use a smartphone as their primary Internet device (Figure 3-1).[3] This is an example of how quantitative research provides primary use cases for modern products with digital components.

Figure 3-1. Pew Research chart of mobile usage

What Quantitative Research Is Not

While very informative, quantitative research doesn’t tell us how to fix things, doesn’t tell us why things happen, and doesn’t share information that isn’t asked for. That being said, quantitative research can act as a benchmark for future studies and for qualitative research.

First, quantitative research doesn’t tell us how to fix issues, because it is only a historical record. Numbers do not capture context of use. For instance, if a thermostat intended for private homes is installed in an industrial setting instead, problems may arise that the numbers alone cannot diagnose, because the mismatched context of use is not visible in an analytics readout.

Similarly, quantitative research doesn’t tell us why things happen. Did a customer click on a certain link because they couldn’t find the intended goal, or because they perceived the link as the correct destination?

Lastly, quantitative research shares only information we ask for. Site analytics are good only when they’re actively measured. Call center statistics and usage metrics are valuable only if you have them and are able to measure change over time.

Quantitative research is a valuable tool. As we continue to explore the facets of research, keep in mind this is one side of the coin.

Three Focuses of Research

There are three main focuses of research that inform our work as product designers. Though they’re commonly called planning, discovery and exploration, and testing and validation, we describe them here as insight-driven, evaluative, and generative. While these are distinct and specific areas, they have overlapping characteristics and often span project phases (Figure 3-2). They also go beyond simple product design and ideation, impacting maintenance and support goals for ongoing projects. Remember, design and iterations aren’t finished just because a product shipped. And with many of our products going digital, pushing updates is easier than ever. We should always be looking to measure and improve the products we deliver.

Figure 3-2. The three main focuses of research—insight-driven, evaluative, and generative—are distinct yet have many overlapping characteristics

Insight-Driven

Insight-driven research seeks to understand what the problem space is, why the problem exists, and where opportunities lie. Often conducted in early stages of projects, insight-driven research may be as simple as looking at the rate at which your users are succeeding when they attempt to accomplish certain goals. For quantitative research, insight-driven research manifests as benchmarks, often referred to as key performance indicators (KPIs). Product teams use KPIs to properly set goals and to measure a product’s overall success. KPIs may be conversion rates, new customer or sales numbers, or time spent engaging with a product.
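For instance, a conversion-rate KPI check might look like the minimal sketch below; the visit and signup counts, and the baseline value, are hypothetical:

```python
# Hypothetical monthly funnel counts pulled from an analytics export.
visits = 48_210   # sessions that reached the signup page
signups = 1_930   # sessions that completed signup

conversion_rate = signups / visits   # the KPI we benchmark against
baseline = 0.035                     # assumed benchmark from a previous quarter

print(f"Conversion rate: {conversion_rate:.1%} (benchmark: {baseline:.1%})")
print("Meeting the KPI" if conversion_rate >= baseline else "Below the KPI")
```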

Evaluative

Evaluative research, on the other hand, looks to measure how a design or solution stands up to the KPIs and benchmarks laid out. A user flow is a common artifact that identifies touchpoints and challenges throughout a process. Evaluative research helps answer how this flow is effective both before and after proposed changes. While this kind of research may be done with both quantitative and qualitative measures, quantitative measurements can pinpoint opportunities to improve on a larger scale than qualitative research. For instance, quantitative studies of a healthcare user flow can show how improvements to a system have decreased the time nurses spend entering data and increased the time they spend engaging directly with patients.

Generative

Lastly, generative research methods offer opportunities to create and explore new designs through research. Often called data-driven design, generative research methods balance subjective design recommendations and trends with quantifiable, measurable gains and opportunities. This approach may consider Fitts’s law, as discussed in Chapter 1, or may use A/B testing, which we’ll discuss in the next section.

Types of Research Methods

There are many types of quantitative research available. The next few sections will highlight a small number of methods and how they support insight-driven, evaluative, and generative project goals. This list is by no means comprehensive, and we encourage you to explore and adapt these methods for your own use.

System Analytics

System analytics are probably the most common form of quantitative data. Often referred to as site analytics for web-based experiences, analytics provide passive access to a wide array of data points. Analytics are a great example of insight-driven research because of their low cost of entry and, assuming correct tagging on the backend, depth of information. Some of the most common pieces of data include user flows, demographics, and geography.

User flows may be used to understand how customers access and navigate a tool. If your product is a website, are users accessing the product from the home page or through natural search (a search engine)? Are they interacting with your navigation or are they immediately using the search feature (which may be an indicator of poor labeling and tagging, called taxonomy among UX practitioners)? You can learn how customers perceive your product and taxonomy through search logs, which highlight their search terms and mental model.
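As a rough sketch of what mining a search log can look like, the snippet below tallies the most common queries; the log entries and format are invented for illustration:

```python
from collections import Counter

# Hypothetical on-site search log: one raw query per entry.
search_log = [
    "wine glasses", "Glassware", "tumblers", "wine glasses",
    "stemless wine glasses", "drinking glasses", "glassware",
]

# Normalize and tally queries to see which terms customers actually use.
top_terms = Counter(query.strip().lower() for query in search_log).most_common(5)

for term, count in top_terms:
    print(f"{count:>3}  {term}")
```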

Demographics include age, gender, and other biographical data. This information is particularly helpful for products that target specific user groups—for instance, young investors or users with specific medical conditions.

Geography is particularly helpful in understanding where your customers are coming from. If you want to build a global product and can identify a country or region as a focus for launching your tool, you can build on the successes of an already established market.

There are a number of tools available for gathering and displaying system analytics. Google Analytics is a common one that offers a wide array of customization with an easy-to-read dashboard (Figure 3-3).

Figure 3-3. The Google Analytics dashboard provides insights into site traffic and usage

Surveys

Unlike analytics, surveys straddle both evaluative and insight-driven methodologies by providing data around not only how a system is used but also how it might meet or fail to meet expectations.

Surveys vary in shape and size. You’ve likely encountered surveys in the form of pop-ups as you browse the Internet, or when a call center asks if you have a few moments to provide feedback after speaking with a representative.

Unlike analytics, surveys take a proactive approach to gathering data. Where analytics passively captures information based on customer usage, surveys allow you to actively collect data through more open formats. Common goals of surveys are to learn about intent and quality of service, or whether expectations were met.

Again, many tools exist to create, capture, and analyze surveys. While we’re not endorsing any individual tools, in our own work we’ve used SurveyMonkey (Figure 3-4), Google Forms, Survs, and ForeSee, among others. Each of these tools offers a unique approach to surveys and varying degrees of customization and data manipulation. We highly recommend exploring each tool individually based on your own needs.

Figure 3-4. SurveyMonkey results displayed through visual representations

Tree Jacking

Tree jacking is an example of a generative research method, though it can also be used as an evaluative measure. It is a method of evaluating a system’s navigation and terminology: a designer enters a proposed taxonomy into a tool and prompts customers to navigate the information. As participants click through the site map, the designer gathers data on their expectations and understanding of terms by tracking their paths through the tree structure. In this way, designers can quickly generate a new information hierarchy and navigation structure through iterative evaluation of proposed solutions.

Using tools like Optimal Workshop’s Treejack (Figure 3-5) is by no means an exact science. As with all quantitative methods, the questions asked are just as important as the tool and method being used. Consider a housewares-focused ecommerce site whose navigation includes sections called Home, Kitchen, Living & Dining, and Bedroom. If a tree-jacking study asked, “Where would you go for a new kitchen appliance?” the answer is obvious. A better question would be “Where would you go for new glassware?”

Figure 3-5. Optimal Workshop’s Treejack tool
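A minimal sketch of scoring tree-test results might look like the following; the tasks, correct destinations, and participant paths are all invented for illustration:

```python
# Hypothetical tree-test results: each participant's final destination per task.
correct_destination = {
    "Find new glassware": "Kitchen > Drinkware",
    "Find a duvet cover": "Bedroom > Bedding",
}

participant_answers = {
    "Find new glassware": ["Kitchen > Drinkware", "Living & Dining > Tableware",
                           "Kitchen > Drinkware", "Kitchen > Drinkware"],
    "Find a duvet cover": ["Bedroom > Bedding", "Bedroom > Bedding",
                           "Home > Decor", "Bedroom > Bedding"],
}

# Success rate per task: the share of participants who ended at the expected node.
for task, answers in participant_answers.items():
    successes = sum(answer == correct_destination[task] for answer in answers)
    print(f"{task}: {successes}/{len(answers)} ({successes / len(answers):.0%})")
```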

Eye Tracking

Eye tracking is the process of using cameras to follow a participant’s eyes as they scan a page. Limited to screen-based products, eye tracking is particularly effective with ecommerce systems and tools.

One major hurdle with eye tracking is the cost of software and the requirement that participants be brought into a lab that can support the technology. While eye tracking was originally limited to desktop interfaces, new systems are being developed to support it on mobile devices.

Tobii is a leader in building eye-tracking software, the results of which are shown in Figure 3-6.

Figure 3-6. Sample eye-tracking heat map

A/B Testing

In A/B testing, an extension of site analytics, two different versions of a page are presented to randomly selected groups of customers. By analyzing the resulting data, researchers can identify the better-performing option.

A/B testing may be conducted for anything from the color of a button to the headline or body copy of a home page. It is important to focus an A/B test on a specific question and to have established KPIs to measure success with these micro-interactions.
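As a sketch of how such a comparison might be evaluated, the snippet below applies a standard two-proportion z-test to two hypothetical variants’ conversion counts (the traffic numbers and the 0.05 significance threshold are assumptions for the example):

```python
from math import sqrt, erf

# Hypothetical A/B results: conversions out of visitors for each variant.
n_a, x_a = 5_000, 400   # variant A: 8.0% conversion
n_b, x_b = 5_000, 460   # variant B: 9.2% conversion

p_a, p_b = x_a / n_a, x_b / n_b
pooled = (x_a + x_b) / (n_a + n_b)

# Two-proportion z-test for the difference in conversion rates.
se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
print("Difference looks significant" if p_value < 0.05 else "Difference may be noise")
```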

Similar to system analytics, A/B testing is limited in that it provides a retrospective look at behavior but does not reveal the subjective “why” or intent behind different actions.

Card Sorting

Card sorting is very similar to tree jacking as a generative research method. Through card sorting, participants arrange topics and items in logical chunks based on their own understanding of the data (Figure 3-7). Variations in card sorting include open and closed, as well as moderated and unmoderated. Open card sorting allows participants to organize cards any way they see fit, while closed card sorting provides predetermined groups or master labels. Card sorting is a fascinating research tool because, depending on the exact implementation, it may be more quantitative (with tools like Optimal Workshop) or more qualitative (with smaller participant groups).

Figure 3-7. Sample card sorting exercise
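One common way to analyze open card sort data quantitatively is to count how often participants place pairs of cards in the same group. The sketch below does this for a small, invented data set:

```python
from itertools import combinations
from collections import Counter

# Hypothetical open card sort results: each participant's groupings of cards.
sorts = [
    [{"mug", "wine glass", "plate"}, {"pillow", "duvet"}],
    [{"mug", "wine glass"}, {"plate"}, {"pillow", "duvet"}],
    [{"mug", "plate"}, {"wine glass", "pillow", "duvet"}],
]

# Count how often each pair of cards was placed in the same group.
pair_counts = Counter()
for participant in sorts:
    for group in participant:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

for (a, b), count in pair_counts.most_common():
    print(f"{a} + {b}: grouped together by {count} of {len(sorts)} participants")
```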

Additional Methods

In Table 3-1 we have organized the quantitative methods just listed, as well as a number of others, highlighting their tendency toward insight-driven, evaluative, or generative approaches to product design. Understanding where these methods fit within a project phase will assist you in selecting the most effective approach throughout your product design.

Table 3-1. Quantitative research methods

Method Name | Description | Insight-Driven | Evaluative | Generative
A/B testing | A method of implementing two solutions and, by displaying them to randomly selected customer groups, determining a preferred solution. | | X |
Analytics | Any measure of statistical data or usage of a system. This may include click rates, bounce rates, time on task, and more. | | X | X
Card sorting | A method of seeking understanding for a customer’s mental model of a system’s architecture. May be open (where customers can create their own labels and groups) or closed (where labels and groups are provided). | X | X | X
Customer feedback | Any format of requesting and gathering large-scale customer input. | X | X |
Email surveys | One method for gathering customer feedback. | X | X |
Eye tracking | A lab-based method where cameras track a customer’s eye movement across a digital product. | | X |
Intercept testing | A method of randomly requesting customer feedback as they engage with a product. | X | X |
Moderated product testing | A method of validating a product with a researcher actively engaging customers. | X | X | X
Surveys | Any format where customers are presented with open or closed questions on their experience with a product. | X | X |
Taxonomy review (tree jacking) | An analytical method to address system architecture and taxonomy. | X | X |
Unmoderated product testing | A method of validating a product with a researcher setting up questions for a customer to respond to at a later time. | | |

Quantitative Methods: When and Where

Every tool has its place, and quantitative research methods are no different. Quantitative methods are best used when a large number of participants or customers can be accessed, yielding the most statistically significant outcomes. This isn’t to say that quantitative methods don’t work in smaller studies; eye tracking and card sorting, for instance, need only a handful of participants to show utility. Still, one of the major advantages quantitative research has over qualitative research is its larger scale. Additionally, quantitative methods are best employed when the question at hand has a tangible, measurable outcome. A good question for quantitative research might be “How many users abandon the checkout process when signing up for a product’s service?” Quantitative methods are far better at measuring the efficacy of a proposed solution than at gauging the preference for or desirability of a product.
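To illustrate why scale matters, the sketch below computes a rough 95% margin of error for an observed proportion at different sample sizes (the 30% abandonment rate is hypothetical):

```python
from math import sqrt

# Rough 95% margin of error for an observed proportion, by sample size.
def margin_of_error(p: float, n: int) -> float:
    return 1.96 * sqrt(p * (1 - p) / n)

observed_abandon_rate = 0.30  # hypothetical checkout abandonment rate
for n in (20, 100, 500, 2_000):
    moe = margin_of_error(observed_abandon_rate, n)
    print(f"n = {n:>5}: {observed_abandon_rate:.0%} ± {moe:.1%}")
```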

Quantitative Methods: When to Avoid

Quantitative methods, while valuable, do have some key shortcomings. Despite the variety of quantitative methods, they don’t always fit the job.

When you are looking to understand a user’s motivations or comprehension of a task, qualitative methods, covered in Chapter 4, offer more tangible results. Similarly, if you have access only to a small number of users, analytics may not be statistically significant. If you’re developing a product for an entirely new market, analytics may not exist at all. Without this benchmark, quantitative methods may not be as effective as simply hitting the streets and talking with people face to face.

Exercise: Getting to Know Quantitative Research

In order to best understand quantitative methods, follow this exercise. It shouldn’t take more than 15–20 minutes and will allow you to immediately apply some of the ideas discussed in this chapter.

  1. Think of a project.

    Think of a project you are currently working on. If you are between projects, think back to the exercise in Chapter 2.

  2. Write down what questions you want to answer with your project.

    On individual sticky notes, write down five questions related to your project that can be answered with hard numbers. This may be “Who uses our system?”, “How long do people use the system?”, or anything else that comes to mind.

  3. Write down why you want to ask those questions.

    On separate sticky notes, write why you want to ask each of these questions. Pair each “why” with the appropriate question.

  4. Identify who can help you.

    Identify who on your team might be able to access this information. Is it a programmer, a data analyst, or even the client? Write down the role and name next to each question.

  5. Make a game plan.

Take these questions to work and start a conversation with each of the individuals. Look to understand what it might take to gain the desired information. You might be surprised at how many of these questions are easily answered or already have answers available. As we shared in Chapter 2, knowing whom to ask is as important as asking the right question.

Parting Thoughts

So far we have addressed quantitative research methods and the implications these methods have on product design. Still, this is only a portion of the tools at our disposal. In the next chapter, we will evaluate qualitative research methods and how they can be integrated with the quantitative methods we’ve discussed here.

Many designers tend to avoid quantitative research, as the idea of statistical analysis can seem daunting. To the readers whose palms sweat at the thought of large data sets, fear not. Many tools offer low-level analysis for free, and many organizations have an Excel guru, even if that guru doesn’t want it known. Learning about these tools and identifying these allies is a key piece of your design toolkit.



[3] Aaron Smith, “U.S. Smartphone Use in 2015,” Pew Research Center, April 1, 2015 (http://www.pewinternet.org/2015/04/01/us-smartphone-use-in-2015).
