Chapter 23

Understanding User Testing

In This Chapter

arrow Understanding the difference between attitudinal and behavioral testing

arrow Knowing what to test

arrow Learning about A/B split testing

arrow Learning about screencasting

Earlier in this book I mentioned that if something’s worth doing, it’s worth measuring. Now I submit to you that if something’s worth measuring, it’s worth testing. By purposefully testing your key success metrics and acting on your discoveries, you can intelligently affect your outcomes. Increasing the positive impact of any given factor increases the overall success of your inbound marketing initiatives. Testing the factors that affect your key metrics reveals incremental improvements you can discover and implement at every point in the Customer Conversion Chain.

This chapter is not a thorough examination of extremely sophisticated or technical testing methods. I’ll hire a mathematician for that. As an inbound marketer, you need simpler behavioral tests, the findings of which can have a more immediate impact on your marketing results. My main focus is on split testing because it’s the most manageable for the typical inbound marketer. This chapter covers sound principles for basic testing techniques. The intent is to teach you a simple, quick means of discovering which parts of your digital marketing work best and leveraging that knowledge into positive business outcomes.

Gaining Insight from User Testing

When performing user testing, it’s helpful to know the difference between attitudinal testing and behavioral testing. Attitudinal testing measures what users say they do. Behavioral testing measures what they actually do. Because inbound marketing’s goal is to increase attraction and conversion, measuring behavior is your primary objective. To quickly increase your conversions, remember this: ABABT, or Always Be A/B Testing. Focusing on A/B testing means measuring actual behaviors, assessing the impact of format changes, and applying actionable positive influence to the end result. Attitudinal testing is valuable for understanding the psychology of communicating with your customer personas, and if you have access to that research, by all means use it! For purposes of testing the Customer Conversion Chain, though, I want to limit the conversation to testing that you, as an inbound marketer, can control, measure, and implement on your own. A/B split testing most certainly fits into this category, and possibly multivariate testing as well.

If you choose to test beyond A/B testing, the chart that UX researcher Christian Rohrer designed is a good starting point. (See Figure 23-1.) Understanding which questions you want answered shapes the type of testing you should perform. Use this list to help you decide on a test type:

  • If you want to know what people do, use behavioral testing (like A/B testing).
  • If you want to answer “how many?” or “how much?” use quantitative testing.
  • If you want to know the answers to why something is happening and how to fix it, use qualitative testing.
  • If you want to know what people say, use attitudinal testing.
Figure 23-1: Christian Rohrer’s research methods chart.

Christian created another, more detailed chart that I feature later in this chapter to help you choose the actual style of research testing.

Knowing What to Test

Because of the complexity of inbound marketing, the things you could test are infinite. So start simply, testing the points of attraction and conversion that contribute the most to your successful outcome. That way, any positive changes you discover through testing can be applied with more impact.

tip The fact is, you can test anything that you know or suspect may have a meaningful impact on your inbound marketing efforts. I recommend you begin by testing the input factors closest to the desired consumer action you’re seeking. So, if you run an e-commerce site, consider starting your testing on your shopping cart page and working backwards through the customer conversion path. The exception is when A/B testing that close to a purchase would have a profoundly negative impact on sales. Alternatively, you may consider testing the weakest link in your company’s Customer Conversion Chain.

If you’re serious about inbound marketing, invest in marketing automation software or conversion optimization software. And, remember: ABABT!

Knowing Your Minimum Sample Size

Meaningful data is only as good as the validity of the test you’re performing. Maybe you’re not so great at manually calculating statistical confidence levels. Neither am I, actually. Check out Figure 23-2 and meet your new A/B testing best friend, the Sample Size Calculator from Optimizely (www.optimizely.com/resources/sample-size-calculator/).

Figure 23-2: The Optimizely Sample Size Calculator.

Simply fill in your current conversion rate (the tool defaults to three percent). Next, input your minimum detectable effect. This is the amount of “lift,” or increase over the original control, that you want your test to be able to detect. The lower you make this number, the more respondents you’ll need; the test will take longer, but it will provide finer results. The statistical significance box autofills with 95 percent, which is recommended for a reason. Unless you possess the serious chops of an experienced marketing researcher, leave this value alone so you don’t get into trouble. The tool is built to provide some testing parameters, but in the end it’s you who performs the test, not the tool.
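If you’re curious about the arithmetic behind a calculator like this, the following Python sketch implements the classic fixed-horizon formula for a two-sided, two-proportion test. It’s a rough illustration, not Optimizely’s exact method, and the function name and defaults are my own:

from statistics import NormalDist

def min_sample_size(baseline_rate, min_detectable_effect,
                    significance=0.95, power=0.80):
    """Approximate visitors needed per variation for an A/B test.

    baseline_rate: current conversion rate, such as 0.03 for 3 percent.
    min_detectable_effect: relative lift to detect, such as 0.20 for 20 percent.
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_effect)
    z_alpha = NormalDist().inv_cdf(1 - (1 - significance) / 2)  # 1.96 at 95 percent
    z_beta = NormalDist().inv_cdf(power)                        # 0.84 at 80 percent power
    pooled = (p1 + p2) / 2
    spread = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
              + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5)
    return int(spread ** 2 / (p2 - p1) ** 2) + 1

# A 3 percent baseline with a 20 percent minimum detectable effect needs
# roughly 14,000 visitors per version.
print(min_sample_size(0.03, 0.20))

Notice how halving the minimum detectable effect roughly quadruples the required sample size, which is exactly why lower numbers make your test take longer.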

warning Statistical significance alone does not equate to validity. You must give your tests time to work themselves out; otherwise, you risk an early “false read” that shows one of your test subjects overperforming. If you stop the test and apply the changes immediately, you risk acting on false assumptions that won’t carry through the Customer Conversion Chain. To avoid this, test at least long enough to achieve statistical significance, and then keep testing. For instance, if you’re A/B testing two landing pages for a couple of days, generating huge traffic, and one out-converts the other … do not stop!

Keep testing through two business cycles, as defined by your industry’s purchase cycle. This may be an extremely long time for business-to-business companies. To be safe, run your test for at least 30 days to account for unforeseen time anomalies. To create higher confidence in your results, make sure you’re generating at least 250 testable actions per version. Lastly, make sure your number of conversions is big enough to draw meaningful data from and to project future performance accurately. If you had tons of traffic to each of two landing pages you were testing, and one person converted on the first page and two people on the second, you’ve learned nothing that can be applied. Assuming the second page will convert twice as well as the first is a fallacy that will get you in trouble, so be careful and don’t get into testing that’s over your head.
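To make those guardrails concrete, here’s a small Python helper that refuses to declare a winner until the minimums are met. The 30-day and 250-actions-per-version thresholds come straight from this section; the minimum conversion count is an illustrative placeholder you should tune to your own traffic and purchase cycle:

from datetime import date

def ready_to_call(start, today, actions_a, actions_b, conv_a, conv_b,
                  min_days=30, min_actions=250, min_conversions=25):
    """Sanity gate before declaring an A/B winner.

    min_days and min_actions follow this chapter's guidance; min_conversions
    is an assumed threshold, so set it for your own traffic levels.
    """
    checks = {
        "ran at least 30 days": (today - start).days >= min_days,
        "250+ testable actions per version": min(actions_a, actions_b) >= min_actions,
        "enough conversions to be meaningful": min(conv_a, conv_b) >= min_conversions,
    }
    return all(checks.values()), checks

ok, detail = ready_to_call(date(2025, 1, 1), date(2025, 2, 5),
                           actions_a=400, actions_b=410, conv_a=30, conv_b=41)
print(ok, detail)  # True only when every check passes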

Performing A/B Testing

Today, inbound marketing offers more control over your advertising message than ever before. With access to complex data points, it’s possible to know more about who is visiting your site, how they are responding to your message, what drives them to action, and what is and what is not working.

The key is refining your approach. One effective method is through A/B testing, also known as split testing. With the right software, A/B testing costs next to nothing to implement. Split testing works by examining a “control” (A) against one single changed variable, also known as your “treatment” (B). It allows you to identify what brings a better response rate. A/B testing individual components of your marketing approach is a great way to understand and identify what is working. It allows you to refine your message, drive more traffic to your site, and generate more leads from the traffic you are getting. Ultimately, your inbound marketing drives more revenue opportunities.
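When you do want to check finished results yourself, the standard statistic is a two-proportion z-test comparing control to treatment. The Python sketch below is a bare-bones illustration with made-up numbers; any decent testing software runs this math for you:

from statistics import NormalDist

def ab_p_value(conv_a, visits_a, conv_b, visits_b):
    """Two-sided two-proportion z-test: control (A) versus treatment (B).

    Returns a p-value; below 0.05 means significance at the 95 percent level.
    """
    p_a, p_b = conv_a / visits_a, conv_b / visits_b
    pooled = (conv_a + conv_b) / (visits_a + visits_b)
    se = (pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 300 of 10,000 control visitors converted versus 360 of 10,000 on the treatment.
print(round(ab_p_value(300, 10_000, 360, 10_000), 4))  # about 0.0175

Remember the earlier warning, though: a significant p-value on day two is not a license to stop the test.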

If you decide to try testing yourself, here are several guidelines and best practices to consider before getting started.

  • Keep it simple. Only conduct one test at a time. For example, if you’re testing a new on-page CTA button directing visitors to a landing page you’re also testing at the same time, the results can easily become cloudy. What’s performing better, the CTA or the landing page? You have no way of verifying what caused a specific effect. Do not conduct multiple overlapping split tests at the same time.
  • Test only one variable at a time. To determine how effective an element is, you need to isolate that variable in your A/B testing. Whether it’s a page element, a graphic, a call-to-action, or an email campaign, only test one variable at a time. That single variable can be as large as an entire page, email, or CTA, and treating the whole unit as the variable can produce dramatic results.
  • Small changes may have big results. While big sweeping changes can often increase lead generation numbers, small changes can be just as important. When developing your tests, remember that even something as simple as changing your headline, your image, your form field size, or the color of your CTA button may offer significant results. Sometimes small changes are easier to measure than big ones.
  • Consider testing one larger variable, such as an entire landing page. While you can test the color of a CTA button as a variable, you might wish to test an entire landing page, unique promotional offer, or full email as the variable. Instead of testing a single design element like a headline or image, consider creating two separate landing pages and testing them against each other. In this case, you’re testing the overall design layout rather than the individual components making up the landing page or email. This higher level of testing can produce dramatic results, but you’ll learn only which page performs better, not which component, or combination of components, is causing that lift.
  • Start by measuring closer to the desired end action, usually a sale. I’ve stated this before, and I’ll state it again: Testing closer to the Action in the Lifestyle Loop creates the opportunity for more influence. Just be aware that the impact may be positive or negative! Your testing can have a positive impact on your conversion rate, but how is it affecting your sales numbers? Applying what you’ve learned from A/B testing results can affect your bottom line. You may find that while your conversion rate drops, your leads are more qualified and result in higher sales numbers. As you create your tests, consider metrics like click-through rates, leads, traffic, conversion rates, and demo requests.

Setting Up User Testing

User testing can tell you a lot about your website and inbound marketing campaigns in a very short time. Have you ever written content, any content, and had it proofread by a couple of people, only to discover a typo that somehow becomes glaringly obvious after your content is published? I have, and it’s embarrassing and frustrating. The same can be said about UI and UX. What seems like a logical onsite navigation path with obvious CTA choices to you and your developer when you initially chart it out on paper may not work as you intended. Even after you’ve reviewed your website on an interactive demo site, test its usability before you launch it live to the public. Why? UX navigation mistakes aren’t as obvious as a typo in your content. Visitor activity is invisible, and people who bounce aren’t going to tell you why they left. I perform user testing to spot unintentional design dead-ends, roadblocks, and friction on the customer conversion path, and I urge you to do the same. Performing user testing prelaunch helps maximize consumer action, which is exactly what you want out of your inbound marketing.

Testing a single factor

Similar to your A/B split testing, it makes sense to choose only one thing to test when performing user testing. (Are you sensing a testing trend here?) Testing a workflow, a conversion campaign, or a website navigation sequence makes sense any time you’re creating a new version of one. Wouldn’t you rather learn about your conversion roadblocks before you launch new inbound initiatives so you’re not fixing things on the fly? Trust me, the answer is “Yes!”

Testing with click-tracking

Tracking user clicks through click-recording software is an effective method of testing where your users are going on your website pages. By collecting and aggregating this information and then displaying it as a heat map, you get a clear picture of whether people are clicking where you intended when you designed any particular page. This is particularly effective for observing and reporting which CTAs are being clicked and which are not. There are more paid click-tracking options than I can cover in this book, so shop around for one that fits your needs and budget.

Researching with click-tracking and analyzing the resulting heat-map reports leads to better website page UI design. That’s good for your visiting customers, which means it’s good for you.
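If you ever want to poke at raw click data yourself, a heat map is just click coordinates bucketed into a grid. This Python sketch is purely illustrative; the coordinates are made up, and real click-tracking tools handle the collection and rendering for you:

from collections import Counter

def bucket_clicks(clicks, cell=50):
    """Aggregate raw (x, y) click positions into a coarse heat-map grid.

    clicks: iterable of (x, y) pixel coordinates from your tracking tool.
    cell: grid cell size in pixels; larger cells give a coarser map.
    """
    return Counter((x // cell, y // cell) for x, y in clicks)

# Hypothetical pixel coordinates exported from a click-tracking tool.
clicks = [(120, 480), (130, 470), (125, 475), (900, 60)]
for (col, row), count in bucket_clicks(clicks).most_common():
    print(f"cell ({col}, {row}): {count} clicks")

A dense cell over your CTA is what you designed for; a dense cell over a non-clickable element is a roadblock worth fixing.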

Testing with screencasting

One method of easily spotting obvious navigation roadblocks is hiring paid testers to perform onsite actions and tasks. Two paid services that deliver are UserTesting (www.usertesting.com) and TechSmith’s Morae (www.techsmith.com/morae). Each of these records the activities of actual users interacting with the subject of your test. For instance, by providing a working prototype of your soon-to-be-launched website, you pay for users to perform specific activities, like shopping and checking out or navigating through a workflow.

UserTesting uses professional testers who screencast and narrate their onsite activity. So you see the tester’s mouse move toward a CTA button, for instance, while the tester says, “Now I’m going to click here to see where this takes me.” You name your number of testers, and you receive recorded sessions quickly, sometimes in under an hour. Morae is capable of recording test subjects’ facial expressions, eye movements, and mouse-click behavior. If accessing historical report data is important, you can archive results digitally. Either way, you’ll know where your UI roadblocks are very quickly.

Regardless of whether you choose one of the two research tools above or an alternative, your process is the same:

  1. Define your research scope.
  2. Create the scenario.
  3. Define the user task.
  4. Observe and notate key positive or negative interactions.
  5. Formulate a summary.
  6. Make recommended changes.
  7. Retest to see if identified roadblocks have been eliminated from the customer purchase path.

In my consulting experience, I’ve learned that showing clients, developers, and IT staff recordings of users interacting with our collective digital marketing efforts proves far more powerful than telling them what the problems are. When you record users stumbling around a web page, not knowing where to go next, it’s usually obvious to everyone which components need to be fixed. It can be a humbling experience, but the reason I’m an inbound marketer is to build up sales, not my ego.

Reporting results

Reporting and documenting your test results, regardless of the testing performed, is good inbound marketing practice. Knowing which past testing factors succeeded and which failed builds a shareable knowledge base from which you can draw to make successively better decisions in the future. Documenting the intent, timing, observations, and action points helps you and your inbound marketing team leverage your collective knowledge to maximize conversions.

remember Testing is an ongoing process. Creating a testing culture within your organization continually raises your success bar. I like the Usabilla graphic in Figure 23-3 because it demonstrates the testing process as continuous. There is no beginning and no end.

Figure 23-3: Usabilla testing process.

Commit to a culture of measuring, testing, and reporting. Not every marketing initiative will work. It’s valuable to know about those failures so your organization doesn’t repeat past mistakes. Likewise, documenting your testing history provides a resource from which you can replicate successes, resulting in an overall lift in CTR and sales while lowering your cost-per-lead and cost-per-acquisition.

Things You Can Do Now

  • Test a CTA button with an A/B test.
  • Test a landing page with an A/B test.
  • Perform a screencasted UX navigation test.