Chapter 23
In This Chapter
Understanding the difference between attitudinal and behavioral testing
Knowing what to test
Learning about A/B split testing
Learning about screencasting
Earlier in this book I mentioned that if something’s worth doing, it’s worth measuring. Now I submit that if something’s worth measuring, it’s worth testing. By purposefully testing your key success metrics and acting on your discoveries, you can intelligently affect your outcomes. Improving any single inbound marketing factor lifts the overall success of your inbound marketing initiatives, and testing the factors that affect your key metrics reveals incremental improvements you can discover and implement at every point in the Customer Conversion Chain.
This chapter is not a thorough examination of extremely sophisticated or technical testing methods. I’ll hire a mathematician for that. As an inbound marketer, you need simpler behavioral tests whose findings can have a more immediate impact on your marketing results. My main focus is on split testing because it’s the most manageable for the typical inbound marketer. This chapter covers sound principles for basic testing techniques. The intent is to teach a simple, quick means of discovering which parts of your digital marketing work best, and of leveraging that knowledge to drive positive business outcomes.
When performing user testing, it helps to know the difference between attitudinal testing and behavioral testing. Attitudinal testing measures what users say they do; behavioral testing measures what they actually do. Because inbound marketing’s goal is to increase attraction and conversion, measuring behavior is your primary objective. To quickly increase your conversions, remember this: ABABT, or Always Be A/B Testing. Focusing on A/B testing means measuring actual behaviors, assessing the impact of format changes, and applying actionable positive influence on the end result. Attitudinal testing is still valuable for understanding the psychology of communicating with your customer personas, and if you have access to that research, by all means use it! For purposes of testing the Customer Conversion Chain, though, I want to limit the conversation to testing that you, as an inbound marketer, can control, measure, and implement on your own. A/B split testing, and possibly multivariate testing, certainly fit into this category.
If you choose to test beyond A/B testing, use the chart that digital marketer Christian Rohrer designed because it’s a good starting point. (See Figure 23-1.) Understanding which questions you want answered molds the type of testing you should perform. Use this list to help you decide on a test type:
Christian created another, more detailed chart that I feature later in this chapter to help you choose the actual style of research testing.
Because of the complexity of inbound marketing, the things you could test are infinite. So start off simply, testing those points of attraction and conversion that contribute the greatest to your successful outcome. That way, any positive changes you learn by testing may be applied with more impact.
If you’re serious about inbound marketing, invest in marketing automation software or conversion optimization software. And, remember: ABABT!
The data you garner is only as meaningful as the validity of the test you’re performing. Maybe you’re not so great at manually formulating statistical degrees of confidence. Neither am I, actually. Check out Figure 23-2 and meet your new A/B testing best friend, the Sample Size Calculator from Optimizely (www.optimizely.com/resources/sample-size-calculator/).
Simply fill in your current conversion rate (it defaults to 3 percent). Next, input your minimum detectable effect: the amount of “lift,” or increase over the original control, that you want your test to be able to detect. The lower you make this number, the more respondents you’ll need; the test will take longer, but it will provide finer results. The statistical significance box autofills with the recommended 95 percent, and it’s recommended for a reason. Unless you possess the serious chops of an experienced marketing researcher, leave this value alone so you don’t get into trouble. The tool helps set your testing parameters, but in the end you, not the tool, perform the test.
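If you’re curious what a tool like this is doing behind the scenes, it’s a standard two-proportion power calculation. Here’s a simplified Python sketch of that textbook formula (my own illustration, not Optimizely’s actual code; the 80 percent power default is a common convention I’ve assumed):

```python
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, min_detectable_effect,
                            significance=0.95, power=0.80):
    """Rough per-variant sample size for a two-proportion A/B test.

    baseline_rate: current conversion rate (e.g., 0.03 for 3 percent)
    min_detectable_effect: relative lift to detect (e.g., 0.20 for a 20 percent lift)
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_effect)
    z_alpha = NormalDist().inv_cdf(1 - (1 - significance) / 2)  # two-sided
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2) * variance / (p2 - p1) ** 2
    return int(n) + 1  # round up to whole visitors

# A 3 percent baseline with a 20 percent minimum detectable effect
# needs thousands of visitors per variant; halving the detectable
# effect roughly quadruples the requirement.
n_20 = sample_size_per_variant(0.03, 0.20)
n_10 = sample_size_per_variant(0.03, 0.10)
```

Run the numbers yourself and you’ll see why the chapter warns that a lower minimum detectable effect means a longer test: the required sample grows steeply as the effect you want to detect shrinks.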
Keep testing through two business cycles, as defined by your industry’s purchase cycle. This may be an extremely long time for business-to-business companies. To be safe, run your test for at least 30 days to account for unforeseen time anomalies. To create higher confidence in your results, make sure you’re generating at least 250 testable actions per version. Lastly, make sure your number of conversions itself is big enough to yield meaningful data and project future performance accurately. If you had tons of traffic to each of two landing pages you were testing, and one person converted on the first page and two on the second, you’ve learned nothing that can be applied. Assuming the second page will convert 200 percent better than the first is a fallacy that will get you in trouble, so be careful and don’t get into testing that’s over your head.
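That “one conversion versus two” trap is easy to demonstrate with a standard two-proportion z-test. Here’s a minimal Python sketch (the 5,000-visitor traffic figure is a made-up example):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under "no difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# "Tons of traffic" to two pages, one conversion vs. two conversions:
p = two_proportion_p_value(1, 5000, 2, 5000)
# p comes out far above 0.05, so the apparent "200 percent lift" is noise.
```

In other words, the test can’t distinguish that result from pure chance, which is exactly why the conversion counts themselves need to be large enough before you act on a winner.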
Today, inbound marketing offers more control over your advertising message than ever before. With access to complex data points, it’s possible to know more about who is visiting your site, how they are responding to your message, what drives them to action, and what is and what is not working.
The key is refining your approach, and one effective method is A/B testing, also known as split testing. With the right software, A/B testing costs next to nothing to implement. Split testing works by examining a “control” (A) against a version with one single changed variable, known as your “treatment” (B), to identify which brings a better response rate. A/B testing individual components of your marketing approach is a great way to understand what is working. It allows you to refine your message, drive more traffic to your site, and generate more leads from the traffic you’re already getting. Ultimately, your inbound marketing drives more revenue opportunities.
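Under the hood, split-testing tools typically assign each visitor to A or B deterministically, so a returning visitor always sees the same variant and your measurements stay clean. Here’s a toy sketch of that bucketing idea (the experiment name, visitor IDs, and 50/50 split are illustrative assumptions, not any particular vendor’s implementation):

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "landing-page-test") -> str:
    """Deterministically bucket a visitor into variant A (control) or B (treatment)."""
    # Hash the experiment name plus visitor ID so the same visitor
    # always lands in the same bucket for a given experiment.
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number from 0 to 99
    return "A" if bucket < 50 else "B"  # 50/50 split

# The same visitor always gets the same variant:
v1 = assign_variant("visitor-123")
v2 = assign_variant("visitor-123")
```

Keying the hash on the experiment name means the same visitor can land in different buckets for different tests, which keeps one experiment from contaminating another.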
If you decide to try testing yourself, here are several guidelines and best practices to consider before getting started.
User testing can tell you a lot about your website and inbound marketing campaigns in a very short time. Have you ever written content, any content, and had it proofread by a couple of people, only to discover a typo that somehow becomes glaringly obvious after your content is published? I have, and it’s embarrassing and frustrating. The same can be said about UI and UX. What seems like a logical onsite navigation path and obvious CTA choices to you and your developer when you initially chart it out on paper may not work as you intended. Even after you’ve reviewed your website on an interactive demo site, test its usability before you launch it live to the public. Why? UX navigation mistakes aren’t as obvious as a typo in your content. Visitor activity is invisible, and people who bounce aren’t going to tell you why they left. I perform user testing to spot unintentional design dead-ends, roadblocks, and friction on the customer conversion path, and I urge you to do the same. Performing user testing prelaunch helps maximize consumer action, which is exactly what you want out of your inbound marketing.
Similar to your A/B split testing, it makes sense to choose only one thing to test when performing user testing. (Are you sensing a testing trend here?) Testing a workflow, a conversion campaign, or a website navigation sequence makes sense any time you’re creating a new version of one. Wouldn’t you rather learn about your conversion roadblocks before you launch new inbound initiatives so you’re not fixing things on the fly? Trust me, the answer is “Yes!”
Tracking user clicks through click-recording software is an effective method of testing where your users are going on your website pages. By collecting and aggregating this information and then displaying it as a heat map, you get a clear picture of whether people are clicking where you intended when you designed any particular page. This is particularly effective for observing and reporting which CTAs are being clicked and which are not. There are too many options for me to cover in this book; however, here are four paid click-tracking options, listed alphabetically:
Crazy Egg (www.crazyegg.com)
Inspectlet (www.inspectlet.com)
Lucky Orange (www.luckyorange.com)
Mouseflow (http://mouseflow.com)
Researching with click-tracking and analyzing the resulting heat-map reports instigates better website page UI design. That’s good for your visiting customers, which means it’s good for you.
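At its core, a heat map is just aggregation: raw (x, y) click coordinates binned into a grid and counted, then rendered as color. Here’s a toy Python sketch of that binning step (the 50-pixel cell size and sample clicks are made up for illustration, not how any of the tools above actually work internally):

```python
from collections import Counter

def heatmap_bins(clicks, cell=50):
    """Count clicks per grid cell; each cell covers `cell` x `cell` pixels."""
    return Counter((x // cell, y // cell) for x, y in clicks)

# Four clicks clustered near a CTA at roughly (120, 80), plus one stray click:
clicks = [(118, 82), (121, 79), (125, 85), (119, 80), (400, 300)]
bins = heatmap_bins(clicks)
hottest_cell, hits = bins.most_common(1)[0]
# hottest_cell is (2, 1): the 50-pixel cell containing the CTA, with 4 hits
```

The hottest cells become the reddest areas of the rendered heat map, which is how a page with thousands of clicks compresses into a single picture of where attention actually lands.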
One method of easily spotting obvious navigation roadblocks is hiring paid testers to perform onsite actions and tasks. Two paid services that deliver are UserTesting (www.usertesting.com) and TechSmith’s Morae (www.techsmith.com/morae). Each of these records the activities of actual users interacting with the subject of your test. For instance, by providing a working prototype of your soon-to-be-launched website, you pay for users to perform specific activities, like shopping and checking out, or navigating through a workflow.
UserTesting uses professional testers who screencast and narrate their onsite activity. So you see the tester’s mouse move toward a CTA button, for instance, while the tester says, “Now I’m going to click here to see where this takes me.” You name your number of testers, and you receive recorded sessions quickly, sometimes in under an hour. Morae is capable of recording test subjects’ facial expressions, eye movements, and mouse-click behavior. If accessing historical report data is important, you can archive results digitally. Either way, you’ll know where your UI roadblocks are very quickly.
Regardless of whether you choose one of the two research tools above or an alternative, your process is the same:
In my consulting experience, I’ve learned that showing clients, developers, and IT teams recordings of users interacting with our collective digital marketing efforts proves far more powerful than telling them what the problems are. When you record users stumbling around a web page, not knowing where to go next, it’s usually obvious to all which components need to be fixed. It can be a humbling experience, but the reason I’m an inbound marketer is to build up sales, not my ego.
Reporting and documenting your test results, regardless of the testing performed, is good inbound marketing practice. Knowing which past testing factors succeeded and which failed builds a shareable knowledge base that you can draw upon to make successively better decisions in the future. Documenting the intent, timing, observations, and action points helps you and your inbound marketing team leverage your collective knowledge to maximize conversions.
Commit to a culture of measuring, testing, and reporting. Not every marketing initiative will work. It’s valuable to know about those failures so your organization doesn’t repeat past mistakes. Likewise, documenting your testing history provides a resource from which you can replicate successes, resulting in an overall lift in CTR and sales while lowering your cost-per-lead and cost-per-acquisition.