Is Your Website Traffic Ruining Your A/B Tests?
A/B testing can transform your business into a polished science. Remember the scientific method from 8th-grade science? Well, a good A/B test meets all of those criteria and—if you set your tests up correctly—your results will tell you exactly what works for your customers…and what doesn’t. Assuming, of course, that you’re testing your customers.
Your old science teacher would probably roll over in his grave if he heard that you did an awesome experiment on the wrong sample. That’s like asking your cat which type of dog food you should buy. But that could never happen to your A/B test, right? I mean, you only test people who visit your site, so of course you’re only testing your customers…
All Traffic is Not Created Equal
Unfortunately, getting traffic to your site isn’t the same thing as getting the right traffic to your site. The right traffic is qualified to buy what you’re selling and interested in converting. The wrong traffic…isn’t.
For example, in October 2013, we ran a Halloween-themed blog post entitled “6 Killer PPC Branding Tactics Even Freddy Krueger Loves!”
From a traffic standpoint, the article was an instant, unqualified success. It showed up in the #1 spot on Google and received 50-200 clicks per day. In fact, it got more traffic than our homepage for over a year.
The only problem was, no one converted. As it turned out, our article wasn’t ranking for “ppc” or “ppc branding.” It was ranking for “freddy krueger tactics.”
Did we have awesome traffic? Yes. Was it the right traffic? No.
Perhaps I could have optimized my entire website to cater to serial killer fan-boys, but the simple fact would remain that they were not the customers I was looking for. And, no matter how many Krueger fans clicked on that post, they weren’t going to sign up for pay-per-click management.
Of course, with organic traffic, you don’t have much control over who sees and clicks on your content. With paid advertising, however, you pay to make sure that your ad shows up for your ideal audience. If you are strategic with your targeting, this can work great. But most companies just end up paying for the wrong traffic anyway.
For example, last year we promoted a humorous post on Facebook called “How to Spice Up Your Love Life with AdWords.”
I had promoted a lot of articles on Facebook, so I figured I had a good feel for my audience and expected this post to get a lot of clicks…and it did. Unfortunately, those extra clicks didn’t translate into extra conversions.
At first, I was confused. Normally, more clicks on my Facebook posts meant more conversions. In an effort to figure out what was going on, I dug into my analytics data. As it turned out, most of the clicks were from 55+ year old women, apparently drawn by the title’s promise to “spice up your love life.”
As a digital marketing agency, we don’t get a lot of conversions from that demographic. Clearly, 55+ year-old women were not the right traffic for my site.
So I excluded the non-converting segment of my audience. My clicks went down, but at least I was no longer paying for the wrong traffic.
You see the same sorts of problems throughout paid advertising. In fact, while auditing more than 2,000 AdWords accounts, we discovered that the average AdWords account wastes 76% of its budget on search terms that never convert! Clearly, the wrong traffic never converts. Worse still, it undermines the effectiveness of your A/B tests.
How the Wrong Traffic Ruins A/B Tests
To explain how the wrong traffic can ruin your A/B tests, let’s take a look at a hypothetical scenario. Say you work for Ferrari and you’re running an ad campaign that drives 10,000 visitors to your website per month. Since you are a smart marketer who happens to be using Optimizely, you’ve decided to split test your traffic.
The problem? 9,900 of those visitors are actually teenagers who aren’t even of driving age. Obviously, no matter how well-optimized your site is, a 16-year-old simply cannot buy a Ferrari.
Those 9,900 visitors might be super interested in Ferraris. They might love your website. But they are simply the wrong traffic. That means only 100 of your 10,000 visitors have any chance of converting.
Of those 100 visitors, 15 of the 50 who see experience A might convert, while only 10 of the 50 who see experience B convert. Improving your conversion rate from 20% to 30% is awesome, right?
However, because of those 9,900 teens, only 15/5,000 and 10/5,000 of your visitors will convert. With all of those irrelevant visitors on your site, your apparent conversion rate will improve from 0.2% to 0.3%. That’s not nearly as exciting.
Sure, you improved the performance of your page, but is a 0.1 percentage point improvement in your conversion rate really going to make a difference for your site? In Optimizely, your test will look like a failure when, in reality, one arm of your test was actually converting 50% better than the other for the right kind of traffic.
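To see the dilution numerically, here is a quick sketch (in Python, not part of the original post) that computes a standard two-proportion z-score for the hypothetical Ferrari numbers above. The function name is my own; the statistics are the textbook pooled z-test.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-score: how far apart two conversion
    rates are, measured in standard errors."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Only the 100 qualified visitors: 15/50 (30%) vs. 10/50 (20%)
z_qualified = two_proportion_z(15, 50, 10, 50)

# Same conversions buried in 10,000 visitors: 15/5,000 (0.3%) vs. 10/5,000 (0.2%)
z_diluted = two_proportion_z(15, 5000, 10, 5000)

print(f"qualified traffic: z = {z_qualified:.2f}")  # -> 1.15
print(f"diluted traffic:   z = {z_diluted:.2f}")    # -> 1.00
```

With these tiny hypothetical numbers, neither comparison clears the z ≈ 1.96 bar for 95% confidence, but notice that the 9,900 irrelevant visitors push the score further from significance even though the real conversions are identical. At realistic traffic volumes, that same dilution can keep a genuinely better variation from ever reaching statistical significance.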
So, if your marketing is putting the wrong traffic on your page, no amount of CRO will fix the problem. In fact, you might not even be able to tell if your CRO is working.
Test the Right Traffic
To ensure that your A/B tests are actually yielding meaningful results, you need to make sure that your marketing is putting the right traffic on your site. After all, if you don’t have the right traffic, A/B testing isn’t likely to do you any good. With that in mind, here are a couple of easy ways to improve the quality of your traffic:
1. Know Your Customer
If you want to avoid marketing to the wrong audience, you have to know who your ideal audience is before you start to advertise.
Talk to your target customers. Try to discover their “pain points”—the problem with their current situation that makes them look for a solution (hopefully yours). Do your research, and remember that you are researching the people you want to sell to, not those who are already sold.
In your research, you may discover multiple customer groups and subgroups, each with its own pain point. The next task is to customize your advertising to match the needs of each group.
Remember, what you think your customer needs may not be what they think they need. Your customer knows their needs better than you do. If you try to advertise to the wrong pain points, you’ll end up with the wrong traffic.
2. Be Specific
Once you know your customer, you need to match your advertising message to their pain point.
For example, if you are advertising to people with low back pain (a literal pain point), their biggest concern may be having to get surgery and miss work. In this situation, an ad that promises “Non-surgical back pain experts who work with your schedule” will probably get better quality traffic than one touting “healthy living across the lifespan” or “cutting edge cancer treatment.”
Worse still, if your ad promises “non-surgical back pain relief” and then sends your traffic to a generic hospital site, your visitors will feel betrayed and won’t stick around.
By tailoring your ads to the needs of your specific buyer personas, you will not only improve your traffic quality but also increase the likelihood that visitors convert once they reach your site.
If you want your A/B testing to do its job, you need to be testing the right traffic. The wrong traffic never converts and it can seriously undermine your testing efforts.
In addition to improving the effectiveness of your testing, improving the quality of your traffic will actually improve your conversion rate and reduce your wasted marketing spend. It takes some extra effort, but it will produce game-changing results.
What do you think of this analysis? Have you seen this sort of problem in your own A/B testing?