A/B testing, which you may also have heard referred to as split testing, is a method of website optimization in which the conversion rates of two versions of a page — version A and version B — are compared to one another using live traffic. Site visitors are bucketed into one version or the other. By tracking the way visitors interact with the page they are shown — the videos they watch, the buttons they click, or whether or not they sign up for a newsletter — you can determine which version of the page is most effective.
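One common way to implement this bucketing is to hash a stable visitor identifier, so a returning visitor always lands in the same bucket instead of being reassigned on every page load. A minimal sketch (the function and variant names here are illustrative, not a reference to any particular testing tool):

```python
import hashlib

def bucket(visitor_id: str, variants=("A", "B")) -> str:
    """Deterministically assign a visitor to one page variant.

    Hashing the visitor ID, rather than picking randomly on each
    page load, guarantees a returning visitor sees the same version
    of the page for the entire duration of the test.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

The same function extends to more than two versions simply by passing a longer `variants` tuple.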
A/B testing is the least complex method of evaluating a page design, and is useful in a variety of situations.
One of the most common ways A/B testing is utilized is to test two very different design directions against one another. For example, the current version of a company's home page might have in-text calls to action (CTAs), while the new version might eliminate most text, but include a new top bar advertising the latest product. After enough visitors have been funneled to both pages, the number of clicks on each page's version of the CTA can be compared. It's important to note that even though many design elements change in this kind of A/B test, only the performance of each design as a whole is tracked against the business goal; the impact of individual elements is not measured.
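The comparison at the end of such a test boils down to a conversion rate per version. A sketch with hypothetical numbers (a real test would also apply a statistical significance check before declaring a winner):

```python
def conversion_rate(clicks: int, visitors: int) -> float:
    """Fraction of visitors who clicked the page's call to action."""
    return clicks / visitors if visitors else 0.0

# Hypothetical results once enough visitors have seen both pages.
rate_a = conversion_rate(clicks=120, visitors=2400)  # version A: 5.0%
rate_b = conversion_rate(clicks=168, visitors=2400)  # version B: 7.0%
winner = "B" if rate_b > rate_a else "A"
```

Note that this only tells you which whole design won, not which of its many changed elements drove the difference.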
A/B testing is also useful as an optimization option for pages where only one element is up for debate. For example, a pet store running an A/B test on their site might find that 85% more users are willing to sign up for a newsletter held up by a cartoon mouse than they are for one emerging from the coils of a boa constrictor. When A/B testing is used in this way, a third or even fourth version of the page is often included in the test, which is sometimes called an A/B/C/D test. This, of course, means that traffic to the site must be split into thirds or fourths, with a smaller percentage of visitors seeing each version.
Simple in concept and design, A/B testing is a powerful and widely used testing method.
Keeping the number of tracked variables small means these tests can deliver reliable data very quickly, as they do not require a large amount of traffic to run. This is especially helpful if your site has a small number of daily visitors. Splitting traffic into more than three or four segments would make it hard to finish a test. In fact, A/B testing is so speedy and easy to interpret that some large sites use it as their primary testing method, running cycles of tests one after another rather than more complex multivariate tests.
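The relationship between daily traffic, the number of versions, and test duration can be made concrete with some back-of-the-envelope arithmetic. The figures below are illustrative only, not statistical guidance on how large a sample you actually need:

```python
def days_to_complete(visitors_per_version: int,
                     num_versions: int,
                     daily_visitors: int) -> float:
    """Rough estimate of how long a test must run to gather a target
    number of visitors for every version, assuming an even traffic split."""
    daily_per_version = daily_visitors / num_versions
    return visitors_per_version / daily_per_version

# A small site with 400 daily visitors, wanting 1,000 visitors per version:
days_to_complete(1000, 2, 400)  # A/B test: 5 days
days_to_complete(1000, 4, 400)  # A/B/C/D test: 10 days
```

Doubling the number of versions doubles the runtime, which is why splitting a low-traffic site's visitors more than three or four ways quickly becomes impractical.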
A/B testing is also a good way to introduce the concept of optimization by testing to a skeptical team, as it can quickly demonstrate the quantifiable impact of a simple design change.
A/B testing is a versatile tool, and when paired with smart experiment design and a commitment to iterative cycles of testing and redesign, it can help you make huge improvements to your site. However, it is important to remember that the limitations of this kind of test are summed up in the name. A/B testing is best used to measure the impact of two to four versions of a page on visitors' interactions with it. Tests with more versions take longer to run, and A/B testing will not reveal any information about how variables on a single page interact with one another.
If you need information about how many different elements interact with one another, multivariate testing is the optimal approach.
Multivariate testing uses the same core mechanism as A/B testing, but compares a higher number of variables, and reveals more information about how these variables interact with one another. As in an A/B test, traffic to a page is split between different versions of the design. The purpose of a multivariate test, then, is to measure how effectively each combination of design elements achieves the ultimate goal.
Once a site has received enough traffic to run the test, the data from each variation is compared to find not only the most successful design, but also to potentially reveal which elements have the greatest positive or negative impact on a visitor's interaction.
The most commonly cited example of multivariate testing is a page on which several elements are up for debate — for example, a page that includes a sign-up form, some kind of catchy header text, and a footer. To run a multivariate test on this page, rather than creating a radically different design as in A/B testing, you might create two different lengths of sign-up forms, three different headlines, and two footers. Next, you would funnel visitors to all possible combinations of these elements. This is also known as full factorial testing, and is one of the reasons why multivariate testing is often recommended only for sites that have a substantial amount of daily traffic — the more variations that need to be tested, the longer it takes to obtain meaningful data from the test.
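The combinatorics of full factorial testing are easy to see in code. Using the hypothetical elements from the example above, the number of page versions is the product of the variation counts:

```python
from itertools import product

# Hypothetical element variations from the example above.
forms = ["short form", "long form"]
headlines = ["headline 1", "headline 2", "headline 3"]
footers = ["footer 1", "footer 2"]

# Every page version is one combination of (form, headline, footer).
combinations = list(product(forms, headlines, footers))
len(combinations)  # 2 * 3 * 2 = 12 versions that must each receive traffic
```

Adding even one more variation to any element multiplies the total again, which is why traffic requirements grow so quickly.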
After the test has been run, the variables on each page variation are compared to each other, and to their performance in the context of other versions of the test. What emerges is a clear picture of which page is best performing, and which elements are most responsible for this performance. For example, varying a page footer may be shown to have very little effect on the performance of the page, while varying the length of the sign-up form has a huge impact.
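One way to surface per-element impact from such results is to average each element value's conversion rate across every combination it appeared in. A sketch with invented numbers, chosen to mirror the example above (form length matters, the footer barely does):

```python
from collections import defaultdict

# Hypothetical per-combination conversion rates: (form, footer) -> rate.
results = {
    ("short form", "footer 1"): 0.060,
    ("short form", "footer 2"): 0.058,
    ("long form",  "footer 1"): 0.031,
    ("long form",  "footer 2"): 0.029,
}

def average_by_element(results, position):
    """Average conversion rate for each value of one element,
    across all the combinations that value appeared in."""
    grouped = defaultdict(list)
    for combo, rate in results.items():
        grouped[combo[position]].append(rate)
    return {value: sum(rates) / len(rates) for value, rates in grouped.items()}

average_by_element(results, 0)  # form length: a large spread (~5.9% vs ~3.0%)
average_by_element(results, 1)  # footer: a tiny spread, little impact
```

A large spread between a given element's averages suggests that element drives performance; a tiny spread suggests it is safe to deprioritize in a redesign.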
Multivariate testing is a powerful way to help you target redesign efforts to the elements of your page where they will have the most impact. This is especially useful when designing landing page campaigns, for example, as the data about the impact of a certain element's design can be applied to future campaigns, even if the context of the element has changed.
The single biggest limitation of multivariate testing is the amount of traffic needed to complete the test. Since multivariate experiments are fully factorial, changing too many elements at once quickly multiplies into a very large number of possible combinations that must each be tested. Even a site with fairly high traffic might have trouble completing a test with more than 25 combinations in a feasible amount of time.
When using multivariate tests, it's also important to consider how they will fit into your cycle of testing and redesign as a whole. Even when you are armed with information about the impact of a particular element, you may want to do additional A/B testing cycles to explore other radically different ideas. Also, sometimes it may not be worth the extra time necessary to run a full multivariate test when several well-designed A/B tests will do the job well.
Don't let the differences between A/B testing and multivariate testing make you think of them as opposites. Instead, think of them as two powerful optimization methods that complement one another. Pick one or the other, or use them both together to help you get the most out of your site.