A/A testing is the practice of using your A/B testing tool to test two identical versions of a page against each other. Typically, this is done to check that the tool being used to run the experiment is statistically fair. If the test is implemented correctly, the tool should report no statistically significant difference in conversions between the control and the variation.
Why would you want to run a test where the variation and original are identical?
In some cases, you might use an A/A test to monitor conversions on a page and establish its baseline conversion rate before beginning an A/B or multivariate test.
In most other cases, the A/A test is a way of double-checking the effectiveness and accuracy of the A/B testing software. Look at whether the software reports a statistically significant (>95% statistical significance) difference between the control and the variation. If it does, that’s a problem, and you’ll want to check that the software is correctly implemented on your website or mobile app.
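The check itself is simple arithmetic. Below is a minimal sketch of that kind of significance check using a classical two-proportion z-test in Python; the conversion counts are hypothetical, and Optimizely's own Stats Engine uses sequential statistics rather than this fixed-horizon test, so this only illustrates the underlying idea.

```python
# A minimal sketch of a significance check on A/A data, assuming a classical
# two-proportion z-test (not Optimizely's Stats Engine).
from math import sqrt
from statistics import NormalDist

def two_sided_p_value(conv_a, visitors_a, conv_b, visitors_b):
    """Two-sided p-value for the difference between two conversion rates."""
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_a - rate_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical A/A counts: both arms served the identical page.
p = two_sided_p_value(conv_a=512, visitors_a=10_000, conv_b=498, visitors_b=10_000)
print(f"p-value: {p:.3f}")  # expected to be well above 0.05
print("Investigate the setup" if p < 0.05 else "No significant difference, as expected")
```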
When running an A/A test, it’s important to keep in mind that finding a difference in conversion rate between identical test and control pages is always a possibility. This isn’t necessarily a poor reflection on the A/B testing platform, as there is always an element of randomness when it comes to testing.
When running any A/B test, keep in mind that the statistical significance of your results is a probability, not a certainty. Even a statistical significance level of 95% leaves a 1 in 20 chance that the results you’re seeing are due to random chance. In most cases, your A/A test should report that the difference in conversions between the control and variation is statistically inconclusive, because the underlying truth is that there is no difference to find.
When running an A/A test with Optimizely, in most cases you can expect the results to be inconclusive: the conversion difference between variations will not reach statistical significance. In fact, the share of A/A tests showing inconclusive results should be at least as high as the significance threshold set in your Project Settings (90% by default).
In some cases, however, you might see in your results that one variation is outperforming another, or even that a winner is declared for one of your goals. Such a conclusive result occurs purely by chance and should happen in only 10% of cases if you have set your significance threshold to 90%. If your significance threshold is higher (say, 95%), your chances of encountering a conclusive A/A test are even lower (5%).
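That expected false-positive rate is easy to verify with a quick simulation. The sketch below (hypothetical numbers, using a classical fixed-horizon z-test rather than Optimizely's Stats Engine) repeatedly compares two identical "variations" and counts how often chance alone produces a significant result.

```python
# A rough simulation of repeated A/A tests, assuming a classical fixed-horizon
# two-proportion z-test (not Optimizely's sequential Stats Engine). With a
# significance threshold of 95% (alpha = 0.05), roughly 5% of identical-vs-
# identical comparisons should still come out "significant" by chance alone.
import random
from math import sqrt
from statistics import NormalDist

def aa_false_positive_rate(n_tests=1_000, visitors=5_000, true_rate=0.05, alpha=0.05):
    false_positives = 0
    for _ in range(n_tests):
        # Both "variations" draw conversions from the identical true rate.
        conv_a = sum(random.random() < true_rate for _ in range(visitors))
        conv_b = sum(random.random() < true_rate for _ in range(visitors))
        pooled = (conv_a + conv_b) / (2 * visitors)
        se = sqrt(pooled * (1 - pooled) * (2 / visitors))
        z = ((conv_a - conv_b) / visitors) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        false_positives += p_value < alpha
    return false_positives / n_tests

print(f"A/A tests declared 'significant' by chance: {aa_false_positive_rate():.1%}")
# Typically prints a value near 5%; with alpha = 0.10 it drifts toward 10%.
```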
For more details on Optimizely’s statistical methods and Stats Engine, take a look at How to Run and Interpret an A/A Test.