Learn More: Analyze the Metrics

Below the summary, we provide deeper insight into each metric added to your experiment, with the primary metric always on top.

Above is a sample experiment with a declared winner and a loser.

Each metric is visualized with statistics for each variation in the experiment, including a graph. Keep in mind that results display differently depending on how the experiment was initially set up and which metrics were defined and added for tracking. Experiment designers can choose whether to track unique or total conversions on each metric and whether to enable Stats Accelerator. Depending on these choices, Optimizely will display results differently, as noted below.

Unique Conversions/Total Conversions & Visitors:

  • Unique conversions show deduplicated conversions, so a single visitor who triggers the same event multiple times is counted just once. Unique conversions help you control for outliers and are well suited to tracking conversions such as clicks on the “add to cart” button.

  • Total conversions show a simple total of conversions for the event. Total conversions are useful for tracking repeat behavior, such as social sharing or repeat article views on a media site that wants to drive more page views.

  • For example, if a visitor came back to your site and clicked on the Call to Action again, Optimizely would track both clicks under total conversions, but only the first one under unique conversions (see the counting sketch after this list).

  • Number of visitors is noted for each variation. For experiments with Stats Accelerator enabled, the visitor counts reflect the distribution decisions made by Stats Accelerator, which adjusts the percentage of visitors who see each variation. For experiments that are still running, Stats Engine calculates an estimate of the number of visitors/events remaining to reach statistical significance. This estimate is based on the current, observed baseline and variation conversion rates. If those rates change (meaning visitors' actions have changed), the estimate adjusts automatically. What does this mean? In statistical terms, the experiment is currently underpowered: Optimizely needs to gather more evidence to determine whether the change you see reflects a true difference in visitor behavior or just chance.
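
The difference between unique and total conversions comes down to de-duplicating events by visitor. Here is a minimal sketch of that counting logic in Python; the event records and field names are made up for illustration and are not Optimizely's data model:

```python
# Illustrative only: a simplified model of counting unique vs. total conversions.
# The event records and field names are hypothetical, not Optimizely's schema.
events = [
    {"visitor_id": "v1", "event": "add_to_cart"},
    {"visitor_id": "v1", "event": "add_to_cart"},  # same visitor converts again
    {"visitor_id": "v2", "event": "add_to_cart"},
]

# Total conversions: every triggered event counts.
total_conversions = sum(1 for e in events if e["event"] == "add_to_cart")

# Unique conversions: each visitor counts at most once, however often they convert.
unique_conversions = len({e["visitor_id"] for e in events if e["event"] == "add_to_cart"})

print(total_conversions)   # 3
print(unique_conversions)  # 2
```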

Conversion Rate:

  • Let’s start with how the conversion rate is calculated. I know, we said no formula, but this is a straightforward one. The conversion rate is the number of conversions divided by the total number of visitors.

For example: if an ecommerce site received 200 visitors in a month and made 50 sales, the conversion rate would be 50 divided by 200, or 25% (see the sketch after this list).

  • Unique conversions show the conversion rate: the percentage of unique visitors in the variation who triggered the event.
  • Total conversions show Conversions per Visitor: the average number of conversions per visitor in the variation.
  • Stats Accelerator experiments use a different calculation to measure the difference in conversion rates between variations: weighted improvement. Weighted improvement represents an estimate of the difference between conversion rates that is derived from inspecting the difference between conversion rates in each time interval individually. It is more accurate than a simple average since it corrects for the changing traffic proportion between variations.
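
To make these two measures concrete, here is a small sketch that applies both formulas to assumed, illustrative numbers (the counts are not from a real experiment):

```python
# Illustrative numbers only, not real experiment data.
visitors = 200
unique_conversions = 50   # de-duplicated: at most one per converting visitor
total_conversions = 65    # includes repeat conversions by the same visitors

conversion_rate = unique_conversions / visitors          # 50 / 200 = 0.25
conversions_per_visitor = total_conversions / visitors   # 65 / 200 = 0.325

print(f"Conversion rate: {conversion_rate:.0%}")                  # 25%
print(f"Conversions per visitor: {conversions_per_visitor:.3f}")  # 0.325
```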

Improvement:

  • For most experiments, Optimizely displays the relative improvement in conversion rate for the variation over the baseline as a percentage.

For example, if the baseline conversion rate is 5% and the variation conversion rate is 10%, the improvement for that variation is 100% (both ways of expressing improvement are sketched after this list).

  • Stats Accelerator experiments and campaigns use absolute improvement instead of relative improvement in results to avoid statistical bias and, when the baseline conversion rate is very small, to reduce time to significance.
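
As a quick illustration of the two ways of expressing improvement, the sketch below computes both from the example rates above (5% baseline, 10% variation); the numbers are for demonstration only:

```python
# Illustrative conversion rates matching the example above.
baseline_rate = 0.05   # 5%
variation_rate = 0.10  # 10%

# Relative improvement, as displayed for most experiments.
relative_improvement = (variation_rate - baseline_rate) / baseline_rate  # 1.0, i.e. 100%

# Absolute improvement, as used by Stats Accelerator experiments and campaigns.
absolute_improvement = variation_rate - baseline_rate  # 0.05, i.e. 5 percentage points

print(f"Relative improvement: {relative_improvement:.0%}")  # 100%
print(f"Absolute improvement: {absolute_improvement:.0%}")  # 5%
```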

Confidence interval:

  • The confidence interval tells you the uncertainty around improvement and is expressed as a range of values. The true improvement for a particular variation almost certainly lies in this interval.

  • This interval starts out wide; then, as Stats Engine collects more data, it narrows to show that certainty is increasing. Once a variation reaches statistical significance, the confidence interval lies entirely above or below 0: winning variations have a confidence interval wholly above zero, losing variations have one wholly below zero, and inconclusive results have a confidence interval that still encompasses zero.

  • The confidence interval gives you a way to decide when to stop an experiment. If the result is still inconclusive but the confidence interval is tightly concentrated around 0, then you know the true lift is probably very small and stopping the experiment is safe. On the other hand, if the confidence interval is still wide, stopping the experiment could leave a large lift undiscovered (a simple sketch of this interpretation follows below). We'll take a closer look at analyzing confidence intervals in the Taking Action on Results courses.

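A minimal sketch of that reading of the confidence interval, with a hypothetical helper function and made-up interval bounds, purely to illustrate the rule described above:

```python
def interpret_confidence_interval(lower: float, upper: float) -> str:
    """Classify a variation from its improvement confidence interval (illustrative rule only)."""
    if lower > 0:
        return "winner"        # interval lies entirely above zero
    if upper < 0:
        return "loser"         # interval lies entirely below zero
    return "inconclusive"      # interval still encompasses zero

print(interpret_confidence_interval(0.02, 0.08))    # winner
print(interpret_confidence_interval(-0.07, -0.01))  # loser
print(interpret_confidence_interval(-0.01, 0.01))   # inconclusive, and tight around 0: likely safe to stop
```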

Above is a sample experiment with inconclusive results. Optimizely tries to help you with an estimate of the number of visitors/events needed to reach statistical significance. Due to fluctuations in visitor behavior, this estimate can be +/- 50%.

Statistical significance:

  • Optimizely shows you the statistical likelihood that the improvement is due to changes you made on the page, not chance.
  • Until Stats Engine has enough data to declare statistical significance, the Results page will note an estimate of the number of visitors needed to reach statistical significance, based on the current conversion rates (a rough approximation of such an estimate is sketched after this list).
  • Stat sig is a measure of how much evidence there is of a real difference, but we can never be 100% certain! Hypothesis testing is not perfect, and we sometimes see A/A tests reported as significant (but we guarantee that it happens no more than 10% of the time).
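
For intuition about where a "visitors needed" estimate can come from, here is a rough sketch using a classical fixed-horizon sample-size formula for comparing two proportions. This is not how Optimizely's Stats Engine works (it uses sequential testing), and all numbers below are assumptions; it only illustrates how observed baseline and variation rates translate into a required number of visitors:

```python
from math import ceil, sqrt
from statistics import NormalDist

def visitors_needed(p_baseline: float, p_variation: float,
                    significance: float = 0.90, power: float = 0.80) -> int:
    """Classical per-variation sample size for a two-proportion z-test (illustrative only)."""
    z_alpha = NormalDist().inv_cdf(1 - (1 - significance) / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_baseline + p_variation) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_baseline * (1 - p_baseline)
                                 + p_variation * (1 - p_variation))) ** 2
    return ceil(numerator / (p_variation - p_baseline) ** 2)

# Example: an observed 5% baseline rate vs. a 6% variation rate.
print(visitors_needed(0.05, 0.06))  # roughly 6,400 visitors per variation
```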

Overall Revenue:
When your experiment is set up to track an increase in total revenue per visitor, Optimizely sums revenue across all events, noting Total Revenue and Visitors along with Improvement, Confidence Interval, and Statistical Significance (a brief sketch follows the list below).

  • Total Revenue - The total revenue generated from visitors who were exposed to this variation.
  • Visitors - The number of unique visitors exposed to this variation.
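
A brief sketch of how these quantities relate; the revenue figures and variation names are made up for illustration, and Optimizely computes all of this for you on the Results page:

```python
# Hypothetical per-variation totals, for illustration only.
results = {
    "original":  {"total_revenue": 41_000.00, "visitors": 2_050},
    "variation": {"total_revenue": 46_000.00, "visitors": 2_000},
}

revenue_per_visitor = {
    name: data["total_revenue"] / data["visitors"] for name, data in results.items()
}

baseline = revenue_per_visitor["original"]    # $20.00 per visitor
treatment = revenue_per_visitor["variation"]  # $23.00 per visitor

improvement = (treatment - baseline) / baseline
print(f"Improvement in revenue per visitor: {improvement:.0%}")  # 15%
```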

When looking at revenue metrics, it’s important to be aware of the effects that outliers can have on your results. Outliers can appear in your Optimizely results, often as a result of unusual or unexpected behavior by a customer. For example, suppose you are running an experiment that aims to improve average order value for your e-commerce site. Your visitors usually submit orders with an average total value of $200. Now imagine a small number of visitors submit orders worth 100 or even 1,000 times that average. If these extreme orders are included in the result calculations, they could introduce bias into your A/B comparison and lead you to draw the wrong conclusions from your experiment.
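
To see how much a handful of extreme orders can distort an average, consider the small sketch below. The order values and the simple percentile cap are purely illustrative assumptions; Optimizely's actual outlier filtering works differently and is described in the article linked below.

```python
# Hypothetical order values: typical orders around $200, plus one extreme outlier.
typical_orders = [180.0, 195.0, 210.0, 205.0, 190.0, 220.0, 200.0, 215.0, 185.0]
orders_with_outlier = typical_orders + [200_000.0]  # one order roughly 1,000x the usual value

average_without_outlier = sum(typical_orders) / len(typical_orders)         # $200.00
average_with_outlier = sum(orders_with_outlier) / len(orders_with_outlier)  # $20,180.00

print(f"Average order value without the outlier: ${average_without_outlier:,.2f}")
print(f"Average order value with the outlier:    ${average_with_outlier:,.2f}")

# One naive mitigation (illustrative only, not Optimizely's method): cap order
# values at an upper percentile before averaging.
cap = sorted(orders_with_outlier)[int(0.95 * len(orders_with_outlier)) - 1]  # rough 95th-percentile value
capped_orders = [min(order, cap) for order in orders_with_outlier]
print(f"Average after a simple 95th-percentile cap: ${sum(capped_orders) / len(capped_orders):,.2f}")
```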

For this reason, Optimizely gives you the option to exclude outliers from your experiment results. This feature is currently available for revenue metrics in A/B experiments. Contact your customer success representative to check if you are on an eligible plan.

For more details regarding Optimizely’s handling of outliers, read our Outlier filtering in Optimizely article.