
In 2013, the percentage of the top 1 million sites using a split testing platform grew by a massive 59%* (by comparison, the Dow Jones Industrial Average had a breakout year and grew by only 22%). This growth shows that companies of all sizes are leveraging the value of testing.

However, while companies of all sizes are testing, the challenges they face are highly correlated with their size – what stops a small e-commerce company from testing is very different from what stops a large one.

Today we’ll look at pitfalls encountered most often by large e-commerce companies and offer solutions based on what we’ve seen work at Optimizely. (Check back soon for the challenges faced by small and medium e-commerce companies.)

Large E-commerce – Mo’ Money, Mo’ Resources … Fewer Tests

“Giants are not what we think they are. The same qualities that appear to give them strength are often the sources of great weakness.”

― Malcolm Gladwell, David and Goliath

One would think that large companies with more resources would be testing the most. But we’ve found that getting tests up and running is difficult precisely because of their size.

Pitfall #1 – No Testing Evangelist

Giving many different teams access to a testing platform but no person dedicated to using it is a recipe for low usage. It becomes just one more thing on top of everyone’s already busy jobs.

Solution – Create a Dedicated Tester: Dedicating one person to testing full time – even if he or she is just a program manager who draws on other resources to help execute the tests – is much better than many people who are only partially dedicated. Being fully dedicated to testing means this person will need to show something for their efforts. Making sure this person is curious, resourceful, well-organized, and excited about testing is more important than any technical skillset. Give them ownership to work with other teams by allotting a rough number of hours or a percentage of time to utilize design and developer resources as needed.

Pitfall #2 – No Clear Directive

The testing evangelist is excited about the product but doesn’t know where to start.

Solution – Set a Test Number Goal: Set a goal for running a certain number of tests by a certain date. Don’t worry about details like setting up the perfect workflow or impacting the most important metrics – not yet, anyway. With a test number goal, the testing program manager will run simple tests that are easy to set up and don’t involve too many other stakeholders (which can create bottlenecks). In turn, this will get them comfortable with the platform and build momentum for even more impactful tests in the future.

Pitfall #3 – Workflow Inertia

Large companies often have processes in place that are hard to break. A retail company, for example, may have a process starting with photo shoots, moving to a digital assets team, who then works with a web production team to get content on the site. This process often spans multiple products and categories and may take months from start to finish.

Solution #1 – Start Small: If you have a test idea that could apply to every product or category – say, products on a white background instead of products on a model – it doesn’t mean you have to test it for every single category. Start by testing on one product (or just a few at most). This means the digital assets team only has to create one additional asset (say, skinny jeans on white background) and once that asset is created, their workflow can continue as normal. Once the test has been run and shown value, a discussion can be had about a more wide-scale test.

Solution #2 – Avoid Workflows Altogether: Test a constant feature that never changes – say, the size or location of an add-to-cart button on a product page. This means you only have to notify the Product team to get buy-in instead of the array of teams involved in creating product images.

Pitfall #4 – Data Discrepancies

You have your testing platform integrated with your analytics platform – your company’s system of record. Great! You run your first test, and some of the data in your testing platform, like visitor count, differs from what your analytics platform shows. All testing efforts are halted to determine why.

Solution – Define Each Platform’s Role: An analytics platform is designed to count data in a certain way, and it may count some things differently than your testing platform does. Don’t get caught up in meaningless discrepancies. Determine which metrics will be evaluated based on the analytics platform and which will be evaluated based on the testing platform. Remind everyone that the testing platform’s main purpose is to capture a directional signal of what is likely to work for your site. Here’s an example:

If your testing platform indicates there were 2,200 total purchases, with Variation B performing 20% better than Variation A, but your analytics platform indicates there were 3,100 total purchases, that doesn’t change the directional conclusion that Variation B is better than A. What’s more, the difference in conversions likely has nothing to do with either platform being “wrong” – each platform simply defines a purchase slightly differently, or the testing platform admitted visitors into the test in a different way.
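To make the point concrete, here is a minimal sketch of the arithmetic. The visitor counts and the analytics platform’s per-variation split below are hypothetical (the article only gives the totals), but they illustrate why differing absolute counts don’t flip the direction of the result:

```python
def lift(conv_a, visitors_a, conv_b, visitors_b):
    """Relative lift of Variation B over Variation A, based on conversion rates."""
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    return (rate_b - rate_a) / rate_a

# Testing platform: 2,200 total purchases, split 1,000 (A) vs 1,200 (B)
# over equal hypothetical traffic of 50,000 visitors each.
testing_lift = lift(1000, 50000, 1200, 50000)    # exactly +20%

# Analytics platform counts more purchases overall (3,100 total) because it
# defines a purchase differently; assume a hypothetical 1,450 (A) vs 1,650 (B) split.
analytics_lift = lift(1450, 50000, 1650, 50000)  # roughly +13.8%

# The magnitudes differ, but both platforms agree on the direction: B beats A.
print(testing_lift > 0 and analytics_lift > 0)
```

The exact lift figures disagree, but the decision – ship Variation B – is the same either way, which is the directional signal the testing platform exists to provide.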

If your company can avoid some of these common pitfalls, you’ll be able to test in a more meaningful and nimble way. If you’re still unconvinced that testing volume matters, remember that in the time it takes many large companies to run two tests, their smaller and more nimble competitors will have run twenty – and reinvented their website based on data that moves them in the right direction.

If you’re one of the businesses that adopted testing in 2013, don’t let 2014 be the year of the false start!

*Based on BuiltWith data from the top 10 A/B testing tools used across the Alexa Top 1 Million Sites