This is a guest post from Matty Wishnow, the Founder & CEO of Clearhead, a validation and optimization agency based in Austin, TX.
Blu Dot tests everything. Really. I know what you’re thinking. Is testing everything possible? Is it even practical? The answer is yes on both counts. This is the very true story of a company that, within two years, evolved from never having conducted a split test to an organization that is continuously developing, prioritizing and validating hypotheses—big and small, broad and targeted.
Having recently returned from a quarterly business review with the Blu Dot team in Minneapolis, I found myself beaming with pride. We had spent the previous morning reviewing both the e-commerce business performance and the product roadmap—past and present. For nearly two years, Blu Dot’s direct-to-consumer business—selling modern furniture to the design obsessed—has enjoyed hockey-stick-like growth that has outpaced all expectations. Simultaneously, their product roadmap is entirely oriented around experiments and validated learnings. These two facts are not unrelated.
Clearhead is a validation and optimization agency. Blu Dot is our client. Our teams set sail on this journey with Mike Wodtke, their Director of Ecommerce, and his crew almost two years ago. Today Blu Dot tests, measures and optimizes everything they do in e-commerce—everything. But it wasn’t always that way.
Should you really test everything?
At Clearhead, when we meet prospective clients or kick off a new project, the two sentiments I most frequently observe are exuberance and cynicism. The exuberance is what sticks with me most: a giddiness associated with the feeling of “we’re finally doing it—getting real about testing, personalization, and optimization.” The cynicism is an undercurrent, but it’s frequently there. And, frankly, I relate to it. For over a decade I ran my own e-commerce start-up, and then for another five years I ran e-commerce for a big and complex music company. So I know the giddiness, and I know the cynicism.
People are naturally curious as to whether continuous optimization really will drive their business forward or if it’s just one new, additional, buzzword-y thing with marginal benefits. There is a sense that it’s not really practical to test and measure everything. We often hear that there are projects in the works that have already gotten approval and are too far along. Or there are small changes that would be too costly to test. Or there are the executive mandates that crop up and are wholly allergic to data.
While I have argued both sides of this debate, countering each practical objection with the benefits and costs of the alternative, I am no Pollyanna. Secretly, I have sympathized with parts of the argument that not everything can be tested and measured—that it actually is not practical.
Articulate the hypothesis? Certainly. Measure the impact after the fact? Of course. But you can’t always test changes against a control. There may be a risk and an opportunity cost to not testing and validating, but there are real costs in time and money associated with testing as well.
This sober pragmatism hasn’t stopped me from wondering “what if.” What if there were a business and executive that could test everything they changed against a control? That had a “clean lab” for optimization? That could attribute their growth in conversion rate and AOV most directly to the quality of their hypotheses? That could resist the urge to just “do things” in the interest of expediency so their team could constantly learn and optimize? What if?
Certainly some projects would get to market slower. People would grumble that other things are just obvious, clearly superior to the status quo and worth doing 100%, without reservation.
On the other hand, in a “test everything” world, the business and its leaders would know, with confidence, exactly what was moving the needle and would learn at an incredible pace. Their future bets would be more likely to succeed.
An absolute commitment to continuous testing
Two years into our optimization journey together, Blu Dot is that clean lab and Mike is that executive. They truly are testing everything that requires material investment or is based on a hypothesis important enough to make it to a product road-mapping discussion. To be transparent: there are bug fixes and defects that Blu Dot remediates without testing. They are not literally testing every code tweak.
But you name it, we’ve tested it with Blu Dot:
- Massive homepage changes
- Tiny copy changes
- Checkout flow changes
- Category level grids
- Shipping value propositions
- Smartphone targeting
- New and return customer targeting
- Email sign-up modals
- Swatch interactions and requests
- Navigation and taxonomy
- Filters and facets
- Keep going… we tested that, too.
The beauty of Blu Dot is how easily we can attribute business growth to testing and optimization. Cumulatively, a number of factors make Blu Dot a uniquely clean lab. Bludot.com traffic has been relatively controlled, and the spikes are highly correlated with known marketing events. The SKU count and assortment have been similarly controlled. The company is not rolling out hundreds of new designs and categories each quarter. Their furniture design and product development is paced by their own innovation as opposed to trends. And, finally, they are not rolling out UX, content, feature or promotional changes outside our hypothesis development and validation efforts.
Additionally, there are forcing functions in the business that have allowed us to set and achieve reasonable expectations around testing and product development velocity. The fact that site traffic is not inordinately large, that customer segments are rather consistent in their traits (people who are interested in high design but very functional furniture) and that the e-commerce team is small (but potent) enables us to be very focused and pragmatic in our efforts.
The results of testing everything
So, what has a “test everything” approach in a clean lab yielded two years later? The growth in conversion rate, AOV and their subscriber list speaks for itself. But just as important, the Blu Dot team can look at their data and attribute specific spikes to specific efforts, and cumulative growth to the collective efforts of the testing and optimization program.
Another remarkable result is how testing has changed their resource allocation. As you can see in the chart, the Blu Dot team devotes far more time to optimization than the average e-commerce team. But they are able to do that because continuous testing enables them to spend less time on features, development, analytics, merchandising and content. Continuous testing means their efforts across the board are smarter, more efficient, and more effective.
As we get further into our work together, we find that the hypotheses are getting smarter, more focused on confirmed problems and more likely to succeed, built upon the shoulders of previously validated learnings. It’s no coincidence that Blu Dot boasts a uniquely high winning percentage for the experiments we run together. The hypotheses are well prioritized and informed. Over my many years as an e-commerce entrepreneur and operator, this has been my continual pursuit: knowing the cards are stacked in your favor. And knowing they’ll continue to be, so long as you keep gathering data and synthesizing what it means.