
Experimenting at scale is a superpower.      

Get it right, and you’ll grab attention and see a direct impact on business goals that benefit the company: sales growth, higher conversion rates, cost savings, and more.

Fail, and all the hard work you put into your customer experiences will go unnoticed (and unrewarded).

So, what does a culture of experimentation look like?  

While you focus on turning visitors into paying customers, you need to think about:    

  • If you want to a/b test across different devices, do you have the right tech and key metrics in place to enable it?   
  • If you’re a/b testing across the enterprise, do you have a governance structure in place to measure results?   
  • Do you have statistically sound results you can rely on?

Initiatives like testing and optimization don’t happen in a vacuum – they’re surrounded by waves of other variables and insights. Here, it’s about testing your entire digital experience across different business units, devices and channels.    

So, how do you build a culture of experimentation?  

The key is building trusted measurement pillars into your company culture that you can rely on.

Tips for building a culture of experimentation

Here are six techniques you can use when brainstorming ideas and building a test-and-learn framework:

Tip #1 Test the entire digital experience   

The goal of every digital experience is to create curiosity that motivates visitors to take action. Even if you’re not using deep personalization to build credibility and sell new products, you can get there by optimizing every touchpoint and connecting the dots all the way to the buying stage.

Tip #2 Optimize experiences on every device   

A good digital experience adapts to the user on every device they touch. Segmentation can help you create experiences based on user needs. Focus on being able to easily make changes across platforms (web, mobile apps, single-page apps, TV apps, IoT, voice apps, etc.) without requiring a developer.
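To make the idea concrete, here is a minimal sketch (not Optimizely’s SDK or any specific product API) of how a visitor could be bucketed into the same variant no matter which device they use; the segment names and function are hypothetical.

```python
import hashlib

# Hypothetical device segments an experiment might target.
SEGMENTS = {"web", "mobile_app", "single_page_app", "tv_app", "voice_app"}

def assign_variant(visitor_id: str, experiment: str, device: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a visitor into a variant.

    Hashing visitor + experiment (not the device) keeps the assignment
    stable across every platform the same visitor uses.
    """
    if device not in SEGMENTS:
        return "control"  # unknown devices fall back to the default experience
    digest = hashlib.sha256(f"{visitor_id}:{experiment}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

print(assign_variant("visitor-123", "homepage-hero", "mobile_app"))
```

Hashing on the visitor and experiment, rather than on the device, is what keeps the experience consistent across platforms.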

Tip #3 Make sound decisions quickly   

To run experiments well, your tests should be data-informed so the results remain valid even when you check for a difference at any moment. Using a Stats Engine and Stats Accelerator will help you increase testing velocity and improve the accuracy of results.
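A Stats Engine relies on sequential statistics under the hood; as a much simpler stand-in, the sketch below uses a classic fixed-horizon two-proportion z-test to show the kind of check that keeps a decision data-informed. The numbers are invented for illustration.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, visitors_a, conv_b, visitors_b):
    """Fixed-horizon two-proportion z-test (a simplified stand-in for
    the sequential methods a Stats Engine actually uses)."""
    p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
    return p_b - p_a, p_value

lift, p = two_proportion_z_test(conv_a=480, visitors_a=10_000,
                                conv_b=540, visitors_b=10_000)
print(f"absolute lift: {lift:.2%}, p-value: {p:.3f}")
```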

Tip #4 Research + Data-driven marketing > Opinions   

The goal is to improve your conversion rate by optimizing each metric. However, prioritization becomes difficult if opinions alone decide what to test next. You need to keep reinforcing data-driven decision-making: to support your hypotheses and eliminate subjectivity, make it easy for anyone to interpret results.

Tip #5 Integrate with key systems   

Optimization means building experiments quicker, learning faster, and making swifter decisions. Your tools and team members shouldn't work in silos. Use open APIs to build your ideal stack; they enable easy integration with your other systems so you can act on existing customer data.
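As a hedged sketch of the pattern (the endpoints, payloads, and token below are hypothetical placeholders, not a real product API), this shows experiment results being pulled from one system and pushed into another so the rest of your stack can act on them:

```python
import requests

# Hypothetical endpoints -- substitute your experimentation platform's
# and data warehouse's real APIs.
RESULTS_API = "https://experiments.example.com/api/results"
WAREHOUSE_API = "https://warehouse.example.com/api/events"

def sync_experiment_results(experiment_id: str, api_token: str) -> None:
    """Pull results from the experimentation tool and push them to
    another system so downstream teams can act on them."""
    headers = {"Authorization": f"Bearer {api_token}"}
    results = requests.get(f"{RESULTS_API}/{experiment_id}",
                           headers=headers, timeout=10).json()
    requests.post(WAREHOUSE_API,
                  json={"experiment_id": experiment_id, "results": results},
                  headers=headers, timeout=10).raise_for_status()

# sync_experiment_results("homepage-hero", api_token="...")
```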

Tip #6 Accelerate success through best practices   

Best practices combine a shared vision and collaborative tools to implement experimentation and learning in your work environment. Digital experimentation is one part of optimization. Gather, track, and report on experiment results across the enterprise to assess the quality of your tests and business success.   

Focus on the metrics and resources

Many organizations debate internally about which metrics matter most, but the point is to focus on the right top-line goals. Leading indicators, for example, are valuable to test, yet they are often poorly understood and under-tested.

Teams that work with a/b testing should focus on the following metrics and resources:

1. Key metrics

  • Company-wide and business-unit key performance indicators (KPIs)
  • Benchmark performance metrics: conversion rates, Revenue Per Visitor (RPV), Lifetime Value (LTV), Average Order Value (AOV), Return On Investment (ROI) (see the sketch after these lists)
  • Leading indicator metrics: test velocity, efficiency, quality, agility
  • Program budget

2. Resources

  • Program Manager
  • Team leads (technical, design, analytics)

3. Actions you will perform

  • Benchmark KPIs and baseline modeling
  • Goal trees exercise
  • Align with stakeholders on optimization program goals
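As a rough illustration of the benchmark metrics listed above (LTV is omitted because it needs repeat-purchase history), here is a small sketch with invented numbers:

```python
def benchmark_metrics(visitors: int, orders: int, revenue: float,
                      cost: float) -> dict:
    """Compute the benchmark performance metrics from the list above."""
    return {
        "conversion_rate": orders / visitors,
        "rpv": revenue / visitors,        # Revenue Per Visitor
        "aov": revenue / orders,          # Average Order Value
        "roi": (revenue - cost) / cost,   # Return On Investment
    }

print(benchmark_metrics(visitors=50_000, orders=1_250,
                        revenue=93_750.0, cost=20_000.0))
```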

Conclusion  

Remember: experimentation beats speculation. Bookmark these measures so they're handy the next time you're trying to foster a culture of experimentation.

Even if you’re a small business with only a few stakeholders, think about these pillars; they'll future-proof the way you run experiments. Try them, see what works for you, and let us know how it went!

And if you still don’t have an experimentation culture, we can help you get started from scratch and achieve positive results. Optimizely analyzes millions of experiments monthly to identify new ways to optimize customer experiences.    

Get started here to run tests, uncover insights and let every customer interaction on your website take visitors to the buying moment.   
