Published on May 20

Test + Learn 2025: 6 key takeaways for experimentation and digital teams

7 min read time

It's official: this year's Test + Learn was another success. With experimentation leaders and digital optimizers tuning in from around the globe, this year's Test + Learn virtual event kept its promise on actionable takeaways you could implement faster than you can say "3, 2, 1... we have lift-off!" 🚀

That's what it was all about this year: turning today's tests into tomorrow's revenue. We saw brands including Virgin Media O2, Vimeo, Chase, Yelp, ClassPass, and others share tips on how to build experimentation programs that don't just run tests but drive serious business growth.

For those who missed the event, want a refresh, or just can't get enough, here are the six key takeaways from the sessions this year:

1) ☄️ Your experimentation strategy is worn-out... but here's how to play offense (not just defense)

There's been a 55% increase in experiments run in 2024 versus two years ago, with an astounding 900 billion test impressions on the Optimizely platform last year. That's roughly two test impressions per week for every human on Earth!

Coming in with some harsh truths, Elena Verna (yes, the growth marketing pro) joins Optimizely President Shafqat Islam for some real talk about a game-changing framework we're all stealing immediately: "defensive" versus "offensive" experimentation.

Defensive testing? That's when your conversion rate drops and you scramble to fix it. Offensive analysis uncovers revenue you don't yet have by identifying why high-performing cohorts succeed and propagating those learnings.

The problem? Most teams spend 90%+ of their time on defensive analysis, which yields far lower lift in outcomes.

Further, Elena challenged the entire product development cycle. Traditional teams "ship to release" while experimentation teams should "ship to validate." This means skipping unnecessary design reviews and overthinking and just testing faster.

Experimentation across the digital lifecycle

Image source: Optimizely

As she put it, "It's not about making the right decision. It's about making the fastest decision, and the test will tell you if it's the right one or not."

➡️ Watch Beyond the horizon: Transforming your experiments into a growth engine

2) 🛸 The AI revolution is here (and will make humans better testers)

According to Deborah O'Malley, founder of GuessTheTest, we're at "the very verge of a paradigm shift" with AI in experimentation.

Deborah walked through all eight stages of experimentation—from ideation to optimization—explaining how AI will improve each one.

The days of stressing over statistical significance? Gone. Spending hours manually writing hypotheses? Automated. Staying up at night, wondering if your test implementation is buggy? AI's got you.

The most exciting prediction? We won't be celebrating measly 1-2% conversion lifts anymore. With AI experimentation continuously optimizing winning variations, we're talking potential 15-20% improvements on valid, statistically significant samples.
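
Whatever the lift size, the question stays the same: is it statistically significant on your sample? A minimal sketch of that check, using a standard two-proportion z-test (a generic method, not anything specific to the Optimizely platform; the numbers are made up):

```python
import math

def lift_significance(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for a conversion lift.
    Returns (relative_lift, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return (p_b - p_a) / p_a, p_value

# Illustrative: 5.00% vs 5.75% conversion on 20k visitors per arm
lift, p = lift_significance(1000, 20000, 1150, 20000)
print(f"relative lift: {lift:.1%}, p-value: {p:.4f}")
```

A 15% relative lift on samples this size clears significance comfortably; the same lift on a few hundred visitors per arm would not, which is why "valid samples" matters as much as the headline number.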

The human element remains critical, though. Our intuition, empathy and ethical oversight will become our superpowers. As Deborah put it, we might evolve from "experimentation or CRO auditor" to "AI behavioral architect" who will "shape ethical and emotional boundaries of AI suggestions."

Her straight-up advice? Stop clinging to "it's not there yet" and embrace what's coming. "If you have an optimization mindset and if you're willing to adapt to change and learn and grow," AI won't replace you; it'll empower you.

➡️ Watch A cosmic convergence: AI and humans take on testing

3) 🌠 Expect failure, embrace learning: How Virgin Media scaled from 50 to 600 experiments

Who knew raising a six-month-old daughter and running an experimentation program had so much in common?

As Doychin Sakutov from Virgin Media O2 put it: "Everything with her is an experiment. You try something, it works, and then you try the same thing, it doesn't." Sample size too small to draw conclusions! 👶


Virgin Media's experimentation story deserves its own Netflix series. Five years ago? A sad 40-50 tests annually, with finance requiring sign-offs for experiments (the horror!). Today? They're crushing it with 600 variants in 2024 and already 200 in Q1 2025 alone.

Their "Automation Intelligence" algorithm (yes, Doychin's trademarking that) for product recommendations is the perfect cautionary tale. It performed great on one page, matching human trading. Naturally, they expanded it to more pages and...complete failure! The algorithm couldn't handle sales periods starting/ending and behaved differently across devices.

But here's the insight from Doychin: "You're not going there for the win first time. You're just trying and making sure that you learn every single time."

➡️ Watch Intergalactic insights: Virgin Media's AI in experimentation journey

4) 💫 Who cares about click-through if it doesn't trickle down to sales?

Madison Hajeb from Tapestry (the global house of brands behind Coach, Kate Spade, and Stuart Weitzman) just dropped some fantastic insights about connecting data with revenue.

Previously, her team could say "conversion rate went up two percent," but couldn't tell what it did to the business as a whole. Now, with warehouse-native analytics, her team can tap directly into their data warehouse to see how digital experiments impact everything from return rates to in-store behavior.

When testing faster shipping options, for example, they can now see not just conversion impact but also delivery costs: "Did this change specifically benefit the business or did it hurt the business?"
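The idea behind that kind of readout is to join experiment exposures to downstream order economics instead of stopping at conversion. A minimal sketch, with entirely hypothetical table and column names (not Tapestry's actual schema):

```python
# Hypothetical "warehouse-native" readout: per-variant net revenue,
# accounting for shipping cost and returns, not just conversions.
EXPOSURES = [  # (user_id, variant)
    ("u1", "control"), ("u2", "control"),
    ("u3", "fast_ship"), ("u4", "fast_ship"),
]
ORDERS = [  # (user_id, revenue, shipping_cost, returned)
    ("u1", 120.0, 5.0, False),
    ("u3", 130.0, 12.0, False),
    ("u4", 90.0, 12.0, True),
]

def variant_economics(exposures, orders):
    """Net revenue per variant: revenue minus shipping, zeroed on return."""
    variant_of = dict(exposures)
    totals = {}
    for user, revenue, ship, returned in orders:
        variant = variant_of[user]
        net = (0.0 if returned else revenue) - ship
        totals[variant] = totals.get(variant, 0.0) + net
    return totals

print(variant_economics(EXPOSURES, ORDERS))
```

In practice this would be a SQL join running inside the warehouse, but the shape is the same: a conversion "win" can still be a net loss once delivery costs and returns are on the ledger.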

The best part? No more manual SQL reports. "Before we were expected on really high visibility tests... people want to know 24 hours after it launched, we want to know it's not falling off a cliff." Now it updates automatically every 15 minutes, freeing Madison's team to explore testing in brick-and-mortar stores and other innovations.

Her advice for anyone starting their warehouse-native journey?

"Understand the nuances of your data... but don't be intimidated. It was plug and play for us in a lot of ways... If you look before you leap, sometimes you never leap."

➡️ Watch Warehouse-native analytics: Breaking down data silos

5) 🪐 From vanity metrics to business impact: What ClassPass and Chase UK measure now

When it comes to metrics that matter, Nina Bayatti (ClassPass) and Alexander Bock (Chase UK) shared how their programs evolved from surface-level metrics to true business impact measurements.

three folks having a fireside chat

Image source: Optimizely

Chase UK's journey had distinct phases: first year was all about customer acquisition, second year expanded to asking "are those customers now doing the actions that we want them to do?", and the third year focused on "generating good opportunities for them... that leads to achieve our commercial goals."

ClassPass underwent a similar shift, evolving from a marketing-focused program testing "optimal messaging and imagery" to working with product, engineering, pricing, inventory, and customer experience teams. Now they use SQL to analyze "down-funnel metrics" that show product usage and revenue impact after sign-up.

Alexander emphasized the importance of standardization: "We ended up creating our own internal stats library... creating our own metric library that is standardized in a way that we can have a single view on the right metric definitions." Why? "As soon as you start using different definitions and reporting different numbers, confusion arises."
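The core of such a metric library is a single registered definition per metric, so every team's report computes the same number. A minimal sketch (metric names and formulas here are illustrative, not Chase UK's actual library):

```python
# One shared registry of metric definitions: every report goes through
# the same formula, so no two teams can disagree on "conversion rate".
METRICS = {
    "conversion_rate": lambda d: d["orders"] / d["sessions"],
    "aov": lambda d: d["revenue"] / d["orders"],  # average order value
}

def compute(metric_name, data):
    """Look up the standardized definition and apply it to the data."""
    return METRICS[metric_name](data)

data = {"sessions": 10_000, "orders": 450, "revenue": 22_500.0}
print(compute("conversion_rate", data))  # orders / sessions
print(compute("aov", data))              # revenue / orders
```

The payoff is exactly the one Alexander names: with a single source of truth for definitions, different teams can't report different numbers for the same metric.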

Nina also shared her approach: "We've gone from focused on how many tests can we run... to now it's a lot of really thoughtful tasks that ladder directly back to what are the company objectives. And if it's not, we question, why are we doing it?"

➡️ Watch Experimentation metrics: What's propelling today's businesses forward?

6) 🚀 10 power moves to transform your experimentation program (that you can use today)

Sid Arora, Head of Product Experimentation at Yelp, delivered 10 powerful tips to transform your experimentation program from static to strategic:

  1. "Always make it easy to run experiments." If your team feels testing is hard, they won't do it.
  2. "Measure what matters." Ensure your team isn't chasing vanity experimentation metrics.
  3. "Always tie experiments to larger company goals." Every test should answer "what are we trying to move?"
  4. "Start with a story, not with stats." Ask yourself, "What is the story this experiment will help us tell?"
  5. "Build the experiment memory of the organization." Keep a log of what worked and what didn't.
  6. "Always make it about learning." Experiments aren't meant to prove you're right—they're about getting smarter.
  7. "Design for decisions." Only run tests if they will help make a clear and good decision.
  8. "Always make your environment transparent and safe to be wrong." Teams won't take risks if failure means blame.
  9. "Turn 'I think' into 'let's test'." This kills opinions and focuses on facts.
  10. "Teach teams to stop bad tests early." If a test is broken or unclear, stop it immediately.
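One common way to operationalize tip 10 is an automated sample-ratio-mismatch (SRM) check: if the observed traffic split drifts far from the intended allocation, the test is likely broken and should be stopped. A minimal sketch using a chi-square test with one degree of freedom (a generic technique, not Yelp's actual tooling):

```python
import math

def srm_pvalue(n_control, n_treatment, expected_ratio=0.5):
    """Chi-square test (1 dof) that the observed split matches the
    intended allocation. A tiny p-value signals a broken experiment."""
    total = n_control + n_treatment
    exp_c = total * expected_ratio
    exp_t = total * (1 - expected_ratio)
    chi2 = ((n_control - exp_c) ** 2 / exp_c
            + (n_treatment - exp_t) ** 2 / exp_t)
    # For 1 degree of freedom, p = erfc(sqrt(chi2 / 2))
    return math.erfc(math.sqrt(chi2 / 2))

print(srm_pvalue(5000, 5210))  # mild imbalance on a 50/50 split
print(srm_pvalue(5000, 6000))  # severe imbalance: stop and debug
```

Teams typically alert on a very strict threshold (e.g. p < 0.001) so the check fires only when the assignment mechanism itself is broken, not on ordinary noise.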

Looking toward the future, Sid predicts three major shifts with AI: moving from simple A/B to complex multivariate testing, using AI as an "experimentation strategist" to scan past tests for patterns, and shifting focus from test velocity to which team can "learn the fastest and compound those learnings over time."

➡️ Watch From static to strategic: 10 Tips for perfecting your experimentation program

Test + Learn 2025: Watch all the sessions on-demand

Catch all the expert discussions, examples, and top tips to conquer your experimentation strategy in 2025 (and beyond).

Check out all the talks from our Test + Learn event now.

  • A/B testing, AI, Analytics, Experimentation, How we optimize
  • Last modified: 22.05.2025 05:41:36