Case Study: Visa

Vinod Kartha leads an optimization team at Visa that is passionate about the power of experimentation. One of the most important lessons he and his team have learned is that experimentation builds on the results of past experiments. Specifically, when running an experiment, the first run should be focused on calibration, and the second run should be the real “first experiment.” This applies to all sorts of “firsts,” whether it’s your first experiment ever, your first experiment on a particular page, or your first experiment on a critical element. When analyzing the calibration experiment, you will ask questions like:

  • Is our hypothesis wrong or incomplete?
  • Is the experiment geared toward the right audience?
  • Is there an issue with the team or process in place to support the experiment?

Kartha notes that teams tend to wait for “perfect conditions” before running an experiment: the right amount of traffic, more supporting data, or some other factor that could affect results. But, he says, it’s far more important to just get started, dive in, and commit to learning from all of your results.

Taking this approach can be exceptionally liberating, especially if your first experiment turns out to be inconclusive. Inconclusive results are frustrating when you’re trying to get things exactly right the first time. But if you’re willing to analyze each experiment and use what you learn to calibrate the next one, you’ll be more willing to jump in, get messy, and build real learning experiences no matter what your results are.