Challenge: Analyzing Results in Program Management

Imagine that you’re consulting from the marketing team with Attic and Button’s newest Program Management team. You’ve worked with other teams in your company before, but most of the people on this new team are familiar only with Optimizely X, not with Program Management.

At your latest team meeting, one of your developers, Stephan, asks you how analysis will be different from the way you analyzed experiments in Optimizely X. What’s the best response?

  • While Optimizely X has many experimentation tools, Program Management is built to connect those tools to the organization at large. You can automatically generate and send reports to specific groups outside the experimentation team at predetermined intervals, and filter feedback into different experiments.
  • Program Management provides many of the planning functions that help you create and iterate on a successful experimentation program in Optimizely X. It will give us more analysis tools that can help us review not only a single experiment but results across our program, so we can see the impact of our progress.
  • We’ll be able to use Program Management to automatically analyze our results and, based on the accumulated data from our program as a whole, estimate timelines for the experiments we iterate from those results.
  • We can set up predetermined segments in Program Management that are automatically broken out and compared against other results across our program. This will speed up our analysis and help us determine how our results compare to other teams’ results and to the experiment’s history, if we’ve run it before.

You’re just finishing up your first experiments linked through Program Management, and you and your team are beginning to consider what to do next. Which of these options is the best way to use Program Management to analyze your results and iterate on them with new experiments?

  • Use your results history to help you determine what your next steps should be by looking at your losing experiments and pursuing the opposite path.
  • Search prior linked experiments and ideas by keyword and review the results to help determine which ideas may not have gotten enough attention.
  • Encourage team members to review the previous experiment results available and pitch an idea based on the needs they see. Your team can then vote in Program Management on the ideas to determine your next steps.
  • Review the program history of linked experiments and compare them to ideas you have on deck to determine which experiment is most necessary based on your business goals.

As you review the results from your latest experiment in Program Management and compare them to your program’s history, you notice that they differ widely from what you’d expect based on your previous analysis. Which reason should you explore first?

  • It’s likely that looking at the whole picture is obscuring recent factors that may have affected these results. Review other changes that have taken place recently and experiment with those factors to see whether they could have made an impact.
  • Don’t change your approach at all. Continue following your roadmap and see if the trend continues. The roadmap’s entire purpose is to keep you on track instead of sending you off on a wild goose chase. One errant result is probably a fluke; run your next experiment to get more data.
  • Review your roadmap for experiments that could provide more insight into these results. Adjust your prioritization to experiment with the ideas that could reveal what caused the errant results.
  • Share the results with your team and elicit new ideas for the next experiment that could explain why the results went the way they did. Shift the best new ideas to the top of your roadmap.