Tell us a little bit about yourself.

I’m a manager for the optimization team at Dow Jones, overseeing many of the testing roadmaps for WSJ.com and Barrons.com. This includes acquisition testing, engagement testing, core product testing, and retention testing. My team and I handle the full lifecycle of a test: ideation, prioritization, stakeholder buy-in, launching the test, analyzing its impact, and ultimately pushing the winning results live to improve our product. When I’m not testing, I’m either eating tacos or doing Olympic weightlifting.

What does “test and learn” mean to you?

We test to build products that we know are going to be the most useful for our users and the most impactful for whatever KR [Key Result] we’re trying to move. The ability to test means we can iterate cheaply and quickly. For me, “test and learn” is kind of redundant. It’s just test, because to test is inherently to learn. The more we test, the more we know about how to build a better product for tomorrow.

Where do your A/B test ideas come from?

We believe good A/B test ideas can come from anyone and anywhere. Part of the way we’ve gotten buy-in from so much of the business is that we want everyone to contribute ideas. We come up with many test ideas ourselves (it’s our job!), but we also welcome input from the stakeholders we work with. We always try to test the things they care about, and we really value the great ideas they bring to the table. We so adamantly believe that good ideas can come from anywhere that we keep a test idea repository shared with everyone in the business, so they can submit ideas whenever genius strikes.

We also take a lot of inspiration for test ideas from our competitors, both in category and out of category. What another newspaper is doing with its acquisition strategy is just as inspiring to me as what a food delivery service is doing. So we try to keep our eyes open and look for interesting ideas from the products we admire in the market.

What’s your North Star metric? What are you optimizing for?

Because we test across the whole customer lifecycle, it’s different for every part of that journey. For acquisition tests, we’re often evaluating success in terms of order increases. For engagement and product tests, we’re looking at how much we can decrease churn by increasing interactions with other parts of the site or with features of our product. For retention tests, we’re looking at win-back rate. The most important thing for us is that we tie the testing we do back to true business value. It’s how we keep ourselves honest about why we prioritize the tests we do, and how we prove our value as an optimization team.

What is in your experimentation toolkit or stack?

Optimizely is what we run the majority of our experiments on. We also use a tool called CXense for content recommendation and personalization; it’s often easier for us to run tests related to those domains in CXense. Beyond that, we use Adobe Analytics to analyze results, and we have a team of amazing data scientists who help us parse harder problems.
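
For readers curious what wiring an experiment into a stack like this can look like, here is a minimal sketch using the classic activate/track API of Optimizely’s Full Stack JavaScript SDK. The SDK key, experiment key, variation key, and event name are hypothetical placeholders, not Dow Jones’s actual configuration.

```typescript
// Minimal sketch using the classic activate/track API of Optimizely's
// Full Stack JavaScript SDK. All keys below are hypothetical placeholders.
import { createInstance } from '@optimizely/optimizely-sdk';

const client = createInstance({ sdkKey: 'YOUR_SDK_KEY' });

async function renderSignup(userId: string): Promise<void> {
  if (!client) return;
  await client.onReady();

  // Bucket the user into the experiment and read back their variation.
  const variation = client.activate('newsletter_signup_ux', userId);

  if (variation === 'toggle_switch') {
    // Render the treatment, e.g. a toggle instead of a button.
  } else {
    // Render the control experience.
  }
}

// When the user converts, record the event so the results
// dashboard can compare conversion rates per variation.
function onSignup(userId: string): void {
  client?.track('newsletter_signup', userId);
}
```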

What was the most surprising or unintuitive result you’ve seen? Or the most memorable winner?

We run so many tests, so that’s like asking me to choose a favorite child! But I can tell you about an interesting result we got recently from a test we ran in our newsletter center. Our traditional Sign Up button is a blue CTA that says “Sign Up.” We hypothesized that that language might be scary to users, asking them to over-commit to something they just wanted to sample. We tested a toggle switch in place of the CTA, and it absolutely crushed it, with a high double-digit uplift. What was really fascinating to us is that the toggle was effective on both desktop and mobile. We would never have considered a traditionally tappable interaction like a toggle on desktop, but this test showed us that this type of UX is pervading digital experiences and creates less friction for people to engage.
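
As a quick aside on how a result like that is typically quantified: relative uplift compares the variant’s conversion rate against the control’s. The sketch below uses made-up conversion rates for illustration; they are not the actual numbers from this test.

```typescript
// Relative uplift of a variant's conversion rate over the control's.
// The example rates are made-up placeholders, not real test results.
function relativeUplift(controlRate: number, variantRate: number): number {
  return (variantRate - controlRate) / controlRate;
}

// A hypothetical move from a 4% to a 7% sign-up rate would be a
// 75% relative uplift, i.e. a "high double-digit" result.
console.log(relativeUplift(0.04, 0.07)); // ≈ 0.75
```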

How many experiments are you aiming to run this year?

Probably around 200 this year!

What will you be talking about at Test & Learn?

I will be talking about how we measure and understand what a habit is. More specifically, I’ll dive into how we use that knowledge to prioritize what we drive users to do on the site so we can build stickier, longer-term members.

Are there any final thoughts on experimentation that you want to share with people who might be reading the Optimizely blog?

Everyone should experiment. It’s the best way to build products.

Olivia’s talk from Test & Learn: The Product Experimentation Virtual Summit is now available. Register free to watch this online event.