
Welcome to the third installment of Ask an Experimenter, a series where we interview experts about how they are building a culture of experimentation at their companies. This interview features Kevin Chen, Growth Product Manager at Reddit. Kevin works on the strategy, roadmap, and execution of Reddit’s push notification and email products.


Tell us a little about yourself.

I am a growth product manager at Reddit. The two specific products I work on are push notifications and emails, which we use to resurrect users who have churned and to engage new users and retain them for longer. I’ve also worked on other parts of the product, like our onboarding flow and mobile apps.

What does experimentation mean to you?

I credit my answer to the director of product on the knowledge team (a team that works on a lot of the backend algorithms and machine learning that goes into Reddit). He says, “Experimentation is a way of guiding the business at Reddit to make good business decisions using the data.”

I like this answer because it can be very easy to get caught up in the numbers and in making sure we have a perfectly rigorous experiment. But business often gets messy, and not everything is as perfect as we want it to be. So if we run an experiment and the setup or the results are not perfect, we still feel comfortable moving forward with a decision if the data suggests that one product feature is better than another. Ultimately, experimentation is not about writing a research paper on what we found; we need to make a business decision and move forward, and experiments help inform that decision.

Who runs experiments at Reddit? Where do your ideas come from?

Most of our experiments happen on the growth team and the knowledge team. The growth team works on getting users to come back every day, increasing our daily active user (DAU) number. The knowledge team works on improving the user’s experience in the app, increasing their time on site.

Ideas for experiments can come from anywhere at Reddit. We don’t believe that all the ideas rest in the heads of executives or product managers. Our job as product managers is to prioritize them, size the impact of the potential experiment, and guide the execution of the experiment.

We have a weekly backlog review meeting to go over the hundreds of tests we want to run. Of course, we don’t have hundreds of engineers we can just throw at experiments, so we prioritize the tests based on their cost and how much lift we think each could deliver, and then assign them to our engineers.
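
To make that cost-versus-lift prioritization concrete, here is a minimal sketch of one common way to score a backlog. Everything in it, from the names to the numbers to the scoring formula, is a hypothetical illustration rather than Reddit’s actual process:

```python
from dataclasses import dataclass

@dataclass
class ExperimentIdea:
    name: str
    expected_lift: float  # estimated DAU lift, e.g. 0.002 = +0.2%
    confidence: float     # 0..1: how much we trust the lift estimate
    eng_weeks: float      # rough engineering cost

    @property
    def score(self) -> float:
        # Expected value per unit of engineering cost.
        return (self.expected_lift * self.confidence) / self.eng_weeks

# Hypothetical backlog entries, for illustration only.
backlog = [
    ExperimentIdea("push notification copy variants", 0.001, 0.8, 0.5),
    ExperimentIdea("feed-style onboarding", 0.004, 0.4, 3.0),
]

# Review the backlog highest-score first.
for idea in sorted(backlog, key=lambda i: i.score, reverse=True):
    print(f"{idea.name}: {idea.score:.4f}")
```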

What metrics are you optimizing for on the growth team? And what’s your North Star metric?

At the very highest level, we’re driving toward DAUs. But DAU is very high level, so we look to specifically increase its component pieces: acquisition, retention, and resurrections. We choose these metrics because they’re easier to change and operationalize on an experiment-to-experiment basis.
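
As a rough illustration of that decomposition, every user who is active on a given day can be attributed to exactly one component. The sketch below uses hypothetical definitions (in particular the 30-day churn window), not Reddit’s actual metrics:

```python
from datetime import date

# Illustrative assumption: a user inactive for more than 30 days counts
# as churned, so their return is a "resurrection" rather than retention.
CHURN_WINDOW_DAYS = 30

def dau_component(signup_date: date, previous_active_date: date | None,
                  today: date) -> str:
    """Classify a user who is active today into one DAU component."""
    if signup_date == today or previous_active_date is None:
        return "acquisition"   # first-ever active day
    days_away = (today - previous_active_date).days
    if days_away > CHURN_WINDOW_DAYS:
        return "resurrection"  # a churned user coming back
    return "retention"         # a recently active user coming back

# DAU(today) = acquisitions + retained users + resurrected users for that day.
```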

We want to make sure that whatever we launch, we can see an improvement in these component metrics. We also make sure we share our learnings with the rest of the company. It’s not enough just to say that we increased retention by X percent — we also need insight into why it happened.

For example, one way we’ve tried to increase retention is by testing our mobile onboarding flows. We have different onboarding flows on the iOS and Android apps. We tested one onboarding flow on the iOS app that allows users to sign up for certain communities, and we didn’t see a retention increase.

On Android we tried a different approach. We still had those same communities in the same categories, but we put them in a feed-style display so users could actually see what the posts are. A user can see, “Oh, r/aww is a bunch of cute dog and cute cat photos, I want to subscribe to it.” And we did see retention increase with the Android onboarding flow. The learning from that experiment was that showing users more content lets them make a better decision about how to curate their home feed, which makes for a better onboarding experience.

What’s in your experimentation stack?

Our experimentation platform is built in-house. Any information that’s available on the server side, we can use to segment the population. So for example, if a user hasn’t been seen for a couple of days, we can segment by that. Right now we’re building a more comprehensive dashboard where PMs can turn experiments on and off, set the dates when they run, and see a number of secondary and tertiary metrics for how each variant performs.
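
Reddit’s in-house system isn’t public, but server-side assignment tools like this are often built around deterministic hashing, so a user always lands in the same variant without any stored assignment state. Here is a minimal sketch under that assumption, with all names hypothetical:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    """Deterministically bucket a user into an experiment variant.

    Hashing (experiment, user_id) means the same user always gets the
    same variant, with no per-user assignment state to store.
    """
    key = f"{experiment}:{user_id}".encode()
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % len(variants)
    return variants[bucket]

def eligible(days_since_last_seen: int, min_days_away: int = 3) -> bool:
    """Server-side segmentation, e.g. only target lapsed users."""
    return days_since_last_seen >= min_days_away

# Example: only lapsed users enter a hypothetical push-copy experiment.
if eligible(days_since_last_seen=5):
    variant = assign_variant("user_123", "push_copy_v2", ["control", "treatment"])
    print(variant)
```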

We also use BigQuery for data analysis and Mode Analytics as the visualization layer. To manage our experiment backlog, we use Google Sheets.
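
For a sense of what the analysis side can look like, here is a small sketch that pulls per-variant results out of BigQuery with the official Python client. The project, table, and column names are hypothetical:

```python
# pip install google-cloud-bigquery  (pandas is needed for to_dataframe)
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical table with one row per (experiment, user) assignment.
sql = """
SELECT
  variant,
  COUNT(*) AS users,
  COUNTIF(retained_d7) / COUNT(*) AS d7_retention
FROM `my-project.growth.experiment_assignments`
WHERE experiment = 'android_onboarding_feed'
GROUP BY variant
ORDER BY variant
"""

df = client.query(sql).to_dataframe()
print(df)
```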

How did you get started with experimentation?

My first taste of experimentation was as a growth analyst at a financial technology company. I was on the growth marketing team, and I worked very closely with our direct mail team. We sent physical pieces of mail to users to get them to sign up, and we tested every aspect of the mail, like the types of mail pieces and designs we sent.

I challenge you to take a closer look at the next financial mail piece you receive: certain ones have really thick paper, others have glossy paper, and others have little inserts that fall out when you open them. All of these things affect conversion rates for direct mail.

We also did online A/B testing for our referral program. One of our referral tests was an incredibly high-ROI project: we weren’t surfacing the referral CTA very well, so we tested surfacing the call to action more prominently, and conversions improved by something like 75%. Through projects like these, I saw firsthand how experiments can drive business metrics. Being able to tell other stakeholders, “Hey, what we did here actually has value,” also made it easier to prioritize other projects in the long term, because we could confidently show that this feature led to this result.

And finally, how many experiments are you aiming to run this year?

We have dozens of people across the company planning to run hundreds of experiments in the year. I can’t give you an exact number because I honestly don’t know. The thing with experimentation teams is that we tend to be very nimble: when you learn something, you might want to invest in it more, which changes your product roadmap. We don’t usually have year-long roadmaps with features that take three to six months to build.

We do things that maybe take a sprint or a week and iterate from there. So we’ll set a high-level strategy of which metrics we want to improve — acquisition, retention, resurrections. But our tactics change frequently based on new learnings from each experiment, so it’s important to iterate, experiment, learn, and repeat.

Interested in learning more about server-side experimentation? Check out Optimizely Full Stack, or learn more about building vs. buying an experimentation platform.