Emily will be speaking at Test and Learn: The Product Experimentation Virtual Summit, a free half-day virtual event for product managers, data scientists, engineers, and designers to dive deep into the intersection of insights-driven product development and agile software delivery. Save your seat to hear Emily and other product and engineering leaders discuss how to create a test-and-learn culture in product and engineering.
Tell us a little about yourself
My title is Director of Engineering at Honeycomb.io, but we’re a small team, so I wear a lot of hats. I manage the engineering and design teams, occasionally write code, and spend a fair amount of time thinking about how we’re building the product.
What does “test and learn” mean to you?
It definitely resonates with our product philosophy at Honeycomb. We say “everything is an experiment,” and this applies to how we build both the company and the product. We don’t believe the status quo in developer tools is anywhere near good enough—so many tools we use every day have unintuitive interfaces or make it difficult to find valuable insights our teammates have already found—and we know that it will take creativity and iteration to raise the bar. A good idea can come from anywhere in the company, from a customer, or even from a conversation on Twitter. Our willingness to experiment with many new ideas and be open to learning from the outcomes is an important part of how we hope to build the next generation of production tooling.
Who owns feature flags and experimentation in your organization?
We’re fairly democratic about this — we all love a good experiment. Everyone from our engineering team and designers to our product manager and sales team has set up and used their own feature flags in the product. The only rule is that whoever adds a new flag is also responsible for taking it out.
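To make the practice concrete, here is a minimal sketch of what a feature-flag gate with an explicit cleanup step can look like. The `FeatureFlags` class and the flag name are hypothetical illustrations, not Honeycomb’s actual tooling.

```python
class FeatureFlags:
    """Toy in-memory flag store: flag name -> set of enabled user ids."""

    def __init__(self):
        self._flags = {}

    def enable(self, flag, user_id="*"):
        # "*" means the flag is on for everyone.
        self._flags.setdefault(flag, set()).add(user_id)

    def is_enabled(self, flag, user_id):
        enabled = self._flags.get(flag, set())
        return "*" in enabled or user_id in enabled

    def remove(self, flag):
        # The "only rule": whoever added the flag takes it out
        # once the experiment has run its course.
        self._flags.pop(flag, None)


flags = FeatureFlags()
flags.enable("new-query-ui", user_id="design-team")
print(flags.is_enabled("new-query-ui", "design-team"))  # → True
print(flags.is_enabled("new-query-ui", "other-user"))   # → False
```

Keeping removal as a first-class operation is what makes the cleanup rule enforceable rather than aspirational.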
What metrics do you use to measure the quality of the software you are building?
We think engineering teams often get too distracted by metrics that tell you something about your systems but not about what really matters, which is the experience your customers are having with your product. Our most important internal signals are our end-to-end check — can you send data to our API and read it back from our UI? — and data about the customer experience, like how quickly new customers start sending us events or finding answers in the product.
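The end-to-end check described above is essentially a round-trip test: write a uniquely tagged event through the ingest path, then poll the read path until it appears. A hedged sketch of that shape, with an in-memory stand-in for the real API and query endpoints (which are not specified in the interview):

```python
import time
import uuid


def end_to_end_check(send_event, query_events, timeout_s=10.0, interval_s=0.5):
    """Send a uniquely tagged event, then poll the read path until it
    shows up or the timeout expires. Returns True on a successful
    round trip, False otherwise."""
    marker = str(uuid.uuid4())
    send_event({"check": "e2e", "marker": marker})
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if any(e.get("marker") == marker for e in query_events()):
            return True
        time.sleep(interval_s)
    return False


# Demo with an in-memory stand-in for the ingest and query paths;
# a real check would POST to the API and query the UI's backend.
store = []
ok = end_to_end_check(store.append, lambda: store, timeout_s=1.0)
print("end-to-end check passed:", ok)  # → end-to-end check passed: True
```

The unique marker matters: it proves this run’s write came back, rather than stale data from an earlier check.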
What challenges do you see teams facing when they try to make the move to continuous delivery?
The switch to continuous delivery can feel scary when you don’t trust your production tooling. If your existing monitoring and observability tools don’t make you feel 100% certain that you’ll know when something goes wrong and be able to debug it quickly, I’d argue that you should invest a few weeks of engineering work there first before you try to make the switch. Once you have the confidence that you can understand customer traffic at the level of the individual request and easily ask questions about a particular customer’s experience, and once you know your tools are going to surface errors and related metadata quickly, moving to a continuous delivery model should feel more comfortable. You’ll be able to focus on the benefit of getting new valuable changes into customers’ hands quickly and reducing the risk of big deployments, instead of feeling like you’re flying blind.
What will you be talking about at Test & Learn?
I’ll be talking about a couple of interconnected trends in software engineering that are helping us ship code faster and more safely. First, I’ll talk about incorporating tools like feature flags into our development and deployment process. Second, I’ll talk about how observability fits in — and how it pairs up perfectly with feature flags to help us understand how our new code is doing. Finally, I’ll talk about how the trend toward code ownership takes these tools and empowers software teams to ship higher-quality code than ever and improve even faster.
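One way to see how feature flags and observability pair up, as described above, is to attach the flag’s state to each event you emit, so you can break down latency or errors by code path. This is an illustrative sketch with made-up field names and simulated work, not Honeycomb’s instrumentation:

```python
import random
import statistics
import time


def handle_request(flag_enabled, emit):
    """Serve a request on either the old or new code path (simulated
    here with sleeps), tagging the emitted event with the flag state."""
    start = time.perf_counter()
    if flag_enabled:
        time.sleep(random.uniform(0.001, 0.002))  # new path (stand-in)
    else:
        time.sleep(random.uniform(0.002, 0.004))  # old path (stand-in)
    emit({
        "flag.new_query_path": flag_enabled,       # hypothetical field name
        "duration_ms": (time.perf_counter() - start) * 1000,
    })


events = []
for i in range(20):
    handle_request(flag_enabled=(i % 2 == 0), emit=events.append)

# Break down duration by flag state, as an observability query would:
for variant in (True, False):
    ms = [e["duration_ms"] for e in events if e["flag.new_query_path"] == variant]
    print(f"flag={variant}: median {statistics.median(ms):.1f} ms")
```

Because every event carries the flag state, answering “is the new code path faster, and for whom?” becomes an ordinary query rather than a separate experiment pipeline.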
Sign up for free to hear Emily’s talk from the Test and Learn Virtual Summit on May 22nd.