The Role of A/B Testing in Lead Nurturing
Lead nurturing sounds simple, right? Set up an email drip campaign that gives new prospects more detail about what products you offer and what problems you solve. Done. And as you get more complex, scale it up:
- Set up dozens of drip campaigns, tailored to very specific prospect and customer segments.
- Nurture people further down the sales funnel, up to and including existing customers.
- Run lead nurturing campaigns over other channels — not just email, but retargeting, social, and even website personalization.
And the great thing about lead nurturing is, it is kind of simple. Once you get your program in place, it’s a machine. You don’t have to supervise it. There’s no marginal cost to add another prospect to your program. After a while, results should be fairly predictable and consistent.
But like any machine, lead nurturing has to be constantly tuned and tested. A/B testing is an ideal tool for this: you’re sending lots of emails (or other communications) on a regular basis, and chances are your marketing automation platform has A/B testing built right in. How do you best take advantage of A/B testing to build great lead nurturing?
Before you run any A/B tests, understand how your current nurture program is performing. A well-performing nurture track should lead to:
- Higher conversion rates from lead to closed deal
- Faster movement from prospect to close
- Bigger deal sizes
Lots of marketing automation tools can be set up to measure lead inflows and outflows in each stage, from initial visit all the way through to Closed Won, as well as how long each stage takes on average. Setting that measurement up is outside the scope of this article, but it's critical to understand this data. Deal size is something you're probably already tracking, and hopefully you'll see that number increase as your nurture program becomes more effective.
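As a minimal sketch of what those three baseline metrics look like in practice, here's how they could be computed from simple per-lead records. The stage names, dates, and deal sizes below are made up for illustration; your automation platform's export will look different:

```python
from datetime import date

# Hypothetical per-lead records: stage-entry dates plus deal size
# (None means the lead never reached Closed Won).
leads = [
    {"visit": date(2024, 1, 3), "closed_won": date(2024, 3, 1), "deal_size": 12000},
    {"visit": date(2024, 1, 10), "closed_won": None, "deal_size": None},
    {"visit": date(2024, 2, 2), "closed_won": date(2024, 4, 15), "deal_size": 8000},
]

won = [l for l in leads if l["closed_won"] is not None]

# 1. Conversion rate from initial visit to Closed Won.
conversion_rate = len(won) / len(leads)

# 2. Average days from first visit to close, among won deals.
avg_days_to_close = sum((l["closed_won"] - l["visit"]).days for l in won) / len(won)

# 3. Average deal size, among won deals.
avg_deal_size = sum(l["deal_size"] for l in won) / len(won)
```

A better nurture program should push the first and third numbers up and the second one down.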
Also pay close attention to metrics associated with the nurture program itself. These include:
- Burnout rates. How often does someone go through the entire track without clicking anything?
- Open, click, bounce, and unsubscribe rates for each email in your nurture track. If you're nurturing over some other channel, include that channel's metrics instead. (Related article: Your Email Open Rate Is High? That's Nice.)
Set up a spreadsheet in which you track these metrics over time for every message in your track. For example, if you have 7 emails in one of your nurture tracks, you'll have a spreadsheet with 28 numbers in it (four metrics for each of seven emails) that you update frequently (I suggest weekly or bi-weekly). Once you get in a rhythm, it should take very little time to fill these out. Here's an example of what that could look like.
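As a sketch of where those spreadsheet numbers come from, the four rates can be derived from raw weekly counts. Only two emails are shown here, and all the counts are hypothetical:

```python
# Hypothetical weekly counts for each email in the track (two of seven shown).
weekly_counts = [
    {"email": 1, "sent": 500, "opens": 190, "clicks": 45, "bounces": 6, "unsubs": 2},
    {"email": 2, "sent": 480, "opens": 150, "clicks": 30, "bounces": 5, "unsubs": 4},
]

def rates(row):
    """Turn raw counts into the four per-email rates tracked in the spreadsheet."""
    sent = row["sent"]
    return {
        "email": row["email"],
        "open_rate": row["opens"] / sent,
        "click_rate": row["clicks"] / sent,
        "bounce_rate": row["bounces"] / sent,
        "unsub_rate": row["unsubs"] / sent,
    }

# One row per email per week; append these to the tracking spreadsheet.
weekly_rates = [rates(r) for r in weekly_counts]
```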
Once you have the data on how your current tracks are performing, you can start figuring out which of your ideas to test.
For example, if you're seeing that prospects take a very long time to go from their initial conversation with a sales rep to actually becoming an opportunity, one solution might be to create a nurture track for that stage and test it with a subset of prospects. Or, if you're seeing that one particular email has a much lower click-through rate than all other nurture emails, you know you'll want to focus your efforts there.
For each problem area, form a hypothesis about what's causing the problem and how to solve it. Do you think that one of your emails is performing poorly because it doesn't have a clear offer for recipients to click? Make an educated guess about what would clarify the offer, and what lift you expect to get from that change. Your initial hypotheses may be completely wrong, but they will improve over time. Here's a quick example of what that could look like:
Based on these hypotheses, you can prioritize your backlog of ideas.
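One simple way to do that prioritization is to score each hypothesis by predicted lift per unit of effort and sort. The scoring scheme and backlog entries below are illustrative assumptions, not something the process requires:

```python
# Hypothetical test backlog: predicted lift (relative, e.g. 0.15 = +15%)
# and a rough effort estimate in hours for each idea.
backlog = [
    {"test": "Clarify CTA in email 3", "predicted_lift": 0.15, "effort_hours": 2},
    {"test": "New subject line, email 1", "predicted_lift": 0.05, "effort_hours": 1},
    {"test": "Resegment mid-funnel track", "predicted_lift": 0.25, "effort_hours": 10},
]

# Score each idea by expected lift per hour of work, highest first.
for t in backlog:
    t["score"] = t["predicted_lift"] / t["effort_hours"]

prioritized = sorted(backlog, key=lambda t: t["score"], reverse=True)
```

A big-lift idea can still land at the bottom if it's expensive to build, which is exactly the trade-off a prioritized backlog should surface.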
After you’ve laid the groundwork, you can start to run effective A/B tests for your lead nurturing program. Start with a spreadsheet that lists and prioritizes the tests you came up with. For each test, list what changes you’ll make, why, and what results you predict from your test. Of course, you’ll also want columns describing your actual results.
Your spreadsheet will help you in a few ways:
- It will serve as the basis for a rhythm of A/B testing. It's important to prioritize your test ideas and set them up in a schedule instead of running one-off tests. With a prioritized spreadsheet in front of you, you can easily set up the next test as soon as you finish one.
- It will help you get better at predicting. You can see the results of all your past tests, as well as your predictions, and use that data to make better guesses in the future.
- It will show progress. Your first few tests will probably fail, but your spreadsheet will show what you’ve learned from each test. Validated learning is extremely important to driving more revenue.
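When you fill in the actual-results columns, it also helps to check whether a difference between variants is likely real or just noise. The article doesn't prescribe a statistical method; one common choice for comparing two click-through rates, sketched here with hypothetical numbers and assuming reasonably large sends, is a two-proportion z-test:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates,
    using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test: variant A got 40 clicks on 1,000 sends,
# variant B got 65 clicks on 1,000 sends.
z = two_proportion_z(40, 1000, 65, 1000)
significant = abs(z) > 1.96  # roughly 95% confidence, two-sided
```

If `significant` is false, record the test as inconclusive rather than declaring a winner; calling noise a win is how nurture programs drift in the wrong direction.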
Beyond Lead Nurturing
The best part about incorporating experiments into your lead nurturing using the process above? You can very easily extend it to lots of other marketing activities as well.
For example, once you’ve successfully set up an experiment pipeline for lead nurturing, use the same spreadsheet to start running experiments for your user acquisition programs. Or expand your A/B tests to include new features, UI changes, sales outreach methods, or onboarding flows. You’re building a powerful machine; why not use it to power other parts of your business, too?