How to run more effective A/B tests: An Optichat recap
Good ideas are the foundation of our work as optimizers. Whether an idea comes from our own heads, a user testing session, or our website analytics, its source matters less than its quality.
Instead of one-off ideas, we need to equip ourselves with scalable strategies for brainstorming and running effective A/B tests with solid hypotheses. Hiten Shah, founder of CrazyEgg, co-founder of KISSMetrics, curator of SaaS Weekly, and author of hitenism.com, joined #optichat to discuss tactics for running effective A/B tests.
Here’s a recap of the questions and the answers from Hiten and the #optichat community.
What are the attributes of a good A/B test idea?
From Hiten:
A1: Good A/B tests are researched well, have a clear hypothesis and are documented before and after. #optichat
— Hiten Shah (@hnshah) December 1, 2015
- When A/B testing, make sure you’re documenting the experiments before and after. This enables faster experimentation and growth.
- An A/B test hypothesis is an educated guess informed by data from user behavior, research and past experiments.
- When A/B testing, doing research helps make sure you’re focused on the right things.
From the community:
- If you ask “what should we test next”, you have no idea what you’re doing. Do conversion research! –@peeplaja
- Research based hypothesis. Every test should answer a question. –@MikeStLaurentWF
- Lay the foundation for a few deeply-researched A/B tests. Testing begets testing, so often the strategy becomes a domino effect. –@KMRivard
Helpful articles:
- How to Come Up with More Winning Tests Using Data on ConversionXL
See all the answers to question 1 here.
Why is it important to spend time researching A/B tests?
From Hiten:
- You’ll severely limit your long-term growth by not doing research when coming up with A/B test ideas
- Research helps you focus on creating better experiments that are more likely to improve your funnel & reduce user friction
A2: Research will allow you to take a long list of potential tests and turn it into a shorter list of high-impact tests #optichat
— Samantha Sawyer (@SamanthaSawyer) December 1, 2015
From the community:
- It’s important to research your A/B tests because it increases chances of success. Too much failure kills org support. –@seanellis
- I keep a log of all tests – what we tested, what we learned, how long the tests ran, why we felt it worked/didn’t work, etc. –@Beymour
- Research keeps me from re-inventing the wheel and focuses me on experiments that will move the needle. –@bl_bennett12
- Research helps you find the global maximum and get over the local maximum. –@JenPwns
- Research takes your optimization program away from opinion and toward empathy. –@avramescu
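A test log like the one @Beymour describes doesn’t require special tooling. Here’s a minimal sketch in Python of the kind of record worth keeping for every experiment; the field names and sample entry are illustrative, not a standard schema:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical structure for a test-log entry; the fields mirror what
# @Beymour tracks: what was tested, what was learned, how long it ran,
# and why it worked or didn't.
@dataclass
class ExperimentRecord:
    name: str         # what we tested
    hypothesis: str   # the educated guess, informed by research
    start: date
    end: date
    result: str       # "win", "loss", or "inconclusive"
    learnings: str    # why we think it worked or didn't

log: list[ExperimentRecord] = []
log.append(ExperimentRecord(
    name="Homepage CTA copy",
    hypothesis="Benefit-led copy will lift signups, because survey "
               "responses show visitors don't understand the product",
    start=date(2015, 11, 2),
    end=date(2015, 11, 16),
    result="win",
    learnings="Clarity beat cleverness; test pricing-page copy next",
))
```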
See all the answers to question 2 here.
What are the main sources of data you use in A/B test research?
From Hiten:
A3: For A/B testing research, 3 main sources of data are: 1. Qualitative 2. Quantitative 3. Your past experiment history #optichat
— Hiten Shah (@hnshah) December 1, 2015
- Quantitative data mostly comes from marketing channels, funnels, retention cohorts and product usage data
- Qualitative data can come from surveys, interviews, user research and any other ways to understand why users do what they do
- If you want to get growth by A/B testing, you have to get the knowledge out of people’s heads and start documenting it.
From the community:
A3: without a doubt @usertesting for customer insights, @CrazyEgg for heatmaps and @Optimizely for #abtesting #optichat
— Leah O'Callaghan (@laocal) December 1, 2015
- We get a good amount of testing ideas from talking to the non-web/tech ppl in the office. What are their UX pain points? –@lmaaa
- Voice of customer, analytics, heuristics, eye tracking, click tracking, time on site, etc. –@FormosaChris
- Industry standards – blog ideas, industry-standard conversion rates, competitors’ sites – can round out your data sources too. –@heartofseeds
- Also don’t forget results of A/B test can and SHOULD influence what you test next –@BryantGarvin
Helpful articles:
- Use analytics data to inform your hypotheses on the Optimizely Knowledge Base
- How to use indirect data to inform ideas on the Optimizely Knowledge Base
- Creating an Excellent Hypothesis with Website Analytics and User Research on the Optimizely Blog
See all the answers to question 3 here.
Besides being pretty, what are heatmaps good for?
From Hiten:
- Heatmaps help improve your website by showing you which page elements to add, remove, or move when A/B testing.
- Heatmaps help with A/B testing because they show you exactly where people are clicking and where they are not
- Heatmaps can be a very useful tool for understanding what people are doing on your website
From the community:
A4: Heatmaps are great for mobile responsive design. Every platform has different functionality and design needs. #optichat
— Laura J Cryst (@LJCryst) December 1, 2015
- When using heatmaps for your research, remember to look at not only your desktop heatmap, but also mobile and tablet. –@idowebid
- Heat maps highlight the gap between what you’ve designed and what users are doing. –@MRSallee
- What elements of your page are users paying attention to? Are you guiding them along the journey or are they getting lost? –@milwaukeePPC
- Treat heatmaps as an analytics source. You can view each page as a step in the funnel to ID friction points and bottlenecks. –@avramescu
See all of the answers to question 4 here.
What are some great ways to prioritize and keep track of your test ideas?
From Hiten:
A5: Flexible workflow tools are commonly used by companies to track tests: @trello @basecamp @asana @quip and even @slackhq. #optichat
— Hiten Shah (@hnshah) December 1, 2015
- Whatever you do, if you just start documenting your experiments, prioritization becomes much more straightforward.
- This growth canvas tool is specifically designed to help in prioritizing & tracking A/B tests
From the community:
A5: For https://t.co/9fUA7AHRxP we use ICE score – Impact, Confidence and Ease with 1-10 scale rating #optichat pic.twitter.com/TL1mMzI8Ke
— Sean Ellis (@SeanEllis) December 1, 2015
- We always use mind maps at our office. First you brainstorm, then you prioritize and group related ideas. –@egaal
- There’s a lot of Excel in my life… –@SamanthaSawyer
- I really like @EffectiveExperi as a tool for prioritizing and tracking test ideas and archiving results. –@granttilus
- The folks at @monkeychatter are working on a great project management tool for A/B tests. –@takeshiyoung
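The ICE scoring Sean Ellis describes above is simple enough to run in a spreadsheet or a few lines of code. A minimal sketch, averaging the three 1–10 ratings (one common convention); the ideas and ratings are made up for illustration:

```python
# Rate each idea 1-10 on Impact, Confidence, and Ease, then rank by
# the average score. Idea names and ratings are illustrative.
ideas = {
    "Shorten signup form":       {"impact": 8, "confidence": 7, "ease": 9},
    "Redesign pricing page":     {"impact": 9, "confidence": 5, "ease": 3},
    "Add homepage testimonials": {"impact": 5, "confidence": 6, "ease": 8},
}

def ice_score(scores: dict) -> float:
    return (scores["impact"] + scores["confidence"] + scores["ease"]) / 3

for name, scores in sorted(ideas.items(), key=lambda kv: ice_score(kv[1]),
                           reverse=True):
    print(f"{ice_score(scores):.1f}  {name}")
```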
Helpful articles:
- Archiving Test Results: How Effective Organizations Do It on ConversionXL
- How to Prioritize Conversion Rate Tests Using PIE on the WiderFunnel Blog
See all of the answers to question 5 here.
How do you think about win rate? Is this a metric A/B testing teams should try to improve?
From Hiten:
A6: If you have a very high win rate (4+ tests out of 10 win), you should consider running bolder / riskier tests. #optichat
— Hiten Shah (@hnshah) December 1, 2015
- If you have a very low win rate (1 test out of 10 wins), you should start informing your experiments with more research
- Whether a test wins or loses, make sure you learn and keep testing. That’s how you’ll get a higher win rate in the long run
From the community:
A6: Yes! 3 metrics you need: 1) number of tests run (per year) 2) % of winning tests 3) uplift per successful experiment #optichat
— Peep Laja (@peeplaja) December 1, 2015
A6: A few early wins can help build org. buy-in, but it's important to keep your eye on the long-term metrics. #optichat
— Yeesheen (@heartofseeds) December 1, 2015
- Be careful with incentivizing A/B test win rates; we tend to forget about metrics that also shouldn’t go down. –@davidjbland
- NOO! It’s about insights. You take what you learn, modify your approach and get going again. Growth doesn’t happen on day 1! –@AswinKumar
- Losers can also save you from going down costly paths. “Aren’t you glad you tested this first?” –@avramescu
- It’s hard to maintain a high win rate, but my team is averaging around 50% wins on our tests. Maybe we’re too conservative! –@seanellis
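The three program-level metrics Peep Laja lists above fall straight out of a documented test log. A minimal sketch, using illustrative records:

```python
# Compute 1) tests run, 2) % of winning tests, 3) average uplift per
# winning test from a test log. The records here are made up.
tests = [
    {"name": "CTA copy",      "result": "win",          "uplift": 0.12},
    {"name": "Pricing table", "result": "loss",         "uplift": 0.0},
    {"name": "Signup form",   "result": "win",          "uplift": 0.07},
    {"name": "Hero image",    "result": "inconclusive", "uplift": 0.0},
]

tests_run = len(tests)
wins = [t for t in tests if t["result"] == "win"]
win_rate = len(wins) / tests_run
avg_uplift = sum(t["uplift"] for t in wins) / len(wins)

print(f"Tests run: {tests_run}, win rate: {win_rate:.0%}, "
      f"average uplift per winning test: {avg_uplift:.1%}")
```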
See all of the answers to question 6 here.
Which A/B testing cliché would you debunk?
From Hiten:
- Testing isn’t just about all the tactics you read. You have to actually test them for yourself!
- All those tactics you read about can’t get you growth without a system for experimenting and learning from your own users
From the community:
@Optimizely A7: Don't always copy Amazon. If it works for someone else, doesn't mean it will work for you! #optichat
— Michael St Laurent (@MikeStLaurentWF) December 1, 2015
- Testing for testing’s sake. Just because you can turn something on/off doesn’t mean you will get results. –@altos
- “I don’t have enough time to test.” #excuses –@lmaaa
- The Big Orange Button is always a winner. –@ohdubz
- Debunk “everyone should test and always be testing”. Only if you have the volume –@CherryManrao
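@CherryManrao’s point about volume can be checked with a quick calculation. Here’s a rough sketch using the standard normal-approximation sample-size formula for comparing two proportions (two-sided alpha = 0.05, 80% power); the baseline conversion rate and hoped-for lift are made-up figures:

```python
# Visitors needed per variant to detect a given relative lift over a
# baseline conversion rate. Uses the normal approximation with fixed
# z-values for a two-sided 5% significance level and 80% power.
Z_ALPHA = 1.96  # two-sided 5% significance
Z_BETA = 0.84   # 80% power

def visitors_per_variant(baseline: float, relative_lift: float) -> int:
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((Z_ALPHA + Z_BETA) ** 2 * variance / (p2 - p1) ** 2) + 1

# A 3% baseline and a hoped-for 20% relative lift need roughly
# 13,900 visitors per variant:
print(visitors_per_variant(0.03, 0.20))
```

If your page won’t see that much traffic in a reasonable window, the test can’t reliably detect the lift, so a bolder change (or a higher-traffic page) is the better bet.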
See all of the answers to question 7 here.
How can you convince your company to be more experimental in 2016?
From Hiten:
A8: If you aren't experimenting, you are at risk of a competitor learning faster than you! #optichat
— Hiten Shah (@hnshah) December 1, 2015
- In a world full of competition, your best advantage comes from understanding your customers better than anyone else
- Look around your company, and if people are making decisions that seem like guesses, it’s time to start experimenting.
- Continuous testing helps you learn how to solve customer problems as they come up, instead of after competitors have created solutions.
From the community:
A8: Start using words like hypothesis & hypothesize in everyday language around the office to get people accustomed to it #optichat
— Cara Harshman (@CaraHarshman) December 1, 2015
- Keep telling people that “it’s just an experiment,” not a decision to go with one thing or another –@caraharshman
- If you are not testing you are essentially gambling business performance on gut feeling –@bhas
- Document a rigorous process, begin to show small wins, build a culture of test ideation –@JoshRodriguez
- Answer every single comment with “You Should Test That.” Nobody knows the impact of anything without testing. –@MikeStLaurentWF
- I like to start with a list of things that need to be fixed – with supporting @usertesting evidence. This starts momentum. –@seanellis