Lean hypothesis testing is an approach to agile product development that’s designed to minimize risk, increase speed of development, and hone product market fit by building and iterating on a minimum viable product (MVP).
The minimum viable product is a concept famously championed by Eric Ries as part of the lean startup methodology. At its core, the concept of the MVP is about creating a cycle of learning. Rather than devoting long development timelines to building a fully polished end product, teams working through lean product development build in short, iterative cycles. Each cycle is devoted to shipping an MVP, defined as a product that’s built with the least amount of work possible for the purpose of testing and validating that product with users.
In lean hypothesis testing, the MVP itself can be framed as a hypothesis. A well-designed hypothesis breaks down an issue into problem, solution, and result.
When defining a good hypothesis, start with a meaningful problem: an issue or pain-point that you’d like to solve for your users. Teams often use multiple qualitative and quantitative sources to scope and describe this problem.
Imagine that you notice a problem: users are abandoning a signup flow at a higher rate than expected. After doing some research, you find that the signup process takes longer than the industry average, and you've seen user feedback about the slowness of your application. The signup flow also doesn't make it clear what the benefit of your product is.
You offer a solution. The solution might be a feature, product idea, or product direction that addresses the problem described. In the case of our example, the solution might be to make the signup process faster by reducing the number of form fields and making the value proposition clearer. This serves as your hypothesis, on which you can then iterate.
You may want to offer a rationale or theory about why this solution is the right one. In our example, this theory is that users are abandoning the signup process because it takes too long and they don't understand the value.
When testing a hypothesis, it is important to establish a significance threshold (p-value) up front and to use a sample size large enough to avoid statistical errors. For example, if you don't account for statistical significance, you can run into a type I error: rejecting the null hypothesis and concluding your test has an effect when it actually has none.
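One way to build intuition for type I errors is an A/A simulation: run many experiments where both variants are truly identical, and count how often a standard significance test still reports an "effect." The sketch below (hypothetical helper names, standard two-proportion z-test, 10% conversion rate assumed for illustration) shows that at a 0.05 threshold, roughly 5% of identical comparisons come out "significant" by chance alone.

```python
import random
from statistics import NormalDist

random.seed(42)

def two_proportion_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# A/A test: both variants share the same true 10% conversion rate,
# so every "significant" result is a false positive (type I error).
trials = 1000
false_positives = 0
for _ in range(trials):
    a = sum(random.random() < 0.10 for _ in range(1000))
    b = sum(random.random() < 0.10 for _ in range(1000))
    if two_proportion_pvalue(a, 1000, b, 1000) < 0.05:
        false_positives += 1

print(false_positives / trials)  # hovers around 0.05
```

This is exactly why picking a threshold before the experiment matters: the false-positive rate is a property of the test itself, not evidence that your change worked.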
If you don't properly apply the scientific method to your testing, you can mistakenly attribute benefits to your change that are really due to random chance. You can use our Sample Size Calculator to select the sample size needed for an experiment, given your baseline conversion rate and desired confidence level.
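Under the hood, sample size calculators of this kind typically use the normal-approximation formula for comparing two proportions. Below is a minimal sketch of that calculation (the function name and the example numbers are illustrative, not Optimizely's actual implementation): given a baseline conversion rate, a minimum detectable effect, a significance level, and desired statistical power, it returns the approximate visitors needed per variant.

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Approximate sample size per variant for a two-proportion z-test.

    baseline: control conversion rate (e.g. 0.20 for 20%)
    mde: minimum detectable effect, absolute (e.g. 0.02 for +2 points)
    alpha: significance level (two-sided)
    power: probability of detecting a true effect of size mde
    """
    p1 = baseline
    p2 = baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Detecting a 2-point lift on a 20% baseline takes thousands of
# visitors per variant; a 5-point lift takes far fewer.
print(sample_size_per_variant(0.20, 0.02))
print(sample_size_per_variant(0.20, 0.05))
```

The takeaway for lean hypothesis testing: the smaller the effect you want to detect reliably, the larger the sample you need, which is worth knowing before you commit an MVP iteration to an experiment.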