Lean hypothesis testing is an approach to agile product development that’s designed to minimize risk, increase development speed, and hone product-market fit by building and iterating on a minimum viable product (MVP).
The minimum viable product is a concept famously championed by Eric Ries as part of the lean startup methodology. At its core, the concept of the MVP is about creating a cycle of learning. Rather than devoting long development timelines to building a fully polished end product, teams working through lean product development build in short, iterative cycles. Each cycle is devoted to shipping an MVP, defined as a product that’s built with the least amount of work possible for the purpose of testing and validating that product with users.
In lean hypothesis testing, the MVP itself can be framed as a hypothesis. A well-designed hypothesis breaks down an issue into problem, solution, and result.
When defining a good hypothesis, start with a meaningful problem: an issue or pain point that you’d like to solve for your users. Teams often use multiple qualitative and quantitative sources to scope and describe this problem.
Imagine that you notice a problem: users are abandoning your signup flow at a higher rate than expected. After doing some research, you find that the signup process takes longer than the industry average, and you’ve seen user feedback about the slowness of your application. The signup flow also doesn’t make the benefit of your product clear.
You offer a solution. The solution might be a feature, product idea, or product direction that addresses the problem described. In the case of our example, the solution might be to make the signup process faster by reducing the number of form fields and making the value proposition clearer. This serves as your hypothesis, on which you can then iterate.
You may want to offer a rationale or theory about why this solution is the right one. In our example, this theory is that users are abandoning the signup process because it takes too long and they don't understand the value.
When testing a hypothesis, it is important to set a significance threshold (p-value cutoff) in advance and to collect a large enough sample size to avoid statistical errors. For example, if you don’t take statistical significance into account, you can run into a type I error (a false positive), in which you conclude that your test has an effect when in reality the null hypothesis is true and it has no impact.
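To make this concrete, here is a minimal sketch of a two-proportion z-test, the standard way to compute a p-value for the difference between two conversion rates. The function name and the numbers are illustrative assumptions, not taken from the article.

```python
# Illustrative two-proportion z-test; names and numbers are hypothetical.
from statistics import NormalDist

def two_proportion_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided p-value for the difference between two conversion rates,
    using the pooled normal approximation."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that both variations convert equally
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 52 vs. 64 signups out of 1,000 visitors each looks like a lift,
# but the p-value is well above 0.05, so it could easily be random noise.
p = two_proportion_p_value(52, 1000, 64, 1000)
print(round(p, 3))
```

Declaring a winner here just because 64 is bigger than 52 would be exactly the type I error described above: the observed difference is consistent with chance.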
If you don’t properly apply the scientific method to your testing methodology, you can mistakenly attribute benefits to your change that are actually due to random chance and are not statistically significant. You can use our Sample Size Calculator to select the right sample size for an experiment, given your baseline conversion rate and desired confidence level.
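The idea behind such a calculator can be sketched with the standard two-proportion sample-size formula. This is an assumption-laden illustration using the normal approximation, not a description of any particular calculator; the function name and example numbers are hypothetical.

```python
# Illustrative sample-size estimate for an A/B test (normal approximation).
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, min_detectable_effect,
                              alpha=0.05, power=0.8):
    """Approximate visitors needed per variation to detect a given
    relative lift with a two-sided two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% significance
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_effect)  # expected rate with lift
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p2 - p1) ** 2)
    return int(n) + 1

# A 5% baseline signup rate, aiming to detect a 20% relative lift:
print(sample_size_per_variation(0.05, 0.20))
```

Note how quickly the required sample grows as the effect you want to detect shrinks; this is why small expected lifts demand long-running experiments.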