

Create a hypothesis that drives your business goals

This article will help you:

  • Form your own data-driven hypothesis
  • Create strong hypotheses using "If [variable], then [result], because [rationale]"
  • Learn from “winning” and “losing” hypotheses  

An effective hypothesis will drive your A/B testing strategy and help you run better A/B and multivariate tests. 

In the scientific method, you form a hypothesis and test it through experimentation to collect measurable results. This process encourages you to know what to test, why you're testing it, and how you'll measure results. In this article, we'll cover the basics of hypothesis-driven testing. We'll also provide activities to form effective hypotheses.

Hypothesis-driven testing (HDT) helps you tie your iterative testing program to your organization's business goals. It enables a widespread experimental mindset and culture, while also imposing a structure that ties back to data-centric prioritization and decision-making.

Hypothesis-driven testing sets your company up for long-term gains:

  • Build a mechanism for constant inquiry and learning, leading to improved brainstorming processes.
  • Construct a reliable, consistent, and informed (based on data) understanding of your unique online business.
  • Size up the potential (and proven) benefits of your tests and prioritize accordingly.
  • Drive the direction of your marketing and product offerings.

What is a hypothesis?

A hypothesis is a bold, testable statement that can be confirmed or rejected with data. When crafting your hypothesis, you should draw from your experience, marketing intuition, and prior data to probe a theory about customer experience on your website.

With hypothesis-driven testing, your team places a metaphorical "stake in the ground" and sets off on a path toward improving your customer experience and key business metrics.

The nuts and bolts

A complete hypothesis has three parts. The formula consists of the variable you plan to modify, a predicted, quantifiable outcome that results from that change, and a rationale that connects the outcome to your theory about your customer experience.

If your hypothesis has all three components, you should be able to write it out in the form of the following sentence:

"If ____, then ____, because ____." 

The Variable (or, what to test)

When creating a hypothesis, focus on ideas that tell a story about your key metrics rather than on minor aspects of site design. Evaluate your leading indicators, high-traffic pages, and most valuable flows to identify variables with the greatest potential for business impact.

Choose one or more variables to test. How might modifying this variable change your customers’ behavior? Would rearranging existing elements, deleting distracting content, or adding entirely new content or flows impact your business goals?

One of the benefits of A/B testing is the ability to link changes on the page to quantifiable results. You can create tests that manipulate more than one variable at a given time. However, the more variables there are that interact in your experiment, the longer it will take to get statistically significant results.
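As a rough illustration of why extra variables lengthen a test, the sketch below applies a standard two-proportion sample-size formula (95% confidence, 80% power). The baseline and lift figures are hypothetical:

```python
import math

def visitors_per_variation(baseline_rate, expected_rate,
                           z_alpha=1.96, z_beta=0.8416):
    """Approximate visitors needed per variation to detect the difference
    between two conversion rates at 95% confidence and 80% power."""
    p_bar = (baseline_rate + expected_rate) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(baseline_rate * (1 - baseline_rate)
                                      + expected_rate * (1 - expected_rate))) ** 2
    return math.ceil(numerator / (expected_rate - baseline_rate) ** 2)

n = visitors_per_variation(0.10, 0.12)  # hypothetical 10% -> 12% lift
# Each additional variation needs the same traffic again, so total traffic
# (and therefore test duration) scales with the number of variations.
for variations in (2, 4, 8):
    print(variations, "variations need about", variations * n, "visitors")
```

Smaller expected lifts require even more visitors per variation, which is why isolating one meaningful variable usually reaches significance fastest.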

The Result

The predicted result or outcome ties your hypothesis directly back to your key business metrics. In formulating this part of the hypothesis, consider the change that you expect your modified variable to have on your business goals. Your prediction may be informed by best practices in your industry, existing data about your website, and your intuition and experience.

You do not have to state your prediction as a numerical gain or loss (a 15% reduction in bounce rate or a 25% increase in click-throughs), but the result should be specific and measurable. It should always tie back to your key metrics and ultimately help drive monetary value for your business.

The Rationale

Numbers are compelling, but so are stories. The rationale is the heart of the hypothesis. It is the "why": a distilled interpretation of the data that your hypothesis is based on. It’s where your team will test its theory about the customer experience in relation to the predicted outcome. This is your stake in the ground! Will the change you make to the variable produce an incremental effect or a large-scale effect? Why do you think so? (Good luck and fingers crossed!)

Best practices


In order for a hypothesis to be testable, you need to be able to measure both the change that you’re making and the effect of that change. For example, you may want to remove breadcrumb navigation from your checkout page, with the anticipated result that this change will increase conversions. The change between the original and the variation is the presence or absence of breadcrumbs. The effect of that change is measured as the difference in the number of conversions.
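To sketch how that effect is measured, here is a minimal two-proportion z-test comparing conversion counts for the original and the variation. The traffic and conversion numbers are made up for illustration; in practice, your testing platform computes significance for you:

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal-approximation two-sided p-value via the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical data: original checkout page vs. breadcrumb-free variation.
p = two_proportion_p_value(conv_a=500, n_a=5000, conv_b=600, n_b=5000)
print(f"p-value: {p:.4f}")  # a small p-value suggests a real difference
```

If the p-value falls below your chosen threshold (commonly 0.05), you have measurable evidence that removing the breadcrumbs changed conversions.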

A Learning Opportunity

Hypothesis-driven testing presents multiple opportunities to learn more about your customers’ behavior. It also creates a cycle of learning that can be iterated on to drive improvement of your key business metrics.

  • Ask questions about your business and your site. Seek answers from data that already exists. Note significant metrics.
  • Formulate your hypothesis.
  • Set up and run an experiment that tests your informed hypothesis.
  • Analyze your results. Document your learnings.
  • Use your results to ask new, data-driven questions.
  • Repeat the cycle.

Connected to Business Goals

Keep your hypotheses focused and impactful by explicitly aligning testing goals to business goals. Ideally, your experiment will drive meaningful insight that helps to grow your business. What are your company-wide goals and KPIs? If your hypotheses are oriented toward improving these metrics, you’ll focus your efforts on performing experiments that matter.



As with all experiments, you must be prepared for the possibility that your hypothesis could be disproven. Your results may contradict your initial expectations. While this result doesn’t lead to a short-term win, it gives you fuel for better understanding the customer! Share this information with your team. To learn more, see our article on documenting and sharing test results.



The most powerful experiments isolate one variable and eliminate extraneous factors. For this reason, we suggest that you limit the number of variables tested in each experiment and that you DO NOT introduce any changes once the experiment has started. To learn more, see our article on changing an experiment while it is running.

Examples of strong and weak hypotheses

Example 1 (Strong)

If we personalize the Call to Action to users who clicked on our poker ad campaign, then we will see a 20% lift in click goals, because our heatmap data shows that users who focus on poker-related copy on the page click through our links 20% of the time.

Why this is strong

  • Tests changes to one element
  • Predicts a specific, measurable result
  • Hypothesis drives learning about one segment of user base: customers who click on the poker ad and their affinity to personalized poker messaging
  • Strong rationale
  • Connected to key business interest

Example 1 (Weak)

If we personalize the Call to Action to users who clicked on our poker ad and remove distracting images around the page, we’ll see an increase in revenue on our site.

Why this is weak

  • Two elements (CTA and images) that are unrelated are being tested simultaneously
  • The result (increase in revenue) is not specific in relation to the variables
  • The rationale is not obvious

Example 2 (Strong)

Removing the second page of our lead generation form will increase completion rates (conversions) by 10%. Our drop-off rate is higher than the industry average, and we are adjusting based on recommendations from our user experience expert, grounded in user research.

Why this is strong

  • Proposes specific, testable changes
  • Informed by data: industry standards and user research
  • Takes a strong stance: predicts a 10% increase

Example 2 (Weak)

Removing elements from our sign-up form will increase completion of the form. Our VP of Marketing suggested that we try this for one week.

Why this is weak

  • Proposed changes aren’t specific
  • Changes are not clearly tied to a business objective
  • Rationale not obvious
  • The restricted time limit might not produce a winning variation and is not tied to statistical significance requirements

With hypothesis-driven testing, you’ll consistently generate data that energizes and refines your testing process regardless of whether a specific hypothesis "wins" or "loses." The goal of HDT is not just short-term gains (conversion wins), but also a deeper understanding of the customer and personalized messaging that enhances your customer experience. Hypothesis-driven testing will often lead you to ask more questions.



What happens when you get it wrong?

Your team has actively engaged with customer data before and after your test is executed, whether your hypothesis is ultimately confirmed or not. This culture of continuously turning data about your customer into action will be the strategic edge that catapults both short-term lift and long-term customer engagement. Document and share your results, bring your insights to your next hypothesis brainstorm, and test on!

Start with what you know

Ready to begin? Before proposing a hypothesis, engage in discovery. One key difference between a well-formulated hypothesis and a guess is data. A hypothesis should be informed by what you know already. Dive into your existing data sources such as analytics and carefully observe and consider your customer’s journey through your site.

Use data that is strongly linked to your company roadmap to ensure that you’re focusing on areas of significant impact rather than making UX changes in isolation. How does your company define success on the web?

Direct Data

  • Visitor path and popular pages
  • Brand vs. non-brand search terms
  • Voice of the customer / surveys
  • Interviews and feedback forms
  • Previous results / other analytics
  • User testing

Indirect Data

  • Competitive overview
  • Shared industry data
  • Industry leadership and academic work
  • Indirect competitors
  • Eye-tracking and heat maps
  • Open questions/”phenomenon”
  • Your unique perspective

Asking questions about your data will lead you toward forming a theory about the visitor’s experience. If there is a conversion funnel, where are visitors falling out? What pain points or usability issues might be caused by your website design? 

Advanced testing culture eventually produces in-depth insights into customer behavior, motivation, and pain points. With this data, you can take action to connect with customers online, and beyond.