Declare a winner in an A/B test

Business Benefits

Make decisions driven by user data and audience testing.


Define your goal metric by referring back to the testing strategy that you established when you set up your A/B test.

Don’t let secondary variables distract you from your core focus; they will dilute your data. For example, if you set up two landing pages with lead conversion as your primary metric, don’t get distracted by other page metrics like bounce rate or time spent on page.

Run a statistical significance test, aiming for at least 95% significance, to determine how likely it is that the difference in your A/B results is not due to random error.

Statistical significance tells you how likely it is that the difference you’re seeing comes from the specific change between Version A and Version B rather than from random chance.

Use a free significance calculator, such as the A/B test calculator from HubSpot or SurveyMonkey (or work the numbers yourself, as sketched after the steps below):

  • Enter your data from Version A.
  • Enter your data from Version B.
  • Follow the on-screen prompts to calculate the statistical significance of the difference.
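If you prefer to see the math a calculator like this typically runs, the sketch below applies a two-proportion z-test to hypothetical visitor and conversion counts; the function name and numbers are placeholders, not part of any specific tool.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, visitors_a, conv_b, visitors_b):
    """Return (uplift, p_value) for the difference between two conversion rates."""
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    # Pooled conversion rate under the null hypothesis of "no real difference".
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    uplift = (rate_b - rate_a) / rate_a
    return uplift, p_value

# Hypothetical counts: 2,000 visitors per variation.
uplift, p = two_proportion_z_test(conv_a=100, visitors_a=2000,
                                  conv_b=135, visitors_b=2000)
print(f"Uplift: {uplift:.1%}, p-value: {p:.3f}")
# Significant at the 95% level when the p-value is below 0.05.
```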

Increase your statistical significance by expanding your sample size or creating a bigger uplift if your original result is not yet significant.

Your A/B test must reach statistical significance before you can confidently declare a winner. The sketch after the list below shows how sample size and uplift trade off. If necessary, you can:

  • Run a longer test, so more people go through the A/B variations and you expand your sample size.
  • Send more traffic to your A/B test, which also increases your sample size.
  • Make a bolder change between Version A and Version B, which can create a larger uplift. Examples include more dramatic changes to buttons, headlines, CTAs, or images.
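For a rough sense of that trade-off, the sketch below estimates the visitors needed per variation, assuming a two-sided two-proportion test at 95% significance and 80% power; the baseline rate and uplifts are hypothetical.

```python
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, min_uplift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variation to detect min_uplift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_uplift)          # rate Version B would need to hit
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% significance
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2) + 1

# Hypothetical 5% baseline conversion rate: bolder changes need far less traffic.
for uplift in (0.10, 0.20, 0.50):
    n = sample_size_per_variation(baseline_rate=0.05, min_uplift=uplift)
    print(f"{uplift:.0%} uplift -> ~{n:,} visitors per variation")
```

Because the required sample scales roughly with the inverse square of the uplift, a bolder change can sharply cut the traffic or time your test needs.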

Change nothing and stick with your original version if there is no statistically significant difference.

If neither variation performed statistically better, that’s also an important result. It suggests the variable you tested doesn’t meaningfully affect your audience’s behavior, and nothing needs to change. You can still move on to testing other aspects of your website, email, or campaign in additional A/B tests.

Pick the winning version and disable the losing variation in your A/B testing platform if you do have a statistically significant difference.

Once your sample size or uplift is large enough to reach 95% statistical significance, remove the underperforming version of your A/B test.

Whether you’re using an email marketing platform like ActiveCampaign or Mailchimp, or a website testing platform like HubSpot, navigate to your dashboard and follow the on-screen prompts to remove the losing variation.
