Monitor your A/B test

Business Benefits

Ensure your test is error-free and decide when to declare a winner.


Review your hypothesis before you analyze your data, and stay focused on the variable you set out to test.

Don’t change a parameter mid-test, as this will skew your results. Many people shift their focus as soon as they notice a minor movement in the data; resist that urge and stick to your original hypothesis.

Test only one variable in each test.

Changing more than one variable in a single A/B test can skew your results, because it becomes difficult to understand which factor is driving them.

You can test multiple variations of one variable without clouding your data. For example, if your test is on CTA button copy, you can run three variations of button copy. Do not, however, run three variations of button copy alongside two headline variations.

Avoid running tests around holidays or seasonal shifts, as they can skew data and produce inaccurate results.

Website traffic and user activity are volatile around holidays and seasonal shifts because of vacations, online sales, and other atypical behavior during these periods.

Run A/B tests for seven days minimum, and longer if your website traffic is low.

Use a traffic calculator that will tell you how long your test should run and how large a sample size you will need (a sketch of the underlying arithmetic follows the list) based on:

  • The minimum detectable effect (MDE): the smallest improvement over the baseline that you want the test to be able to detect.
  • Your page’s conversion rate.
  • The statistical significance level you are aiming for.
  • The average daily traffic you receive.
  • The number of variations you are testing.
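
For illustration, here is a minimal sketch of the arithmetic such calculators typically run, assuming a two-sided two-proportion z-test at 80% power; the baseline rate, lift, and traffic figures are made-up examples:

```python
from math import ceil
from scipy.stats import norm

def sample_size_per_variation(baseline_rate, relative_mde, alpha=0.05, power=0.80):
    """Approximate visitors needed per variation for a two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)  # conversion rate if the variant wins
    z_alpha = norm.ppf(1 - alpha / 2)        # two-sided significance threshold
    z_beta = norm.ppf(power)                 # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Example: 5% baseline conversion rate, 10% relative lift, 2 variations,
# and 1,000 visitors per day split across the variations.
per_variation = sample_size_per_variation(0.05, 0.10)
variations, daily_traffic = 2, 1000
days = ceil(per_variation * variations / daily_traffic)
print(f"{per_variation} visitors per variation, about {days} days of traffic")
```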

Monitor your campaign for setup failures and confirm that you are receiving data once the test launches.

Some issues to look out for are:

  • If no data is coming through, this could signal a failed implementation of the tracking code on your website or in your testing tool.
  • If a change to your website interrupts your test, it could cause a steep drop in conversions, clicks, or other metrics. If this occurs, stop the test immediately, locate and fix the problem, and restart the test from scratch.
  • Check your test every day for steep drops in metrics; a simple day-over-day check is sketched below this list. The more traffic your site receives, the more often you should check in, because an unnoticed problem could cost you hundreds of conversions and skew your data.
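
As a rough illustration of that daily check, the sketch below flags day-over-day drops in conversion rate; the 50% threshold and the example rates are assumptions, not values from any particular testing tool:

```python
def detect_steep_drops(daily_rates, max_drop=0.5):
    """Flag days where the conversion rate fell by more than `max_drop`
    (as a fraction of the previous day's rate)."""
    alerts = []
    for day, (prev, curr) in enumerate(zip(daily_rates, daily_rates[1:]), start=2):
        if prev > 0 and (prev - curr) / prev > max_drop:
            alerts.append((day, prev, curr))
    return alerts

# Example: a healthy test until the tracking code breaks on day 4.
rates = [0.051, 0.049, 0.052, 0.011, 0.010]
for day, prev, curr in detect_steep_drops(rates):
    print(f"Day {day}: rate fell from {prev:.1%} to {curr:.1%}, stop and investigate")
```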

Determine your statistical significance based on the traffic to your page.

These are guidelines for statistical significance levels:

  • If your site has a lot of traffic, the standard statistical significance level of 95% is an acceptable threshold to determine the winner of your test.
  • If you have less traffic, reaching that 95% threshold could take much longer.
  • Adjust your statistical significance threshold based on the traffic the page you are testing receives, going as low as 80% for pages with minimal traffic.

One way to compensate for a lower statistical significance threshold is to increase the time your test runs. For example, if you planned to run your test for six weeks but it isn’t reaching 95% statistical significance, you can drop the threshold to 85% and run the test for 10 weeks instead.
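
To make the threshold concrete, here is a minimal sketch of the kind of significance check testing tools typically run, using a standard two-sided two-proportion z-test; the visitor and conversion counts are made up for illustration:

```python
from math import sqrt
from scipy.stats import norm

def confidence_level(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: returns 1 - p-value, i.e. the
    confidence that the two conversion rates genuinely differ."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 1 - 2 * (1 - norm.cdf(abs(z)))

# Example: control converts 500 of 10,000 visitors, variant 560 of 10,000.
conf = confidence_level(500, 10_000, 560, 10_000)
print(f"Confidence: {conf:.1%}")  # compare against your 80-95% threshold
```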

Continue to test and try new variations.

Many A/B tests fail. Don’t be discouraged by these failures; they help you home in on the variable that will make a difference in your marketing strategy.

A/B testing is not a perfect science. Re-test variables from past A/B tests to validate your results, and combine variations that have won previously to find the best combination of variables and optimize your campaign performance.
