Gain statistically significant A/B test results

Business Benefits

Make your optimization tests reliable and use quantitative data on user behavior to improve your website.


Sign up for an A/B testing tool, such as VWO or Google Optimize, and add the JavaScript snippet the tool requires to your site.

Google Optimize is a good choice for trying out this kind of testing before investing more heavily. It is free and simple enough to implement.
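For illustration, a typical Optimize installation is a single script tag placed near the top of each page's head; the OPT-XXXXXXX container ID below is a placeholder for the one your account gives you:

    <!-- Load the testing tool as early as possible to reduce page flicker. -->
    <!-- OPT-XXXXXXX is a placeholder container ID; replace with your own. -->
    <script src="https://www.googleoptimize.com/optimize.js?id=OPT-XXXXXXX"></script>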

Decide what you are going to test and how you will measure success, then configure the test in your testing tool.

For example, in a newsletter signup test, the measure of success might be clicking on the subscribe button or reaching the thank you page.
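A minimal sketch of reporting that goal, assuming a Google Tag Manager-style dataLayer; the selector and event name are placeholders for whatever your own tool is configured to treat as the goal:

    // Report a click on the subscribe button as the test's conversion goal.
    // '#subscribe-button' and 'newsletter_signup' are assumed names; use the
    // selector and event your testing tool actually listens for.
    var button = document.querySelector('#subscribe-button');
    if (button) {
      button.addEventListener('click', function () {
        window.dataLayer = window.dataLayer || [];
        window.dataLayer.push({ event: 'newsletter_signup' });
      });
    }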

Create alternative versions of your page, and specify the percentage of users you wish to send to each variation.

Google Optimize lets you create as many versions of your page as you like and decide how much of the traffic to send to each.

If you only send a small percentage of users to a test variation, you’ll see a smaller overall drop in conversions if it performs worse than the current live version. However, the fewer users who see a variation, the longer it will take to get results.
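Your testing tool handles this assignment for you, but a minimal sketch of the underlying mechanics looks something like this (the weights are illustrative):

    // Weighted random assignment, e.g. 90% control / 10% variant.
    // Real tools also persist the assignment (usually in a cookie) so a
    // visitor keeps seeing the same version on every visit.
    function assignVariant(weights) {
      var r = Math.random();
      var cumulative = 0;
      for (var name in weights) {
        cumulative += weights[name];
        if (r < cumulative) { return name; }
      }
      return Object.keys(weights)[0]; // guard against rounding error
    }

    var version = assignVariant({ control: 0.9, variantA: 0.1 });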

Wait until enough people have converted that you can be statistically confident in the winning version.

How long you wait depends on your site’s traffic and conversion rate. For example, a website like Amazon only needs to run a test for a few minutes to gather enough data, but on many websites it can mean running the test for weeks.
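A rough way to estimate the wait is the standard sample-size rule of thumb for 80% power at a 5% significance level; the figures below are illustrative, not a substitute for your tool’s own calculator:

    // n per variant ≈ 16 * p * (1 - p) / delta^2, where p is the baseline
    // conversion rate and delta is the smallest absolute lift worth detecting.
    function sampleSizePerVariant(baselineRate, minDetectableLift) {
      var variance = baselineRate * (1 - baselineRate);
      return Math.ceil(16 * variance / Math.pow(minDetectableLift, 2));
    }

    // Example: 5% baseline rate, detecting a 1 percentage point lift.
    console.log(sampleSizePerVariant(0.05, 0.01)); // 7600 visitors per variant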

Test elements very closely linked with the successful action, and avoid testing elements that are loosely connected to conversion.

The more steps between the test point and the point of conversion, the more users drop out and the longer it takes to get results. For example, changing the text on a newsletter signup form is directly connected to the success criterion of pressing the Subscribe button. Testing the impact of a blog post title on newsletter signups is far less direct, so conversions will be rarer and results slower to accumulate.
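To see why, compare the number of measurable conversions each test produces; the rates below are made up purely for illustration:

    var visitors = 10000;
    var signupRate = 0.05;      // visitors who see the form and subscribe
    var clickThroughRate = 0.1; // blog readers who click through to the form

    // Testing the form itself: every visitor in the test can convert.
    console.log(visitors * signupRate);                    // 500 conversions

    // Testing a blog title: only click-throughs ever reach the form.
    console.log(visitors * clickThroughRate * signupRate); // 50 conversions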

Focus on micro-conversions by targeting smaller, more common actions, and avoid testing actions that happen only rarely.

For example, if you wanted to test blog post titles, consider testing how many users click to view the post, rather than whether they go on to sign up.

Limit the number of variations you test on a low-traffic website.

The more variations you create, the longer it will take to reach statistically significant results. On high-traffic websites, the opposite holds: the more variations you test, the more likely you are to find a version with a meaningfully higher conversion rate.
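Combining this with the sample-size sketch above, expected test duration grows linearly with the number of variations; the traffic figures are illustrative:

    // Each variation needs its own sample, so duration scales with the
    // number of versions you run against each other.
    function daysToSignificance(dailyVisitors, variations, samplePerVariant) {
      return Math.ceil((variations * samplePerVariant) / dailyVisitors);
    }

    console.log(daysToSignificance(500, 2, 7600)); // ~31 days
    console.log(daysToSignificance(500, 5, 7600)); // ~76 days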

Focus on big changes that will have a significant effect on conversion, and avoid testing very small ones.

Focus your tests on areas of the site that visitors consider essential, since changes there are more likely to have a measurable impact. A site like Google can afford to test 41 shades of blue for a link color, but low-traffic sites should concentrate on bigger, more consequential changes to get reliable results. One downside of big changes is that it can be hard to know which of the elements you changed is responsible for the conversion increase: was it the new newsletter copy or the redesigned subscribe button? Usability testing can help you answer those questions.

Supplement split testing with other approaches.

It is not always immediately apparent why one version wins over another, especially when you are making big changes. Use usability testing to identify problems and to test prototypes, so you learn more from each experiment.
