Set up an A/B test on a specific audience

Business Benefits

Boost conversions.


Segment your traffic source so that only your specific target audience reaches your test destination.

To ensure the validity of your A/B test, everyone hitting the test must come from your target audience. For example:

  • People who have never bought from you before.
  • Brand-new customers.
  • Returning customers.
  • Disengaged readers.
  • Location-specific audiences.
  • Device-specific audiences.

Some A/B tests allow you to choose your audience. Use the respective platform’s segmentation and audience demographic tools to narrow your focus. For example:

  • Email: Choose your recipients based on email lists, tags, and other identifying factors in your email platform.
  • Social media: Test how a specific ad performs for different audience segments.

Some A/B tests involve external audiences, where you might not always know who is seeing your test. Use paid traffic, targeted at specific users or audience demographics, to segment your audience. Example scenarios include:

  • Public landing pages.
  • Opt-in forms.

Calculate how many members of your specific audience need to see the A/B test to ensure a statistically significant outcome.

To achieve clear results, your A/B test must be seen by enough members of your target audience to give you a definitive answer. Most A/B testing tools that are native to a specific platform can calculate your sample size for you, depending on the type of test you’re running. For example:

  • Facebook Ads Manager has a built-in testing tool that calculates your required sample size.
  • Tools like Visual Website Optimizer (VWO), Hotjar, and Google Analytics let you test variables on landing pages.
  • You can also use a traffic calculator from AB Tasty or SurveyMonkey, or script the calculation yourself, as in the sketch below.
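
If you want to sanity-check those calculators, the underlying math is straightforward to script. Here is a minimal Python sketch of the standard two-proportion sample-size formula; the 5% baseline rate and the lift to 6% are hypothetical numbers, not recommendations.

```python
from math import sqrt, ceil
from scipy.stats import norm  # assumes scipy is installed

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a change in conversion
    rate from p1 to p2 with a two-sided two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for significance level
    z_beta = norm.ppf(power)          # critical value for statistical power
    p_bar = (p1 + p2) / 2             # pooled conversion rate
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return ceil(n)

# Hypothetical example: 5% baseline conversion, detecting a lift to 6%
print(sample_size_per_variant(0.05, 0.06))  # ≈ 8,158 visitors per variant
```

Note how quickly the required sample grows as the detectable lift shrinks; this is why small audiences usually force you to test bigger, bolder changes.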

Choose one variable to test with your specific audience, such as headlines, design, layout, button wording, or button colors.

Even small changes, such as a change to the call-to-action on a button, can lead to significant improvements in your conversion rates, clicks, or sales. Conduct A/B testing whenever you’re:

  • Changing a variable, such as a headline or a button.
  • Noticing a negative or stagnant trend in your data, such as reduced sales, higher bounce rates on a landing page, or a drop in opt-ins.

Always choose just one variable to test with your specific audience at a time. Testing multiple variables dilutes your data integrity and compromises the A/B test. If you have a very large audience, you can test multiple variables at once using multivariate testing.

Create a challenger variable from your existing control variable.

A/B tests can be conducted on any channel or platform that allows you to test a control variable (a pre-existing aspect of the page or channel, such as a button color or an email subject line) against a challenger variable (a new variation of that exact same aspect).

The control variable is your email, landing page, or website as it currently exists. The challenger variable is a duplicate in which only the single test variable has been changed.

Run the A/B test simultaneously and randomly.

Using your email or ad traffic segmentation, send 50% of your target audience to the control variation and 50% to the challenger variation. If you have a large sample audience, such as an A/B test sent to all past customers on your email list, consider doing a split test (see the bucketing sketch after this list):

  • Send 10% of your target audience Version A.
  • Send 10% of your target audience Version B.
  • Send the rest of your target audience the winning variation.
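
However you split the traffic, assignment should be random, and both variations should run at the same time. As an illustration, here is a minimal Python sketch of deterministic hash-based bucketing, a common pattern when you have a stable user identifier; the function names and split percentages are hypothetical.

```python
import hashlib

def bucket(user_id: str) -> float:
    """Map a user ID to a stable pseudo-random value in [0, 1)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest[:8], 16) / 0x100000000

def assign_variant(user_id: str) -> str:
    """50/50 split: the same user always lands in the same variation."""
    return "A" if bucket(user_id) < 0.5 else "B"

def assign_split_test(user_id: str) -> str:
    """10/10/80 split test: 10% see A, 10% see B, and the remaining
    80% are held back to receive the winning variation later."""
    b = bucket(user_id)
    if b < 0.10:
        return "A"
    if b < 0.20:
        return "B"
    return "holdout"
```

Hashing the user ID, rather than flipping a coin per visit, keeps each person in the same variation across repeat visits while still spreading traffic uniformly.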

Monitor the incoming data from your A/B test and watch for any data trends that indicate your A/B testing platform has a setup failure, such as missing event tracking on a key user action or conversion event.

Examples include:

  • No clicks or other user activity.
  • A dramatic difference between test variations, such as a 90% increase for Version A and no change for Version B.

This may mean your audience segmentation is not generating enough data or is not set up correctly. If so, pause the test and restart it once the underlying tracking or audience segmentation is fixed.
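
As a simple illustration, the kind of sanity check described above can be automated. The thresholds and metric names below are hypothetical; adjust them for your platform and traffic volume.

```python
def tracking_looks_broken(events_a: int, events_b: int,
                          min_events: int = 50) -> bool:
    """Flag likely setup failures: no activity at all, or one
    variation logging events while the other stays silent."""
    if events_a == 0 and events_b == 0:
        return True   # no events anywhere: tracking is probably missing
    if min(events_a, events_b) == 0 and max(events_a, events_b) >= min_events:
        return True   # one arm is silent: segmentation or tracking is broken
    return False

# Hypothetical readings: Version A logged 340 clicks, Version B logged 0
if tracking_looks_broken(340, 0):
    print("Pause the test and verify event tracking and segmentation.")
```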

Pick the winning version and disable the losing variation in your A/B testing platform once you have a statistically significant difference.

If one variation outperforms the other, that is your winner. Use it going forward, and take what you’ve learned and apply it to future decisions. For example, if a specific sender name works best in your email A/B test, communicate to your email marketing team that they should use that specific sender name going forward.
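
To check whether one variation genuinely outperforms the other, rather than fluctuating randomly, you can run a two-proportion z-test on the raw counts. A minimal Python sketch, with hypothetical conversion numbers:

```python
from math import sqrt
from scipy.stats import norm  # assumes scipy is installed

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the two-sided p-value comparing two conversion rates;
    below 0.05 is conventionally treated as significant."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - norm.cdf(abs(z)))

# Hypothetical results: A converted 130 of 2,000; B converted 175 of 2,000
p_value = two_proportion_z_test(130, 2000, 175, 2000)
print(f"p = {p_value:.4f}")  # ≈ 0.007, so B's lift is unlikely to be chance
```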

If neither variation performs better, you know that your specific audience is indifferent to that variable, and you can move on to other tests. Your completed A/B test may also help you identify other variables to test. For instance, if you notice that your specific audience responds strongly to a specific button color, you may choose to test that color in other areas of your landing page. Or, if a headline copy change drove a significant change in user actions, you may want to test the body copy.
