Prioritize your A/B tests

Business Benefits

Increase your testing efficiency to generate a better return on investment.


List your hypotheses in a spreadsheet and score them on customer view, customer voice, past experiments, existing research evidence, and how strongly what you are optimizing relates to the company's goals.

Score each criterion as high, medium, or low. Add a new column for hypothesis strength and score it using the outcomes and data you already have.

Map each hypothesis to each page of your test and score the fit from high to low to decide which step of your funnel the hypothesis suits.

For example, on an ecommerce website some hypotheses are not suited to the checkout page and only make sense on the product page.
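As a minimal sketch of that scoring sheet in Python, assuming a simple high/medium/low-to-number mapping and an illustrative hypothesis name (not from the playbook):

```python
# Sketch of a hypothesis-scoring sheet; weights and example data are assumptions.
SCORE = {"high": 3, "medium": 2, "low": 1}

hypotheses = [
    {
        "name": "Show shipping costs earlier",  # hypothetical example
        "customer_view": "high",
        "customer_voice": "medium",
        "past_experiments": "low",
        "research_evidence": "high",
        "goal_alignment": "high",
        "page_fit": {"product": "high", "checkout": "low"},
    },
]

criteria = ["customer_view", "customer_voice", "past_experiments",
            "research_evidence", "goal_alignment"]

for h in hypotheses:
    # Hypothesis strength: sum of the criterion scores.
    h["strength"] = sum(SCORE[h[c]] for c in criteria)
    # Funnel step where this hypothesis fits best.
    h["best_page"] = max(h["page_fit"], key=lambda p: SCORE[h["page_fit"][p]])

for h in sorted(hypotheses, key=lambda h: h["strength"], reverse=True):
    print(h["name"], h["strength"], h["best_page"])
```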

Calculate the number of users, the conversion rates, and the effect size you want to detect, to estimate the chance of finding a significant outcome in your tests.
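As a sketch of that calculation, assuming the statsmodels library and illustrative traffic numbers, you can estimate the chance of a significant outcome (statistical power) from the users per variation, the baseline conversion rate, and the effect you hope to detect:

```python
# Sketch: estimate the chance of a significant result (power) for a two-variation test.
# All numbers below are illustrative assumptions, not data from the playbook.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

users_per_variation = 20_000   # visitors each variation will receive
baseline_cr = 0.03             # current conversion rate
expected_lift = 0.10           # relative lift you hope to detect (10%)
alpha = 0.05                   # significance level

effect = proportion_effectsize(baseline_cr * (1 + expected_lift), baseline_cr)
power = NormalIndPower().power(effect_size=effect, nobs1=users_per_variation,
                               alpha=alpha, ratio=1.0, alternative="two-sided")
print(f"Estimated power: {power:.0%}")  # ~80% is a common target
```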

Estimate the level of effort required to implement each change on your website and score it from low to high.

Prioritize your page templates using the minimum detectable effect (MDE), calculated from the weeks of testing, the % of traffic included in the test, the number of variations, and the significance level.

Start with the template whose MDE is the lowest in percentage. Take into consideration the type of page and the number of unique visitors that go through the entire experiment. At this early stage, assume the effect is similar across all pages.
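Below is a rough sketch of that MDE calculation using the standard normal approximation (scipy assumed); the page names and traffic figures are illustrative:

```python
# Sketch: approximate the minimum detectable effect (MDE) per page template.
# Traffic numbers are illustrative; the formula is the usual normal approximation.
from math import sqrt
from scipy.stats import norm

def mde(weekly_visitors, weeks, traffic_share, variations, baseline_cr,
        alpha=0.05, power=0.80):
    """Absolute MDE for one variation vs. control (normal approximation)."""
    n_per_variation = weekly_visitors * weeks * traffic_share / variations
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    se = sqrt(2 * baseline_cr * (1 - baseline_cr) / n_per_variation)
    return (z_alpha + z_power) * se

# Illustrative templates: (weekly visitors, baseline conversion rate)
templates = {"product page": (40_000, 0.025), "checkout page": (8_000, 0.30)}

for page, (visitors, cr) in templates.items():
    absolute = mde(visitors, weeks=3, traffic_share=1.0, variations=2, baseline_cr=cr)
    print(f"{page}: MDE of about {absolute / cr:.1%} relative lift")

# Start with the template whose MDE is lowest: it can detect smaller effects.
```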

Create a new spreadsheet that lists the page templates, weekly visitors, weekly conversions, and conversion rate, and add the average effect for your hypotheses.

The average effect is available only after you have run tests on your templates and found winners. Calculate the power for each template in your tests. Include the expected weekly effect and assign a prioritization rank using the existing data. Look at the power of each page and the expected effect to identify which template takes priority.
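As a sketch of that prioritization step, assuming the same statsmodels helpers as above and made-up template data, you can compute the power of each template at its expected effect and rank the templates accordingly:

```python
# Sketch: rank page templates by the power of detecting their expected effect.
# All template figures below are illustrative assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

templates = [
    # name, weekly visitors, conversion rate, expected relative effect (from past winners)
    {"name": "product page",  "weekly_visitors": 40_000, "cr": 0.025, "expected_effect": 0.08},
    {"name": "checkout page", "weekly_visitors": 8_000,  "cr": 0.30,  "expected_effect": 0.04},
]

weeks, variations, alpha = 3, 2, 0.05
analysis = NormalIndPower()

for t in templates:
    n_per_variation = t["weekly_visitors"] * weeks / variations
    effect = proportion_effectsize(t["cr"] * (1 + t["expected_effect"]), t["cr"])
    t["power"] = analysis.power(effect_size=effect, nobs1=n_per_variation,
                                alpha=alpha, ratio=1.0, alternative="two-sided")

# Higher power at the expected effect -> higher priority.
for rank, t in enumerate(sorted(templates, key=lambda t: t["power"], reverse=True), 1):
    print(f"{rank}. {t['name']}: power {t['power']:.0%} at +{t['expected_effect']:.0%}")
```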

Create a roadmap by listing the name of each test, its hypothesis, where the user is in their journey, and the location or type of page you will be testing.

Map your experiments across several weeks and take into consideration the amount of time required for each experiment.

For example, if your spreadsheet indicates that the first checkout-page test needs to run for 3 weeks, then schedule a second experiment on that page only from week 4 onward.
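A small scheduling sketch, assuming each page can run only one experiment at a time and using made-up test names and durations:

```python
# Sketch: lay experiments out week by week, one test per page at a time.
# Test names and durations are illustrative assumptions.
experiments = [
    {"name": "Checkout trust badges", "page": "checkout", "weeks": 3},
    {"name": "Checkout coupon field", "page": "checkout", "weeks": 2},
    {"name": "Product page reviews",  "page": "product",  "weeks": 4},
]

next_free_week = {}  # first available week per page
for exp in experiments:
    start = next_free_week.get(exp["page"], 1)
    exp["start_week"], exp["end_week"] = start, start + exp["weeks"] - 1
    next_free_week[exp["page"]] = exp["end_week"] + 1
    print(f"{exp['name']}: weeks {exp['start_week']}-{exp['end_week']} ({exp['page']})")
```

With these inputs, the second checkout test starts in week 4, matching the example above.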

Update the validated-data section of your report to strengthen your hypotheses and increase the impact of your future tests.


Hello,

Hope you are okay.

How can I calculate the power of tests and the expected weekly effect?

@Patryk Dąbrowski

Peep recommends using an A/B test calculator, and Juliana sent through the link: https://cxl.com/ab-test-calculator/

Hi @Patryk Dąbrowski

I would recommend reading this blog and seeing if you can find your answer there. https://cxl.com/blog/statistical-power/

Let me know if you find what you are looking for, otherwise I can ask one of the CXL team; potentially Peep himself will answer :slight_smile: