Monty Hall problem in AB testing

Hi everyone, I have a hypothesis and want to know whether it is worth testing or not. Maybe some of you have already checked this idea.

So, let's get into the details. We run ABn tests. What if we implemented the “Monty Hall paradox” in our daily testing routines?

For example:
We assume option A is the best. Then we get data that says that option C will definitely not work.

Next, we need to choose between options A and B. But the difference between them is not significant.

Now we apply the Monty Hall paradox.

By starting to drive more traffic to option B (i.e., changing our initial choice from option A to B), we would choose the best option in 66% of tests (in 66% of tests, option B would be correct).
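For reference, the classic 2/3 switching result can be checked with a quick simulation. This is a sketch of the textbook game, where the host knows the winning door and always opens a losing, unchosen one; whether that assumption carries over to test data eliminating option C is exactly the question here:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    doors = ["A", "B", "C"]
    winner = random.choice(doors)
    pick = "A"  # our initial choice
    # The host, who knows the winner, opens a losing door we did not pick
    opened = next(d for d in doors if d != pick and d != winner)
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == winner

random.seed(1)
n = 100_000
p_stay = sum(monty_hall_trial(False) for _ in range(n)) / n
p_switch = sum(monty_hall_trial(True) for _ in range(n)) / n
print(f"stay: {p_stay:.3f}, switch: {p_switch:.3f}")  # roughly 0.333 vs 0.667
```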

What are your thoughts: can we use this paradox in ABn testing, or would it not work?


There's an entire section on Monty Hall, more related to ecommerce, in this blog:

This blog post was recently updated, was originally written by @merritt, our CEO, and talks about sequential testing.

Lastly, this one, also updated quite recently, talks about Discipline Based Testing Methodology.

@e4atan Are any of these relevant to your question?

oh, now I know, thanks


To be honest, I was left with multiple potential routes forward. Depending on the context, several things could apply here.

@e4atan curious what your conclusions were from reading these?

@whoiseddie Any thoughts here?

It can make sense to switch your choice in an ABC testing scenario if you have determined that option C is definitely not going to work, and the difference in performance between options A and B is not significant. In this case, switching from option A to option B may increase your chances of choosing the correct option, based on the principles of the Monty Hall paradox.

However, it’s important to consider other factors before making a decision to switch. The cost of switching, such as the resources required to implement the change, should be taken into account. Additionally, you should also consider the potential impact on your business and whether the increase in probability of choosing the correct option justifies the cost of switching.

It’s also important to keep in mind that this is not a standard method in A/B testing. Proper experimental design, randomization, and control groups should be applied to make sure that the results are reliable and unbiased. You should also use appropriate statistical methods to analyze the data and draw inferences from the results.

Overall, the Monty Hall Paradox can be used as a guide to make a decision, but it should not be the only factor to consider.


What you may also be referring to is called "Probability Matching" - in the Monty Hall paradox, Monty always nudged the contestant away from the door they chose, even if it was the correct door. In this scenario, you’re assuming that by switching focus from option A to option B you’ll have a 66% chance of always picking a winner. The problem is that this strategy tends not to work well, and there isn’t enough evidence that it works, because personal bias or human pattern recognition tends to intervene at some point.
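One way to see why the 66% figure may not transfer: in the classic game the host knows the winner, but if the eliminated option is instead picked without that knowledge (as noisy test data effectively does), the setup resembles what is sometimes called the "Monty Fall" variant, where switching gives no edge. A minimal sketch:

```python
import random

random.seed(0)
wins_switch = valid = 0
for _ in range(100_000):
    winner = random.choice(["A", "B", "C"])
    pick = "A"
    # One of the other options is eliminated at random,
    # with no knowledge of which option is truly best
    eliminated = random.choice([d for d in ["A", "B", "C"] if d != pick])
    if eliminated == winner:
        continue  # the eliminated option was the winner; scenario doesn't apply
    valid += 1
    switched = next(d for d in ["A", "B", "C"] if d not in (pick, eliminated))
    wins_switch += (switched == winner)

rate = wins_switch / valid
print(rate)  # roughly 0.5, not 0.667
```

Conditional on a non-winner being eliminated blindly, switching and staying each win about half the time, so the 2/3 advantage depends entirely on the host's knowledge.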

A/B testing is about learning, not always about trying to pick the best option. If picking the best option is what you’re trying to do, then it’s better to stick to the Bayesian methodology, which probability matching and even the Monty Hall paradox fall under.
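For the Bayesian route, probability matching is usually implemented as Thompson sampling. Here is a minimal sketch for a three-variant test; the variant names and conversion rates are invented for illustration:

```python
import random

# Hypothetical true conversion rates (unknown to the algorithm)
true_rates = {"A": 0.10, "B": 0.12, "C": 0.08}

# Beta(1, 1) priors, tracked as success/failure counts per variant
successes = {v: 0 for v in true_rates}
failures = {v: 0 for v in true_rates}

random.seed(42)
for _ in range(20_000):
    # Draw a plausible rate for each variant from its posterior,
    # then send this visitor to the variant with the best draw
    draws = {v: random.betavariate(successes[v] + 1, failures[v] + 1)
             for v in true_rates}
    chosen = max(draws, key=draws.get)
    if random.random() < true_rates[chosen]:
        successes[chosen] += 1
    else:
        failures[chosen] += 1

traffic = {v: successes[v] + failures[v] for v in true_rates}
print(traffic)  # traffic gradually concentrates on the best variant
```

Traffic allocation adapts as evidence accumulates, which is the "drive more traffic to the likely winner" idea done with explicit posteriors rather than a Monty Hall analogy.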