Remove common validity threats, generate more accurate test results, and gain more valuable insights.
Before starting, gather your team to brainstorm a list of the technical and environmental factors or variables that have the potential to corrupt your test.
Doing so also educates the team and involves them in monitoring for unexpected test pollutants.
If the numbers reported by your testing tool and your analytics platform show a discrepancy of more than 2x, do not proceed with testing until the integration and setup are complete.
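The 2x rule above can be expressed as a simple ratio check. This is a minimal sketch; the function names and visitor counts are hypothetical, and comparing your testing tool's count against your analytics platform's count is one common way to apply it:

```python
def discrepancy_ratio(tool_visitors, analytics_visitors):
    """Ratio between the larger and smaller visitor counts
    reported by two sources (e.g., testing tool vs. analytics)."""
    if min(tool_visitors, analytics_visitors) <= 0:
        raise ValueError("visitor counts must be positive")
    return max(tool_visitors, analytics_visitors) / min(tool_visitors, analytics_visitors)

def integration_ok(tool_visitors, analytics_visitors, threshold=2.0):
    """True when the two sources agree within the threshold (here, 2x)."""
    return discrepancy_ratio(tool_visitors, analytics_visitors) <= threshold

# 10,000 visitors in the testing tool vs. 4,000 in analytics is a 2.5x gap:
# hold off on testing until the integration is fixed.
print(integration_ok(10_000, 4_000))
print(integration_ok(10_000, 9_200))
```

A gap that large usually points to a broken tag, a missing page, or mismatched filters rather than real traffic differences.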
Minimize site flicker, ideally to imperceptible levels, so your visitors never see the control before the treatment loads.
Optimizing your site for speed helps to ensure your test is valid.
Conduct quality assurance reviews for every device type, operating system, and browser by looking for improperly displayed or failing tests.
For example, a treatment may work well on an iPhone but render incorrectly on Android.
For example, test results are not valid if you stop the test as soon as it reaches 90% significance rather than the threshold you committed to in advance.
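The significance point above can be made concrete with a standard two-proportion z-test. This is a sketch under assumed, hypothetical numbers (200 vs. 235 conversions out of 4,000 visitors per variant); it shows a result that clears a 90% bar yet fails the conventional 95% bar:

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical result: 5.0% vs. 5.875% conversion.
z, p = z_test_two_proportions(200, 4000, 235, 4000)
# p lands between 0.05 and 0.10: "significant" at 90%,
# but not at the 95% level.
print(p < 0.10, p < 0.05)
```

Declaring a winner the moment a test crosses 90% inflates your false-positive rate, which is why the threshold should be fixed before the test starts.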
Be wary of running tests during the holidays, as results from that period are only relevant to that season.
Use a representative sample population by including traffic from all sources, days of the week, and new and returning traffic.
For example, your PPC traffic does not behave the same way as the rest of your traffic, so on its own it is not representative.
For example, if you have a spike in sales during the spring, then tests run during this period cannot be generalized to other times of the year.
Talk to your team prior to running tests, and take inventory of any marketing campaigns during the test window.
For example, running a PPC campaign may influence and invalidate your A/B test.