Focus testing efforts on increasing revenue

Contributors

@andre-luiz-nunes-vieira


Business Benefits

Focus your optimization efforts on increasing revenue on your website.


Identify your business objective and translate it into a metric based on revenue.

For ecommerce websites, that would be average revenue per visit or revenue per user.

For non-ecommerce websites, look at metrics like average revenue per newsletter subscription, average revenue per article read, average revenue per signup (lead), etc.
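As a minimal sketch of how these success metrics are computed, here are the two ecommerce variants mentioned above; all figures are illustrative placeholders, not values from the text:

```python
# Sketch: the revenue-oriented success metrics described above.
# All numbers below are made-up examples.

def revenue_per_visit(total_revenue, total_visits):
    """Average revenue per visit over a reporting period."""
    return total_revenue / total_visits if total_visits else 0.0

def revenue_per_user(total_revenue, unique_users):
    """Average revenue per (unique) user over the same period."""
    return total_revenue / unique_users if unique_users else 0.0

print(revenue_per_visit(12500.0, 5000))  # 2.5
print(revenue_per_user(12500.0, 3200))   # 3.90625
```

The same shape works for the non-ecommerce metrics: divide total revenue by subscriptions, reads, or signups instead of visits.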

Communicate to your team the only success metric they need to focus on when reporting results.

  • Use the success metric you defined as the target metric across most of the tests you run.
  • When comparing test results, report on your testing program using only the revenue-oriented metric you chose.
  • Because revenue-per-X metrics are non-binomial, use an advanced calculator to check your results through Bayesian analysis, similar to the analysis method used by Google Optimize.
  • Only use results based on micro-conversions in cases where testing for your success metric (macro-conversion) is not possible, or when there is a strong correlation between your micro-conversion and your success metric.
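The Bayesian check mentioned in the bullets above can be approximated without a dedicated calculator. The sketch below uses bootstrap resampling to estimate the probability that a variant beats control on revenue per visit; the data is invented, and this is a simplification of what a full Bayesian tool such as Google Optimize computes:

```python
import random

def prob_variant_beats_control(control, variant, n_sims=5000, seed=42):
    """Estimate P(variant mean revenue/visit > control mean) by
    bootstrap resampling of each group's per-visit revenue samples.
    A rough stand-in for a Bayesian calculator on a non-binomial metric."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        c = [rng.choice(control) for _ in control]
        v = [rng.choice(variant) for _ in variant]
        if sum(v) / len(v) > sum(c) / len(c):
            wins += 1
    return wins / n_sims

# Illustrative per-visit revenue samples (most visits generate $0):
control = [0.0] * 90 + [25.0, 40.0, 30.0, 55.0, 20.0, 35.0, 45.0, 60.0, 25.0, 50.0]
variant = [0.0] * 88 + [30.0, 45.0, 35.0, 60.0, 25.0, 40.0, 50.0, 65.0,
                        30.0, 55.0, 42.0, 38.0]
print(prob_variant_beats_control(control, variant))
```

A common decision rule is to ship the variant only once this probability clears a pre-agreed threshold (e.g. 95%).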

Inform your stakeholders about how you will report on your testing campaign, and make clear that its success is directly linked to the revenue-oriented metric you defined.

Frame the conclusions in your reports by showing how they connect to your success metric.

Keep people in your organization informed about your progress and what to expect from the program.

Discuss possible outcomes, good and bad, with the company stakeholders. Inform them of the rationale behind your decisions. Structure the discussion around long-term profits rather than short-term gains.

Discuss with your team all the ways to test and optimize your testing program, challenging preconceived conventions of testing.

It’s difficult to estimate the impact of any given test. Aim to maximize test velocity above all, but preserve statistical rigor while doing so. Do not limit your tests to best practices and ideas everyone agrees with. Try bolder tests as well, and be open to pushing more drastic changes.
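One way to balance speed against rigor is to estimate up front how much traffic a test needs before starting it. This is a rough normal-approximation sketch for a revenue metric; the z-values are hard-coded for a two-sided 5% significance level and 80% power, and the inputs are illustrative:

```python
import math

def samples_per_arm(baseline_mean, baseline_std, relative_lift):
    """Rough per-arm sample size to detect a relative lift in a
    revenue-per-X metric, via the standard normal approximation.
    z-values assume alpha = 0.05 (two-sided) and 80% power."""
    z_alpha, z_beta = 1.96, 0.84
    delta = baseline_mean * relative_lift  # absolute lift to detect
    return math.ceil(2 * ((z_alpha + z_beta) * baseline_std / delta) ** 2)

# e.g. revenue/visit of $2.50 with a std of $12.00, targeting a 5% lift:
print(samples_per_arm(2.50, 12.0, 0.05))  # 144507 visitors per arm
```

Dividing the result by your daily traffic per arm gives an approximate test duration, which makes the speed-versus-rigor trade-off concrete before anyone commits to the test.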

Analyze previous failed tests by looking at the data and checking the impact on your success metric.

Consider re-running failed tests that showed an increase in your success metric but caused drops in other metrics at the time. Adjust these tests to fix the aspects you believe caused the drop in the metrics targeted back then. Then measure their current impact on your success metric and, based on that, re-evaluate whether to keep or discard the changes the test introduced.