Gain support for making decisions based on data, not just emotion.
Sometimes, you’ll be communicating results to stakeholders who have a vested interest in a certain variation winning. In that case, it helps to concede up front that the stakeholder’s point and logic are sound. You want them to feel understood and validated first. Then flip the script and ask a follow-up question framed in organizational terms: What would the consequences be if the results are accurate? How can we move forward if that’s the case? This rhetorical device, known as a concession, is a staple of persuasive writing. You’re simultaneously acknowledging and addressing the other side’s point, demonstrating empathy, and easing their defenses.
If you bring them statistics and results showing that what they believe in isn’t effective, they’re likely to tune it out. Give them an alternative to what you’re sharing and throw them a bone within their sphere of influence.
For example, if you’re faced with a stakeholder with decades of experience in an approach that no longer works for the target market, highlight two points they will care about: that a portion of the target market still likes or needs what they’re doing, and that embracing alternative solutions will improve overall revenue.
Outliers can significantly distort the accuracy of your A/B tests. For example, if a small percentage of customers spend $200 or more while most spend less than $100, the average order value will look inaccurately high.
Adjust or remove the top 5% or top 1% of orders, depending on your business, for more accurate test data. Do so only when outliers truly skew the decision one way or the other. Often, outliers are an important source of data and decision-making and really do reflect the underlying behavior; sometimes, however, they distort it inappropriately. The mark of a great analyst is knowing which situation is occurring.
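The effect of trimming the top few percent of orders can be sketched in a few lines. This is a minimal illustration with made-up order values (including one hypothetical $820 order), not real data from any test:

```python
# Sketch: removing outlier orders before computing average order value (AOV).
# All order values below are illustrative, not from the text.

def trim_top_percent(values, percent=5):
    """Return the orders with the top `percent` by value removed."""
    ordered = sorted(values)
    cutoff = max(1, round(len(ordered) * percent / 100))
    return ordered[:-cutoff]

orders = [45, 60, 75, 80, 55, 90, 65, 70, 85, 50,
          40, 95, 60, 75, 30, 820, 55, 65, 70, 45]  # one $820 outlier

trimmed = trim_top_percent(orders)
aov_raw = sum(orders) / len(orders)
aov_trimmed = sum(trimmed) / len(trimmed)

print(f"AOV with outliers:    ${aov_raw:.2f}")    # inflated by the $820 order
print(f"AOV, top 5% removed:  ${aov_trimmed:.2f}")
```

Here a single large order pulls the raw AOV above $100 even though most customers spend far less; trimming the top 5% brings the average back in line with typical behavior.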
While introducing competitive measures and tying opinions to results can increase the variance of ideas, or the range of concepts, brought to the table, strong opinions can sway rooms, and objective results can fade behind justifications and conformity. Anonymity makes your reporting about the results, not whose idea it was or who was wrong.
Start the conversation with what you learned from an experiment, not just test data like significance and uplift percentage. This changes the whole conversation, making things less conceptual and driving more action, and it takes the emotional sting and defensiveness out of tests that didn’t go your way.
There are many ways to do this but, in general, you’ll want to figure out:
- What to include in your reports
- When to share
- How to share
- How to format
These might change depending on who you’re giving the report to, but your report should generally include:
- Purpose: Brief description of why you’re running the test, including your experiment hypothesis.
- Details: Number of variations, brief description of the differences, dates when the tests were run, number of participants, and participant count per variation.
- Results: Percentage lift or loss compared to the original, conversion rates by variation, and statistical significance or confidence interval.
- Lessons Learned: Key insights generated from the data, your interpretation of the numbers, and new questions for future testing.
- Revenue Impact: Percentage lift with year-over-year projected revenue impact, quantified whenever possible.
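The core numbers in the Results and Revenue Impact items above can be computed with nothing beyond the standard library. The visitor counts, conversion counts, average order value, and annual traffic below are all hypothetical assumptions for illustration; the significance check here uses a standard two-proportion z-test, which is one common choice rather than the only one:

```python
import math

# Hypothetical test counts -- illustrative only, not real results.
control = {"visitors": 5000, "conversions": 400}  # original
variant = {"visitors": 5000, "conversions": 460}  # variation B

cr_a = control["conversions"] / control["visitors"]  # control conversion rate
cr_b = variant["conversions"] / variant["visitors"]  # variant conversion rate
lift = (cr_b - cr_a) / cr_a                          # relative lift vs. original

# Two-proportion z-test for statistical significance.
pooled = (control["conversions"] + variant["conversions"]) / (
    control["visitors"] + variant["visitors"])
se = math.sqrt(pooled * (1 - pooled) *
               (1 / control["visitors"] + 1 / variant["visitors"]))
z = (cr_b - cr_a) / se
# Two-tailed p-value via the normal CDF (math.erf).
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Projected annual revenue impact (assumed AOV and traffic).
avg_order_value = 75          # assumed
annual_visitors = 1_200_000   # assumed
revenue_impact = (cr_b - cr_a) * annual_visitors * avg_order_value

print(f"Lift: {lift:+.1%}, z = {z:.2f}, p = {p_value:.4f}")
print(f"Projected annual revenue impact: ${revenue_impact:,.0f}")
```

With these made-up numbers, a 15% relative lift comes out significant at the 5% level, and the absolute conversion-rate difference projects to roughly $1.08M in annual revenue, which is the kind of quantified figure the Revenue Impact section calls for.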
Visualizing results makes them clearer and more persuasive, helping ensure everyone understands them and stays on the same page. Anything from a simple Excel spreadsheet to polished, clear visualizations can help present the uplift and expected impact of what you’re testing.