Test that new features work for real customers before you roll them out to a general audience.
State each smoke test as a falsifiable hypothesis: if I do X, I expect Y. For example:
- If we improve monthly gross profit by X% with the Add to Box upsell email, I expect it’s worth spending the resources to automate the feature.
- If we capture info from more than X% of visitors to the landing page in the first X weeks, I expect it’s worth building the actual product.
- If we can pre-sell $X in X weeks, I expect it’s worth building the actual product.
The idea of a smoke test is to limit the amount of time and money invested before validation. So, your smoke test should be able to run without the help of an engineer. For example:
- Manually invoicing customers and notifying the warehouse vs. having engineers automate the process.
- Creating a landing page to capture zip code and email for an eventual release vs. having engineers create the entire product.
Drive the right traffic to your experiment using social media, paid ads, or another channel you prefer.
If you’re testing a feature, you can simply send existing users to the smoke test. Consider segmenting, though: for example, you might show the feature only to active users who have signed in at least three times in the last 30 days.
Before drawing conclusions, make sure:
- You have tapped into the right audience. The best product or feature in the world will fail miserably if it’s presented to the wrong audience.
- You have generated enough traffic to make a decision. If you’re a large enough company to be A/B testing, use a sample size calculator upfront as you normally would. If you’re a small company (or not yet a company at all), aim for 100 clicks/day for two weeks.
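If you do have A/B-test-level traffic, the sample-size calculation can be sketched with a standard two-proportion z-test approximation. This is a minimal version of what an online sample size calculator does; the baseline and target conversion rates below are hypothetical:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_baseline, p_variant, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a lift from
    p_baseline to p_variant with the given significance and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = NormalDist().inv_cdf(power)            # statistical power
    p_bar = (p_baseline + p_variant) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_baseline * (1 - p_baseline)
                                      + p_variant * (1 - p_variant))) ** 2
    return math.ceil(numerator / (p_baseline - p_variant) ** 2)

# Hypothetical: detect a lift from a 3% to a 5% email capture rate.
n = sample_size_per_variant(0.03, 0.05)
```

Detecting a small absolute lift (here, two percentage points) requires on the order of 1,500 visitors per variant, which is why low-traffic smoke tests can’t reach significance and should be read directionally instead.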
Be sure you have tracking in place and are measuring KPIs before starting your smoke test. You should know where people are coming from, how they’re behaving at each stage of the funnel, what device they’re using, etc.
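At the smallest scale, “tracking in place” can be as simple as logging one event per funnel step with its source and device. Here is a minimal, hypothetical sketch (the event names, properties, and log destination are all assumptions; in practice you’d send these to your analytics tool):

```python
import json
import time

def track(event, properties):
    """Append one funnel event as a JSON line to a local log file."""
    record = {"event": event, "ts": time.time(), **properties}
    with open("events.log", "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical events for one visitor moving through the funnel.
track("landing_page_view", {"source": "facebook_ad", "device": "mobile"})
track("zip_submitted", {"source": "facebook_ad", "device": "mobile"})
```

Logging source and device on every event is what later lets you answer “where are people coming from?” and “how do they behave at each stage?” without re-running the test.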
If you’re testing a product idea from scratch, you likely do not have the traffic (and cannot afford the traffic) to make your test statistically significant. That doesn’t mean you can’t run a smoke test; it just means the results will be imperfect and leave a lot of room for error, so take them with a grain of salt.
Use A/B testing and informal iterations of an idea to check your smoke test results. Sometimes, your first attempt at a smoke test will not pan out the way you thought. Continue iterating before giving up on the idea completely. For example, Suprizr promised to deliver a surprise meal based on your preferences within an hour. Here’s what the landing page looked like before:
It had an email capture rate under 3%. Instead of abandoning the idea altogether, Suprizr changed their approach, first asking visitors for a zip code:
And then for an email address, after visitors were told that Suprizr was not yet available in their area:
30% of visitors entered their zip code, 25% entered their email, and 10% shared the site.
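Those Suprizr numbers are easiest to compare when each stage is expressed relative to the top of the funnel. A small sketch, using hypothetical raw counts consistent with the percentages above:

```python
def funnel_rates(counts):
    """Conversion rate of each stage relative to the first (top-of-funnel) stage.

    counts: ordered list of (stage_name, visitor_count) pairs.
    """
    top = counts[0][1]
    return {name: n / top for name, n in counts}

# Hypothetical counts matching the reported percentages.
stages = [("visited", 1000), ("entered_zip", 300),
          ("entered_email", 250), ("shared", 100)]
rates = funnel_rates(stages)
```

Computing every rate against total visitors (rather than the previous step) makes it obvious where the biggest drop-off is; here, most of the loss happens before the zip code form, not between zip and email.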
Make a decision within a pre-defined timeframe to avoid spending too much time fiddling with iterations.
Work with the data you have, make a decision on the result, document it, and move on to the next hypothesis.
If the test fails, dig into why before discarding the idea:
- Why might this idea have failed?
- Was it an execution error?
- Did some iterations work better than others?