Calculate key testing metrics to test faster and speed up your company’s growth.
For example, ask both the engineers and the customer support team, so you get ideas from every corner of the company.
Generate ideas for optimizing every step of the funnel. For example, you can create ideas for each of the following steps: acquisition, activation, retention, revenue, and referral.
CXL’s PXL requires that everyone bring data to the prioritization discussion. Using this framework, you can ask yourself:
- Is it addressing an issue discovered via user testing?
- Is it addressing an issue discovered via qualitative feedback?
- Is the hypothesis supported by mouse tracking, heat maps, or eye-tracking?
- Is it addressing insights found via digital analytics?
GrowthHackers.com’s ICE uses impact, confidence, and ease to prioritize based on numeric scores. You can ask the following questions to address each part of the framework:
- How much of an impact will it have on key performance indicators and revenue?
- How sure are you of that estimated impact?
- How easy is it to launch the test or experiment?
Bryan Eisenberg’s TIR focuses on time, impact, and resources. You can ask the following questions to address each part of the framework:
- How long will it take to execute?
- What is the anticipated outcome in terms of revenue potential?
- What is the cost of running the test or experiment?
Each factor is given a score out of 5 and then the scores are multiplied to give a final rank.
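The multiplicative scoring described above can be sketched in a few lines of Python. This is an illustration, not part of either framework's official tooling, and the backlog ideas and scores are made up:

```python
def ice_score(impact, confidence, ease):
    """Each factor is scored out of 5; multiply the scores for the final rank."""
    return impact * confidence * ease

# Hypothetical backlog: (idea, impact, confidence, ease)
backlog = [
    ("Shorten signup form", 4, 3, 5),
    ("Rewrite pricing page", 5, 2, 2),
]

# Highest combined score first.
ranked = sorted(backlog, key=lambda t: ice_score(*t[1:]), reverse=True)
for name, *scores in ranked:
    print(name, ice_score(*scores))
# Shorten signup form 60
# Rewrite pricing page 20
```

The same pattern applies to TIR: swap in time, impact, and resources as the three factors.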
Reserve one hour every week to:
- Review your KPIs and update your growth focus.
- Look at how many tests were launched and how many were not.
- Discuss key learnings from the tests run the previous week.
- Choose tests from the backlog for the upcoming week.
- Create a list of your favorite upcoming tests for future weeks.
- Recognize how many new ideas were submitted and the top contributor for the previous week.
Manage resources and assess your testing velocity. For example, some growth experiments can be implemented by the marketing team, others by product managers, and others require deep engineering skills. Balancing the workload of top priority experiments across different teams makes it much easier to hit your tempo goal.
Maintain a thorough archive that you can share, because:
- You will not repeat tests by accident.
- It is easier to communicate wins and learnings to clients, bosses, and co-workers.
- You will emphasize learning from all tests, improving your knowledge and the quality of future tests.
Prioritize learning about your audience and website, and share those insights. For example, Hotwire focuses on the number of tests that result in an insight, rather than the number that are wins.
Use CXL’s AB test calculator to calculate the number of people you need for your test, test in full week increments, and chart the quality of your tests.
High-velocity testing can lead to tests being called too soon in the interest of speed. Avoid validity threats by using tools to calculate the length of your test before beginning and adhering to that test length. For example, you can use the CXL AB test calculator to calculate how many people you need to reach before you can call your test.
Test in full week increments to ensure you have a representative sample. For example, day of week and time of day can have a major impact on results.
Evaluate the quality of your tests by asking:
- Is my test program effective?
- Am I running tests effectively?
- Am I putting the right amount of resources, the correct type of resources, and spending the right amount of time on tests?
Chart your quality based on these questions.
Calculate your testing capacity by dividing the 52 weeks in a year by your average required test duration in weeks. Then multiply that number by the number of different pages and funnels you can test at one time.
For example, if your traffic level indicates that you need to run tests for two weeks, and you have ten different lead gen pages that you can test simultaneously, your testing capacity is 260 (52 / 2 * 10) or five tests per week.
Commit to a testing velocity that will ensure you are not wasting your testing capacity.
Testing velocity measures how quickly you are progressing toward your target testing capacity. It is the number of experiments or tests run per time period.
For example, you can measure your testing velocity on a weekly basis for very high velocity testing programs. For even higher velocity testing programs, you can measure this on a daily basis.
Assess your testing velocity trends over time. Ask yourself: Is your velocity staying the same? Decreasing? Increasing? Keep track of the changes from week to week, and month to month.
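A week-over-week trend check like the one described can be sketched as follows. The weekly launch counts are hypothetical:

```python
# Hypothetical number of tests launched in each of five consecutive weeks.
weekly_launches = [4, 5, 5, 3, 6]

def trend(prev, curr):
    """Classify the change between two consecutive periods."""
    return "increasing" if curr > prev else "decreasing" if curr < prev else "flat"

trends = [trend(p, c) for p, c in zip(weekly_launches, weekly_launches[1:])]
print(trends)  # ['increasing', 'flat', 'decreasing', 'increasing']
```

The same comparison works month to month; just feed in monthly totals instead.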
Testing coverage is the share of testable days on which you actually have a test live. Calculate it as a percentage of your total testable days. The goal should be 100% testing coverage.
Assess your testing coverage by asking:
- How much wasted traffic will I tolerate in my testing program?
- How many days did I have zero tests running?
- Why was I not running tests on certain days?