Avoid running a disastrous marketing campaign.
It is easy to influence users during qualitative testing and draw the wrong conclusions. It is important to recognize this risk before undertaking any kind of qualitative research.
Define a clear question you wish to answer. What is it you wish to know?
Pretest your research approach with 20% of your intended users to determine if the approach produces results that answer your questions.
A large tech company used this technique when surveying directors of infrastructure at other tech firms. For its sales kit, the company had tried to portray its own vision of what its typical customer looked like. After interviewing 20 of those typical customers, it became clear that the company's vision did not match the customers'. So, instead of committing to the one look it had already chosen, the company was able to test several looks before finally deciding.
Be objective. Be aware of your own personal and corporate biases, both in questionnaire design and in result interpretation.
Avoid using terminology that is used internally within the organization but may not be familiar to the user. Be careful not to lead the user with questions that could be interpreted as biased.
Bias can also creep in through design and layout choices, possibly altering what appear on the surface to be irrefutable traffic results. Be sure to keep an open mind when viewing these results.
Recognize, when looking at data, that A/B test results are from a randomly selected sub-sample of a population and not the entire population.
Code and group individual results into clusters of a common response when analyzing qualitative data.
For example, suppose you’re looking at four separate answers that at face value appear to share a degree of commonality: “I liked the logo.” “The logo was cool.” “The logo made the site.” “The logo stood out.” You can assign these common answers a cumulative description, such as “positive logo comments.”
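The coding step above can be sketched in a few lines of Python. The keyword rules here are hypothetical; in practice they would come from a human pass over the raw responses (and would need to distinguish positive from negative mentions, which this sketch does not):

```python
from collections import Counter

# Hypothetical keyword-to-cluster rules, invented for illustration.
CLUSTER_RULES = {
    "logo": "positive logo comments",
    "navigation": "navigation comments",
}

def code_response(answer: str) -> str:
    """Assign a free-text answer to a cluster based on keyword rules."""
    lowered = answer.lower()
    for keyword, cluster in CLUSTER_RULES.items():
        if keyword in lowered:
            return cluster
    return "uncoded"

answers = [
    "I liked the logo.",
    "The logo was cool.",
    "The logo made the site.",
    "The logo stood out.",
]

# All four answers fall into the same cluster, so they can be
# analyzed as one cumulative response category.
counts = Counter(code_response(a) for a in answers)
print(counts)
```

Counting the coded clusters rather than the raw answers is what makes the qualitative data tractable for the statistical steps that follow.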
Assign each cluster an interval-quality code number so that, assuming a large enough sample size, the data can be analyzed using parametric statistics.
Using parametric techniques such as analysis of variance, factor analysis, and cluster analysis on a sample that’s too small can produce the phenomenon known as “bouncing betas,” where coefficient estimates show a high degree of variance from sample to sample. Use a nonparametric statistic such as chi-square to analyze small-sample A/B test results.
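As a minimal sketch, the chi-square statistic for a 2x2 A/B result can be computed by hand, with no statistics library required. The conversion counts below are invented for illustration:

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 contingency table:

                 converted   not converted
    variant A        a             b
    variant B        c             d
    """
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical small-sample A/B result:
# variant A converted 12 of 40 visitors; variant B converted 20 of 40.
chi2 = chi_square_2x2(12, 28, 20, 20)

# Compare against the 0.05 critical value for 1 degree of freedom (3.84).
# Here chi2 is about 3.33, so the difference is not significant at
# this sample size, despite the apparent gap in conversion rates.
significant = chi2 > 3.84
```

The point of the comparison line is the lesson of this section: with small samples, an apparently large difference between variants can still fail to clear the significance threshold.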