Set up a passive feedback mechanism on your website. Use a platform like GetFeedback or build a custom mechanism into your site.
For example, add a side tab or button to your site that users can expand to share their thoughts on the site’s ease of use, or any issues they’re facing with it.
Keep the questions simple to avoid information overload.
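A custom mechanism boils down to capturing a short, structured submission each time a user opens the tab. Here is a minimal server-side sketch; the field names, rating scale, and `record_feedback` helper are illustrative assumptions, not a GetFeedback API:

```python
# Sketch of capturing passive feedback submissions.
# Field names and the 1-5 rating scale are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FeedbackEntry:
    page: str            # URL path where the feedback tab was opened
    ease_rating: int     # 1 (hard to use) to 5 (easy to use)
    comment: str = ""    # optional free text; keep the prompt simple

FEEDBACK_LOG: list[FeedbackEntry] = []

def record_feedback(page: str, ease_rating: int, comment: str = "") -> FeedbackEntry:
    """Validate and store one submission."""
    if not 1 <= ease_rating <= 5:
        raise ValueError("ease_rating must be between 1 and 5")
    entry = FeedbackEntry(page=page, ease_rating=ease_rating,
                          comment=comment.strip())
    FEEDBACK_LOG.append(entry)
    return entry
```

Keeping the submission to one rating and one optional comment reflects the advice above: simple questions avoid overloading the user.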
Focus your user testing goals on the friction users encounter on your website, the clarity they need to accomplish each task, and specific steps in your monetization funnels.
This will keep the focus on behaviors and avoid shifting attention to customer perceptions or motivations.
Pick 5-6 users per device category, matched as closely as possible to your target audience on both demographic and geographic factors.
A clear user definition allows you to set up relevant user testing panels and continue to run the same user test with the same audience over time. Consider screening questions at the beginning of a user test to further narrow and define your testing audience.
Write a task script that outlines a beginning scenario, the directions the user should take, and specific tasks that the user must complete.
For example, the scenario might include the purchase of a new blender, followed by a short paragraph explaining that users should follow the tasks and verbalize what they’re doing and thinking. It might also ask them to specifically call out areas in which they feel stuck or unsure about how to proceed.
Include 1–2 follow-up questions after the tasks are complete to capture users’ overall thoughts on their successes and challenges with each task.
Example tasks for users to complete:
- Run a specific search.
- Run a broad search.
- Compare products or features.
- Compare against a competitor.
- Respond to an upsell or cross-sell.
- Complete the checkout process.
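A task script like the one described above can be kept as a simple data structure, so the same script is reused verbatim across sessions and over time. The blender scenario and task wording below are example copy, not prescribed text:

```python
# Sketch of a reusable task script; scenario and task text are example copy.
TASK_SCRIPT = {
    "scenario": (
        "You want to buy a new blender. Complete the tasks below, "
        "verbalizing what you are doing and thinking, and call out "
        "any point where you feel stuck or unsure how to proceed."
    ),
    "tasks": [
        "Search for a specific blender model.",
        "Run a broad search for blenders.",
        "Compare two products or features.",
        "Compare an option against a competitor's.",
        "Respond to an upsell or cross-sell offer.",
        "Complete the checkout process.",
    ],
    "follow_up_questions": [
        "Overall, what went well while completing the tasks?",
        "Where did you run into challenges?",
    ],
}

def numbered_steps(script: dict) -> list[str]:
    """Render tasks as a numbered list for the moderator's notes."""
    return [f"{i}. {task}" for i, task in enumerate(script["tasks"], start=1)]
```

Storing the script this way also makes it easy to launch tasks one at a time, as recommended below.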
Create a user testing scope document that includes the goal of the test, the target users, and individual tasks that users will walk through.
The scope document allows stakeholders to provide input and approve the plan before you launch it. It also makes the process repeatable: adjustments become formalized rather than ad hoc, capturing the knowledge you gain and the plan’s evolution over time.
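One way to keep the scope document honest is to check it programmatically before launch. This is a hypothetical sketch; the keys, example values, and `ready_to_launch` check are assumptions, not a standard format:

```python
# Sketch of a user testing scope document and a pre-launch check.
# Keys and example values are illustrative assumptions.
SCOPE_DOC = {
    "goal": "Find friction in the checkout funnel on mobile.",
    "target_users": {
        "devices": ["mobile"],
        "demographics": "25-45, has purchased kitchen appliances online",
        "geography": "US and Canada",
        "participants_per_device": 5,
    },
    "tasks": [
        "Broad search for blenders.",
        "Compare two models.",
        "Complete the checkout process.",
    ],
    "approved_by": [],  # stakeholders sign off here before launch
}

def ready_to_launch(doc: dict) -> bool:
    """Launch-ready once the doc has a goal, target users, tasks, and sign-off."""
    return (bool(doc.get("goal")) and bool(doc.get("target_users"))
            and bool(doc.get("tasks")) and bool(doc.get("approved_by")))
```

The empty `approved_by` list blocks launch until at least one stakeholder signs off, mirroring the approval step described above.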
- Observe user behaviors rather than seeking their perceptions. UX benchmarking is a better strategy for learning user perceptions.
- Avoid leading questions, like “Do you feel this page is secure?” or “Would you buy from this site?” Such questions can influence a participant’s answers.
- Launch each user task individually so you can adjust the test between tasks.
- Pay attention to users’ intent behind their behaviors.
Analyze each user test to find commonalities between tester experiences. Look for potential improvements that will reduce friction on your website.
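Finding commonalities can be as simple as tagging each session with the issues the tester hit and counting how many testers share each one. The session data and issue tags below are made-up examples:

```python
# Sketch of finding commonalities across tester sessions.
# Session data and issue tags are made-up examples.
from collections import Counter

sessions = [
    {"tester": "A", "issues": ["filter hard to find", "checkout slow"]},
    {"tester": "B", "issues": ["checkout slow", "comparison unclear"]},
    {"tester": "C", "issues": ["checkout slow", "filter hard to find"]},
]

def common_issues(sessions: list[dict], min_testers: int = 2) -> list[tuple[str, int]]:
    """Count how many testers hit each issue; keep those shared by several."""
    counts = Counter(issue for s in sessions for issue in set(s["issues"]))
    return [(issue, n) for issue, n in counts.most_common() if n >= min_testers]
```

Issues shared by multiple testers are the strongest candidates for friction-reducing improvements; one-off complaints can wait.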
Last edited by @hesh_fekry 2023-11-14T09:52:26Z