Identify challenges, errors, and optimization opportunities.
For example, when optimizing a chatbot playbook that lives on your website’s knowledge base page, you might focus on questions per conversation or user satisfaction ratings. But if you’re dealing with a chatbot on a top-of-funnel landing page, you might instead choose metrics like goal completion rate.
Common quantitative KPIs for chatbots include:
- Chatbot activity volume.
- Bounce rate.
- Retention rate.
- Use rate by open sessions.
- Target audience session volume.
- Chatbot response volume.
- Chatbot conversation length.
Common qualitative data points include:
- User satisfaction ratings.
- Self-service rate.
- User feedback.
- How well the chatbot comprehends a user’s question, given its knowledge base.
The conversation logs provide key insights into your audience’s behaviors and obstacles, and most major chatbot platforms collect, track and analyze the logs for you. These platforms can immediately highlight:
- Common words and phrases used by the audience.
- Common parts of the conversation flow where users drop off or mark their dissatisfaction in chatbot satisfaction surveys.
- Agent issues where a basic intent is missing from the conversation flow and logic, leaving the chatbot with no way to respond.
If you’re using a smaller chatbot platform that doesn’t have full conversation log reporting and analytics, you may be able to export the logs as an Excel spreadsheet and use the find tool to search for errors, phrases, or keywords.
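If your platform only gives you a raw export, a short script can stand in for the spreadsheet find tool. This is a minimal sketch, assuming a hypothetical CSV export with `user_message` and `status` columns; real export schemas vary by platform.

```python
import csv
import io

# Hypothetical export: many platforms let you download conversation logs
# as a CSV/Excel file. These column names are assumptions, not a real schema.
SAMPLE_EXPORT = """timestamp,user_message,bot_response,status
2024-01-05 10:02,where is my order,Let me check that for you,ok
2024-01-05 10:04,cancel subscription,Sorry I didn't understand that,error
2024-01-05 10:07,refund please,Sorry I didn't understand that,error
"""

def find_rows(csv_text, keywords):
    """Return log rows whose user message or status matches any keyword."""
    matches = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        haystack = (row["user_message"] + " " + row["status"]).lower()
        if any(kw.lower() in haystack for kw in keywords):
            matches.append(row)
    return matches

# Flag every row the platform marked as an error, plus refund mentions.
flagged = find_rows(SAMPLE_EXPORT, ["error", "refund"])
for row in flagged:
    print(row["timestamp"], "|", row["user_message"])
```

The same `find_rows` call works for phrases ("didn't understand") or product keywords, so one export can answer several of the questions below.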
Flag and fix error trends that indicate missing intent by reviewing the conversation log entries where the platform warns that it ran into a missing-intent problem.
This means the chatbot had no solution or branching conversation to direct a user to. Fix it by:
- Reviewing the questions or statements made by the user that led to the missing intent error.
- Identifying the true intention of the user, which may require reading what occurred several seconds or minutes before and after the error.
- Creating a new fallback intent in the chatbot agent so that the chatbot knows how to react the next time the same scenario occurs.
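The review step above can be partly automated by counting what users say when the missing-intent error fires: recurring words usually point at the intent you need to create. A minimal sketch, assuming a hypothetical log structure with a `user_message` and an `error` flag:

```python
from collections import Counter

# Hypothetical log entries where the platform flagged a missing intent.
# The structure (message + error flag) is an assumption for illustration.
log_entries = [
    {"user_message": "can I change my shipping address", "error": "missing_intent"},
    {"user_message": "change shipping address please", "error": "missing_intent"},
    {"user_message": "what are your opening hours", "error": "missing_intent"},
    {"user_message": "track my order", "error": None},
]

def missing_intent_trends(entries, min_count=2):
    """Count words in messages that hit a missing-intent error and return
    the ones frequent enough to justify a new intent or fallback."""
    counts = Counter()
    for entry in entries:
        if entry["error"] == "missing_intent":
            counts.update(entry["user_message"].lower().split())
    return [(word, n) for word, n in counts.most_common() if n >= min_count]

trends = missing_intent_trends(log_entries)
print(trends)
```

Here the repeated words would surface a "change shipping address" cluster worth building a dedicated intent for, while one-off questions stay below the threshold.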
Major chatbot platforms like QBox, SAP Conversational AI, and LUIS all generate confusion matrices. These visual reports map out not only when intent was correctly predicted, but also when it was unclear or incorrectly predicted. Fix these confusions by creating a new fallback intent, or by changing the lead-up questions the chatbot asks so that it solicits answers that better indicate the user’s intent.
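If your platform doesn’t draw this report for you, the underlying idea is simple to reproduce from a set of labelled test utterances. A minimal sketch, with made-up intent names; the (actual, predicted) pairs are assumptions for illustration:

```python
from collections import defaultdict

# Hypothetical labelled test set: the intent a user actually had vs the
# intent the chatbot predicted. Intent names are made up for illustration.
results = [
    ("refund", "refund"),
    ("refund", "order_status"),
    ("order_status", "order_status"),
    ("refund", "refund"),
    ("order_status", "refund"),
]

def confusion_matrix(pairs):
    """Map (actual intent, predicted intent) -> count, the same data the
    visual confusion-matrix reports are built from."""
    matrix = defaultdict(int)
    for actual, predicted in pairs:
        matrix[(actual, predicted)] += 1
    return dict(matrix)

matrix = confusion_matrix(results)
# Off-diagonal cells (actual != predicted) are the confusions to fix.
confusions = {k: v for k, v in matrix.items() if k[0] != k[1]}
print(confusions)
```

The off-diagonal cells are your to-do list: each one names a pair of intents the chatbot is mixing up.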
Review your chatbot’s goal completion rate and flag any conversation flows that have a 50% or lower completion rate.
Each automated conversation flow, or chatbot playbook, ends with a specific goal. Log reports may signal a low completion rate through events like users not clicking any of the automated CTAs, or conversations ending before the chatbot can complete all its questions and steps. This may mean:
- Your chatbot script isn’t addressing the user’s real needs. Fix this by reading the actual conversations and adjusting the chatbot’s script, questions and replies.
- The user may need a different way to interact with your company in this specific scenario. Fix this by providing other ways to find support, such as your knowledge base or your customer service team’s Twitter feed.
- The content and purpose of the page that the chatbot is living on may be confusing or misaligned with the chatbot. Fix this with a content audit, or user research.
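Flagging the flows that fall under the 50% threshold is a quick calculation once you have per-playbook counts. A minimal sketch, assuming hypothetical playbook names and started/completed counters pulled from your reports:

```python
# Hypothetical per-playbook counters: conversations started vs those that
# reached the playbook's end goal (e.g. a clicked CTA). Names are made up.
flows = {
    "demo_booking": {"started": 200, "completed": 140},
    "pricing_faq": {"started": 80, "completed": 28},
}

def flag_low_completion(flows, threshold=0.5):
    """Return playbooks whose goal completion rate is at or below threshold."""
    flagged = {}
    for name, stats in flows.items():
        rate = stats["completed"] / stats["started"]
        if rate <= threshold:
            flagged[name] = rate
    return flagged

print(flag_low_completion(flows))  # only the underperforming playbooks
```

Anything this returns is a candidate for the script, support-channel, or page-content fixes described above.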
Check the chatbot’s self-service rate (SSR) to identify how often a user indicates satisfaction or issue resolution without needing to contact a human.
For example, the SSR for a sales chatbot might be the percentage of chatbot conversations that led to a direct and immediate sale. The SSR for a customer support bot might be how many users indicated their problem was fixed without calling or emailing your customer support team.
There is no standard metric for a good or bad SSR, but web analytics and interviews with your team will indicate if the chatbot is or isn’t saving your team time and leading to improved customer experiences.
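However you define resolution for your bot, the rate itself is the share of conversations that ended without human contact. A minimal sketch, assuming a hypothetical `resolved_without_human` flag that your platform or end-of-chat survey records:

```python
# Hypothetical conversation outcomes. "resolved_without_human" is an
# assumed flag; map it to whatever resolution signal your platform logs.
conversations = [
    {"id": 1, "resolved_without_human": True},
    {"id": 2, "resolved_without_human": False},
    {"id": 3, "resolved_without_human": True},
    {"id": 4, "resolved_without_human": True},
]

def self_service_rate(convos):
    """Percentage of conversations resolved without human contact."""
    resolved = sum(1 for c in convos if c["resolved_without_human"])
    return 100 * resolved / len(convos)

print(self_service_rate(conversations))  # 75.0
```

Tracking this number over time matters more than its absolute value, since, as noted, there is no standard benchmark for a good SSR.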
Manually review any chatbot conversation record where the user indicated strong displeasure or dissatisfaction.
The best chatbots sound like a real human, and real humans typically need to step in for a personal review when the chatbot’s AI isn’t hitting the target. If a user indicates displeasure in their end-of-chat satisfaction survey, read the conversation and ensure that the chatbot’s:
- Playbook is aligned with the goal and intent of the user.
- Language sounds on-brand and personal.
- Solutions provided were real and useful.
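Building the manual review queue itself is straightforward: pull every conversation at or below a low satisfaction score, worst first. A minimal sketch, assuming a hypothetical survey export with a 1–5 `satisfaction` score:

```python
# Hypothetical end-of-chat survey scores on a 1-5 scale; the schema is
# an assumption, not a specific platform's export format.
surveys = [
    {"conversation_id": "a1", "satisfaction": 5},
    {"conversation_id": "a2", "satisfaction": 1},
    {"conversation_id": "a3", "satisfaction": 2},
    {"conversation_id": "a4", "satisfaction": 4},
]

def review_queue(surveys, max_score=2):
    """Conversations scored at or below max_score, worst first, queued
    for a manual human review of playbook, language, and solutions."""
    low = [s for s in surveys if s["satisfaction"] <= max_score]
    return sorted(low, key=lambda s: s["satisfaction"])

queue = review_queue(surveys)
print([s["conversation_id"] for s in queue])  # ['a2', 'a3']
```

Each conversation in the queue then gets checked against the three points above: playbook alignment, on-brand language, and useful solutions.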