Most companies have become used to obsessing over what customers think about them, and we’ve become quite good at measuring it. Customer Satisfaction Score (CSAT) and Net Promoter Score (NPS) are among the most widely used metrics in business today.
That’s great, but how would you rate your customer service? Do you think that your agents are empathetic enough in their conversations? Do all of them have the most up-to-date product information?
Unless you do conversation reviews, you cannot know the answers to these questions.
CSAT and NPS only tell us half of the story. They reflect how satisfied our customers are with what we do.
However, they say nothing about, for example, how well our agents followed our internal guidelines for tone of voice, or whether they provided a correct and complete solution to the customer's inquiry. Only you can answer these questions.
Leaving the quality of your customer service only for your customers to judge is flawed in many ways:
- Customers give feedback on the product and the company as a whole, not just on customer service. For example, customers tend to express their disappointment over declined feature requests in customer satisfaction surveys, even if the case itself was handled well by the support rep.
- Customers don’t see the complex processes behind their inquiries. At times, customers are dissatisfied because your team is unable to meet their unreasonable expectations. Customers cannot know how much time it would take to fix a bug or to build a completely new feature. Again, this might have little to do with how your agents interacted with the user.
- Customers don’t know your quality standards. They rate your interactions from their subjective perspective, based on what they think is right. At times, your quality standards might be higher than theirs, with product knowledge being a prime example.
For these reasons, it is essential to analyze your customer service interactions against your internal quality standards. The best way to do this is by conducting conversation reviews, also known as ticket reviews and sometimes called “customer service quality assurance” (though we’re not fans of the latter term).
Conversation reviews provide a systematic way of giving feedback to customer support agents. In essence, it means rating tickets in pre-defined categories like “Tone”, “Solution”, and “Product knowledge”. Read more about setting up conversation reviews.
Internal Quality Score (IQS) is the outcome of conversation reviews, expressed as a percentage. It adds up all ratings and divides the sum by the total number of categories. That is the score of an individual conversation. If you repeat the process with all your tickets or with a random sample of them, you’ll be able to calculate your average IQS for a specific period.
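The arithmetic above can be sketched in a few lines of Python. This is a minimal illustration assuming an unweighted scorecard where a thumbs up counts as 1 and a thumbs down as 0; the function and variable names are our own, not part of any product:

```python
# A minimal sketch of IQS arithmetic: each conversation is rated in several
# categories, and the ticket score is the share of positive ratings.

def simple_iqs(ratings):
    """ratings: list of booleans, one per rating category (True = thumbs up)."""
    return round(100 * sum(ratings) / len(ratings))

# One conversation rated in four categories, with one thumbs down:
print(simple_iqs([True, True, False, True]))  # 75

# Average IQS over a sample of reviewed tickets for a period:
tickets = [[True, True, True, True], [True, False, True, True]]
scores = [simple_iqs(t) for t in tickets]
print(round(sum(scores) / len(scores)))  # 88
```

Repeating this over all reviewed tickets (or a random sample) gives the period average described above.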
Our conversation review tool Klaus uses a three-step ticket score calculation:
- Ratings for each category: we use a 2-point rating scale, meaning that reviewers give a thumbs up or a thumbs down in each rating category. The binary approach helps to keep scoring simple and the results easily comparable. The third option, “neutral/no rating”, does not affect IQS.
- Category weight: this helps to give more importance to some categories and less to others. For example, some companies believe that product knowledge is more important in customer interactions than grammar; thus, they adjust IQS calculations accordingly with category weights.
- Critical categories: the most important aspects of the conversation, where there is no room for mistakes, should be marked as critical. If a ticket receives a negative rating in a critical category, it automatically gets an overall score of 0%, regardless of the assessments received in other categories.
So, for example, let’s set up a simple scorecard with four categories:
- “Product knowledge”: 1.5 weight and critical.
- “Tone”: 1.0 weight.
- “Solution”: 1.25 weight.
- “Grammar”: 0.5 weight.
If a conversation receives a positive rating in all categories, the ticket’s IQS will be 100%. If a ticket gets a negative rating in the critical “Product knowledge” category, its IQS will automatically be 0%.
Let’s say that a ticket receives a positive rating in all categories, except for “Tone”, which gets a thumbs down. In this case, the ticket score will be 76%. If another ticket receives a thumbs down in the “Grammar” category and positive ratings in all others, this ticket’s IQS will be 88%, because “Grammar” weighs less than the “Tone” category in the previous example.
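The weighted calculation behind these numbers can be sketched in Python. The scorecard below mirrors the example above; the function itself is our own illustration of the described logic, not Klaus’s actual code:

```python
# Weighted ticket score with critical categories:
# a thumbs down in any critical category zeroes the whole ticket;
# otherwise, IQS = (sum of weights of positive ratings) / (sum of weights rated).

def ticket_iqs(ratings, weights, critical):
    """ratings: category -> True (thumbs up), False (thumbs down), None (neutral)."""
    if any(ratings.get(c) is False for c in critical):
        return 0  # critical failure overrides everything else
    rated = {c: r for c, r in ratings.items() if r is not None}  # neutral is ignored
    total = sum(weights[c] for c in rated)
    earned = sum(weights[c] for c, r in rated.items() if r)
    return round(100 * earned / total)

weights = {"Product knowledge": 1.5, "Tone": 1.0, "Solution": 1.25, "Grammar": 0.5}
critical = {"Product knowledge"}

# All positive except "Tone":
print(ticket_iqs({"Product knowledge": True, "Tone": False,
                  "Solution": True, "Grammar": True}, weights, critical))  # 76

# All positive except the lighter-weighted "Grammar":
print(ticket_iqs({"Product knowledge": True, "Tone": True,
                  "Solution": True, "Grammar": False}, weights, critical))  # 88

# Thumbs down in the critical category zeroes the ticket:
print(ticket_iqs({"Product knowledge": False, "Tone": True,
                  "Solution": True, "Grammar": True}, weights, critical))  # 0
```

Note how the same thumbs down costs 24 points in “Tone” but only 12 in “Grammar”, purely because of the category weights.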
If you systematically review your customer interactions, you will be able to calculate your average ticket score during a specific time. This is your overall Internal Quality Score.
Keep an eye on your Internal Quality Score over time. Just like you look at your monthly and quarterly CSAT and NPS results, see what is happening with your IQS, as well. Also, see how much your internal assessment agrees with what your customers are saying in CSAT and NPS. You might come across a few surprises there.
Conversation reviews give you a unique opportunity to track how your team is doing over time. They also help you gain insight into your agents’ individual progress and provide personal feedback to boost their professional growth. Use conversation review results as an input for coaching and other feedback sessions.
If IQS is an important metric for you (and, most probably, it is), but the calculation sounds like a real hassle, let the conversation review tool Klaus do this work for you.