To evaluate the performance of individual support agents, many support teams rely on two data points: the number of replies made and the first contact resolution ratio. Both metrics are easy to track, sit largely within the agents’ control, and generate enough data points to look impressive on a dashboard.
They also say nothing about the quality of those replies.
And that missing perspective can be a big problem for customer service organizations that want to go beyond fast replies and provide quality responses.
To shed light on the quality of replies, the next logical step is to add CSAT to that dashboard or report. It at least gives you an indication of how customers perceive the support they receive. It’s also a popular metric because each CSAT survey response is tagged to a specific support rep, and there are hopefully enough data points to create a meaningful graph on the dashboard.
And yet, CSAT scores aren’t entirely dependent on support reps.
Among many other factors, reply time (which usually depends on the number of incoming requests), general happiness with the product, and the stress level of the individual customer all have a huge influence on the final rating.
So, how can you measure quality instead?
The internal quality score (IQS) is the internal counterpart to CSAT. Here, an internal reviewer (either a specialist or the team lead) reviews aspects of the interaction and assigns a quality score. The exact categories differ from company to company, though they usually include the 3Ps:
- Personal Interaction
- Product Knowledge
- Process Adherence
You can rate each of these areas as a single category or subdivide them into several. Remember that people will optimize for whatever you measure, as long as reviews and ratings are delivered regularly and in meaningful numbers.
Regular reviews can mean five reviews per week or ten reviews per month. It simply needs to be frequent enough that agents can expect those reviews and see their own progress over time. Meaningful reviews require three or more reviews per period, so that the reviewer can actually spot patterns instead of basing an entire evaluation on a single case.
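To make the scorecard idea concrete, here is a minimal sketch of how an IQS could be computed as a weighted average of 3P category ratings across reviews. The category weights and the 0–100 rating scale are assumptions for illustration, not a standard; adapt them to your own scorecard.

```python
# Hypothetical scorecard: 3P categories with illustrative weights
# (weights are an assumption; they should sum to 1.0).
CATEGORIES = {
    "personal_interaction": 0.4,
    "product_knowledge": 0.3,
    "process_adherence": 0.3,
}

def iqs(reviews):
    """Average weighted score (0-100) over a list of per-ticket reviews."""
    total = sum(
        sum(review[cat] * weight for cat, weight in CATEGORIES.items())
        for review in reviews
    )
    return total / len(reviews)

# Two example reviews, each rating every category from 0 to 100.
reviews = [
    {"personal_interaction": 90, "product_knowledge": 80, "process_adherence": 100},
    {"personal_interaction": 70, "product_knowledge": 90, "process_adherence": 80},
]
print(round(iqs(reviews), 1))  # prints 84.5
```

A per-agent score would simply filter the review list by agent before averaging; the important part is that every review rates the same categories with the same definitions.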
The question then remains: how do you make sure you are measuring the right thing?
Or, rather, what kind of behavior do you want to encourage through your metrics?
They say “what gets measured, gets managed” – and that is true from a management point of view.
From an employee’s point of view, the adage goes more like “what gets measured, gets manipulated,” as everyone tries to excel at exactly the metrics the company deems important.
If you measure quality but only ever reward or praise reply quantities, your quality program will stay on the sidelines. If, on the flip side, you have clear quality metrics and highlight them consistently, then your quality reviews will have an impact.
And what does “clear quality metrics” mean in this context?
Whether you focus on the 3Ps or have a more granular scorecard, unambiguous definitions are everything.
How do you define “tone”? What do you mean by “empathy”? And what does “suitable workaround” entail? And how do these categories tie back into your overall goal to offer excellent customer support?
Everybody in your support and/or success team should know these answers by heart. Reviewers need this information to ensure fair and just reviews. Support reps rely on the definitions to optimize their work.
The easiest way to accomplish this goal of awareness and calibration across the team is to supplement your quality training sessions with easy-to-access documentation. Write down the definition of each category, why it is important, and add a couple of examples of what you’re looking for specifically.
Let’s look at an example: Empathy
- Definition: The ability to identify with or understand others’ situations or feelings.
- Importance: Putting the customer at ease and showing that we care about their experience makes them more likely to follow our instructions.
- Practical examples: Summarizing the request of the customer in your own words to show that you’ve listened and are taking them seriously. Acknowledging the confusion the customer is feeling before answering the question so they don’t feel stupid.
- Examples: Links to conversations, or screenshots of great real-life examples.
These explanations help your reviewers make good decisions when rating tickets and allow your support reps to apply the guidelines in their work. Regularly adding new or different examples can also serve as an incentive to really use these guidelines, for a chance to be quoted.
Start with the end goal in mind
Which behavior do you want to influence, and how can you nudge people towards doing the right thing?
Thoughts? Share them in our community for CX professionals – Quality Tribe.
More on the topic of internal support quality ✨
- Get 41% More Quality Reviews By Setting Daily Support QA Goals
- Customer Service Quality Assurance (QA) – Everything You Need to Know
- Customer Service Scorecard Template for Quality Assurance (QA)
🎟 Don’t miss this opportunity and get a free customer service quality consultation from Valentina!