Obsessed with customer service quality? We are too! Love statistics and graphs? So do we!
That’s why we dived into our conversation review tool’s anonymized usage data to understand how large teams — those with 25 or more members — evaluate their interactions. Here’s an overview of the customer service quality criteria trends in large support teams.
We also used the combined data from small and large teams to uncover more quality trends that you can read about here.
The first thing you’ll notice is that the list of different rating categories that large teams use is quite long. Most customer service teams define quality differently, so that explains the colorful chart.
The most popular rating category among support teams is the solution: 11% of large customer service teams evaluate whether agents provide the right answer and instructions to customers’ questions.
Solving customers’ issues is one of the main functions of support, so it definitely deserves a place in the most popular rating categories. It’s also important to note that solutions are often evaluated from two perspectives: their correctness and completeness.
The next most popular rating criterion, again, emphasizes the fact that each team has its own standards to meet.
10.4% of customer service teams check whether agents followed their internal processes, for example whether:
- Tickets were categorized and tagged correctly;
- Cases were appropriately forwarded to other teams, if necessary;
- Macros were used as intended.
Regular checks on support interactions help you make sure that the team adheres to the internal processes you’ve agreed upon.
The last customer service quality criterion that made it to large support teams’ top three most popular topics looks at agents’ technical expertise.
9.1% of customer service teams evaluate agents’ interactions in the product knowledge category.
This covers technical know-how from troubleshooting issues to providing accurate advice and instructions to the customer.
Product knowledge is a crucial component of customer service quality that needs constant attention. Companies like Automattic and Wistia believe that peer feedback is the best way to make sure that all agents have thorough and up-to-date product knowledge. Reading other agents’ interactions helps to spread learnings among peers.
The next most popular support quality criterion goes to show that customer service is not just about what the agents say - how they say it matters, too.
Tone and style of communication are evaluated by 8.8% of support teams that conduct conversation reviews.
Whether the agents are expected to interact in a formal and professional manner or take on a friendly and casual approach depends on the company’s voice.
It would be impossible to give any guidelines on tone and style that would suit all companies. It’s an aspect that has to be defined in internal standards and evaluated in conversation reviews.
There’s one more quality criterion worth pointing out for making it into the most popular rating categories.
7.3% of customer service teams review how well their agents validated whether they understood the customers’ issues correctly.
It’s about asking the right clarifying questions to confirm that they know which issues they need to solve.
Reflective listening also helps support teams address customers’ emotions. It’s a great technique that we believe all successful customer service agents should master.
We’ve looked at this data from every possible angle and found a couple of rating categories that weren’t used in many companies but which, nevertheless, deserve some special attention.
- The extra mile category encompasses the criteria that support teams use to evaluate whether agents gave more than what was asked of them to make the customers happy and to increase product engagement. 4.4% of Klaus’ teams rate their support interactions in this category.
- Expectation management reflects the criteria that teams use to make sure customers understand the (complex) processes behind their inquiries. Being honest about why agents need to decline some requests or how long it will take to fix an issue prevents businesses from losing customers because they failed to meet their customers’ (unrealistic) expectations. 3.1% of support teams evaluate how well their agents succeeded in this category.
These two quality criteria can have a significant impact on customer retention. So, we’re happy to see that there are companies out there making use of these opportunities for keeping their users around.
When looking at the anonymized and generalized usage stats that we pulled from Klaus, you probably noticed that all large support teams (25+ folks) have unique quality criteria that they focus on. (Especially if you compare it to the similar research we conducted across Klaus’ entire user base, where the majority agreed on the top three rating categories.)
So, we can conclude that:
- Customer service is not just about what you say, often it’s about how you say it, and sometimes about why you say it, too. Internal quality standards will dictate the kind of support your company aims to provide.
- There is no one-size-fits-all when it comes to the quality of customer service. Only tailor-made quality criteria and internal conversation reviews show how well your support team is performing against your standards.
- Conversation reviews are the only way to make sure your team meets the quality criteria. There’s no way your customers can evaluate conversations based on your expectations. You need to do internal assessments to gain that perspective.
If these numbers and graphs piqued your interest and you’d like to start doing conversation reviews in your support team, give Klaus a go. It’s the easiest way to gain control over the quality of your customer service.
See how some of the industry leaders are already doing it:
- Automattic fosters a culture of feedback with Klaus and Zendesk
- PandaDoc leveled up their customer support reviews
- Wistia set up customer service peer reviews on Klaus
Do you conduct conversation reviews in your team? Which rating categories do you use?