Just as a writer needs a proofreader and software developers need code reviews, customer service teams need quality assurance (QA). It’s a means of onboarding, training, and coaching agents by reviewing their interactions with customers. The goal of support QA is to improve and maintain excellent support quality.
Customer service quality assurance is about making it easy to keep customers happy.
And when half of customers claim that customer experience is more important to them now than it was a year ago, making them happy should be the top priority.
Support QA (sometimes called customer service quality control) has become a standard practice for ambitious businesses. Though simple in nature, it is an integral part of being customer-centric. Benefits include customer loyalty, improved retention, and a boost in growth.
What is customer service quality assurance?
Customer service quality assurance is the practice of monitoring the quality of customer conversations. Regular conversation reviews help you measure and improve your team’s performance and overall support process.
In this guide you’ll learn:
- Which conversations to review
- How to set up a quality program in your support team
- How to create a customer support quality assurance scorecard
- Quality assurance questions for customer service
- When to switch to a customer service QA tool
- Why quality assurance is important in customer service
- Examples of customer service quality assurance programs
- Additional resources on customer support quality management
Let’s dig in!
Which conversations to review
Samples of all support interactions
Get an overview of the team’s performance and track support quality over time. Reviewing an assortment of conversations makes sure that your quality program doesn’t tilt too much towards finding mistakes in your agents’ responses and provides valuable feedback on cases that were handled well.
Customer service feedback is important, even if it’s just ‘well done, continue!’ This type of acknowledgment is often underrated.
About 90% of all support interactions fall under this category, so reassuring agents that they do everything right in most cases can keep them motivated and aware of their own strengths.
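Drawing that assortment of conversations can be automated with a few lines of code. The sketch below is a minimal illustration, assuming conversations are simple records already exported from your helpdesk; the field names and the per-agent quota are hypothetical:

```python
import random

def sample_for_review(conversations, per_agent=2, seed=42):
    """Draw a small random sample of conversations per agent,
    so reviews also cover well-handled cases, not just complaints."""
    rng = random.Random(seed)  # seeded for a reproducible sample
    by_agent = {}
    for conv in conversations:
        by_agent.setdefault(conv["agent"], []).append(conv)
    sample = []
    for agent, convs in by_agent.items():
        k = min(per_agent, len(convs))
        sample.extend(rng.sample(convs, k))
    return sample

# Hypothetical export: three of Amy's tickets, one of Ben's
conversations = [
    {"id": 1, "agent": "amy"}, {"id": 2, "agent": "amy"},
    {"id": 3, "agent": "amy"}, {"id": 4, "agent": "ben"},
]
picked = sample_for_review(conversations, per_agent=2)
print(len(picked))  # 3: two of Amy's tickets plus Ben's only one
```

Sampling per agent (rather than across the whole queue) keeps high-volume agents from dominating the review pile.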
All conversations that receive a satisfaction rating from customers
People use customer surveys to provide feedback on different parts of their customer experience, not just your support quality. These surveys give companies their overall CSAT score.
That’s why CSAT surveys alone will not give you a clear rating of customer service performance.
Firstly, reviewing these conversations indicates whether you should forward customer feedback to the product, marketing, sales, or any other team.
Secondly, reviewing interactions that received positive or negative feedback from your customers helps you understand what drives customer satisfaction and what brings it down. You’ll learn what to avoid to stop making the same mistakes repeatedly and which communication patterns to replicate to make more customers happy with your services.
All conversations when onboarding new agents
Coaching has proven to be one of the most effective methods of getting newcomers up to speed. Conversation reviews function as an effective coaching tool in new agent training programs.
Moreover, if you set up your onboarding workflows so that all new agents’ responses are reviewed before they are sent to customers, you’ll be able to let newcomers tackle real-life support cases right away – without the risk of an incomplete answer being sent to a customer.
Customer service QA aligns all your training, coaching, and quality monitoring procedures with your internal quality standards. This way you’ll provide quality customer care consistently across all support channels and agents. Your customers will appreciate that.
Before diving into your customer interactions, set up a support quality program for your team. This will provide a framework for your QA activities and make sure everybody works towards the same support vision.
75% of customers are willing to spend more with companies that provide positive customer experiences, according to Zendesk’s 2021 report. So let’s get to it…
Here’s how to set up a quality and QA program for your customer service team:
- Define your support vision and goals – In other words, define what “quality” means for your team. All companies have a unique concept of what matters most for their business and customers. Some focus on delivering personalized assistance to drive product engagement and upsell their products – while others might prefer to keep their interactions short and speedy.
Check out our quick guide on how to choose customer service goals.
- Create internal quality standards. Based on your support vision and goals, write down guidelines for your support agents to follow in their customer interactions. Use these criteria in your support QA reviews to assess how well your team’s responses align with your quality standards.
69% of customer service teams conduct regular support QA reviews.
Most teams use three rating categories: accurate product knowledge, appropriate empathy/tone, and completeness/correctness of the solution.
- Decide who will do the reviews. In most cases, managers are responsible for providing feedback to agents and, thus, also do support QA. However, more and more teams are switching to peer reviews or hiring quality assurance specialists to avoid overburdening the managers and to dedicate more time to conversation reviews.
Continue reading: Who Should Do Customer Service QA Reviews – and Why Does it Matter?
- Decide which conversations to review. Most teams review random samples of their customer interactions. In addition, the vast majority of support teams review all tickets rated by their users to see how their internal quality standards align with those of their customers.
Increasingly, teams are aware of the importance of being data-literate. Customer service AI software uses machine learning and natural language processing to identify which interactions are the most valuable for you to review, and filters them accordingly.
- Create a feedback flow in your team. What you do with the QA results is as important as doing conversation reviews in the first place. Track your team’s performance over time, discuss the progress in team meetings, and provide individual feedback in regular one-on-one sessions.
Support agents need negative feedback to grow professionally. If you don’t know how to deliver that to your team, look at these feedback techniques that will help you do it tactfully and constructively. You might find this customer service one-on-one meeting template useful, too.
When you’ve planned your customer service quality program, from your support vision all the way to following up on review results with your agents in one-on-one meetings, you’ve created a thorough framework for your support QA and set your team up for success!
The customer service QA scorecard is the foundation of your quality program. You will use the rubric to rate how well your team’s support interactions meet your internal standards.
✨ PS We created this free downloadable customer service QA scorecard you can customize for your team.
Here’s what to keep in mind when creating your customer service QA scorecard:
Use rating categories that reflect your support goals and standards
Make sure that each of your quality criteria is represented by at least one rating category in your scorecard.
Less (rating categories) is more (reviews)
Keep your rubric short and simple. The more rating categories you have and the more complex the scorecard gets, the less likely you are to actually do regular support QA.
Using 3 to 5 rating categories is a common practice.
Prioritize your rating categories
Your rubric likely consists of aspects of different importance. Add more weight to those that matter the most, and mark some categories as critical – meaning that failing in that aspect fails the entire ticket.
Choose the rating scale that meets your needs
From 2 points to 11 points (and more), your choice of rating scales can affect your conversation review rates. The larger the scale, the more precise the results – but, at the same time, the more complex assessments will become for the reviewers.
Read about the pros and cons of different rating scales and choose wisely!
The scorecard is the backbone of your quality program. The results of your support QA depend on how you’ve set up the rubric. Take your time to make sure it covers all the right aspects to give you an adequate overview of your team’s performance.
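To make the weighting and critical categories concrete, here is a minimal Python sketch of a ticket score. The category names, weights, and the 0-to-1 rating scale are all hypothetical examples, not a prescribed rubric:

```python
def score_ticket(ratings, weights, critical=()):
    """Weighted average of category ratings on a 0-to-1 scale.
    Rating 0 in a critical category fails the whole ticket."""
    for cat in critical:
        if ratings.get(cat, 0) == 0:
            return 0.0  # a critical fail overrides everything else
    total_weight = sum(weights.values())
    score = sum(ratings[cat] * w for cat, w in weights.items())
    return round(score / total_weight, 2)

# Hypothetical rubric: three categories, "solution" weighted double
# and marked critical; ratings use a 0 / 0.5 / 1 scale
weights = {"solution": 2, "tone": 1, "process": 1}
ratings = {"solution": 1, "tone": 0.5, "process": 0.5}
print(score_ticket(ratings, weights, critical=["solution"]))  # 0.75
```

The same function returns 0.0 for any ticket where the critical "solution" category is rated 0, no matter how well tone and process were handled.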
We dug into Klaus’ anonymized usage data to understand how large teams with 25 and more agents evaluate their interactions. Here’s what we learned.
The most popular rating category among support teams is Solution: 11% of large customer service teams ask “Did the agent provide the right answer and instructions to customers’ questions?”. Solving customers’ issues is one of the main functions of support, so it definitely deserves a place in the most popular rating categories. It’s also important to note that solutions are often evaluated from two perspectives: their correctness and completeness.
The next most popular rating criterion again emphasizes that each team has its own standards to meet. 11% of customer service teams check whether agents’ responses comply with their internal processes, such as:
- Tickets were categorized and tagged correctly
- Cases were appropriately forwarded to other teams, if necessary
- Macros were used as intended
9% of customer service teams evaluate agents’ interactions in the product knowledge category. This covers technical know-how from troubleshooting issues to providing accurate advice and instructions to the customer.
Tone and style of communication are evaluated by 9% of the support teams that conduct conversation reviews. Whether agents are expected to interact in a formal, professional manner or take a friendly, casual approach depends on the company’s voice. No single guideline on tone and style would suit all companies; it’s an aspect that has to be defined in your internal standards and evaluated in your QA process.
There’s one more quality criterion worth pointing out among the most popular rating categories: 7% of customer service teams review how well their agents validated that they understood the customers’ issues correctly. It’s about asking the right clarifying questions to confirm they know which issues they need to solve.
There is no one-size-fits-all when it comes to the quality of customer service and how to measure it. If these numbers and graphs piqued your interest, check out the full article on different support QA categories.
Quality control can be a time-consuming process. In addition to the time it takes reviewers to analyze interactions and provide feedback, teams can spend hours each week on administrative upkeep if the system is run manually.
A simple spreadsheet can be enough for small teams whose support (and review) volumes are low. However, as the team and number of conversations grow, manual copy-pasting and reporting can become a bottleneck in your support QA.
Hence, more and more customer service teams are switching to support QA tools that:
- Connect to their helpdesk and automatically pull conversations in for review,
- Offer custom scorecards to cover each company’s unique needs,
- Calculate Internal Quality Score to measure the team’s performance,
- Generate meaningful reports that help to track support quality over time,
- Notify agents about the feedback they’ve received via Slack, email, etc.,
- Use AI-driven features to provide data insights and speed up the quality assurance process.
Klaus is a customer service quality assurance tool that you can set up in minutes and try out for free. Connect it with your helpdesk software or use the Browser Extension to review conversations in any online environment.
“Since Education Perfect switched to Klaus, their support QA time was cut in half. No need to navigate between different tools and tabs, no more waiting for the gigantic spreadsheet to load, no manual copy-pasting ever again.”
Read about how Education Perfect pushed their CSAT to 96% with Klaus.
To answer the question of when to switch from spreadsheets to a support QA tool: as soon as the managerial tasks take up a disproportionately large part of your QA time. Copy-pasting tickets and sending manual reminders and notifications are not what you should spend hours of your time doing every week.
Customer Satisfaction Score (CSAT) is one of the most widely used support metrics across all industries. Measuring and tracking how satisfied your customers are with what you do is a must for all customer-centric businesses. But CSAT doesn’t show the full picture. Here are the three main reasons why you need to combine customer feedback with internal evaluations and support QA:
- CSAT doesn’t only reflect your support quality. Your CSAT score combines customers’ attitudes towards your customer service, product, marketing, sales, and other parts of the customer experience.
- Customers don’t understand the complexity of their inquiries. Some bug fixes or feature requests might sound like a simple thing to a customer but could, in reality, mean weeks of work for the product team. So, some negative CSAT ratings reflect users’ disappointment in that, not the quality of your agents’ work.
- Customers don’t know your quality standards. They can only tell you how well your team’s performance matches their expectations of good customer service. At times, your internal criteria might hold your team to higher standards than those of your customers.
Conversation reviews will help you see how well your customer satisfaction aligns with your internal quality criteria. If you combine CSAT with IQS, you’ll get a complete picture of your team’s performance based on external and internal feedback.
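As an illustration of how the two views differ, here is a minimal sketch. It assumes the common conventions that CSAT is the share of 4s and 5s on a 1-to-5 survey scale and that IQS is the average of internal review scores; your team’s formulas may vary:

```python
def csat(survey_ratings, threshold=4):
    """CSAT: percentage of survey responses rated at or above
    the threshold on a 1-to-5 scale (external feedback)."""
    satisfied = sum(1 for r in survey_ratings if r >= threshold)
    return round(100 * satisfied / len(survey_ratings))

def iqs(review_scores):
    """Internal Quality Score: average of ticket review scores
    (each on a 0-to-1 scale), expressed as a percentage."""
    return round(100 * sum(review_scores) / len(review_scores))

# Hypothetical data: five survey responses, four reviewed tickets
print(csat([5, 4, 3, 5, 2]))       # 60: the customers' verdict
print(iqs([1.0, 0.9, 0.75, 0.8]))  # 86: the internal verdict
```

A gap between the two numbers is itself a signal: an IQS well above CSAT suggests the friction lies outside support (product, pricing, expectations), while the reverse suggests your internal standards are looser than your customers’.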
Today, most customer service teams have already implemented quality assurance or simple ticket review processes to maintain their support quality. If you’re not sure how to set up your quality program, see how others have done it.
Here are a few examples of industry leaders with outstanding quality programs:
- Automattic, a famously remote team that delivers a CSAT of 95% and more, conducts regular peer reviews on Klaus.
- Wistia conducts regular peer reviews in groups of three to include a third-party facilitator in all feedback sessions.
- Dreem pushed their CSAT from 80% to 90% after they started doing regular conversation reviews on Klaus.
- Geckoboard uses agent feedback to increase proactive help to drive product engagement and upsells.
- Education Perfect cut their support QA time in half by switching from spreadsheets to Klaus.
All companies have unique quality programs based on their internal standards and goals. Different teams prefer different types of reviews: some hire QA specialists, others use manager, peer, or self-reviews. There is no right or wrong way. Instead, choose the format that suits you best.
Make sure to pay extra attention when building a scorecard for your team. This will be the most important part of your QA program because all reviewers will base their work on this rubric. Keep it simple (but also measurable and trackable).
Download the free Customer Service Quality Handbook and dig into:
- What conversation reviews are and how they improve customer service quality
- How to build a scorecard for your internal quality program: how many rating categories to use, how to choose the right ones, etc.
- How the Internal Quality Score helps you track and report your support team’s performance over time
Or join Quality Tribe! ✨
It’s our CX quality community for customer service professionals. Connect with and learn from other experienced folks in the customer service space, follow the topics that interest you the most, ask questions and share ideas.
Continue reading: A Day in The Life of a Support QA Specialist