Customer Service Quality Assurance is like code review for software developers or the editorial process for writers: it’s a systematic means of training and onboarding agents by reviewing their support interactions. The goal of support QA is to improve and maintain support quality.
Support QA has become a standard practice for all quality-oriented and ambitious customer service teams. Though simple in nature, these procedures can bring immense value to your business: customer service quality assurance can grow customer loyalty and retention and boost your business results.
Most teams implement support QA for the following conversations:
Random samples of all support interactions to get an overview of the team’s performance and track support quality over time. This strategy makes sure that your quality program doesn’t tilt too much towards finding mistakes in your agents’ responses, but also provides valuable feedback on cases that were handled well.
“That’s fine, continue” is great feedback that often goes unsaid. About 90% of all support interactions fall under this category, so reassuring agents that they do everything right in most cases can keep them motivated and aware of their own strengths.
All conversations that receive a satisfaction rating from customers - firstly, to understand whether you should forward their feedback to product, marketing, sales, or another team. People use customer surveys to give feedback on different parts of their customer experience, not just your support quality. That’s why, for example, CSAT surveys alone won’t give you a clear overview of how your customer service team is performing.
Secondly, reviewing interactions that received positive or negative feedback from your customers helps you understand what drives customer satisfaction and what brings it down. You’ll learn what to avoid in order to stop making the same mistakes over and over again, and which communication patterns to replicate to make more customers happy with your services.
All conversations when onboarding new agents because coaching has proven to be one of the most efficient methods of getting newcomers up to speed. Conversation reviews function as an effective coaching tool in new agent training programs.
Moreover, if you set up your onboarding workflows so that all new agents’ responses are reviewed before they are sent to customers, you’ll be able to let newcomers tackle real-life support cases right away - without the risk of an incomplete answer being sent to a customer.
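As a quick illustration of the random-sampling strategy described above, here is a minimal Python sketch. The `sample_for_review` helper, the 5% sample rate, and the ticket-ID list are all illustrative assumptions, not part of any particular helpdesk's API.

```python
import random

def sample_for_review(ticket_ids, sample_rate=0.05, seed=None):
    """Pick a random sample of tickets for QA review.

    The sample_rate default and the flat list of ticket IDs are
    assumptions for illustration; adapt to your helpdesk's export.
    """
    rng = random.Random(seed)
    # Always review at least one ticket, even for tiny volumes.
    k = max(1, round(len(ticket_ids) * sample_rate))
    return rng.sample(ticket_ids, k)  # sampling without replacement

tickets = list(range(1000, 1100))  # 100 hypothetical ticket IDs
picked = sample_for_review(tickets, sample_rate=0.05, seed=42)
print(len(picked))  # 5 tickets selected for review
```

Seeding the sampler is optional; it simply makes a review batch reproducible if you need to audit which tickets were drawn.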
Customer service QA aligns all your training, coaching and quality monitoring procedures with your internal quality standards. This way you’ll provide quality customer care consistently across all support channels and agents. Your customers will appreciate that.
Before diving into your customer interactions, set up a support quality program for your team. This will provide a framework for your QA activities and make sure everybody works towards the same support vision.
Here’s how to set up a quality and QA program for your customer service team:
Define your support vision and goals - In other words, define what “quality” means for your team. All companies have a unique concept of what matters most for their business and customers. Some focus on delivering personalized assistance aimed at driving product engagement and upselling their products, while others might prefer to keep their interactions short and speedy.
Here’s a quick guide on how to define customer service goals. If you’d like to see how other companies are doing it, read how support leaders from Vimeo, Ericsson, HubSpot, Adobe, and Workable define support quality.
Create internal quality standards. Based on your support vision and goals, write down guidelines for your support agents to follow in their customer interactions. Use these criteria in your support QA reviews to assess how well your team’s responses align with your quality standards.
Most support teams use three rating categories for QA: accurate product knowledge, appropriate empathy/tone, and completeness/correctness of the solution. Read more about the most popular support quality criteria.
Decide who will do the reviews. In most cases, managers are responsible for providing feedback to agents and, thus, also do support QA. However, more and more teams are switching to peer reviews or hiring QA specialists for the job to avoid overburdening the managers and to dedicate more time to conversation reviews.
Decide which conversations to review. As discussed above, most teams generally review random samples of their customer interactions. In addition to that, the vast majority of support teams review all tickets rated by their users to see how their internal quality standards align with those of their customers.
Reviewing all conversations for new agents has also become a common practice in customer service. Read more about how to integrate QA into your new agent onboarding program.
Create a feedback flow in your team. What you do with the QA results is as important as doing conversation reviews in the first place. Track your team’s performance over time, discuss the progress in team meetings, and provide individual feedback in regular one-on-one sessions.
Support agents need negative feedback to grow professionally. If you don’t know how to deliver that to your team, take a look at these feedback techniques that will help you do it in a tactful and constructive manner. You might find this customer service one-on-one meeting template useful, too.
When you’ve planned your customer service quality program - from your support vision all the way to how you’ll follow up on review results with your agents in one-on-one meetings - you’ve created a thorough framework for your support QA and set your team up for success!
The customer service QA scorecard is the foundation of your quality program. You will use the rubric to rate how well your team’s support interactions meet your internal standards.
Here’s what to keep in mind when creating your customer service QA scorecard:
Use rating categories that reflect your support goals and standards. Make sure that each of your quality criteria is represented by at least one rating category in your scorecard.
Find out the most commonly used rating categories for large support teams.
Less (rating categories) is more (reviews). Keep your rubric short and simple. The more rating categories you have and the more complex the scorecard gets, the less likely you are to actually do regular support QA.
Prioritize your rating categories. Your rubric likely consists of aspects of differing importance. Add more weight to those that matter the most, and mark some categories as critical - meaning that failing in that aspect fails the entire ticket.
Choose the rating scale that meets your needs. From 2 points to 11 points (and more), your choice of rating scales can affect your conversation review rates. The larger the scale, the more precise the results - but, at the same time, the more complex assessments will become for the reviewers.
Read about the pros and cons of different rating scales and choose wisely!
The scorecard is the backbone of your quality program. The results of your support QA depend on how you’ve set up the rubric. Take your time to make sure it covers all the right aspects to give you an adequate overview of your team’s performance.
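To make the weighting and critical-category logic above concrete, here is a minimal Python sketch of how a single review's score could be computed. The category names, the 1-5 rating scale, and the `score_review` helper are illustrative assumptions, not a fixed standard.

```python
def score_review(ratings, weights, critical=()):
    """Compute a weighted QA score (0-100) for one reviewed conversation.

    ratings:  {category: rating on an assumed 1-5 scale}
    weights:  {category: relative weight}
    critical: categories whose lowest rating fails the whole ticket
    """
    scale_max, scale_min = 5, 1
    # A failed critical category fails the entire conversation.
    if any(ratings[c] == scale_min for c in critical):
        return 0.0
    total_weight = sum(weights.values())
    # Normalize each rating to 0..1, then apply its weight.
    weighted = sum(
        weights[c] * (ratings[c] - scale_min) / (scale_max - scale_min)
        for c in ratings
    )
    return round(100 * weighted / total_weight, 1)

ratings = {"product_knowledge": 4, "tone": 5, "solution": 3}
weights = {"product_knowledge": 2, "tone": 1, "solution": 3}
print(score_review(ratings, weights, critical=("solution",)))  # 66.7
```

The critical-category short circuit reflects the idea in the list above: one serious failure (say, a wrong solution) should sink the ticket regardless of how polite the tone was.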
Quality control can be a time-consuming process. Beyond the time reviewers spend analyzing interactions and providing feedback, teams that manage the process manually can spend hours on administrative tasks related to the upkeep of the system.
A simple spreadsheet can be enough for small teams whose support (and review) volumes are low. However, as the team and number of conversations grow, manual copy-pasting and reporting can become a bottleneck in your support QA.
Hence, more and more customer service teams switch to support QA tools that:
- Connect to their helpdesk and automatically pull conversations in for review,
- Offer custom scorecards to cover each company's unique needs,
- Calculate Internal Quality Score to measure the team’s performance,
- Generate meaningful reports that help to track support quality over time,
- Notify agents about the feedback they’ve received via Slack, email, etc.
Klaus is a customer service quality assurance tool that you can set up in minutes and try out for free. Connect it with your helpdesk software or use the Chrome Extension to review conversations in any online environment.
Since Education Perfect switched to Klaus, their support QA time has been cut in half: no need to navigate between different tools and tabs, no more waiting for a gigantic spreadsheet to load, and no manual copy-pasting ever again. Read the full case story.
To answer the question of when to switch from spreadsheets to a support QA tool: as soon as the managerial tasks take up a disproportionately large part of your QA time. Copy-pasting tickets and sending manual reminders and notifications is not what you should spend hours of your time doing every week.
Customer Satisfaction Score (CSAT) is one of the most widely used support metrics across all industries. Measuring and tracking how satisfied your customers are with what you do is a must for all customer-centric businesses - and so is Internal Quality Score (IQS).
Here are the three main reasons why you need to combine customer feedback with internal evaluations and support QA:
- CSAT doesn’t only reflect your support quality. Your CSAT score combines customers’ attitudes towards your customer service, product, marketing, sales, and other parts of the customer experience.
- Customers don’t understand the complexity of their inquiries. Some bug fixes or feature requests might sound simple to a customer but could, in reality, mean weeks of work for the product team. So, some negative CSAT ratings reflect users’ disappointment with the product, not the quality of your agents’ work.
- Customers don’t know your quality standards. They can only tell you how well your team’s performance matches their expectations of good customer service. At times, your internal criteria might hold your team to higher standards than those of your customers.
Conversation reviews will help you see how well your customer satisfaction aligns with your internal quality criteria. If you combine CSAT with IQS, you’ll get a complete picture of your team’s performance based on external and internal feedback.
Read more about why CSAT alone doesn’t cut it.
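Because the two metrics are computed from different inputs, they complement rather than replace each other. As a rough sketch - where the 1-5 survey scale, the convention that ratings of 4 or 5 count as "satisfied", and the helper names are all assumptions:

```python
def csat(survey_ratings, satisfied=(4, 5)):
    """CSAT: share of survey responses counted as 'satisfied' (%).

    Assumes a 1-5 survey scale with 4s and 5s as satisfied; your
    survey tool may use a different scale or threshold.
    """
    hits = sum(1 for r in survey_ratings if r in satisfied)
    return round(100 * hits / len(survey_ratings), 1)

def iqs(review_scores):
    """IQS: average internal review score across reviewed conversations."""
    return round(sum(review_scores) / len(review_scores), 1)

survey = [5, 4, 2, 5, 3, 4, 5, 1]          # customer ratings, 1-5
reviews = [80.0, 95.0, 60.0, 100.0, 85.0]  # internal review scores, 0-100
print(f"CSAT: {csat(survey)}%")  # CSAT: 62.5%
print(f"IQS:  {iqs(reviews)}")   # IQS:  84.0
```

CSAT comes from customer surveys (external feedback), while IQS comes from your own conversation reviews (internal feedback) - which is why a gap between the two numbers is itself a useful signal.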
Today, most customer service teams have already implemented Quality Assurance or simple ticket review processes to maintain their support quality. If you’re not sure how to set up your quality program, see how others have done it.
Here are a few examples of industry leaders with outstanding quality programs.
- Automattic, a famously remote team that delivers a CSAT of 95% or higher, conducts regular peer reviews on Klaus.
- PandaDoc reviews random samples of agent conversations and all tickets for new agents during their onboarding program.
- Wistia conducts regular peer reviews in groups of three to include a third-party facilitator in all feedback sessions.
- Dreem pushed their CSAT from 80% to 90% after starting regular conversation reviews on Klaus.
- Geckoboard uses agent feedback to increase proactive help to drive product engagement and upsells.
- Education Perfect cut their support QA time in half by switching from spreadsheets to Klaus.
All companies have unique quality programs based on their internal standards and goals. Different teams prefer different types of reviews: some hire QA specialists, others use manager, peer, or self-reviews. There is no right or wrong way. Instead, choose the format that suits you best.
Make sure to pay extra attention when building a scorecard for your team. This will be the most important part of your QA program because all reviewers will base their work on this rubric. Keep it simple (but also measurable and trackable).
If you’d like to start doing support QA right away, give Klaus a go. It’s the quickest way to get going with conversation reviews. 10 reviews per month are always free of charge, so it’s a tool suitable for smaller teams, too.