Poor customer service loses US businesses $75 billion annually. Improving customer service can prevent the negative experiences that drive customers away, so it’s a good, nay excellent idea to design a robust customer service strategy.
Tracking KPIs alone won’t pass muster – you need to set up a quality assurance program to ensure your success. Customer service quality is closely tied to customer acquisition and retention, which is why so many companies are pouring resources into their customer-facing teams.
What does a QA program look like?
- Conducting customer service reviews
- Catching problems before they damage your reputation
- Aligning processes to your goals
- Coaching your team to perfection
Sounds like a long process? Don’t worry – this guide will help you learn the best practices and tools to make quality an easy, everyday activity for your team(s).
- What are customer service quality reviews?
- Why should customer service teams conduct QA reviews?
- Who should review customer service conversations?
- How many customer service tickets should you review?
- Which customer service tickets should you review?
- Which rating categories should you use on your scorecard?
- Which rating scale should you use in your customer service reviews?
- Why is feedback important for customer service teams?
Customer service quality reviews are a systematic means of providing feedback on your team’s customer interactions. Reviewers rate support agent conversations against set quality standards and create a QA scorecard to provide feedback. Managers, specialists, or peers can conduct reviews across any support channel.
This process is sometimes called customer service quality assurance (QA).
A programmer’s output undergoes a code review; a writer’s words pass through an editor; a support rep’s conversations should be QA reviewed.
Customer service quality reviews are suitable for all companies that offer customer service across any platform, including emails, live chat, and phone support. This can be done manually or using customer service QA software!
The main reason for doing conversation reviews is to improve the quality of your company’s customer service (and thus make customers happier).
86% of consumers are willing to pay more for a better customer experience. The more adept your support team is at satisfying customers, the more successful your company is.
You need to substantiate the data
These days, most companies track at least some customer service metrics. But the numbers alone don’t tell the story in full. Being data-driven means understanding where those numbers are coming from – what your agents are doing right, where there is room for improvement.
Human interactions are complex! (Almost as much as those of the feline kind.) Tracking the right metrics can help you understand why some interactions end better than others.
First Response Time (FRT) and Average Handle Time (AHT) both give teams insight into how long their customers have to wait for help.
Customer Satisfaction Score (CSAT), Net Promoter Score (NPS), and Customer Effort Score (CES) are the three most popular ways to measure how happy your customers are with what you do.
But wait, that’s not enough…
Many customer service teams rely on the above metrics to measure their customer service quality. But how do you know you’re doing the right things to improve them? This is where measuring your Internal Quality Score can lend a paw.
Internal Quality Score (IQS)
This score tells you how well your team performs based on your own standards. Define what a purrfect customer interaction looks like from your perspective and work out where support reps are not meeting internal quality guidelines.
Use your IQS to understand why other metrics are not where you want them to be and make the necessary adjustments to your support processes and coaching plans.
Our original question was: why conduct customer service quality reviews? Because it is the only way to find out your IQS:
- Create rating categories
- Review each conversation against these categories
- Aggregate the results for your IQS
IQS serves as the basis for improving the quality of your customer service. QA reviews offer insight into your support team’s performance and provide regular feedback to your agents, which will help them grow professionally.
There are four possible formats for customer service QA:
- Manager reviews
- Peer reviews
- Self-reviews
- Specialist reviews
They each have their advantages and disadvantages, and some teams incorporate a combination of several (or all!) in their customer service strategy.
Let’s claw into the pros and cons of each.
Manager, or team lead, reviews are currently the most widely used form of providing feedback. Because they are responsible for the quality of their support agents’ work, it’s logical for managers and team leads to assess customer interactions.
- Regular feedback cycle between manager & support reps
- Aligned scores for each team (only one reviewer)
- Takes time and focus away from other responsibilities
- It may not be their top priority
- Scores could vary across teams (if you have multiple teams and therefore managers/team leads)
Peer reviews are the most time-efficient form of conducting customer service reviews. You promote shared learning and make quality a focus for the entire team.
- Creates open & collaborative feedback culture
- Support reps learn from each other
- Leaves managers free for other responsibilities
- It’s time-saving to have many reviewers
- Support reps need to carve out time in their day
- Peers are less comfortable giving negative reviews
- Scores can vary
Self-reviews are one of the best ways to encourage professional growth. Reflecting on one’s own performance helps agents understand their communication patterns and revise how they interact with customers.
- Excellent for personal growth
- A proven method to improve NPS
- Gives your support reps a voice on what quality means
- Some people find self-evaluation a struggle
- Ineffective as a solo format
Find out more on why we recommend occasional customer service self-evaluations.
QA specialist reviews
Growing and larger companies often find it beneficial to employ customer service quality assurance specialists. A QA specialist focuses on measuring and reporting quality, then developing and introducing improvements.
- Aligned, expert feedback
- Doesn’t detract from anyone else’s time
- Meaningful insights
- Deeper trend analysis and better reporting
- (Usually) not a possibility for smaller teams
Improving customer service quality and keeping it at a consistently high level requires a long-term strategy. A dedicated specialist can turn the review workload into a much more meaningful endeavor – analyzing KPIs, sculpting training and onboarding programs, and more. However, this isn’t an option for smaller teams with fewer human resources.
Considering hiring (or becoming) a QA specialist? Check out these resources:
So, which should you choose?
Some companies combine manager reviews with self-reviews, while others combine peer feedback with QA specialist assessments. The form that suits your company best depends on your aim, available resources, team setup, and ticket volume. They all serve the same purpose – growing quality through feedback.
Are you a small team?
We recommend a combination of self-reviews and peer reviews.
Are you a large or growing team?
We recommend hiring a QA specialist or allocating enough time for team leads/managers to conduct reviews.
The number of customer service reviews you should conduct depends on several variables. Your total support ticket volume and the capacity of your reviewer(s) are the deciding factors.
As a general guideline, most companies aim to review between 2% and 10% of their total ticket volume – though some review much more, and some less. For example, PandaDoc’s customer service strategy includes reviewing almost all tickets during specific periods, e.g. for new employees as part of their onboarding process.
If you’re not sure which amount is right for you, don’t leave it up to fate to decide. You have several options:
- Set a percentage goal
If your goal is an overview of team performance then 10% of your total volume is a nice sweet spot. This gives you a good outline of common problems, covering all agents.
- Set a ticket goal per support rep
If your goal is more focused on fine-tuning your coaching, aim instead to review a certain number of tickets per agent. For example, 5 tickets per agent, per week.
- Set a ticket goal per reviewer
This is useful for peer reviews or managerial reviews. A target of a certain number of tickets, e.g. 10 per week, ensures that your review program is steady and everyone is sharing the responsibility.
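To make these targets concrete, here is a minimal sketch of how each goal type translates into a weekly review quota. All the numbers are hypothetical, chosen only for illustration:

```python
# Hypothetical weekly figures for illustration only
total_tickets = 2000   # tickets handled this week
agents = 8             # support reps on the team
reviewers = 4          # people conducting reviews

# Option 1: percentage goal (10% of total volume)
percentage_quota = round(total_tickets * 0.10)

# Option 2: ticket goal per support rep (5 tickets per agent, per week)
per_agent_quota = 5 * agents

# Option 3: ticket goal per reviewer (10 tickets per reviewer, per week)
per_reviewer_quota = 10 * reviewers

print(percentage_quota, per_agent_quota, per_reviewer_quota)  # 200 40 40
```

Note how differently the options scale: a percentage goal grows with ticket volume, while per-agent and per-reviewer goals keep the workload predictable as the team changes.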
Klaus’ Review Assignments feature does the heavy lifting (well, manual labor) for you. Assign your reviewer(s) a specific number, frequency, and criteria for reviews. Then sit back and let Klaus keep them on top of their goals (you can also look at the Assignments Dashboard to check that everything is ticking over as required!).
Another way to avoid spreadsheets and pesky reminders. <happy sigh>
Of course, if you have a dedicated QA specialist or review team, tackling a higher percentage or higher number of tickets is possible. But quantity shouldn’t be your only line of attack, as the next section illustrates.
What you review matters as much as how much you review.
Many teams simply choose to review at random – it seems like a logical approach that provides a representative sample of your help desk interactions. But a random sample doesn’t surface the conversations you learn the most from.
And when you’re reviewing customer service interactions, the outlier interactions are most valuable, not ones that sit in the average.
Focus on the conversations that matter.
The conversations that are most important to review include:
- Longer interactions
In conversations with more back-and-forth between customer and support rep, the problem clearly isn’t clear-cut. Dig into problem areas by sifting out the simpler dialogues and spending time on the lengthier ones.
- More complex interactions
Maybe the conversation has passed through a couple of support reps, or there wasn’t just one issue at stake. Review these tickets to understand how to streamline processes or find weaker links in your team.
- Interactions where the customer was dissatisfied
There’s no easy way to say it – sometimes the customer is just not happy. Conversations with a low CSAT rating help you understand what is going wrong and how to improve for next time.
The Complexity filter is our one-click solution which selects the top 15% (most interesting) tickets.
The Sentiment filter is an ML feature which filters out conversations where the customer displayed either contentment or frustration.
Conversation Insights helps you make data-driven decisions by filtering conversations based on your chosen parameters.
Choosing your rating categories for your customer service scorecard is one of the most important decisions to make. Reflect on your company priorities and support goals – and also what you want to review.
For every conversation you review, these are the criteria through which you’ll be judging quality.
Common rating categories:
- Product knowledge
- Tone
- Solution
- Follow-up
Start with looking at your support goals and initiatives. For example, at Klaus our values are to ‘be human’, ‘be solution oriented’, and ‘follow up’ with customers. So, we chose ‘tone’, ‘solution’, and ‘follow up’. This gives our support specialists three pillars on which to focus in every interaction.
89% of our customers use between two and four rating categories (but some use as many as ten!).
The floor is open for you to decide, as many tools give the option to create custom scorecards. If you choose too many, your reviewers will get decision fatigue. But if you choose too few, you’re unable to accurately track trends and glean insights for coaching purposes.
Reviews consist of scorecards with rating categories against which support conversations are measured – but you need to decide how the rating will be presented. Do you want a simple binary pass-or-fail? Or do you want to get more granular and implement an 11-point rating scale?
Things to consider:
- Size and alignment of your review team
- Agents’ response to feedback
- Reporting outputs
A binary scale is a great place to start. This means you are more likely to be consistent in your grading, and results are evident at a glance.
But agents should understand that a negative rating isn’t necessarily Bad with a capital B – it just indicates room for improvement. Many companies opt for a three-point scale to circumvent this issue by offering some middle-ground for the reviewer.
Your IQS is calculated by aggregating these scores into a final percentage.
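As an illustration, here is a minimal sketch of that aggregation, assuming a binary pass/fail scale (1 = pass, 0 = fail), equal category weights, and the three example categories from Klaus’ own scorecard:

```python
# Each review rates one conversation against the scorecard's categories.
# With a binary scale, IQS is simply the share of ratings that passed.
reviews = [
    {"tone": 1, "solution": 1, "follow_up": 0},
    {"tone": 1, "solution": 0, "follow_up": 1},
    {"tone": 1, "solution": 1, "follow_up": 1},
]

total_points = sum(sum(r.values()) for r in reviews)  # ratings passed
max_points = sum(len(r) for r in reviews)             # ratings given
iqs = 100 * total_points / max_points

print(f"IQS: {iqs:.1f}%")  # 7 of 9 ratings passed -> IQS: 77.8%
```

A three-point or 11-point scale works the same way: points earned divided by points possible, expressed as a percentage.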
Read about rating scales in further detail.
Rating category weights
Klaus also allows you to add more weight to the more important categories, so they contribute more to the final score. You can also mark a category as critical – so, for example, if support reps do not display Product Knowledge in the conversation, the IQS for that conversation will be zero.
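To show how weights and a critical category might factor into that percentage, here is a hedged sketch of one plausible formula (the exact calculation Klaus uses may differ; the category names and weights are hypothetical):

```python
def conversation_iqs(ratings, weights, critical):
    """Weighted score for one conversation, as a percentage.

    ratings  -- binary pass/fail (1/0) per category
    weights  -- relative importance of each category
    critical -- categories that zero the whole score when failed
    """
    # A failed critical category zeroes the conversation's IQS
    if any(ratings[c] == 0 for c in critical):
        return 0.0
    earned = sum(ratings[c] * weights[c] for c in ratings)
    possible = sum(weights[c] for c in ratings)
    return 100 * earned / possible

weights = {"tone": 1, "solution": 2, "product_knowledge": 2}

# Failing a non-critical category only lowers the score...
print(conversation_iqs(
    {"tone": 1, "solution": 0, "product_knowledge": 1},
    weights, critical={"product_knowledge"},
))  # (1 + 0 + 2) / 5 -> 60.0

# ...but failing the critical one zeroes it entirely.
print(conversation_iqs(
    {"tone": 1, "solution": 1, "product_knowledge": 0},
    weights, critical={"product_knowledge"},
))  # 0.0
```

The design intuition: weights let minor categories nudge the score, while the critical flag encodes non-negotiable standards.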
Why is feedback important for customer service teams?
So, you have your QA program all set up!
But not so fast – there’s no point putting in the effort without seeing the results through. Customer service reviews provide you with quantitative AND qualitative data for your teams.
Conversation reviews don’t just help companies level up their support; they’re also crucial for agents’ professional development.
There is one fundamental reason why agents should take an active role in pushing their managers for regular reviews: feedback is the quickest and most efficient way to grow professionally. Customer service quality reviews offer insight for improving interactions and making customers happier. Oftentimes, these are quick and easy things to fix.
Professional development becomes clear and systematic with a robust customer service strategy. The more feedback one is able to collect, the more boxes they get to tick and the further they will go.
- Decide who should review tickets.
- Decipher how many and which tickets to review.
- Choose rating categories and rating scales for your scorecard.
- Deliver feedback.
Still have questions? We can answer them.