Every year, US businesses lose $62 billion because bad customer experiences drive their customers away. Once you realize that customer service quality is closely tied to customer acquisition and retention, you can understand why companies are putting more and more effort into improving it.
One of the key mechanisms for doing that is regular conversation reviews. This increasingly popular process covers areas of support that other KPIs fail to reflect and boosts agents’ professional growth.
Let’s dive into what conversation reviews are all about and how to start doing them in your company.
Conversation reviews are a systematic means of providing feedback on customer service interactions. Reviewers read support agents’ interactions and rate them in predefined categories using a unified scorecard. Additional feedback is provided as comments.
This process is sometimes also called “customer service quality assurance (QA)”. It is, indeed, a bit like software code review for customer support teams. However, we are not fans of the term “quality assurance” because it’s not representative of everything that’s great about conversation reviews (plus we think customer support should get their own term and leave QA to the engineers).
We’ve written more about the reasons we stick to calling them conversation reviews here.
Conversation reviews suit any company that offers customer service, regardless of platform: email, live chat, phone support, and more. Any interaction held in help desk software can be reviewed, either manually or with a dedicated conversation review tool.
The main reason for doing conversation reviews is simple: to improve the quality of your company’s customer service. Several studies have shown that the better the support you provide, the better your company will do.
These days, most companies that care about the quality of their support track at least some customer service metrics. This helps them pinpoint the areas of growth they need to tackle.
For example, First Response Time and Average Handle Time both give teams insight into how long their customers have to wait for help. Customer Satisfaction Score, Net Promoter Score, and Customer Effort Score are the three most popular ways to measure how happy your customers are with what you do.
Conversation reviews offer another way to measure the quality of customer interactions. Internal Quality Score (IQS) - the metric used in conversation reviews - tells you how well your team is performing based on your own standards: you define what a great customer interaction looks like, and IQS measures how consistently your team delivers it. At times, IQS can hold your agents to a higher standard than your customers do.
Conversation reviews offer insight into your support team’s performance and provide regular feedback that helps agents grow professionally. Together, the score and the feedback serve as the basis for improving the quality of your customer service.
Conversation reviews don’t just help companies level up their support; they’re also crucial for agents’ professional development. However, the thought of having someone critique one’s work might feel intimidating at first. In fact, it might feel uncomfortable for the person giving feedback too.
We get that, and it’s worth confronting this concern directly with your agents. If that is something you’re struggling with, check out these seven feedback techniques for support teams that will make it easier for agents to both give and receive feedback.
There is one fundamental reason why agents should take an active role in pushing their managers for regular reviews: feedback is the quickest and most efficient way to grow professionally. Conversation reviews offer advice on how to improve their interactions and provide better customer support. Oftentimes, these are quick and easy things to fix.
For example, if a reviewer suggests that an agent was a bit too blunt in their response, the agent can start with a friendlier opening line in their very next email. It’s instantaneously actionable advice.
Professional development becomes clear and systematic with conversation reviews. The more feedback one is able to collect, the more boxes they get to tick and the further they will go. This can help agents to build an outstanding track record for their next big career move and present it to the decision makers when the time is right.
Read more about how customer service agents can advance their career.
There are four possible forms that you can use for conversation reviews. They all have their advantages and disadvantages, and some teams prefer to combine them all:
- Manager reviews are the most widely used form of providing feedback. Since team leads are responsible for ensuring that their agents offer excellent service, they usually take on the responsibility of assessing customer interactions. Manager reviews work best in smaller teams, where the volume of conversations to review remains feasible.
- QA specialist reviews are mostly used in larger companies where there is a strong focus on the quality of customer service. The main advantage of having dedicated QA staff is that nobody has to do reviews on top of their other duties. Having full-time employees for quality assessment helps keep the quality of all interactions consistent.
- Peer reviews are the most time-efficient form of doing conversation reviews. By letting agents provide feedback on each other’s work, it’s possible to track the quality of support interactions in large teams. Peer feedback is also one of the most useful sources of information in customer service; there is so much your team members can learn from each other. For example, read why Valentina Thörner fosters peer review at Automattic.
- Self-reviews are one of the best ways to encourage professional growth. Reflecting on one’s own performance helps agents understand their communication patterns and gain control over the way they interact. Moreover, a study found that assisted self-reviews increase customer satisfaction and can result in a 5% increase in NPS.
Some companies combine manager reviews with self-reviews, while others collect peer feedback together with QA specialist assessments. The form that suits your company best depends on your aim, available resources, team setup, and ticket volume. However, none of these methods should be neglected just because they feel daunting. They all serve the same purpose - growing quality through feedback.
Some companies aim to review almost all tickets for specific periods of time. For example, PandaDoc reviews all tickets for every new employee as a part of their onboarding process. Conversation reviews help them train newcomers effectively by providing feedback on how well their responses align with company standards.
However, the most common practice is setting a review goal, expressed as a percentage of the total ticket volume - for example, 5% or 10% of all cases. This helps to scale conversation reviews as the company grows.
The key to successful quality tracking is picking a random sample. Instead of focusing on the outstandingly good or terrible conversations, systematic conversation reviews treat all cases equally, giving an accurate overview of all interactions.
If you distribute the load among your team as peer reviews, you can set individual daily goals for each reviewer. This could be as little as 2-3 tickets every day, which will cover quite a large part of your total volume as a team effort.
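To make this concrete, here is a minimal sketch of how you might draw a random review sample and spread it across reviewers. The function name, ticket IDs, and reviewer names are illustrative assumptions, not part of any particular help desk’s API:

```python
import random

def pick_review_sample(ticket_ids, goal_pct, reviewers, seed=None):
    """Randomly sample a percentage of tickets and spread them across reviewers."""
    rng = random.Random(seed)  # seeding makes the sample reproducible for audits
    sample_size = max(1, round(len(ticket_ids) * goal_pct / 100))
    sample = rng.sample(ticket_ids, sample_size)  # unbiased: every ticket is equally likely
    # Round-robin assignment keeps each reviewer's load roughly equal.
    assignments = {reviewer: [] for reviewer in reviewers}
    for i, ticket in enumerate(sample):
        assignments[reviewers[i % len(reviewers)]].append(ticket)
    return assignments

# Hypothetical example: 200 tickets this week, a 5% review goal, two peer reviewers.
tickets = [f"T-{n}" for n in range(1, 201)]
plan = pick_review_sample(tickets, goal_pct=5, reviewers=["Ana", "Ben"], seed=42)
for reviewer, assigned in plan.items():
    print(reviewer, len(assigned))  # 10 sampled tickets, 5 per reviewer
```

Because the sample is drawn uniformly at random rather than cherry-picked, the resulting scores reflect typical interactions, not just the memorable ones.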
There are no set standards for conversation review scorecards. The rating categories that you use - i.e., the aspects of the interaction that you assess - depend on what you and your company deem necessary.
The most common categories include topics like “Product knowledge”, “Empathy”, and “Solution”. Valentina Thörner, team lead at Automattic, advises using 2-4 categories to keep reviews meaningful, but you can add as many as you’d like.
If you create a scorecard yourself instead of using a dedicated tool, you should also think about the rating scale you are going to use. The most important part is making sure that all reviewers understand the scale the same way so that their results are comparable. You can ensure this consistency by having reviewers rate a few of the same conversations and then comparing their scorecards to discuss discrepancies.
At Klaus, we use a 2-point rating scale: reviewers give a thumbs up or thumbs down in each rating category. This simple scale minimizes misalignment between the assessments of different reviewers.
Klaus also allows you to add more weight to the more critical categories and calculates your average Internal Quality Score (IQS) that reflects the results of conversation reviews. Read more about IQS here.
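As an illustration, here is one way a weighted IQS could be computed from thumbs-up/thumbs-down ratings. The category names, weights, and formula below are assumptions for the sketch, not Klaus’s actual implementation:

```python
def internal_quality_score(reviews, weights):
    """Weighted IQS: the weighted share of thumbs-ups across all rated categories.

    `reviews` is a list of dicts mapping category -> True (thumbs up) / False (thumbs down).
    `weights` maps each category to its relative importance.
    """
    earned = total = 0.0
    for review in reviews:
        for category, thumbs_up in review.items():
            weight = weights[category]
            total += weight          # every rated category counts toward the maximum
            if thumbs_up:
                earned += weight     # only thumbs-ups earn the category's weight
    return round(100 * earned / total, 1) if total else 0.0

# Hypothetical scorecard: "Product knowledge" and "Solution" weigh twice as much as "Empathy".
weights = {"Product knowledge": 2, "Empathy": 1, "Solution": 2}
reviews = [
    {"Product knowledge": True, "Empathy": True, "Solution": False},
    {"Product knowledge": True, "Empathy": False, "Solution": True},
]
print(internal_quality_score(reviews, weights))  # → 70.0
```

Note how the weighting changes the outcome: 4 of 6 categories got a thumbs up (67% unweighted), but because the missed “Empathy” rating carries less weight, the weighted score lands at 70%.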
Conversation reviews provide two types of results: the quantitative Internal Quality Score and the qualitative feedback left as comments. IQS makes quality measurable and lets you track how your team performs over time against your company’s standards.
- Having quantitative and qualitative data helps you analyze your team’s performance and report on your progress to the exec team. You can focus on specific agents or certain periods of time to see what affects your IQS the most. You can also see how IQS relates to other metrics that you track, such as CSAT or NPS.
- Conversation reviews provide input to your 1:1 meetings and agents’ professional growth. For example, if you know your support reps’ career goals, you can set up specific rating categories that will help them learn through feedback and achieve their aims. If your agents know how to use conversation reviews as a means to boost their career, they will welcome all the feedback they can get.
Use the results of your reviews to understand and report on how your team is doing, and as a starting point for making improvements in your customer service.
If you believe that your company would benefit from doing conversation reviews (and most likely it will), here’s how to set up the procedure:

1. Define the problem that you’re solving: e.g., are you looking for ways to boost your CSAT, or would you like to improve your team’s product knowledge?
2. Decide who should review the tickets: do you want to do manager/QA specialist reviews, peer reviews, or self-reviews?
3. Create a conversation review scorecard:
- Set up your rating categories based on the problem you’re solving (e.g., if you want to improve your team’s product knowledge, make sure to set up a category for that).
- Decide which rating scale to use if you’re not using dedicated software with a predefined range. For example, Klaus comes with a thumbs up/thumbs down voting system.
- Decide whether you want to give more weight to any of your rating categories. These will have more influence on the total IQS calculation.
- Decide where you will manage your data: a spreadsheet might be enough for smaller teams, while larger ones would benefit from a dedicated conversation review tool like Klaus.
- Document the procedure and communicate it to the team. Don’t forget to explain the problems you are solving and what agents will gain from it.
- Launch, test, and iterate: as you complete your first rounds of reviews, you’ll learn what works for your team and what doesn’t. Make changes as necessary. As your team and company grow, and your goals change, you will probably make adjustments to your conversation review setup. Ask your team for feedback on the process to find areas of improvement.
Also, give Klaus a go if you haven't already. See how convenient it is to provide feedback with a dedicated conversation review tool. It will also make it easy to track and report on your team's progress.
After implementing a conversation review system, you’ll have a clear picture of how your team interacts with customers and how you can improve the quality of your customer service. Use continuous feedback to help your agents grow and see how it reflects in your customer satisfaction and business results. You’ll be amazed.