
5 Steps to a Successful Support QA Calibration Process

Conversation reviews · 10 min read · Dec 10, 2020


Customer service Quality Assurance (QA) programs are booming in leading global support teams. While systematic conversation reviews were a rare sight just a few years ago, today every tenth customer service team ranks Internal Quality Score (IQS) as their most important KPI.

The increasing demand for exceptional customer experiences is pushing support managers to shift their focus to dedicated quality improvement programs. Fine-tuning support QA, ticket reviews, and agent feedback processes with routine calibrations has become an integral part of quality-oriented business processes.

Here’s a quick guide on how to set up support QA calibrations to help your team deliver consistently delightful customer experiences.

What is customer service QA calibration?

Customer service QA calibration is the process by which internal quality reviewers align their rating techniques to make sure that agents receive the same level of feedback from all reviewers.

Calibrations help reviewers synchronize their assessments, provide consistent feedback to agents, and eliminate bias from quality ratings. The goal of customer service QA calibrations is to make sure that support reps receive the same quality of feedback regardless of who reviewed their conversations.

Usually, the aspects that require calibration are the following: 

  • Rating scale: check whether all reviewers understand the different ratings in the same manner. The larger the rating scale, the more important calibration – and the habit of doing it regularly – becomes for knowing which rating to give in which situation.
  • Failed vs non-applicable is a question many reviewers struggle with when a ticket was handled correctly but a specific aspect of the conversation was missing. For example, when evaluating whether an agent offered additional help, one reviewer might mark the category as failed because they saw potential for a follow-up, while another might find that the conversation was closed successfully and mark it as non-applicable (see the sketch after this list).
  • Free-form feedback is a relatively tricky part to calibrate. The three main aspects to sync in the additional comments left for agents are:
    • how much free-form feedback is included in each review;
    • the feedback techniques used in the comments; and
    • the overall style and tone of the feedback.
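To see how much the failed vs non-applicable distinction can move the numbers, here's a minimal sketch under an assumed scoring convention (earned points divided by the number of applicable categories; not necessarily how your QA tool calculates scores). The category names and the binary scale are illustrative only; the point is that the same conversation scores very differently depending on how that one category is marked.

```python
# Hypothetical scoring convention: score = earned points / applicable categories,
# with non-applicable (None) categories excluded from the calculation.

def conversation_score(ratings: dict[str, int | None]) -> float:
    """Score a review on a 0-100 scale; None means 'non-applicable'."""
    rated = {category: points for category, points in ratings.items() if points is not None}
    if not rated:
        return 100.0  # nothing applicable to rate
    return 100.0 * sum(rated.values()) / len(rated)

# Binary scale: 1 = pass, 0 = fail, None = non-applicable.
reviewer_a = {"Tone": 1, "Solution": 1, "Offered further help": 0}     # marked failed
reviewer_b = {"Tone": 1, "Solution": 1, "Offered further help": None}  # marked non-applicable

print(round(conversation_score(reviewer_a), 1))  # 66.7
print(round(conversation_score(reviewer_b), 1))  # 100.0
```

The same handling of the same ticket swings by over 30 points, which is exactly the kind of discrepancy calibration sessions are meant to surface.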

QA calibrations are an integral part of support quality programs because they help to make feedback consistent and fair across the team. If you don’t have these processes in place yet, here’s how to set your quality program up for success.


1. Design your calibration process 

Customer service QA calibration is done by comparing how different reviewers evaluate the same support ticket. Although it's a relatively straightforward process, there are three different approaches to designing the sequence of the reviews.

Choose how to set up your calibration process from the following options:

  • Review separately, discuss together: the aim of this approach is to let reviewers assess conversations on their own, and then compare the results with others. That’s a great way to kick off your calibration activities and map the differences in your reviewers’ rating styles.
  • Review and discuss together: this method removes the feeling of being graded and potential stress factors from the calibration session. Calibrating feedback collectively with all reviewers is perfect for discussing and (re)defining quality standards in your support team.
  • Review together with agents: that's the most transparent way of handling quality calibrations. Reviewing conversations together with agents allows your team to voice their feelings, concerns, and expectations about the feedback they receive. This approach also lets your support reps provide additional context in situations where reviewers don't agree on the final rating.

If you’re curious to see how other companies have kicked off quality calibrations, check out how Wistia defined their process for peer reviews.

“In the first 2 months, I acted as the facilitator in the feedback sessions. In the 3rd month, I sat in on the feedback sessions but allowed the Champs to facilitate the meeting on their own. In the fourth month, the Champs did the feedback review sessions without me being present, and I read through the conversation reviews in Klaus.

In the fifth month, we’ll go through one more round with me giving feedback to the reviewers,” explained Stacy Justino, Director of Customer Happiness at Wistia.

Once you've chosen the design for your calibration process, you're almost ready to start syncing your conversation reviews. Follow the next steps to complete the setup.

2. Define your quality calibration baseline

The support QA calibration baseline defines how much you allow your reviewers' ratings to differ. It's expressed as a percentage and usually falls around 5%. Allowing some tolerance for differences removes minor fluctuations from the picture and helps you focus on the most important discrepancies.

Here’s how to work with your baseline: 

  • If the difference in your reviewers’ ratings is below the baseline, you can conclude that evaluations are done evenly. You’ve completed the calibration process and can be sure that your team receives consistent feedback.
  • If the difference in your reviewers’ evaluations is above the baseline, you need to proceed with the calibration process and discuss the details with your team. That’s a clear indicator of discrepancies in the feedback your agents receive. 

Having a measurable baseline makes calibrations a lot easier for your team. You'll know which differences in your reviewers' ratings deserve attention, and which ones you can safely ignore.
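As a quick illustration, here's a minimal sketch (in Python, not tied to any particular QA tool) of how you might flag a ticket for discussion. It assumes each reviewer's rating of the same ticket has already been converted to a percentage score; the reviewer names and the 5% threshold are illustrative.

```python
# Compare reviewers' scores for the same ticket against a calibration baseline.
BASELINE = 5.0  # maximum allowed spread between reviewers, in percentage points

def needs_discussion(scores: dict[str, float], baseline: float = BASELINE) -> bool:
    """Return True if the spread between reviewers' scores exceeds the baseline."""
    spread = max(scores.values()) - min(scores.values())
    return spread > baseline

# Example: three reviewers scored the same conversation.
ticket_scores = {"Reviewer A": 90.0, "Reviewer B": 85.0, "Reviewer C": 95.0}

if needs_discussion(ticket_scores):
    print("Spread exceeds the baseline: bring this ticket to the calibration session.")
else:
    print("Reviewers are aligned within the baseline: no discussion needed.")
```

In this example the spread is 10 percentage points, well above the 5% baseline, so the ticket would go on the agenda for the next calibration session.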

3. Choose a facilitator to be in charge

Holding regular customer service QA calibrations takes some time and organizational effort. That's why it's good to have dedicated facilitators who are responsible for making the process successful.

Here are three tips on how to successfully engage facilitators in your QA calibration process:

  • Let facilitators take the lead. Calibration sessions consist of dozens of mini-decisions, from which tickets to choose for review to who has the final say on a rating in case of a difference of opinion. Allow your facilitators to be in charge to avoid unnecessary quarrels about each aspect of the process.
  • Rotate facilitators. Instead of having the same person run the calibration sessions, let all reviewers take turns in facilitating the process. This allows everyone in the team to take a fair share of the responsibility and helps them understand the complexity of aligning all reviewers towards the same rating style.
  • Create facilitator guidelines. Help your reviewers lead calibration sessions successfully by having specific guidelines in place for the process. Think of different scenarios that could potentially happen, and provide tips on how to deal with different situations. 

Allowing your reviewers to facilitate calibration sessions helps to keep your team engaged and motivated. It increases the sense of shared responsibility for customer service quality and helps everybody work together towards the same goals.

4. Use a support QA tool to facilitate calibrations

Just like the support QA process itself, calibration sessions can create a lot of work for your team if done manually. Switching to a dedicated conversation review tool with built-in calibration features can save you a lot of time otherwise spent on manual copy-pasting, file sharing, and notifications.

Klaus is a support QA and conversation review tool that allows you to seamlessly conduct calibration sessions within the app. 

Here’s how to calibrate conversation reviews on Klaus:

  1. Head over to your Workspace settings.
  2. Select the Calibration checkbox in the General tab. This hides all ratings and comments left by other evaluators.
  3. Ask reviewers to rate the conversations you've picked for calibration, or those already reviewed by other reviewers (these will be marked with a checkmark in the conversations list).


That’s it! Your reviewers can do calibrations as easily as they do conversation reviews on Klaus. As the workspace admin, you can see all the ratings given by your team members and find the areas that need your attention.

5. Utilize your findings from quality calibrations

The most important part of quality calibrations comes after you’ve completed a round of comparative reviews and discussed your results. That’s when you need to conclude what you are going to do with the findings from your calibration session.

Here are three ways to put the results to use:

  • Give feedback to reviewers and help them understand the quality criteria and rating scales in a unified manner. Do it individually, depending on each reviewer’s performance, or organize a training session to help everyone get up to par. 
  • Modify your quality criteria if your scorecard is confusing, or has too many or too few rating categories to adequately measure your team's performance. If reviewers consistently stumble over the same quality criteria, investigate what could be improved in the wording or the aim of those specific categories.
  • Switch to another rating scale in case your reviewers are constantly misinterpreting the scores. For example, a binary scale can make it difficult for reviewers to give negative ratings when a response was not entirely wrong, while a 9-point scale can make it difficult to decide whether to give a 7 or an 8 to a conversation that was handled well.

When you've set up a support QA calibration process, make sure you always utilize the results in your daily work. You can only improve your quality reviews if you learn from your previous experiences and act upon them.

Successful Support QA Calibration

Customer service QA calibrations are a critical part of support quality programs. Without checking how your reviewers’ ratings align with each other, you won’t know whether your agents receive consistent feedback that helps them excel in their jobs.

Take some time to look into the different ways of setting up your calibration process to find the one that suits your team best. These sessions are only helpful if you do them regularly and with the whole team engaged in the process.

If you have any questions or suggestions regarding support QA calibrations, join the conversation in our online CX quality community Quality Tribe. We’d be happy to learn how you’re syncing reviews in your team. 

More in this series: Pros and Cons of 3 Support QA Calibration Strategies

Written by

Merit Valdsalu
Merit is the content writer at Klaus - though most of her texts have probably been ghostwritten by her rescue cat Oskar.
