
A Guide to QA Calibration: Ca-li-brate Good Times, Come On

QA program · 13 min read · Mar 20, 2023

Regular conversation reviews are a great way to provide a constant stream of actionable feedback to your support reps and im-purr-ove your support quality.

Assuming the people performing the reviews are aligned with your company’s quality standards, of course.

But what happens when reviewers are misaligned, inconsistent, or biased in their own grading criteria?

  • Your quality metrics become skewed,
  • Your support reps become frustrated,
  • There is no longer one true path to Qualityville.

That’s where support QA calibration can get you back on track. 

Klaus presenting a famous Cali-o-brator machine.

What is QA calibration? 

Customer service QA calibration is the process by which internal quality reviewers align their rating techniques to make sure that support reps receive the same level of feedback from every reviewer.

Essentially, QA calibrations help reviewers synchronize their assessments, provide consistent feedback to support reps, and eliminate bias from quality ratings. The goal is to have everyone on the same page. Support reps should receive the same quality of feedback regardless of who reviewed their customer interactions.

Usually, the aspects that require quality assurance calibration are the following: 

  • Rating scale to check whether all reviewers understand the different ratings in the same manner. The larger the rating scale, the more important the calibration.
  • Failed vs non-applicable cases — when a ticket was handled correctly but a specific aspect of the conversation was missing, reviewers might rate it differently (see the sketch after this list).
  • Free-form feedback AKA additional comments left to support reps, which are tricky to calibrate. Nonetheless, reviewers should agree on the “length” of feedback included in each review, feedback techniques used in the comments, and the overall style and tone.
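To make the failed vs non-applicable distinction concrete, here is a minimal sketch of how a single review could be represented so that non-applicable categories are excluded from the score rather than counted as failures. The category names, the 1–5 scale, and the scoring formula are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CategoryRating:
    category: str
    score: Optional[int]   # None means "not applicable", not a failed rating
    comment: str = ""      # free-form feedback; its style and length are also calibrated

review = [
    CategoryRating("Tone", 4, "Friendly, but the sign-off felt rushed."),
    CategoryRating("Product knowledge", None),             # the aspect never came up
    CategoryRating("Solution", 2, "The refund policy was quoted incorrectly."),
]

# Non-applicable categories are left out of the score instead of dragging it down.
scored = [r.score for r in review if r.score is not None]
print(f"Ticket score: {sum(scored) / (len(scored) * 5):.0%}")  # assuming a 1–5 scale
```

However you model it, the point is that reviewers agree in advance on when a category is scored, when it is skipped, and how that choice affects the ticket’s overall rating.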

Now that we’ve got the definition of QA calibration out of the way, let’s start setting up your own QA calibration process.

Klaus being dressed up as a referee.

Plan your calibration sessions

Customer service QA calibration is done by comparing how different reviewers evaluate the same support ticket during a calibration session.

Imagine a group of referees preparing for the World Cup. The players (support reps) are hoping for consistent, fair, and unbiased refereeing. And the referees (reviewers) have guidelines and rulebooks (scorecards) to follow.

Leading up to the World Cup, the referees convene to discuss rule changes or dig deeper into a particular interpretation of a rule. These discussions help ensure that the refs view the rules in the same way and officiate consistently as a team.

That’s exactly what QA calibration sessions aim to do for your reviewers.

Although it’s a relatively straightforward process, there are three different approaches to designing your sessions:

1. Review separately, discuss together (AKA blind sessions)

Most support teams opt for the “review first, then discuss” strategy in their quality assurance calibration. In this setup, all reviewers assess the same conversations without seeing others’ ratings. After everyone has cast their votes, the team of reviewers comes together to compare and discuss the results.

➕ Pro: Pinpoints the discrepancies in your reviewers’ feedback. By doing ‘blind’ evaluations on the same tickets, you get a good understanding of where possible inconsistencies are lurking in your feedback.

➖ Con: Reviewers can get defensive about how they graded. The discussion can become heated because everybody wants to prove that their rating is right, instead of figuring out the correct solution together.

⭐ Here’s how to organize a ‘blind’ session: 

Step 1: Find your conversations to calibrate and share these with your reviewers.

Step 2: Have your reviewers grade the conversations before your calibration session (make sure they can’t see how each other has rated).

Step 3: Meet with your reviewers and reveal how each person has scored each ticket.

Step 4: Wherever any discrepancies are found, open a round of discussions until a consensus is reached on how to score the ticket.

Step 5: Continue until you have gone through each discrepancy.

Step 6: Document all important findings and agree on how these will be shared with your support teams.
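If your reviews live in spreadsheets rather than a dedicated QA tool, a small script can take care of the comparison in steps 3–5 by tabulating the blind scores per ticket and flagging where reviewers diverge. This is only a sketch: the ticket IDs, reviewer names, score scale, and the 10-point discussion threshold are made-up examples.

```python
from statistics import mean

# Blind scores collected before the session: ticket ID -> {reviewer: score given}.
blind_scores = {
    "ticket-101": {"Ana": 80, "Ben": 85, "Cleo": 80},
    "ticket-102": {"Ana": 60, "Ben": 90, "Cleo": 75},
}

DISCUSSION_THRESHOLD = 10  # score spread (in points) that warrants a discussion round

for ticket, scores in blind_scores.items():
    spread = max(scores.values()) - min(scores.values())
    status = "discuss" if spread > DISCUSSION_THRESHOLD else "aligned"
    print(f"{ticket}: average {mean(scores.values()):.0f}, spread {spread} -> {status}")
```

Anything flagged as "discuss" becomes an agenda item for step 4; the rest can be skimmed or skipped during the session.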

Klaus in a matrix.

2. Review and discuss together (team sessions)

Calibrating feedback collectively with all reviewers is perfect for discussing and (re)defining quality standards in your support team. In this setup, you read over customer conversations for the first time together as a group and decide as a team how to score them. If you run into any conflicts or disagreements, the leader of the QA program usually makes the final decision.

➕ Pro: This strategy removes the feeling of being graded – and the possible stress that this can cause – from the sessions. By allowing reviewers to discuss tickets together, you strengthen the common understanding of your quality criteria among your reviewers.

➖ Con: You will not be able to measure the current discrepancies in your reviewers’ work. If you don’t have an understanding of how consistent and unbiased your reviewers are, it might be difficult to iron these differences out.

Klaus encouraging Dan The Hedgehog to do some work.

3. Review together with the support team (support rep sessions)

Invite your support reps to meetings where you discuss the best way of handling specific situations. This engages them in the quality process, gives them a more hands-on understanding of quality expectations, and lets them see customer interactions from another perspective. This approach also allows your support reps to provide additional context when reviewers don’t agree on the final rating.

➕ Pro: It’s the most transparent way of handling quality calibrations. Discussing review scores together allows support reps and reviewers to learn from each other and it aligns the entire team around the same goals and QA standards.

➖ Con: You will not have your reviewers’ original scores to compare to the agreed benchmark. Plus, some support reps may also feel uncomfortable defending their actions to a room of reviewers.

Klaus having a feast.

Before you invite your QA team to a calibration session, though, there are a few things to consider first:

  • How many conversations to calibrate? This number varies from team to team: the complexity of the tickets, the number of reviewers you have, and the time allocated to the session will dictate the right amount for you. Start off with 5 in your first session, and adjust for future sessions.
  • How often will you hold your sessions? For most teams, monthly sessions strike the right balance. Keep in mind: the more reviewers you have, the more often you should calibrate!
  • What is your calibration goal? To keep your discussions on track, set a goal for each session. Do you want to update the rating scales on your scorecard? Discuss an internal process around escalations? Or maybe align your reviewers with your QA standards? By having a clear goal for your sessions, you can avoid the discussion going off-track or down a rabbit hole!

Klaus with a quality assurance checklist.

Choose a facilitator to be in charge

Let’s face it: holding regular customer service QA calibrations takes time and organizational effort. That’s why it is good to have dedicated facilitators who are responsible for making the process successful.

Here are three tips on how to successfully engage facilitators in your QA calibration process:

  • Let facilitators take the lead. Sessions consist of dozens of mini-decisions, from which tickets to choose for review to who has the final say on a rating when opinions differ. Let your facilitators take charge to avoid unnecessary quarrels about each aspect of the process.
  • Rotate facilitators. Instead of having the same person run the sessions, let all reviewers take turns in facilitating the process. This allows everyone in the QA team to take a fair share of the responsibility and helps them understand the complexity of aligning all reviewers towards the same rating style.
  • Create facilitator guidelines. Help your reviewers lead sessions successfully by having specific guidelines in place for the process. Think of different scenarios that could potentially happen, and provide tips on how to deal with different situations. 

Allowing your reviewers to facilitate sessions helps to keep your team engaged and motivated. It increases the sense of shared responsibility for customer service quality and helps everybody work together towards the same goals.

Klaus getting his cats funny hats to boost morale.

Define your quality calibration baseline

Now, the quality assurance calibration baseline defines how much you allow your reviewers’ ratings to differ. It’s expressed as a percentage and usually falls around 5%. Having some bandwidth for differences removes minor fluctuations from the picture and helps you focus on the most important discrepancies. 

Here’s how to work with your baseline: 

  • If the difference in your reviewers’ ratings is below the baseline, you can conclude that evaluations are done evenly. You’ve completed the calibration process and can be sure that your team receives consistent feedback.
  • If the difference in your reviewers’ evaluations is above the baseline, you need to proceed with the calibration process and discuss the details with your team. That’s a clear indicator of discrepancies in the feedback your support reps receive. 

Having a measurable baseline makes calibrations a lot easier for your team. You’ll know which differences in your reviewers’ ratings deserve attention, and which ones you can safely ignore.
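As a rough illustration (not a feature of any particular tool), here is how that baseline check could look for a single ticket, assuming ratings are expressed as a share of the maximum score and the baseline is set at 5%:

```python
BASELINE = 0.05  # reviewers' ratings may differ by up to 5 percentage points

# Each reviewer's rating for the same ticket, as a share of the maximum score.
ratings = {"Ana": 0.90, "Ben": 0.92, "Cleo": 0.80}

spread = max(ratings.values()) - min(ratings.values())

if spread <= BASELINE:
    print(f"Spread of {spread:.0%} is within the baseline: evaluations are consistent.")
else:
    print(f"Spread of {spread:.0%} exceeds the {BASELINE:.0%} baseline: discuss this ticket.")
```

Running the same check across all calibrated tickets tells you whether the discrepancies are one-off blips or a pattern worth a dedicated session.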

Klaus practicing swimming with the team, saying that you should have seen the first practice session.

Use a quality management solution to facilitate quality assurance calibration

Just like the support QA process itself, calibration sessions can create a lot of work for your team if done manually. Switching to a dedicated tool with built-in calibration features can save you a lot of time spent on manual copy-pasting, file sharing, and notifications. This is particularly crucial as you scale, to avoid quality growing pains.

With Zendesk QA (formerly Klaus) you will easily find and share conversations to calibrate, manage the visibility of scores, and analyze your results all in one place.

Here’s how easy it is:

  1. Activate the sessions under your workspace’s ‘Calibration’ tab settings.
  2. Ensure that the calibration reviews given are unbiased by adjusting the visibility settings for your workspace’s reviewers and managers.
  3. Everything you need for successful calibration is available under the ‘Conversations’ view. Get an overview of past and active sessions, and schedule new ones. 
  4. Filter out and focus on more insightful conversations to calibrate and add them to the relevant calibration session.
  5. Ask reviewers to rate the conversations you’ve picked for calibration. The best part? It works with the same logic and scorecard setup as your regular reviews.
  6. Compare results on the dedicated calibration dashboard.

Make use of the findings from your quality assurance calibrations

One aspect of calibration sessions that often gets overlooked is what you actually do with the calibration results. The solution?

  • Keep agents in the loop. 
  • Update your quality standards with any changes. 

For example, perhaps in one of your sessions you realize that one of your rating categories is unclear to your reviewers. So, you decide to define this category further in your session. You should also inform your support reps of this updated definition, as they likely found the category unclear too.

Keeping your documentation up to date with all of your calibration findings and discussion outcomes helps keep your support team moving along the same path.

Klaus drowning in the documentation.

A clear path to Qualityville

Like conversation reviews, your calibration sessions should be a recurring activity. The benefits of regular sessions include:

  • Keeping a relevant scorecard: Perhaps a rating category is poorly defined, or a 4-point rating scale does not make sense for one of your categories. Sessions bring all of your reviewers together, allowing you to discuss what is and isn’t working in your QA scorecard.
  • Internal process updates: You may calibrate cases where the support rep did everything purr-fectly, yet it still wasn’t a great customer experience. Your sessions can help bring to light processes in need of an update, while also helping to identify coaching opportunities for your team.
  • A clear path to Qualityville: By having these discussions regularly, you are able to achieve consistency and keep an up-to-date definition of what quality means for your team. This is crucial for making sure that your reps are all heading in the same direction.

Want more? You can learn everything there is to know about calibration sessions and other conversation review topics in our first customer service quality course – Setting up the purr-fect customer service QA program. See you there!


Originally published on December 10th, 2020, last updated on March 23rd, 2023.

Written by

Riley Young
Riley is an educational content specialist at Klaus. Previously he led the Training and Quality team at Pipedrive, before his cat Pickles insisted on the move.
