
Ca-li-brate good times, come on! The ultimate guide to conversation review calibrations

Quality management · 9 min read · Jul 26, 2022

Regular conversation reviews are a great way to provide a constant stream of feedback to your agents and im-purr-ove your support quality. Assuming the people performing the reviews are aligned with your company’s quality standards, of course.

But what happens when reviewers are misaligned, inconsistent, or biased in their grading?

  • Your quality metrics become skewed,
  • Your support reps become frustrated,
  • There is no longer one true path to Qualityville.

That’s where calibration sessions can get you back on track! 

These sessions help reviewers synchronize their assessments, provide consistent feedback to agents, and eliminate bias from ratings.

Think of a group of referees preparing for the World Cup. The players (support reps) are hoping for consistent, fair, and unbiased refereeing. And the referees (reviewers) have guidelines and rulebooks (scorecards) to follow.

Leading up to the World Cup, the referees will convene to discuss rule changes or dig deeper into a particular interpretation of a rule. These discussions help ensure that the refs view the rules in the same way and deliver consistent performance.

That’s exactly what calibration sessions aim to do for your reviewers.

A clear path to Qualityville

Like conversation reviews, your calibration sessions should be a recurring activity. The benefits of regular calibration sessions include:

  • Keeping a relevant scorecard: Perhaps a rating category is poorly defined, or a 4-point rating scale does not make sense for one of your categories. Calibration sessions bring all of your reviewers together, allowing you to discuss what is and isn’t working in your scorecard (see the sketch after this list).
  • Internal process updates: You may calibrate cases where the support rep did everything purr-fectly, yet it was still not a great experience for your customer. Your sessions can help bring to light processes in need of an update, while also helping to identify coaching opportunities for your team.
  • A clear path to Qualityville: By having these discussions regularly, you are able to keep an up-to-date definition of what quality means for your team. This is crucial for making sure that your reps are all heading in the same direction.
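
To make that concrete, a scorecard is essentially a set of rating categories, each with its own scale, and calibration sessions are where you revise it. Here is a minimal sketch in Python; the category names and scales are illustrative assumptions, not defaults from any particular tool:

```python
# Illustrative scorecard structure; the categories and scales below
# are made-up examples, not Klaus defaults.
scorecard = {
    "Tone":              {"scale": [1, 2, 3, 4]},      # 4-point scale
    "Product knowledge": {"scale": [1, 2, 3, 4]},
    "Solution offered":  {"scale": ["fail", "pass"]},  # binary may fit better
}

for category, config in scorecard.items():
    print(f"{category}: rated on {config['scale']}")
```

Swapping a 4-point scale for a simple pass/fail on a category like "Solution offered" is exactly the kind of change a calibration session might surface.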

Where to start?

Before you invite your review team to a calibration session, there are a few things to take care of first:

  • How many conversations to calibrate? This number varies from team to team: the complexity of your tickets, the number of reviewers you have, and the time allocated to the session will dictate the right amount for you. Start off with 5 in your first session, and you can always adjust for future sessions.
  • How often will you hold your sessions? For most teams, monthly calibration sessions are the sweet spot. Keep in mind that the more reviewers you have, the more often you should calibrate!
  • What is your calibration goal? To keep your discussions on track, set a goal for each session. Do you want to update the rating scales on your scorecard? Discuss an internal process around escalations? Or align your reviewers with your quality standards? A clear goal keeps the discussion from going off-track or down a rabbit hole!
  • Who will facilitate the session? Having someone in charge will help keep things on track! You can choose a permanent facilitator, or, to keep things exciting, rotate the role between your reviewers. The facilitator makes sure your discussions stay focused on your goal and that all of your grading decisions are clearly documented.

Successful Support QA Calibration

So how do calibration sessions work? 

There are a variety of approaches you can take with your calibration sessions, so don’t be afraid to try a few of them to find the one that suits you best. Here is a brief overview of 3 approaches to calibrations:

1. Review first, then discuss (the blind approach)

Each reviewer grades the conversations, without seeing others’ ratings. Scores are then revealed and discussion ensues.

Pro: Pinpoints the discrepancies in your reviewers’ feedback. By doing ‘blind’ evaluations on the same tickets, you get a good understanding of where possible inconsistencies are lurking in your agent feedback.

Con: Reviewers can get defensive about how they graded. The discussion can become heated because everybody wants to prove that their score is right, instead of figuring out the correct solution together.

2. Review and discuss together (team approach)

You read over the conversations for the first time together as a group. As a team, you decide how to score the conversations. If you run into any conflicts or disagreements, usually either the Head of Quality or the leader of the review program will make the final decision.

Pro: This strategy removes the feeling of being graded – and the possible stress that this can cause – from the calibration sessions. By allowing reviewers to discuss tickets together, you strengthen the common understanding of your quality criteria among your reviewers.

Con: You will not be able to measure the current discrepancies in your reviewers’ work. If you don’t have an understanding of how consistent and unbiased your reviewers are, it might be difficult to iron these differences out.

3. Review together with agents (agent approach)

By inviting your support reps to meetings where you discuss the best way of handling specific situations, you allow them to learn your quality expectations and see customer interactions from another perspective.

Pro: This is a very transparent way of doing QA calibrations. Discussing review scores together allows agents and reviewers to learn from each other and it aligns the entire team around the same goals and quality standards.

Con: You will not have your reviewers’ original scores to compare to the agreed benchmark. Some support reps may also feel uncomfortable defending their actions to a room of reviewers. 

Read more about the pros and cons of the 3 main approaches.

An example:

Let’s look at the most common approach: the blind approach.

Step 1: Find your conversations to calibrate and share these with your reviewers.

Step 2: Have your reviewers grade the conversations before your calibration session (make sure they can’t see each other’s ratings).

Step 3: Meet with your reviewers and reveal how each person has scored each ticket.

💡 With Klaus, you can see the results on a dedicated Calibration dashboard.

Step 4: Wherever discrepancies are found, open a round of discussion until a consensus is reached on how to score the ticket.

Step 5: Continue until you have gone through each discrepancy.

Step 6: Document all important findings and agree on how these will be shared with your support teams.
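
If your reviewers’ blind scores end up in a spreadsheet or export, a few lines of code can flag which tickets need a round of discussion in Step 4. This is a minimal sketch, assuming a 4-point rating scale; the reviewer names, tickets, scores, and threshold are all illustrative:

```python
# Flag tickets where blind scores diverge enough to warrant discussion.
# All data below is made up; in practice, load it from your QA tool's
# export or your shared spreadsheet.
scores = {
    "ticket-101": {"Ana": 4, "Ben": 4, "Cleo": 3},
    "ticket-102": {"Ana": 2, "Ben": 4, "Cleo": 3},
    "ticket-103": {"Ana": 1, "Ben": 1, "Cleo": 1},
}

SPREAD_THRESHOLD = 1  # discuss any ticket where max - min exceeds this

for ticket, by_reviewer in scores.items():
    values = list(by_reviewer.values())
    spread = max(values) - min(values)
    if spread > SPREAD_THRESHOLD:
        print(f"{ticket}: discuss (scores {by_reviewer})")
    else:
        print(f"{ticket}: consensus, skip")
```

Anything the script flags goes on the discussion agenda; tickets already in consensus can be skipped or used as quick confirmations.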


Focused sessions

You can even decide to focus your session on a particular topic. If billing cases have been causing quality issues, for example, selecting a group of billing tickets to calibrate can bring those issues to light, along with any processes that may need updating. Having clear goals around your calibration sessions helps ensure that the discussions do not get derailed.

Calibration session ☑ What next?

One aspect of calibration sessions that often gets overlooked is what you actually do with the results. For example, perhaps in one of your sessions you realize that one of your rating categories is unclear to your reviewers, so you refine its definition together. You should also inform your support reps of the updated definition, as they likely found the category unclear too.

Keeping your documentation up to date with all of your calibration findings and discussion outcomes helps keep your support team moving along the same path. 

Keep your reviewers aligned

If you follow the blind approach, you will be able to measure your reviewers’ alignment by comparing their initial grades to the agreed-upon score for each conversation. If some reviewers repeatedly fall outside of the consensus, they may need a helping hand in getting aligned with your quality standards.
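
One simple way to put a number on that alignment is each reviewer’s average deviation from the consensus score across the calibrated conversations. The sketch below is an illustrative assumption about how you might compute it from exported scores, not a built-in metric; all names and numbers are made up:

```python
# Average absolute deviation of each reviewer's blind score from the
# consensus agreed in the session (all data here is illustrative).
blind_scores = {
    "Ana":  {"ticket-101": 4, "ticket-102": 3, "ticket-103": 1},
    "Ben":  {"ticket-101": 4, "ticket-102": 4, "ticket-103": 2},
    "Cleo": {"ticket-101": 2, "ticket-102": 3, "ticket-103": 1},
}
consensus = {"ticket-101": 4, "ticket-102": 3, "ticket-103": 1}

for reviewer, graded in blind_scores.items():
    deviations = [abs(score - consensus[t]) for t, score in graded.items()]
    print(f"{reviewer}: average deviation {sum(deviations) / len(deviations):.2f}")
```

Reviewers whose average deviation stays high across several sessions are the ones most likely to need that helping hand.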

Pro tip: You can make use of Klaus’ coaching sessions feature to coach your reviewers too!


Time to celebrate… calibrate!

If they are not already, calibration sessions should become an integral part of your overall quality program. They are crucial not only for keeping your reviewers aligned but also for keeping an up-to-date definition of what quality means for your company.

QA software such as Klaus makes calibrations easy: find and share conversations to calibrate, manage the visibility of scores, and analyze your results all in one place!

If anyone still uses spreadsheets for their QA and calibrations, it’s time to switch to Klaus.

Want to learn more?

You can learn more about calibration sessions and other conversation review topics in our first customer service quality course – Setting up the purr-fect customer service QA program.


Written by

Riley Young
Riley is an educational content specialist at Klaus. Previously, he led the Training and Quality team at Pipedrive, before his cat Pickles insisted on the move.
