
Calibration sessions – Part 1

Three approaches

What you’ll learn

Let’s explore how you can set up calibration sessions to help your reviewers synchronize their assessments and eliminate bias. We go over three different approaches you can take when conducting calibrations: blind sessions, team sessions, and support rep sessions.

RILEY YOUNG: Today, we are going to discuss everything you need to know about calibration sessions. Let’s begin with the basics. What are calibration sessions in a conversation review environment?

NARRATOR: The goal of calibration sessions is to make sure that support reps receive the same quality of feedback regardless of who reviews their conversations.

These sessions involve getting all of your reviewers together to discuss how they would grade a particular sample of conversations. They help reviewers synchronize their assessments, provide consistent feedback to agents, and eliminate bias, thereby leading to more aligned reviews in the future.

RILEY YOUNG: Performing calibration sessions regularly helps ensure that your conversations are reviewed in a consistent manner, regardless of who is actually doing the reviewing.

Having consistent and fair grading is crucial, especially from your support reps’ point of view. There is nothing worse than being told you are doing well one week by one reviewer, and then being told it’s not good enough the next week by a different reviewer for virtually the exact same conversation.

So how exactly do you perform calibrations? While there are a few different approaches you can take, we will discuss three of the main strategies used. However, feel free to be creative and find a solution that works for your team.

NARRATOR: The three calibration strategies we will look at today are: blind, team, and support rep sessions. For all strategies, it’s best to start by finding a calibration facilitator.

This person is responsible for leading the sessions and starting the discussions. This can either be the person in charge of your review program, or you can nominate a different reviewer to be the facilitator for each session.

RILEY YOUNG: Let’s start with blind sessions. Going in blind is one of the more traditional ways to perform calibrations. This is where all reviewers are given the same conversations to grade and must do so before the calibration meeting. It is called the blind approach because reviewers cannot see the scores left by other reviewers, so their decisions won’t be influenced by the others’ grades. Once the meeting begins, the scores given by each reviewer are displayed for all to see. If the scores are identical, then jackpot! Your reviewers are all aligned and you can move on to the next conversation. If there are any differences in the grades (and, spoiler alert, there usually are), then it is time to open up the discussion. After a round of discussion, you have to decide how to grade the conversations. This establishes the benchmark score.

NARRATOR: The main benefit of the blind approach is that it highlights any discrepancies in reviewer feedback by comparing the scores to your set benchmark.

A downside to this approach is that some reviewers might get defensive about their opinions as they’ve already made up their minds about their scoring. The discussion can get heated because everybody wants to prove why their response is right, instead of figuring out the correct score together. 

RILEY YOUNG: Our second calibration strategy helps mitigate some of the more defensive reviewers. We call these team sessions. During team sessions, you read over the conversation for the first time together as a group. As a team, you decide how to score the conversations. If you run into any conflicts or disagreements, usually either the Head of Quality or the leader of the review program will make the final decision.

By allowing reviewers to discuss tickets together, you strengthen the common understanding of your quality criteria among your reviewers.

NARRATOR: A benefit of this approach is that it removes the reviewers’ feeling of being judged for their initial scoring and the possible stress this can cause. The drawback is you will not be able to measure the current discrepancies in your reviewers’ work, as you will be reviewing the conversation together as a team, rather than individually before the calibration session. 

RILEY YOUNG: As a more inclusive practice, the third option is to include some support reps in your calibration sessions. During these sessions, you read through the conversations together as a group and discuss how to grade them. Having input from your support reps can spark some great discussions and help you better understand why certain behaviors exist within your support team. However, you should avoid calibrating conversations that are from the support reps who are taking part in the session. It puts them in an uncomfortable position where their responses are discussed publicly, which, in turn, can make them defensive.

NARRATOR: This is a very transparent approach to calibration. It allows support reps and reviewers to learn from each other and aligns the entire team around the same goals and quality standards. A downside to this approach is that, once again, you won’t be able to compare your reviewers’ original scores to an agreed benchmark. Some support reps may also feel uncomfortable defending their actions to a room of reviewers. 

RILEY YOUNG: It’s decision time! From these three approaches, choose the one that best reflects your review program’s goals and suits your team’s readiness for these discussions.

Join us for part two, where we discuss calibration goals, frequencies, and what you should do with the calibration results. We’ll see you there!
