
Pros and Cons of 3 Support QA Calibration Strategies

Setting up a QA program · 7 min read · Dec 21, 2020


Conversation reviews and agent feedback can have a huge impact on the support team’s performance and productivity. But how do we know that the impact is positive? More specifically, how can we be sure that all reviewers leave unbiased and consistent feedback across the team?

The answer lies in support QA and quality calibrations – a systematic way of comparing different reviewers’ rating styles and patterns with the goal of providing consistent feedback that is fair and helpful to agents, regardless of who the reviewer is. 

But before you jump into reviewing your reviewers’ reviews, take a moment to think about which support QA calibration strategy to use for your team. We’ve outlined the three main setups you can choose from; pick the one that best suits your goals and your team culture.

These support QA calibration strategies consist of the same elements – the reviewers, the conversations picked for review, and the facilitator leading the discussions – but they arrange these elements in a different order and, because of that, yield different results.

Review first, then discuss

Most support teams opt for the “review first, then discuss” strategy in their quality calibration. In this setup, all reviewers assess the same conversations without seeing others’ ratings. After everyone has cast their votes, the team of reviewers comes together to compare and discuss the results.

Here’s how to set up support QA calibrations with individual reviews:

  1. Let the facilitator decide which conversations to review during the session.
  2. Ask all your reviewers to evaluate the selected tickets on their own.
  3. Compare the scores given by different reviewers (a small sketch for surfacing score gaps follows this list).
  4. Discuss the discrepancies between the reviews.
  5. Decide what the most appropriate score is together, led by the facilitator.
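Once everyone has cast their blind votes, a few lines of scripting can surface the tickets with the widest rating gaps before the meeting starts. Here’s a minimal sketch in Python, assuming you’ve exported each rating as a (ticket, reviewer, score) row on a 1–5 scale – the data shape and values are illustrative, not a Klaus export format:

    from collections import defaultdict

    # Hypothetical export of blind calibration reviews:
    # one (ticket, reviewer, score) row per rating, on a 1-5 scale.
    reviews = [
        ("T-101", "Alice", 5), ("T-101", "Bob", 3), ("T-101", "Carol", 4),
        ("T-102", "Alice", 4), ("T-102", "Bob", 4), ("T-102", "Carol", 4),
        ("T-103", "Alice", 2), ("T-103", "Bob", 5), ("T-103", "Carol", 3),
    ]

    # Group the scores by ticket so each ticket's ratings can be compared.
    scores_by_ticket = defaultdict(list)
    for ticket, _reviewer, score in reviews:
        scores_by_ticket[ticket].append(score)

    # Rank tickets by score spread (max - min); the widest gaps are the
    # best candidates to discuss first during the calibration session.
    by_spread = sorted(
        scores_by_ticket.items(),
        key=lambda item: max(item[1]) - min(item[1]),
        reverse=True,
    )
    for ticket, scores in by_spread:
        print(f"{ticket}: scores={scores}, spread={max(scores) - min(scores)}")

Sorting like this lets the facilitator open the discussion with the conversations where reviewers disagree most, instead of spending meeting time on tickets everyone already scores the same.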

Klaus has a dedicated calibration feature that allows you to hide reviews left by others. It also highlights conversations that have already been reviewed by others in the ticket list, making quality calibrations as easy to conduct as regular ticket reviews.

Pros: This strategy pinpoints the discrepancies in your reviewers’ feedback. By doing ‘blind’ evaluations on the same tickets, you get a good understanding of where possible inconsistencies are lurking in your agent feedback.

Cons: This approach can make some reviewers defensive about their opinions. As they’ve already made up their minds about the score they are going to give to a ticket, the discussion session can become heated because everybody wants to prove why their response is right, instead of figuring out the correct solution together. 
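Beyond individual sessions, it helps to check whether your reviewers are actually converging over time. A simple pairwise agreement rate is one rough yardstick: the share of reviewer pairs whose scores land close together on the same ticket. The sketch below is an illustration under assumed data – the 1–5 scale and the ±1 “close enough” tolerance are our assumptions, not Klaus features; established inter-rater reliability statistics such as Krippendorff’s alpha are available if you need something more rigorous:

    from itertools import combinations

    # Hypothetical blind-review scores per ticket, on a 1-5 scale.
    scores_by_ticket = {
        "T-101": [5, 3, 4],
        "T-102": [4, 4, 4],
        "T-103": [2, 5, 3],
    }

    def agreement_rate(scores_by_ticket, tolerance=1):
        # Share of reviewer pairs whose scores on the same ticket
        # differ by at most `tolerance` points.
        agree = total = 0
        for scores in scores_by_ticket.values():
            for a, b in combinations(scores, 2):
                total += 1
                agree += abs(a - b) <= tolerance
        return agree / total if total else 0.0

    print(f"Agreement rate: {agreement_rate(scores_by_ticket):.0%}")

If that number climbs from one calibration session to the next, your feedback is getting more consistent – which is the whole point of the exercise.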


Review and discuss together

More and more teams are looking for ways to bring more transparency and discussion to their calibration sessions. One solution is to bring all reviewers around a (virtual) table to talk through the calibration evaluations together and decide on the most suitable way of rating each ticket.

Here’s how to conduct a common support QA calibration session:

  1. Let the facilitator pick the conversations to review (see the sampling sketch after this list).
  2. Invite all reviewers to a (virtual) calibration meeting.
  3. Read the ticket together.
  4. Discuss how to score the ticket.
  5. Make the final decision on the score together, led by the facilitator.
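How the facilitator picks those conversations matters: a hand-picked set can skew toward memorable edge cases. Random sampling, optionally spread across channels, is one lightweight alternative. Here’s a minimal sketch, assuming you have a list of reviewable conversations tagged by channel – the data shape and the pick_for_calibration helper are hypothetical, not part of any Klaus API:

    import random

    # Hypothetical pool of reviewable conversations, tagged by channel.
    tickets = [
        {"id": "T-201", "channel": "email"},
        {"id": "T-202", "channel": "chat"},
        {"id": "T-203", "channel": "email"},
        {"id": "T-204", "channel": "phone"},
        {"id": "T-205", "channel": "chat"},
        {"id": "T-206", "channel": "phone"},
    ]

    def pick_for_calibration(tickets, per_channel=1, seed=None):
        # Randomly pick `per_channel` tickets from each channel so the
        # session covers the full range of conversation types.
        rng = random.Random(seed)
        by_channel = {}
        for t in tickets:
            by_channel.setdefault(t["channel"], []).append(t)
        picked = []
        for group in by_channel.values():
            picked.extend(rng.sample(group, min(per_channel, len(group))))
        return picked

    for t in pick_for_calibration(tickets, seed=42):
        print(t["id"], t["channel"])

A fixed seed makes the draw reproducible, which is handy if you want to show the team that the selection wasn’t cherry-picked.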

Pros: This strategy removes the feeling of being graded – and the possible stress that this can cause – from the calibration sessions. By allowing reviewers to discuss tickets together, you strengthen the common understanding of your quality criteria among your reviewers. 

Cons: You will not be able to measure the current discrepancies in your reviewers’ work. If you don’t have an understanding of how consistent and unbiased your reviewers are, it might be difficult to iron these differences out.


Review together with agents

Engaging agents in the support QA calibrations is a great way to make the most out of the process. By inviting your support reps to meetings where you discuss the best way of handling specific situations, you allow them to learn your quality expectations and see customer interactions from another perspective.

Here’s how you can hold calibration sessions with your agents and reviewers:

  1. Let the facilitator choose the conversations to discuss.
  2. Organize a calibration session for the entire team.
  3. Read the ticket together.
  4. Allow agents to share context around similar situations they’ve encountered.
  5. Listen to reviewers’ thoughts on how the ticket was handled.
  6. Discuss the final score together.
  7. Let the facilitator draw conclusions and make the final decision.

Pros: This is a very transparent way of doing QA calibrations. Discussing review scores together allows agents and reviewers to learn from each other and aligns the entire team around the same goals and quality standards.

Cons: Individual reviewers’ performance receives little attention, and you won’t get a very detailed understanding of the discrepancies in the feedback that your agents receive. Moreover, this setup can make agents defensive and put them in an uncomfortable position where their responses are discussed publicly.

Read more: A Day in The Life of a Support QA Specialist


Your support QA calibration setup plays an important role in the outcomes that you receive from the session. Think about the goal you’re trying to achieve when choosing the strategy for your quality calibrations.

The decision on whether to review conversations individually or together with the team will impact your calibration results: you will either get an overview of your reviewers’ performance or focus on aligning the team towards the same standards.

Your choice between calibrating your QA among reviewers or together with agents will define the transparency of your sessions. Each setup involves different players; try to figure out how each of them would feel in these situations.

Find the most comfortable and productive strategy for your team. This way, you’ll make QA calibrations an integral part of your quality program – and that’s the key to making sure that your agents receive consistent feedback from your reviewers.

Happy calibrating!

There’s a discussion around customer service QA calibrations happening in the CX community Quality Tribe. Join the discussion and share your thoughts!

More in this series: 5 Steps to a Successful Support QA Calibration Process


Written by

Merit Valdsalu
Merit is the content writer at Klaus - though most of her texts have probably been ghostwritten by her rescue cat Oskar.
