Conversation reviews and agent feedback can have a huge impact on the support team’s performance and productivity. But how do we know that the impact is positive? More specifically, how can we be sure that all reviewers leave unbiased and consistent feedback across the team?
The answer lies in support QA and quality calibrations – a systematic way of comparing different reviewers’ rating styles and patterns with the goal of providing consistent feedback that is fair and helpful to agents, regardless of who the reviewer is.
But before you jump into reviewing your reviewers’ reviews, take a moment to think about which support QA calibration strategy to use for your team. We’ve outlined the three main setups you can use; choose the one that best suits your goals and your team culture.
These support QA calibration strategies consist of the same elements – the reviewers, the conversations picked for review, and the facilitator leading the discussions – but they are handled in a different order and, as a result, yield different results.
Review first, then discuss
Most support teams opt for the “review first, then discuss” strategy in their quality calibration. In this setup, all reviewers assess the same conversations without seeing others’ ratings. After everyone has cast their votes, the team of reviewers comes together to compare and discuss the results.
Here’s how to set up support QA calibrations with individual reviews:
- Let the facilitator decide which conversations to review during the session.
- Ask all your reviewers to evaluate the selected tickets on their own.
- Compare the scores given by different reviewers.
- Discuss the discrepancies between the reviews.
- Decide what the most appropriate score is together, led by the facilitator.
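The score-comparison step above can be sketched in code. This is a minimal illustration, not a Klaus feature: the ticket IDs, reviewer names, scores, and the 10-point tolerance are all hypothetical, and a real QA tool would pull these from its own API.

```python
# Hypothetical blind-review scores per ticket (ticket ID -> reviewer -> score).
scores = {
    "ticket-101": {"Alice": 90, "Bob": 85, "Cara": 88},
    "ticket-102": {"Alice": 95, "Bob": 60, "Cara": 70},
}

THRESHOLD = 10  # assumed tolerance, in score points

def flag_discrepancies(scores, threshold=THRESHOLD):
    """Return (ticket, spread) pairs whose reviewer scores disagree
    by more than the threshold, so the facilitator knows which
    conversations need discussion."""
    flagged = []
    for ticket, ratings in scores.items():
        spread = max(ratings.values()) - min(ratings.values())
        if spread > threshold:
            flagged.append((ticket, spread))
    return flagged

print(flag_discrepancies(scores))  # only ticket-102 exceeds the threshold
```

With this sample data, ticket-101 (a 5-point spread) passes quietly, while ticket-102 (a 35-point spread) is surfaced for the discussion session.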
✨ Klaus has a dedicated calibration feature that allows you to hide reviews left by others. It also highlights the conversations that have been reviewed by other reviewers in the ticket list, so that quality calibrations are as easy to conduct as regular ticket reviews.
🟢 Pros: This strategy pinpoints the discrepancies in your reviewers’ feedback. By doing ‘blind’ evaluations on the same tickets, you get a good understanding of where possible inconsistencies are lurking in your agent feedback.
🔴 Cons: This approach can make some reviewers defensive about their opinions. As they’ve already made up their minds about the score they are going to give to a ticket, the discussion session can become heated because everybody wants to prove why their response is right, instead of figuring out the correct solution together.
Review and discuss together
More and more teams are looking for ways to bring more transparency and discussion to their calibration sessions. One solution is to bring all reviewers around a (virtual) table to talk through the calibration evaluations together and decide on the most suitable way of rating each ticket.
Here’s how to conduct a common support QA calibration session:
- Let the facilitator pick the conversations to review.
- Invite all reviewers to a (virtual) calibration meeting.
- Read the ticket together.
- Discuss how to score the ticket.
- Make the final decision on the score together, led by the facilitator.
🟢 Pros: This strategy removes the feeling of being graded – and the possible stress that this can cause – from the calibration sessions. By allowing reviewers to discuss tickets together, you strengthen the common understanding of your quality criteria among your reviewers.
🔴 Cons: You will not be able to measure the current discrepancies in your reviewers’ work. If you don’t have an understanding of how consistent and unbiased your reviewers are, it might be difficult to iron these differences out.
Review together with agents
Engaging agents in the support QA calibrations is a great way to make the most out of the process. By inviting your support reps to meetings where you discuss the best way of handling specific situations, you allow them to learn your quality expectations and see customer interactions from another perspective.
Here’s how you can hold calibration sessions with your agents and reviewers:
- Let the facilitator choose the conversations to discuss.
- Organize a calibration session for the entire team.
- Read the ticket together.
- Allow agents to share context around similar situations they’ve encountered.
- Listen to reviewers’ thoughts on how the ticket was handled.
- Discuss the final score together.
- Let the facilitator draw conclusions and make the final decision.
🟢 Pros: This is a very transparent way of doing QA calibrations. Discussing review scores together allows agents and reviewers to learn from each other and it aligns the entire team around the same goals and quality standards.
🔴 Cons: Individual reviewers’ performance receives little attention, and you won’t get a very detailed understanding of the discrepancies in the feedback that your agents receive. Moreover, this setup can make agents defensive and put them in an uncomfortable position where their responses are discussed publicly.
Your support QA calibration setup plays an important role in the outcomes that you receive from the session. Think about the goal you’re trying to achieve when choosing the strategy for your quality calibrations.
The decision on whether to review conversations individually or together with the team will impact your calibration results: you will either get an overview of your reviewers’ performance or focus on aligning the team towards the same standards.
Your choice between calibrating your QA among reviewers or together with agents will define the transparency of your sessions. There are different players in each of these setups; try to figure out how each of them would feel in these situations.
Find the most comfortable and productive strategy for your team. This way, you’ll make QA calibrations an integral part of your quality program – and that’s the key to making sure that your agents receive consistent feedback from your reviewers.
There’s a discussion around customer service QA calibrations happening in the CX community Quality Tribe. Join the discussion and share your thoughts!
More in this series: 5 Steps to a Successful Support QA Calibration Process