FAQ: Customer Service Conversation Reviews on Klaus



Conversation reviews have become a common practice for all quality-oriented customer service teams. Constant feedback is the most efficient way to improve agents’ performance, so more and more support teams are rolling out internal quality programs to facilitate that.

As the makers of the conversation review tool Klaus, we’ve talked to numerous support teams who are looking for ways to boost their performance. Here are answers to some of the most frequently asked questions about customer service conversation reviews and Klaus.

What are customer service conversation reviews?

Conversation reviews are a systematic way to assess how well your agents’ responses meet your internal quality standards. Support teams implement regular conversation reviews to improve the quality of their customer service.

Just like code review for software engineers or editorial processes for writers, customer service conversation reviews bring in an extra pair of eyes to read agents’ responses and suggest improvements.

The term “customer service quality assurance” is sometimes also used to describe this process. However, since “quality assurance” is already strongly associated with software QA and doesn’t capture everything that conversation reviews stand for, we prefer the latter term.

Read more about the reasons why we prefer “conversation reviews” to “support QA” here.

Why should I do conversation reviews in my support team?

Conversation reviews are the most efficient means of improving your support quality. Internal assessments pinpoint your team’s areas of growth and, with constant feedback, help your agents become better at what they do.

Here are the main benefits of doing conversation reviews:

  • Gain control over your support quality: find your team’s knowledge gaps and provide actionable feedback that helps your agents boost their performance.
  • Bring consistency into your support interactions: analyze your team’s conversations across all channels to make sure that customers always get excellent treatment, regardless of the agent approached or platform used.
  • Build team coherence: knowing that all agents are held to the same standard makes the entire team work harder towards the same goal.
  • Track and report your team’s progress: conversation reviews allow you to keep an eye on your Internal Quality Score (IQS) and notice any changes in your support quality that need your immediate attention.

Conversation reviews provide accurate insight into how well your support team is performing. While customer-based metrics like CSAT, CES, and NPS reflect how satisfied your customers are with your product and services, IQS (the metric produced by conversation reviews) gives you an internal perspective on the quality of your support.
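To make the metric concrete: one straightforward way to express an internal quality score is the share of points earned out of points possible across all reviewed conversations. The sketch below is a simplified illustration of that idea, not Klaus’s exact formula, and the category names and 0–2 scale are just example assumptions.

```python
# A rough illustration only -- not Klaus's exact IQS formula.
# Assumes each review stores, per category, the points given and the points possible.

def internal_quality_score(reviews):
    """Return the share of points earned out of points possible, as a percentage."""
    earned = sum(score for review in reviews for score, _ in review.values())
    possible = sum(max_score for review in reviews for _, max_score in review.values())
    return 100 * earned / possible if possible else 0.0

# Example: two reviewed conversations, each rated 0-2 in three categories
reviews = [
    {"Solution": (2, 2), "Tone": (1, 2), "Product knowledge": (2, 2)},
    {"Solution": (1, 2), "Tone": (2, 2), "Product knowledge": (2, 2)},
]
print(f"IQS: {internal_quality_score(reviews):.1f}%")  # IQS: 83.3%
```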

Read more about why CSAT is not enough here.

Which support interactions should I review?

Most support teams analyze only the conversations that customers have rated negatively to understand what went wrong. However, that doesn’t give you a complete picture of how your team is doing.

To get an overview of the quality of your customer service, you should review your interactions based on the following principles:

  • Look into the interactions that have been rated by your customers. Analyze why your team received negative ratings, but pay attention to the conversations that received positive feedback, too. There’s a lot you can learn in both cases.
  • Analyze a random sample of all conversations (a rough sampling sketch follows this list). 90% of your interactions never receive any feedback from customers, so looking into these conversations gives you a more objective overview of the quality of your customer service. “That’s fine, continue” is great feedback and can go a long way in motivating agents.
  • Go through the interactions you’ve had with your churned customers. This can give you valuable insight into why people unsubscribe from your services or stop buying from you.
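To make the mechanics concrete, here is a minimal sketch of assembling such a review sample. The field names (`csat`, `customer_churned`) are hypothetical placeholders rather than fields from any particular help desk export, so adapt them to whatever your system provides.

```python
# A minimal sketch of building a review sample from the principles above.
# Field names are hypothetical, not taken from any specific help desk.
import random

def build_review_sample(conversations, random_n=20, seed=42):
    """Combine rated, churned-customer, and randomly sampled unrated conversations."""
    rated = [c for c in conversations if c.get("csat") is not None]
    churned = [c for c in conversations if c.get("customer_churned")]
    unrated = [c for c in conversations if c.get("csat") is None]
    rng = random.Random(seed)  # fixed seed keeps the weekly sample reproducible
    random_pick = rng.sample(unrated, min(random_n, len(unrated)))
    # Note: a conversation may appear twice if a churned customer also left a rating.
    return rated + churned + random_pick
```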

If you implement a quality program that systematically looks into the conversations that your team has had with your customers, you’ll get a clear picture of how your team is performing.

Which rating categories should I use in my scorecard?

As the goal of conversation reviews is to track how well your team is performing against your internal quality standards, there isn’t a single scorecard that suits all companies’ needs. Your rubric should reflect your company’s customer service goals and values.

However, we’ve done some research on the most popular rating categories amongst all Klaus users as well as specifically amongst large support teams. The results of these studies can give you some ideas and inspiration for developing your own scorecard.

62% of all Klaus users rate their interactions in the following categories:

  • Correctness and completeness of the solution;
  • Empathy and tone expressed in support interactions;
  • Accuracy in product knowledge.

In addition to that, “Adherence to internal processes” made it into the list of most popular quality criteria for large support teams with more than 25 agents.

If you’re setting up conversation reviews for the first time, you can start with 2-4 basic rating categories and iterate as you go. Your quality standards may change over time, so it’s best to keep your rubric flexible.
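If you want a concrete picture of what such a starter setup can look like, the sketch below defines a minimal scorecard and a check that every review fills it in correctly. The categories and the 0–2 scale are assumptions borrowed from the popular criteria above, not a prescribed rubric.

```python
# An illustrative starter scorecard -- categories and scale are assumptions to adapt.
STARTER_SCORECARD = {
    "scale": (0, 2),  # e.g. 0 = below standard, 1 = partially met, 2 = met the standard
    "categories": [
        "Solution",           # correctness and completeness of the solution
        "Tone",               # empathy and tone expressed in the interaction
        "Product knowledge",  # accuracy of product knowledge
    ],
}

def validate_review(ratings, scorecard=STARTER_SCORECARD):
    """Check that a review rates every category and stays within the scale."""
    low, high = scorecard["scale"]
    missing = set(scorecard["categories"]) - set(ratings)
    if missing:
        raise ValueError(f"Unrated categories: {sorted(missing)}")
    out_of_range = {c: r for c, r in ratings.items() if not low <= r <= high}
    if out_of_range:
        raise ValueError(f"Ratings outside the {low}-{high} scale: {out_of_range}")
    return ratings

# Example review of one conversation
validate_review({"Solution": 2, "Tone": 1, "Product knowledge": 2})
```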

When should I switch from spreadsheets to a conversation review tool?

Conversation review tools are there to make support teams’ internal feedback processes easier, faster, and more enjoyable by doing most of the tedious managerial work. Smaller teams whose conversation volume is relatively low might find that spreadsheets are enough for them.

Companies that haven’t used a conversation review tool before usually switch to one in the following situations:

  • Teams that are creating their quality program for the first time often prefer to use an existing framework for the process, since building their own spreadsheets and formulas can be quite complex and time-consuming.
  • Teams that spend the majority of their support QA time on the managerial tasks around conversation reviews (like copy-pasting tickets or deciding which conversations to review) find enormous value in a dedicated conversation review tool.

We’ve seen companies cut their support QA time by 70% after switching from spreadsheets to Klaus.

The bottom line is that conversation reviews only work if you really do them. If your manual processes are inconvenient and time-consuming, your team won’t be motivated to do conversation reviews, and it’s high time to switch to a solution that does those tedious tasks for you.

Why choose Klaus?

Klaus is a conversation review tool built by actual support folks for support folks. We’ve condensed our years of experience in building global, quality-oriented support teams into this tool to help support teams improve their performance with agent feedback.

Here’s how Klaus makes conversation reviews efficient and enjoyable:

  • Klaus integrates with all popular help desk solutions (like Intercom, Zendesk, Help Scout, etc.) and automatically pulls support interactions in for review.
  • Custom scorecards allow you to create rating criteria that suit your team’s needs.
  • Advanced filtering and features for building random samples help you find the right conversations to review.
  • Automatic Slack and email notifications make sure your agents always know about the feedback they’ve received.
  • Easy reporting and Internal Quality Score tracking help you keep an eye on your team’s performance and immediately notice any changes that need your attention.

Klaus is a universal tool for all quality-oriented customer service teams, used by small and large teams alike.

What other questions do you have about conversation reviews, customer service quality, or life in general? Let us know in the comments below and we’ll be happy to share our thoughts.
