Automation has progressed quickly from a hot new topic to a must-have for ambitious teams. Customer expectations are evolving as fast as technology, and advances in AI and automation help your service team keep up. Apply auto QA wisely, and your path to consistent, excellent customer service will be faster and smoother.
However, automated and manual QA are not an either/or choice. Auto QA reduces the labor spent on repetitive tasks and works as an enhancement to your quality assurance efforts. It does not replace your QA specialist.
Capabilities of Auto QA
Scanning the surface
Signal vs Noise
Klaus + Auto QA
Limitations of Auto QA
Automatic ticket assessment for 100% coverage
This is the bare minimum you should expect from a quality assurance tool. Every support interaction can be processed by Klaus’ proprietary ML engine and scanned for preliminary grading.
“Machine learning allows us to analyze every customer message automatically using the latest innovative approaches.
We can focus on quickly reaching a broad and international audience by using multilingual pre-trained large language models. This allows us to, for example, conduct sentiment analysis and named entity recognition. Through this, we added support for two new languages (Polish and Portuguese) in the space of just a few weeks.
Relying on language models enables AI to achieve things at a speed that seemed impossible just 5 years ago.
In addition to ML models, we focus on AI interpretability. Not everyone is trained in data interpretation – we know that! It’s our job to help our customers make sense of vast amounts of data. We distill the complex AI insights into informative and concise dashboards and graphs.”
However, you should not confuse this grading with the scorecard rating done by reviewers later in the pipeline. This assessment gives context to the entirety of your help desk’s interactions: 100% coverage without you having to examine a single ticket.
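To make the idea concrete, here is a toy sketch of what 100% automated scanning looks like in principle. Klaus uses multilingual pre-trained language models; the tiny word-list classifier below is purely an illustrative stand-in, not the real method.

```python
# Toy illustration of preliminary sentiment grading over support messages.
# A real system would use a multilingual pre-trained model; these word
# lists are invented for the sketch.

POSITIVE = {"thanks", "great", "perfect", "love", "helpful"}
NEGATIVE = {"angry", "broken", "terrible", "refund", "frustrated"}

def sentiment(message: str) -> str:
    """Label one message as positive, negative, or neutral."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

def scan_all(messages):
    """100% coverage: attach a preliminary label to every message."""
    return [(msg, sentiment(msg)) for msg in messages]
```

The point of the sketch is the `scan_all` step: every interaction gets a preliminary label automatically, with no human picking tickets.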
Support teams often handle hundreds of conversations daily.
This ocean of communication is too deep to dive into without getting lost in the details. What you’re interested in are the currents.
What does that mean? Well, 100% coverage can safely plunge into the depths of your help desk interactions so that you only need to scan the surface.
A large proportion of your conversations are straightforward, and many tell you nothing new – for example, a quick back-and-forth between agent and customer that solves a common problem reveals nothing about performance or processes. Highlighting a conversation like this for review, and rating it on a scorecard with ten categories, is a waste of time.
Auto QA can label and score these conversations with the details that matter.
By scanning the surface, like this, you can identify overarching trends and see the bigger picture of performance.
But getting every single item into the system isn’t enough. You don’t want the golden tickets – the ones packed with caramel goodness for quality improvement purposes – to stay buried in the pile.
Instead, auto QA also acts as a discovery tool, guiding you to where you can get the insights (aka dessert) you are craving. This is where the real magic lies.
You can go on your own discovery journey through the Conversation Insights layers – our data exploration tool that puts metrics in perspective. Or you can focus solely on sentiment or critical tickets as markers for where to focus your attention.
There’s a lot of noise, so we’ve made it easy for you to filter it out and find the signal.
Klaus’ AI features
How Klaus assesses every ticket:
The Sentiment Filter allows you to pluck out conversations where the customer displayed either contentment or frustration.
- Review tickets with negative sentiment to detect areas for improvement.
- Or find conversations with positive sentiment to let good examples rise to the top for praise.
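As a rough sketch, the sentiment filter reduces to a simple selection over conversations that have already been labeled by auto QA. The field names below are assumptions made for illustration, not Klaus’ actual API.

```python
# Hypothetical sketch of a sentiment filter over pre-labeled conversations.
# The "sentiment" field is assumed to come from the automatic scan.

def filter_by_sentiment(conversations, wanted):
    """Return only the conversations carrying the wanted sentiment label."""
    return [c for c in conversations if c["sentiment"] == wanted]

tickets = [
    {"id": 1, "sentiment": "negative"},
    {"id": 2, "sentiment": "positive"},
    {"id": 3, "sentiment": "neutral"},
]

to_improve = filter_by_sentiment(tickets, "negative")  # areas for improvement
to_praise = filter_by_sentiment(tickets, "positive")   # good examples to praise
```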
Klaus’ Spotlight is a unique conversation discovery feature that automatically samples the conversations critical for review. These are selected through a multilevel statistical analysis of your own communication metadata – in other words, based on criteria the AI has customized for your support team.
By reviewing conversations highlighted by Spotlight, you can ensure your QA efforts are focused on conversations that are critical to review and contain the most influential learning moments.
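The core idea, sketched minimally: flag conversations whose metadata deviates strongly from your team’s own baseline. Spotlight’s real multilevel analysis is proprietary; the single-metric z-score below is only an illustration of “outlier by your own standards”, with an arbitrary threshold.

```python
# Minimal sketch of outlier-based conversation sampling: flag threads whose
# length deviates strongly from the team's own average. The z-score
# threshold is illustrative, not Spotlight's actual criterion.
import statistics

def spotlight(lengths, z_threshold=2.0):
    """Return indices of conversations that are statistical outliers."""
    mean = statistics.mean(lengths)
    stdev = statistics.pstdev(lengths)  # population standard deviation
    if stdev == 0:
        return []  # all conversations identical: nothing stands out
    return [i for i, n in enumerate(lengths)
            if abs(n - mean) / stdev > z_threshold]
```

Because the baseline is computed from your own data, the same conversation length can be unremarkable for one team and an outlier for another.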
Klaus into the future
“When you want to monitor customer service, the inclination is to want to know everything. But you don’t want to – and don’t have time to – review everything.
So when it comes to reviewing, sampling is everything.
Not knowing how to build a good sample, combined with the demand for more detailed information, creates a breeding ground for bad data. Sampling is a data mining task, which means it is not easy – and not something most people are trained in.
With Klaus, you get automatic evaluation of the more uncreative themes, with the critical sampling done for you repeatedly, automatically, and effortlessly (for the sanity of your reviewers).
We have found that, on average, companies use 11 categories on their scorecards. Yet half of their conversations contain only 3 messages. If the agent said just 2 sentences, can you expect good data on their empathy, listening, follow-up, and upselling skills? Rating these conversations on irrelevant categories similarly results in bad data.
We come back to problem number one: you want to know everything. But you don’t need to review everything.
So those short conversations where only a couple of categories are relevant? We’re going to review those for you.
In other words, our Auto QA feature is coming soon.”
“You don’t need context to judge a grammar mistake, or whether someone used a proper greeting. That’s one reason why it is a boring task, depleting human reviewers’ patience and attention span – while machines can learn it easily (and without complaints).
You definitely want to see context when, for example, emotions run high and the conversation has strayed from the original problem. A bot can tell you that this conversation went longer than average. But that could mean the agent did not control the conversation, or, on the contrary, that they managed to calm down an upset customer and bring them back onto a path to resolution.
Providing a customer with reassurance where needed, asking the right questions, steering a conversation – these skills are difficult to operationalize for a machine, and super easy to spot for a human. Let the machine spot the conversations that need human review, and then let a human reviewer give feedback that makes sense.
As you decide on your company’s quality requirements, it’s crucial to think about the value of Auto QA in combination with the human capability of analyzing things in context.”
– Head of Remote & Quality, Klaus
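The division of labor described above can be sketched as a simple routing rule: machines score the short, context-free cases, and humans take everything that needs judgment. The thresholds, category names, and return shape below are invented for illustration only.

```python
# Illustrative routing rule: auto-score short, calm threads on context-free
# categories; send long or emotional threads to a human with the full
# scorecard. All names and thresholds here are hypothetical.

ALL_CATEGORIES = ["grammar", "greeting", "empathy", "listening",
                  "follow-up", "upselling"]
CONTEXT_FREE = {"grammar", "greeting"}  # machine-checkable without context

def route(message_count, high_emotion):
    """Decide who reviews a conversation and on which categories."""
    if message_count <= 3 and not high_emotion:
        # Short and calm: only the routine, context-free categories apply.
        return "auto", [c for c in ALL_CATEGORIES if c in CONTEXT_FREE]
    # Emotions ran high or the thread is long: a human reads it in context.
    return "human", ALL_CATEGORIES
```

Note the asymmetry: the machine never decides that a tense conversation went well or badly; it only decides that a human should look.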
Want to know more about the topics we touch on? Here are some recs:
- More about customer service automation in general: Automated customer service: A full guide
- More about Klaus’ mission to scale quality assurance AI: Klaus raises €12m to scale AI platform
- More about quality management: How to manage customer service quality: a complete guide