Well, a first this season – we finally have someone on who says YES when asked if AI is coming for our jobs.
But don’t worry. There’s a lot of nuance in her answer, and you are not going to be unemployed any time soon if you pick up a few easily learnable skills.
Mervi Sepp Rei, PhD is our guest: Klaus’ Data & Machine Learning Lead. Mervi’s goal is to bring everyone in customer service the latest tech and AI capabilities. Grace is your host this season – she feeds Klaus customer service content in written and audio form from Prague.
Listen in to learn:
- Why data literacy will help you persuade people that you are right.
- Why looking into the conversations from the quality perspective is absolutely crucial.
- How QA specialists’ roles will change to take technological advances into account.
If some of the terms we talk about have you puzzled, check out episode one of this podcast for more context.
You can also read the podcast transcript in full below!
Grace: Hello! For the first time this season, I am welcoming a fellow Klausian to the podcast! I’m really happy to have Mervi here to talk to because we can finally talk about a few things that I’ve been wanting to for a while about customer service careers. And how Klaus is going to help people. I have the absolute best person to do it from our side.
So Mervi, can you please maybe explain first who you are and what you do at Klaus?
Mervi: Hey, I’m Mervi. I am the Head of Machine Learning and Data at Klaus, but my background is in physics. I’m a physicist by training, and I built mathematical models for a decade. I simulated diverse processes: how light moves in a photonic crystal, how murderous waves happen in the oceans, or how energy is transferred in heart muscle cells.
I’ve also built analytics solutions for contact centers, and eight years ago I built my first machine learning model, which sorted Finnish emails based on how critical they were. I’ve been at Klaus for three years, and here I lead a machine learning team of engineers and data scientists who build AI products. We have built very advanced NLP processing into the back end of the platform. It runs completely in parallel with our regular data processing, which means all of the language model, GPT, and morphological analysis for each incoming text is done approximately 45 seconds after it enters our system.
I do all of this because I want to make sure people have the latest tech and AI capabilities in Klaus: they can review their tickets better, make their review samples better, and do less manual work.
Grace: That is immensely impressive and slightly scary, only partly because you said murder…
Mervi: I do that.
Grace: …at some point in that answer. But for somebody who, for example, is working as a customer service leader or customer service agent, why is that important to them?
Mervi: It’s important because understanding quality and figuring out what exactly happens in a contact center is necessary. Sure, there are myriad other KPIs that people monitor, but they are not sufficient and don’t tell the full picture, so it is important to go and check what happens in the actual conversations.
This is especially important if one chooses the path of automation: applying some sort of bots, whether generative or regular NLP bots. Where best to place them, what the most common themes are, how to understand the bot’s behavior after it has gone live so that it actually does what it’s supposed to do rather than annoy customers.
For this, having the ability to look into the conversations from the quality perspective is absolutely crucial.
Grace: And it’s the type of thing I imagine that just with the mass of data that is available, it’s just impossible really without these smart tools.
Mervi: So how Klaus handles data is completely different from how a helpdesk handles data. For a helpdesk, it’s important that no conversation, no ticket, is ever left hanging. There has to be an owner, someone who deals with it at every given moment. Once the ticket is closed, it’s done with, and the customer has received some sort of solution. It’s no longer the helpdesk’s problem.
Sure, we have to count how many there were and so on, but from the quality monitoring point of view, helpdesks don’t really do that. This is what Klaus does: we analyze conversations after the fact a bit differently, or completely differently, which makes it possible to find conversations from a completely different angle than a helpdesk would normally let you.
Grace: That’s fascinating, and I think that’s something that is maybe even a conversation that is not being focused on enough at the moment. When everyone is talking about AI, everyone is talking about bots, but only on that side of things, right? Instead of the quality management side.
Do you wish people understood more the importance of quality management?
Mervi: Absolutely. Especially in this realm of exploding ChatGPT and generative bots being everywhere. I remember eight, nine years ago, it was the same story that bots are coming. They will take over everybody’s job. There will no longer be humans needed in the contact center. Fast forward eight years, this has not happened.
And what I see has happened is that even though there are far more capable generative bots now, which can live off your knowledge base or any other large body of text and construct answers, knowing how to direct these generative bots, how to control them, how to have visibility into what they’re doing, and how to design the way they are supposed to interact has become that much more important.
I think that, right now, the wave of automation has actually arrived in a meaningful sense. It’s not just “let’s stick some kind of bot somewhere”, but how to keep this bot alive so that it evolves over time. It evolves when your business changes, it evolves when your agents change, so that it’s not a one-off that is then left to die.
Grace: That’s so similar to what several of our other guests have said. Sylvain from Ultimate and Declan from Intercom both echoed the same thing: it’s about continuous management, instead of something that you set up once and then leave running, or, as you said, something that’s going to replace us. Because that’s not in any way the case, right?
How have you seen AI evolve in the realm of customer service quality management then during your career? Rather than purely from a customer service point of view, how has it affected quality assurance or quality management?
Mervi: From the quality management side of things, first of all, what AI can do, and how it should be used, is to uncover the conversations that you should review.
So position AI in a way that it doesn’t do the quality assessment for you, but enriches the conversational data so that it’s easier for you to find the conversations you must review – to build the sample. I’ve analyzed tens and tens of customer service platforms and what the data looks like there. There’s a portion of the data that is very typical. You likely know how you should solve it. Yes, there’s automation potential there, but from the quality perspective, these are known cases. If you pick those for quality review, you know what the outcome will be. That doesn’t give you any learning potential.
The question is how to find conversations that are somehow deviant, that stray from the norm, that are somehow different. This is one aspect we focus on: statistical models, AI models, that help you find these things, like, ‘Hey, this is interesting. If you fix this, you might gain a lot. So check it from the quality perspective.’ It’s uncovering needles in a haystack for a more efficient review process.
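As a rough illustration of the idea (not Klaus’ actual models), even a simple z-score heuristic can surface conversations that stray from the norm on some feature, instead of sampling at random. The feature and data below are entirely made up:

```python
# Illustrative only: flag "deviant" conversations for QA review with a
# simple z-score heuristic. Real systems use richer features and proper
# statistical/ML models; the data below is made up.
from statistics import mean, stdev

def outlier_sample(conversations, feature, threshold=1.5):
    """Return conversations whose feature lies more than `threshold`
    standard deviations from the mean."""
    values = [c[feature] for c in conversations]
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [c for c in conversations
            if abs(c[feature] - mu) / sigma > threshold]

conversations = [
    {"id": 1, "messages": 4}, {"id": 2, "messages": 5},
    {"id": 3, "messages": 6}, {"id": 4, "messages": 5},
    {"id": 5, "messages": 40},  # unusually long thread: worth a review
]
flagged = outlier_sample(conversations, "messages")
```

Only the unusually long conversation is flagged; the typical ones, whose review outcome is predictable, are skipped.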
On another side, where we’re going right now is trying to figure out how to do the quality assessment completely automatically. And when I say completely automatically, I don’t actually mean completely automatically.
I mean that the simple things your quality analysts currently do manually should be handed over to the machine, so it can do them automatically and you don’t need to tie up people. For example, tedious, monotonous tasks like grammar evaluation take incredible effort and focus for a human to do properly; for machines it is that much easier. Or there are simple checks, like looking for greetings and goodbyes, or simple places where a bot could have failed, so reviewers can get to those conversations faster.
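As a hedged sketch of what such simple automated checks might look like (the patterns, message format, and function name are assumptions for illustration, not Klaus’ AutoQA):

```python
# Illustrative only: automating the tedious checks a reviewer would
# otherwise do by hand, e.g. "did the agent greet and sign off?".
# The regexes and message format are assumptions for this sketch.
import re

GREETING = re.compile(r"\b(hi|hello|hey|good (morning|afternoon|evening))\b", re.I)
CLOSING = re.compile(r"\b(thanks?( you)?|best regards|have a (great|nice) day)\b", re.I)

def auto_checks(agent_messages):
    """Run simple pass/fail checks on the agent's side of a conversation."""
    first = agent_messages[0] if agent_messages else ""
    last = agent_messages[-1] if agent_messages else ""
    return {
        "greeting": bool(GREETING.search(first)),
        "closing": bool(CLOSING.search(last)),
    }

result = auto_checks([
    "Hello Sam, thanks for reaching out!",
    "Your refund has been processed. Have a great day!",
])
```

A conversation that fails a check like this can then be routed to a human reviewer faster, rather than having the reviewer hunt for it.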
Grace: So you’re talking about how, when you’re doing reviews, there are two steps, right? There’s the finding of what to review, and then the actual reviewing.
What we’ve made smarter is both sides of that. I remember when I first started at Klaus, you helped me with a piece on the mistake of doing everything at random – random sampling. We perceive random sampling to be the best approach: you choose conversations at random, and that gives you a good overall picture of your helpdesk. Whereas actually that’s completely wrong. What you’ll get is a lot of the same, a lot of average stuff, and the average is exactly what you don’t need.
You mentioned the outliers. What we want are those hidden gems, those needles in the haystack. So basically from the first step, that’s what we’re able to make smarter with AI because we’re actually able to find those for you.
And then on the other side, with automated reviews, this adds a whole other dimension of speed: we’re actually able to automate the scoring of the more tedious things to review. You mentioned grammar, that kind of thing.
Mervi: Yes, grammar and that kind of thing.
When we started designing this full QA automation, we approached it by checking what hundreds of customers have in their scorecards. What do they review? What is important to them? We uncovered that, while many things concern very specific processes – like following particular guidelines that are hard to build into a general solution – many aspects of basic human communication, such as empathy, grammar, and tone, can be universally understood. These things can be measured automatically.
Sure, you can say, ‘I have a very specific tone.’ If so, you can still review that portion manually. But the universally understandable tone – was there aggression, was there formality, was there politeness? – can be assessed automatically across all of the conversations. It gives you a baseline to work from, so you can go and do the very specific things for which you need a qualified QA agent.
Grace: Definitely. We’re not talking about doing the work of a QA specialist. We’re talking about this really enhancing the work of a QA specialist.
And what do you think this means for people who work in customer service, whether on the front line, behind the scenes, or as a leader?
Mervi: I think that previously, and still in many places, the building of bots or automation is very much a separate thing, far away from the front lines of the support action. So there’s a gap between the front line saying, ‘We know this is a frequent issue, we need it solved,’ and the people who build the solution.
Identifying the issue is a data mining, data scientist task. Applying a solution or fix for it is another task. So there’s a long process before the automation actually comes to life. I think that with QA that does some things automatically, and with QA sitting very close to support, we can shorten this path. QA specialists can also review bots and figure out how the bot is participating, so the journey to an improved bot can be made much shorter.
Grace: And so much better and more seamless from the customer’s point of view as well, because there aren’t these separate entities – bot, human, knowledge base – working apart; they’re all brought together through quality management, because it understands the entire journey, right?
Mervi: Yes. And for the people really on the front line, if there’s a virtual agent working next to them, they get used to it and understand how it operates: that it’s not always the smartest, and that it doesn’t come for their job. What it can do is take away the things people don’t want to answer – not because they don’t know the answer, but because it’s very boring or tedious. When these repetitive things are taken away, they start seeing it as their helper, one that leaves them the more interesting work.
It doesn’t overstep: it can ask customers the pre-qualifying questions that a human needs in order to make a decision, but it doesn’t decide anything the human should still decide. Otherwise there can be resentment: ‘Now I have to fix what this bot did.’
Grace: Yeah the mess it left behind basically.
It’s instead like stepping stones, right? And it hopefully gives the agent much better context from which to start if they know what the bot has already covered.
We’ve talked on the podcast before about the emergence of new roles, like the conversation designer and the chatbot manager. But the role of the QA specialist – how do you see that changing?
Mervi: I think the QA specialist role will become more important in assessing not only human agents’ training needs, but virtual agents’ training needs too. So it partially overlaps with what a bot designer or bot manager does, because essentially we have to understand where the bot can fail, how to build additional steps into the bot’s workflows, where the bot can generate answers, where it should have fixed answers, and which areas have no bot coverage at all.
The QA portion of the work can give many more insights, showing, ‘Hey, this thing is not covered. This is a simple thing I see the agent doing. Let’s see how prevalent it is and try to automate it quickly.’
Grace: And I guess as well using a smart quality management tool just makes that so much easier, because you’re able to identify those conversations far quicker.
Mervi: Yes, exactly. It’s in everyone’s interest – the customer’s, the agent’s – that the issue is solved quickly. We have an AutoQA category where we can see that the agent did not give the customer a solution. So you can pull up, in one view, all the conversations where no solution was given, and then dive in to see why it happened. What’s the issue?
It’s like fast forwarding: it surfaces the areas where there might be a problem so they can be fixed.
Grace: It’s fast forwarding like crazy, especially when we’re talking about companies who have not thousands, but millions of conversations to dig through. This is a magic wand pretty much.
Mervi: Pretty much.
Grace: What skills then do you think should be prioritized? Because we talked a lot about the responsibilities, whereas what specific skills are needed for that?
Mervi: I think data literacy in general – putting some kind of quantitative analysis on top. QA is qualitative by nature, but you should put it in a quantitative reference frame: if you uncover a problem, figure out the magnitude of the issue. Does it make sense to solve it automatically?
Because not everything that is a problem should be solved automatically – not because it technically cannot be done; technically, anything can be done – but because the return on investment is not there. It’s not reasonable to try to solve everything automatically.
And this goes against human nature a little. We want to solve the fundamental issue, whereas with automation you often end up solving issues that seem unimportant. But if you chip away gradually at these little things that can be solved, many different little things over time add up to a big chunk.
So it’s not ‘I will force the sale’ or ‘I will make the VIP customers’ lives better.’ Let the VIP customers keep dealing with actual human agents, and chip away from the other side: the problems that are not so important or not so interesting to look at – try to solve those.
Grace: I guess data literacy and great analytical skills as well.
I suppose, thinking about your answer, that also means having a very good overview of what the company goals are, to an extent. Because you’re having to think about the priorities in terms of automation. The priorities to send to a human support agent.
Mervi: Yes, absolutely. Many companies have set goals or KPIs for automation – they want to automate 40%, 50%, 60% of the incoming load.
What that can translate to is that you don’t pick the one problem that occurs 40% of the time. You pick tens and tens of tiny problems that individually don’t amount to much, but if you solve many of them, and they are easy to solve, you can reach your targets much faster.
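As a toy illustration of that arithmetic (the problem list, load shares, and effort scores are entirely made up), greedily picking the cheapest problems per unit of impact can hit an automation target without ever touching the one big, hard issue:

```python
# Illustrative only: reaching an automation KPI from many small,
# easy-to-solve problems rather than one big one. All numbers invented.
problems = [
    # (name, share of incoming load, relative effort to automate)
    ("password reset", 0.06, 1),
    ("order status", 0.05, 1),
    ("invoice copy", 0.04, 1),
    ("refund policy", 0.04, 2),
    ("shipping times", 0.03, 1),
    ("complex complaint", 0.40, 50),  # big, but very costly to automate
]

def plan(problems, target=0.20):
    """Greedily pick problems with the best effort-to-impact ratio
    until the automated share of incoming load reaches the target."""
    picked, covered = [], 0.0
    for name, share, effort in sorted(problems, key=lambda p: p[2] / p[1]):
        if covered >= target:
            break
        picked.append(name)
        covered += share
    return picked, round(covered, 2)

picked, covered = plan(problems)
```

The five small problems together cover 22% of the load, so a 20% automation target is met while the 40% ‘complex complaint’ category is left to human agents.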
And I think it also falls on the shoulders of the bot managers or QA analysts to position it: these things are there, and they can be solved like this. Pitching these automatable issues – deciding what we put forward for automation – becomes part of the bot manager’s role, taking input from the QA work.
Grace: It should be a joint effort and joint opinion rather than something that’s just working towards a KPI.
It ties into what we always say at Klaus, which is that quality also depends on the company. There isn’t necessarily one definition of it. We have benchmarks for certain metrics, but you should prioritize what is right for your customer support team and, really, what is right for your customers.
Mervi: Yes, exactly. Automation, or a better understanding of the content of the incoming conversations, also means that with this 100% coverage we can analyze where the customer has threatened to leave – churn risk. This often has very little to do with agent quality; it’s not the agent’s fault. But it offers additional information for the wider organization: what can be improved from a product perspective, a process perspective. So it takes it away, or it…
Grace: … maybe less takes it away, but it has this ripple effect on the whole company.
So what practical advice do you have for someone pursuing a career in customer support or a career in QA, as a QA specialist?
Mervi: I’m a data person myself, so for me, having analytical skills is crucial, and it becomes even more crucial in this era of automation being everywhere. It can even be useful to say, ‘Look, we have automated this bit, but it does very little, or it does more damage than good.’ Showing this to someone in numbers is always that much better than just giving an opinion.
So positioning yourself so that you can articulate your needs from the data perspective is always good in my book. And don’t be afraid of generative bots, or bots in general, because they will not replace you; they can help you.
Grace: You’ve just answered my final question, which is: is AI coming for our jobs?
Mervi: Oh, actually, the answer is absolutely yes. Absolutely, it will come for the jobs, but I think that it will come for the jobs that humans don’t wanna do. Or humans… shouldn’t be doing. It comes for the tedious and the mundane.
Grace: I guess that’s what computers have been doing for decades: taking things that used to be manual and digitizing them. And just because we’re seeing this media hype around how AI is going to do that, it doesn’t actually mean it’s going to change how jobs have been evolving already anyway.
Mervi: Yes. The way I see it, having a calculator everywhere hasn’t taken away the need to do math in your head. You still have to take the number from the calculator and put it in context: what does it mean? Same thing with AI.
Grace: We just have smart assistants.
I always just remember that my math teacher used to be like, you’re never going to have a calculator with you at all times. Sucks to be her, because yes I do, I have a phone in my pocket all the time.
Mervi: Yes, but even if you can do the operation there, you have to articulate it. You have to know that the operation is correct.
And this critical thinking about the problem – what problem does it make sense to solve – still falls on humans. Solving it becomes easier, but positioning the problem in a way that makes it machine-solvable is definitely a skill that needs honing.
Grace: Machines might take some of our jobs, but they can’t understand the problems we’re trying to solve, or why we’re trying to solve them.
Mervi: Yeah. We still have to translate the problem to them in a way we know will lead to a solution, because we have seen, from operating with these large language models, that if you change the prompt ever so slightly, the outcomes can be completely different.
So you have to have some sense of what is happening underneath. You have to understand that there’s a giant matrix underneath. It doesn’t have consciousness; even though it appears to, it doesn’t.
Grace: I think that’s a perfect note to end on. Thank you so much for chatting with me. This has really used up a lot of my brain cells for the day, but I think it’s also been really helpful for whoever listens. So thank you.
Mervi: Thank you.