People don’t tend to work in Customer Support because they have secret dreams of becoming a data scientist or a statistician. Fundamentally, Customer Service is a people-oriented career. And although the profession has evolved over the years, at its core, it’s still about delighting customers by helping to solve their problems.
However, the trend toward creating joined-up, omnichannel customer experiences does mean that CS folks are now working with more digital platforms and data points than ever.
This can simultaneously create a sense of opportunity and doubt.
Opportunity – because clearly all this data can be used in smarter ways, to create more delightful customer experiences. But also doubt – because sometimes there is pressure to become ‘data-driven’ without a clear direction of what that actually means in practice, or how to approach it.
So, for any Customer Support teams asking themselves: ‘how can we become more data-driven?’ – we want to provide some helpful pointers.
1. Define what ‘data-driven’ means to you
If your team is talking about becoming more data-driven, you should stop and question what this really means to you. ‘Data-driven’ is a term that is overused to the point of being a buzzword. And, like all buzzwords, it can begin to mean all things to all people.
For your boss, data-driven might mean:
“I expect to see more metrics that track team performance.”
Whereas, for your team members it might mean:
“Finally, we can automate the parts of my job I find boring.”
Without further context, data-driven is not always a helpful term. Data-driven implies that data is in the driving seat – like a self-driving car. In other words – automation of decision making.
In actual fact, when we talk about being data-driven, we’re rarely talking about automation. We’re usually talking about leveraging data to make better decisions, more quickly. That means better access, understanding, and awareness of our data. But ultimately, we still need people to be accountable for their decisions.
As a result, some people prefer to use terms like data-literate, data-led, or data-informed, rather than data-driven.
2. Remember qualitative data is still data
Which area of your company would you say is the most data-driven? Operations? Sales? Finance?
For many, Customer Support might not immediately spring to mind, but there is a strong argument that CS is actually one of the most data-driven parts of the organization. Why? Because CS teams are speaking to customers all day, every day.
The reason we don’t immediately think of this as a data-driven activity is because it deals with qualitative data – customer relationships, interactions, feedback, stories, and use cases.
Qualitative data (qual) and quantitative data (quant) serve different purposes but are equally important in creating a data-driven culture. Quant data allows you to understand wider trends, but it rarely delivers the individual insights that enable you to fix customer problems and pain points. For this, you need qual.
This principle was crystallized by one of the data-driven greats, Jeff Bezos, who regularly prized qual over quant:
“The thing I have noticed is when the anecdotes and the data disagree, the anecdotes are usually right. There’s something wrong with the way you are measuring it.”
As your Customer Service team seeks to get better with quant data, you should not dismiss or underestimate their experience in handling qualitative data.
3. Start with the goal, not with the measurement
Most of us are aware of SMART goals. This technique was developed by George T. Doran in the 80s to help people write goals that are more specific, actionable, and measurable.
Instead of writing something vague like ‘our goal is to improve customer satisfaction’, SMART goals force you to explain how you are going to measure and evidence the improvement. For example, you aim to increase CSAT from 90% to 95% by 1 March.
With so many metrics readily available, however, we sometimes see the inverse problem. Teams focus on improving metrics without giving enough thought to the wider purpose of the activity: “Our goal is to increase Email Open Rate by 10%.” “Our goal is to decrease Avg. Call Time by 20 seconds.”
It may be that improving these metrics creates little to no effect on the overarching mission of the team – improving customer service.
Data-driven teams start with strategic goals then choose the right measurements and KPIs. Not the other way around.
4. Think critically
Working with data requires us to think critically. In simple terms, this means not accepting beliefs and conclusions at face value.
It’s important for Customer Support teams who want to become data-driven to start actively questioning their data. This could be as simple as questioning your own understanding. You may use Net Promoter Score and Customer Effort Score, but does everyone in the team know how these are collected and calculated?
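Taking NPS as an example, the standard calculation is the percentage of promoters (scores of 9–10) minus the percentage of detractors (scores of 0–6). A minimal sketch, using made-up survey responses:

```python
# Sketch: how Net Promoter Score is typically calculated from 0-10
# survey responses. The scores below are invented illustrative data.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)   # scores of 9-10
    detractors = sum(1 for s in scores if s <= 6)  # scores of 0-6; 7-8 are passives
    return round(100 * (promoters - detractors) / len(scores))

responses = [10, 9, 9, 8, 7, 6, 10, 4, 9, 8]
print(nps(responses))  # 5 promoters, 2 detractors out of 10 -> NPS = 30
```

If anyone on the team is surprised that passives (7–8) count toward the denominator but not the numerator, that alone is a useful conversation to have.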
A good place to begin with critical thinking is by learning about statistical fallacies, which are common causes of mistakes in data interpretation and analysis.
Statistical fallacies are the result of human bias or misunderstanding. They include phenomena like:
- The Cobra Effect: when incentives and goals inadvertently incentivize the wrong types of behavior
- Cherry Picking: when people select data that supports their point, whilst ignoring the data that doesn’t
- Sampling Bias: when non-representative samples cause us to draw incorrect conclusions from data
For any teams who are working more with quantitative data, I’d highly recommend Stephen Few’s book, The Data Loom – which gives a very easy-to-understand crash course in both critical thinking and scientific thinking.
5. Segment, but don’t over segment
Perhaps the two most commonly used KPIs in Customer Support are Customer Satisfaction (CSAT) and First Response Time (FRT). In general, we want our CSAT to be high and our FRT to be low. CS leaders pay particularly close attention to the smallest changes in these metrics, as they can indicate a drop in quality.
The problem with CSAT and FRT is that they are both averages, and averages can lie.
Suppose CSAT on email support rises from 80% to 90%, but CSAT on call support drops from 90% to 80%. If we only ever looked at the overall CSAT figure, we would be looking at a metric that averages out the scores and masks the reality of what’s happening. Whereas segmenting the data gives us a more granular picture.
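A minimal sketch of that effect, using made-up ticket volumes: the blended CSAT figure stays flat even though the two channels have swapped places.

```python
# Illustrative only: invented volumes showing how a blended CSAT can
# stay flat while individual channels move in opposite directions.
def overall_csat(segments):
    total = sum(volume for _, volume in segments.values())
    return sum(csat * volume for csat, volume in segments.values()) / total

before = {"email": (0.80, 500), "calls": (0.90, 500)}
after  = {"email": (0.90, 500), "calls": (0.80, 500)}

print(overall_csat(before))  # 0.85
print(overall_csat(after))   # 0.85 - identical, despite the swap
```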
Most people intuitively understand that segmentation like the above can help us uncover the hidden story behind averages. However, there is a catch. The further we segment, the smaller the sample size becomes. And the smaller the sample, the less representative it is.
For example, say, on a Tuesday afternoon, Robin records a particularly low CSAT of 60%. Should you call Robin in for a performance conversation?
No, because on Tuesday afternoon, Robin only had five calls, two of which recorded negative feedback. And a sample of five tells us virtually nothing.
In general, we need to be far more cautious than we might think when investigating statistics through segmentation. Sample size calculators can help us understand when we have over-segmented our data, by giving us a confidence level (how certain we can be that the sample reflects the whole population) as well as the margin of error.
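To make that concrete, here is a sketch of the standard normal-approximation margin of error for a proportion at 95% confidence. The approximation is a rough rule of thumb and actually breaks down for very small samples like Robin’s, which only strengthens the point that five calls tell us almost nothing:

```python
import math

# Normal-approximation margin of error for a proportion at 95%
# confidence (z = 1.96). A rough rule of thumb; unreliable for very
# small n, which reinforces how little a sample of 5 tells us.
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# Robin's Tuesday afternoon: 3 positive out of 5 calls -> 60% CSAT
print(margin_of_error(0.6, 5))    # ~0.43, i.e. +/- 43 percentage points
print(margin_of_error(0.6, 500))  # ~0.04 with a larger sample
```

A 60% CSAT give or take 43 points is compatible with almost any underlying performance, which is exactly why the segment is too small to act on.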
6. Consider ranges, not just averages
Many of the platforms used by Customer Service teams also report CSAT and FRT as single, specific numbers, which rise and fall. Of course, we know that it would be foolish to respond to every slight rise in FRT with panic. Intuitively we understand there is a ‘normal range’ where we would expect the data to fluctuate.
But when exactly is the point we should react? What is the normal range and how do we find it?
Understanding the distribution of your data can help you make better reactive decisions. The way to do this is by not just understanding your average for a metric like First Response Time, but also the standard deviation.
Standard deviation might sound like the type of thing you last heard about in high school and hoped to never hear about again. But it’s much easier to understand (and calculate) than you might remember.
Simply take a representative sample of your data, let’s say FRT (First Response Time) for the last quarter. Plug it into Excel and use the AVERAGE and STDEV.S functions to calculate the average (the mean) and the standard deviation. This helps you to start thinking about FRT as a range, rather than just a single average figure:
Suppose your average FRT is 100 seconds and your standard deviation is 10 seconds. You would normally expect about two thirds of all responses to fall within one standard deviation (i.e. between 90-110 seconds). So if your daily FRT has risen to 104s one day, then there is not much cause to panic because this is normal.
However, normal distribution says you would also expect nearly all (99.7% to be precise) responses to fall within three standard deviations (i.e. between 70-130 seconds). So if your FRT is up to 150 seconds one day, then this is definitely out of the ordinary.
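The same check works outside a spreadsheet. A minimal sketch, using an invented quarter of FRT data (in practice you would export your own):

```python
import statistics

# Sketch: treat FRT as a range rather than a single number.
# The sample below is invented illustrative data.
frt_sample = [92, 105, 98, 110, 95, 101, 99, 88, 104, 108]

mean = statistics.mean(frt_sample)
stdev = statistics.stdev(frt_sample)  # sample standard deviation

def check(todays_frt):
    deviations = abs(todays_frt - mean) / stdev
    if deviations <= 1:
        return "normal fluctuation"
    elif deviations <= 3:
        return "worth watching"
    return "out of the ordinary - investigate"

print(f"mean={mean:.0f}s, stdev={stdev:.1f}s")
print(check(104))  # within one standard deviation of the mean
print(check(150))  # well beyond three standard deviations
```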
Whilst this is a slight oversimplification, the broader point is worth keeping in mind. Don’t just think about KPIs like FRT as single metrics that go up and down. Try to think about whether they are in or outside of normal, acceptable ranges.
7. Plan your experiments
Perhaps one of the most appealing (and fun) aspects of becoming more data-driven is the prospect of increased experimentation. Running A/B tests and other similar experiments can help Customer Support teams refine the perfect customer experience.
However, you should be careful not to fall into a common pitfall when adopting a culture of experimentation – lack of planning.
One of the first things you will discover when you attempt to run A/B tests is that your sample sizes will need to be big enough to make the test statistically significant. If you’re a small company with a low volume of customer touchpoints over a given period, this may mean you only have the capacity to run a relatively limited number of experiments.
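You can get a feel for the numbers with the standard two-proportion sample-size formula. A hedged sketch, assuming a hypothetical test of whether a new email template can lift CSAT from 80% to 85%, with the usual 95% confidence and 80% power baked in as z-values:

```python
import math

# Rough per-variant sample size needed to detect a lift between two
# proportions (e.g. CSAT), via the standard normal-approximation
# formula. z-values below assume 95% confidence and 80% power.
def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical: detecting a CSAT lift from 80% to 85%
print(sample_size_per_variant(0.80, 0.85))  # roughly 900 responses per variant
```

Roughly 900 survey responses per variant is a lot for a small team, which is exactly why experiments need to be prioritized rather than run on a whim.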
Furthermore, in order to run a fair test, you should normally only change one variable at a time. So running two separate A/B tests at the same time means you are actually splitting your sample population into four.
This is why most teams quickly realize they need to prioritize and plan the most significant experiments, in order to make the most of the opportunity.
In other words, don’t spend three months testing the color of a button. Focus on the big stuff instead.
8. Create feedback loops
Perhaps one of the biggest challenges for Customer Service teams becoming data-driven is that their data stays buried – in platforms that are difficult to access, or behind gatekeepers. It’s not enough for performance metrics to be viewed at the start or end of projects, or in monthly performance meetings.
By visualizing and displaying data in real time, teams create feedback loops: they can see the effect of their work and adjust their behavior accordingly. Visualizing data in the form of a KPI dashboard is a tried and tested way of making teams more data-driven and accountable without having to force the culture change.
I hope this article shows that when it comes to being more data-driven as a Customer Support team, it doesn’t need to be a huge transformation project. There are plenty of simple, actionable approaches you can bring into your day-to-day practices to get more out of your data.
It comes down to the simple act of CS teams being more familiar with their data – as familiar as they are with the customers they serve every day.
Curiosity, debate, investigation, and experimentation are all more likely to follow if teams can just get their data in front of the eyes of those who need to see it.