The makers of WordPress.com pride themselves on delivering some of the best customer service in the industry. To maintain that standard, they have built a robust peer-to-peer feedback system for quality assurance, using the Klaus integration for Zendesk.
For makers of websites and online shops, brands like WordPress.com, WooCommerce, and Jetpack need no introduction. In fact, according to the company, WordPress powers 32% of the internet.
In addition to being an opinion leader in the customer support community and an ever-present participant at Support Driven events, Automattic is famously a fully remote team. This means that tools for collaboration, communication, and feedback are even more critical than in an office environment, where you can simply walk over to a colleague’s desk.
The people answering customers’ questions at Automattic are called Happiness Engineers and there are more than 300 of them at the company. They are divided into teams based on products and consistently deliver a CSAT score of more than 95%.
Going beyond CSAT with internal conversation reviews
Valentina Thörner, a Happiness Team Lead at Automattic, says that beyond a certain point, the CSAT number does not tell you much about the actual quality of the conversations.
With an overall CSAT this high, the Automattic Happiness Engineers mostly see the metric change with the speed of replies.
“In some instances, even a great answer delivered slowly can yield negative feedback and vice versa, but telling my team to answer quicker does not help them develop,” she said.
At Automattic, only a low single-digit percentage of conversations receive customer feedback at all, which means that more than 95% of conversations would go without any kind of feedback if it wasn’t for the conversation review process.
Thörner used to go through every piece of feedback related to WooCommerce support interactions herself, but it soon became clear that looking only at those is not enough to systematically improve the quality of conversations.
Peer-to-peer reviews distribute the load
While it is not yet common for a group this large to have a peer-to-peer review setup, Thörner says it was born out of simple necessity: “When we started analyzing the interactions, I was doing everything on my own, and the peer-to-peer setup, where everybody gives each other feedback, was just a good way to distribute the load”.
According to her, it’s crucial to frame this process in a way that does not come off as purely critical towards the agents.
“In our case, it is communicated so that the person doing the reviews shares what they learned from the other agent and then chats with them for 15 minutes after doing the reviews,” she explained.
The prospect of spending time on such a process may seem daunting, with the queue waiting to be served, but Thörner points out that the process is exactly as time-consuming as you make it:
“If you create a setup where each person needs to do 100 reviews weekly and review 5 categories, then of course it will be a chore. If, instead, you involve everyone and encourage spending 15 minutes per week on it, you still get a surprising amount of feedback with a small time investment per person”.
Klaus is a time-saver
Like most teams, the folks at Automattic started doing reviews in spreadsheets, with feedback given as free-form comments.
“Klaus integrates directly with Zendesk, and that is a huge time-saver as it removes the need for all those manual steps: everything from pasting URLs to sending reminders and creating a rating system,” pointed out Thörner.
One of Thörner’s favorite features is the sophisticated filtering system, which allows users to combine Zendesk search parameters with Klaus parameters to deliver the right samples of interactions that need to be looked at.
She also likes how Klaus’s conversation rating interface delivers feedback to the agents in a clear and universally understandable way. All that while still allowing for customizable rating categories and a tweakable weighting system.
Better onboarding, boosted team morale and more
When asked about the tangible benefits of the process, Thörner pointed out improved team morale and better onboarding of new agents.
“Since we are a fully remote team, it is not possible to ask somebody to sit next to a more experienced colleague and watch them work. We have to find other ways to deliver the necessary knowledge in an efficient manner. Including the agents in the review process means that they are systematically exposed to all sorts of cases and ways of solving them,” said Thörner.
The other notable benefit of constantly exchanging feedback is that agents see firsthand that their colleagues give good answers to customers most of the time.
Thörner says: “That is a morale booster because if you only look at your own work, it is easy to start thinking that you’re the only person who knows anything”.
Start small, experiment, learn, and grow
You don’t need to roll out new processes to everyone at once. At Automattic, Klaus was first used by a single team, then by several teams, then by a product division, and by now about two-thirds of all support teams are using it.
This dynamic has allowed for small-scale experiments with teams learning from each other about how many and which rating categories to use, how to foster a peer review culture and how to make Quality part of the normal workflow.
While Automattic’s CSAT is sky-high, Thörner believes Klaus could also be used to improve it. “You could easily set up a review filter that includes the negative CSAT ratings and target those for review every time,” she suggests.
When discussing which teams this process is useful for, Thörner concluded: “If you believe that your team is already at absolute peak performance and cannot improve in any way, you don’t need this process.
But, if you are looking for a way to develop your agents and make a meaningful impact on customer service quality, doing systematic conversation reviews is definitely one way of achieving it”.