The problem with traditional QA

Few processes are more broken in today's customer service department than quality assurance, or QA for short. QA is the process whereby companies "audit" calls for how well a representative adheres to company-defined scripting and language during a customer interaction. Did the rep say the customer's name three times? Did the rep thank the customer for her loyalty? Did the rep read the required disclosure statements after executing the transaction? Did the rep display empathy, friendliness, and professionalism?

The problems with quality assurance, as currently practiced by companies, are many. From the perspective of company leadership, QA's biggest shortcoming is that it is a manual, people-driven process, and people are inherently inefficient and expensive. In practice, this means companies end up listening to and auditing only a small percentage of customer calls; typically, only about 1% of recorded calls are ever audited by a company's QA team. As a result, QA becomes a source of frustration for reps, who feel they aren't being treated fairly and that the company is assessing their performance on too small a sample to be valid. So it should come as no surprise that for every QA process in a large company, there is also an appeals process for reps to dispute spurious results and scoring they perceive to be unfair.

AI-enabled speech analytics as a solution

For this reason, companies have latched onto new AI-based technologies (namely, machine learning-powered speech analytics) as an opportunity to automate QA—to stop having QA managers listen to a small percentage of calls and instead teach a machine to listen to all calls, without the cost and inherent bias of people. Understandably, service leaders’ eyes widen at the idea of automatically scoring all service interactions without any human involvement.
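To make the automation idea concrete, here is a minimal sketch of what machine scoring of every call could look like once audio has been transcribed, assuming the scorecard reduces to simple phrase checks. The criteria, phrases, and function names below are illustrative assumptions, not Tethr's actual approach; production speech analytics relies on machine learning models rather than keyword matching.

```python
# Minimal sketch: score every transcript against a phrase-based scorecard.
# The criteria and phrases are invented for illustration only.
SCORECARD = {
    "greeted_with_company_name": ["thank you for calling acme"],
    "read_required_disclosure": ["this call may be recorded", "terms and conditions"],
    "thanked_for_loyalty": ["thank you for being a loyal customer"],
}

def score_call(transcript: str) -> dict[str, bool]:
    """Return a pass/fail result for each scorecard criterion on one transcript."""
    text = transcript.lower()
    return {
        criterion: any(phrase in text for phrase in phrases)
        for criterion, phrases in SCORECARD.items()
    }

def score_all_calls(transcripts: list[str]) -> list[dict[str, bool]]:
    """Score 100% of calls instead of the ~1% a human QA team can sample."""
    return [score_call(t) for t in transcripts]

if __name__ == "__main__":
    calls = [
        "Thank you for calling Acme. This call may be recorded. How can I help?",
        "Hi, what do you need today?",
    ]
    for i, result in enumerate(score_all_calls(calls)):
        print(f"call {i}: {sum(result.values())}/{len(result)} criteria met -> {result}")
```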

As an AI and machine learning platform company, Tethr is often asked to help companies automate their QA process. But, as attractive as it seems to use AI-based "listening" approaches to automate a manual process, our advice is that companies think twice before doing this. In our view, digital technologies are better deployed to fix QA, not just automate it. People are the problem in QA, but less because of how QA is administered and more because of how QA is designed.

The far bigger issue with QA isn't that it's inefficient and expensive (which it is), but that it's built on assumptions, hunches, and gut instincts. Companies ask QA to listen for things they think are important (e.g., saying the company's name at the beginning of a call for brand association, or saying the customer's name multiple times to make the interaction feel personalized). But these assumptions, however well-intentioned, have rarely been tested with data. This is why most companies constantly update their QA scorecards: without a compass to show them where to go, they resort to guessing.

New technologies today allow companies to combine the best of human intelligence with the best of artificial intelligence to deepen their knowledge of what actually drives customer outcomes. Put differently, the promise of AI isn't just about automation; it's about understanding.

Armed with machine learning and data science techniques, leading companies are seizing upon this opportunity to finally overhaul QA so that it delivers what it was originally intended for: higher quality customer interactions.

Putting AI to use: real-world examples

One large telecommunications provider, for example, had long used their QA team to assess whether reps demonstrated appropriate acknowledgment when customers expressed frustration (e.g., "I'm sorry you're having this problem" and "I know how frustrating this must be for you"). Using AI to understand how unstructured voice data impacted known outcomes (like CSAT, NPS, and Customer Effort Score), they came to learn that this sort of acknowledgment, which the company had always assumed drove positive customer outcomes, actually made customers more frustrated, not less. The frustration level, in fact, was on par with what customers experience when they're transferred to another department. Now, the company teaches its reps to resist the urge to acknowledge and apologize and instead get on with solving the problem at hand.

A provider of home services that we work with used AI to figure out which objection-handling techniques lead to higher sales conversion rates. The company had long assumed that when customers balk at the price of its services, the best approach was to explain that the company offered some of the lowest rates in the business and, if push came to shove, to offer a small discount. But when the company used machine learning to study this technique across thousands of sales calls, they found that it wasn't remotely correlated with higher sales conversion. Instead, reiterating the company's money-back guarantee turned out to be far more strongly correlated with conversion (and much cheaper to offer than a discount).
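At its core, that analysis is a comparison of conversion rates between calls in which a given technique appears and calls in which it does not. Below is a rough sketch of such a comparison using synthetic data and a simple two-proportion test; the flags, rates, and sample sizes are assumptions, and the real modeling done on thousands of calls would be considerably more sophisticated.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical per-call records: a flag for whether the rep reiterated the
# money-back guarantee (detected from the transcript) and whether the call
# converted. The data is synthetic; real flags would come from speech analytics.
rng = np.random.default_rng(0)
n_calls = 5_000
used_guarantee = rng.random(n_calls) < 0.4
converted = np.where(
    used_guarantee,
    rng.random(n_calls) < 0.22,   # assumed conversion rate when the guarantee is mentioned
    rng.random(n_calls) < 0.15,   # assumed conversion rate when it is not
)

# Compare conversion rates for calls with vs. without the guarantee language.
counts = np.array([converted[used_guarantee].sum(), converted[~used_guarantee].sum()])
nobs = np.array([used_guarantee.sum(), (~used_guarantee).sum()])
stat, p_value = proportions_ztest(counts, nobs)

print(f"conversion with guarantee:    {counts[0] / nobs[0]:.1%}")
print(f"conversion without guarantee: {counts[1] / nobs[1]:.1%}")
print(f"two-proportion z-test p-value: {p_value:.4f}")
```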

This company also used AI to understand, at a very specific level, how their reps should demonstrate "advocacy" in customer interactions. While advocacy had been on the QA "checklist" for many years, they never really knew how best to demonstrate it in different customer situations. A large-scale analysis using machine learning showed that in sales interactions, reps are best served by using language that demonstrates control, confidence, and authority (e.g., "Here's what I recommend" or "This is the option I would pick"). By contrast, the data suggested that such approaches backfire in issue-resolution situations. In those sorts of calls, reps are far better off proposing an option but hinting that there are other options if the first one doesn't pan out (e.g., "I've got some ideas for how to fix this…let's try this first").

Finally, we worked with one large insurer to apply AI to their voice data in order to identify, among more than 250 categories of rep behaviors and interaction dynamics, which ones actually drive one of the key outcome metrics the company focuses on: Customer Effort Score. In the end, we identified 14 statistically significant drivers (ten behaviors that eroded CES and four that improved it). The most impactful of the behaviors that eroded CES was reps using language indicating they were "powerless to help" the customer resolve a specific issue. The company is now focused on training and coaching reps to avoid these phrases and instead use language that demonstrates advocacy and empowerment. And, importantly, their QA team now knows which critical behaviors and techniques to listen for in calls.
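A stripped-down version of that kind of driver analysis might look like the sketch below: one binary flag per detected behavior category, a regression against the outcome metric, and a filter on statistically significant coefficients. Everything here, from behavior names to effect sizes, is synthetic; a real analysis would use richer models and correct for testing 250 hypotheses at once.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in data: one row per call, a binary flag per behavior
# category detected in the transcript, and the call's Customer Effort Score.
rng = np.random.default_rng(1)
n_calls, n_behaviors = 10_000, 250
X = pd.DataFrame(
    rng.integers(0, 2, size=(n_calls, n_behaviors)),
    columns=[f"behavior_{i:03d}" for i in range(n_behaviors)],
)
# Assume a couple of behaviors genuinely move the score; the rest are noise.
ces = (
    5.0
    - 0.8 * X["behavior_007"]   # stand-in for "powerless to help" language eroding CES
    + 0.5 * X["behavior_101"]   # stand-in for advocacy language improving CES
    + rng.normal(0, 1.0, n_calls)
)

# One coefficient per behavior; keep the statistically significant ones.
# (A production analysis would correct for running 250 tests at once.)
model = sm.OLS(ces, sm.add_constant(X)).fit()
pvals = model.pvalues.drop("const")
significant = pvals[pvals < 0.05].sort_values()
for name in significant.index:
    direction = "improves CES" if model.params[name] > 0 else "erodes CES"
    print(f"{name}: coef={model.params[name]:+.2f}, p={pvals[name]:.3g} ({direction})")
```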

Fix first, then automate

It is true that advances in AI, machine learning, and natural language processing finally afford companies the opportunity to automate QA, which represents a step-function change in efficiency. But the greater opportunity is to dramatically improve the effectiveness of QA by finally identifying the language techniques that actually drive quality. Armed with this insight, companies will be well-positioned to automate their processes and capture the scale benefits of scientific, data-driven quality assurance.

Our strong advice to service leaders is this—“fix first, then automate,” not the other way around.

Interested in learning more? Register for our June 28th webinar, in which we'll share the experience of the credit union BCU in using AI to fix their QA function and bring new levels of quality and customer impact to the enterprise.

Author

Chief Product & Research Officer at Tethr, author, speaker and advisor
