The Customer Service Survey
Developing Metrics (Part 1: Bad Metrics)
Usually, the motivation to perform a customer service survey is to somehow manage the quality of customer service. The survey is important because, as the old adage goes, you can't manage what you can't measure.
The converse is also true: what you measure is what you manage.
That's why it's vital to measure the right things; but it's also tricky. There's no pressure gauge you can go buy with a needle swinging between "Satisfied" and "Dissatisfied." Somehow you have to translate the management or business goal--"improve customer satisfaction" or "help more customers faster"--into something you can actually measure, like "customer satisfaction" or "first call resolution." Then you need to come up with an operational definition of that metric which states how to calculate the number from some sort of raw data.
In many cases, the operational definition of the metric doesn't bear much resemblance to the thing you hoped to manage, and this can have some seriously bad consequences. Let's look at a common example:
Management Goal: Improve the performance of an automated customer service system
So far so good. Nobody sits down in a strategy session and announces, "We want our speech recognition system to frustrate as many customers per dollar as we can!"
But we start getting trouble almost as soon as we try to translate "performance" into something we can actually measure:
Metric: Call Containment: The fraction of calls which don't get transferred to an agent
On the surface, using Containment to measure the performance of an automated customer service system seems reasonable. After all, the entire function of the system is to serve customers without having to send them to live agents; therefore, the fewer calls which are transferred, the better the system is performing.
There are a couple of unstated assumptions built into this metric which make it a poor way to measure IVR performance.
The first assumption is that the system's only function is to automate customer calls. In truth, the IVR has a second function which is just as important (and maybe more important): to identify which customers' calls must go to an agent, and to efficiently connect those customers to people who can help them. This latter group of calls is going to include the sales calls, the billing errors, and the already-upset customers who have been trying to resolve their problems for weeks.
You can never automate those calls, and failing to identify them and get them off the IVR and into an agent's hands is as much of a system failure as sending a potential self-service call to a human.
The other unstated assumption is that a customer who hangs up before transferring to an agent was successfully served by the self-service system. That may be the case for a lot of calls, but there are other reasons customers hang up: they might be confused by the IVR choices, angry because they feel they need to talk to an agent right now but the system won't let them, or simply need more information or time to complete the call (maybe the baby woke up). Often when a customer gets lost in an automated system, the reaction is to simply hang up and call right back.
That's not where we stop, though, since we still have to worry about the details of actually calculating our Call Containment metric:
Operational Definition: Call Containment is calculated by taking the total number of calls, subtracting the number of calls transferred to an agent queue, and dividing by the total number of calls.
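The operational definition above is just simple arithmetic. As an illustration, here is a minimal sketch of that calculation; the function and field names are mine, not from any particular IVR product:

```python
def call_containment(total_calls: int, transferred_calls: int) -> float:
    """Fraction of total calls NOT transferred to an agent queue.

    This implements the operational definition as stated:
    (total - transferred) / total.
    """
    if total_calls <= 0:
        raise ValueError("need at least one call to compute containment")
    return (total_calls - transferred_calls) / total_calls


# Example: 1,000 total calls, 350 of which were sent to an agent queue.
# Containment = (1000 - 350) / 1000 = 0.65
print(call_containment(1000, 350))
```

Note what the formula counts: every call that does not reach an agent queue, including hang-ups and repeat calls, gets credited as "contained" -- which is exactly the weakness discussed below.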
This definition of Call Containment (which is equivalent to "the fraction of total calls which are not transferred to an agent queue") is simple, obvious, and very common. It can be output directly from many IVR systems, and tracked minute-by-minute.
And it bears almost no relationship to the original management goal, which was to improve the performance of the IVR.
Not only does it ignore half of the performance problem (how well you handle customers who must speak to an agent) and assume that people who hang up were successfully served, but this particular definition can be manipulated to make the system look better than it actually is.
The problem of manipulating the metrics is going to occur all the time, since presumably there's someone whose bonus or career depends on "hitting the numbers." That person might be the call center manager, the consultant who designed the IVR, or the vendor who sold the system, but if you're giving someone any kind of incentive to hit a target, you have to assume that person will do what it takes to meet the goal.
That is, after all, the whole point of measuring the metric and creating the incentive.
So to know if this is a useful metric or not, you need to ask, "What could someone do to make this metric look good which might not be consistent with the business goals?"
In this case, it's easy to get a fabulous Call Containment score by designing a really bad IVR system, one which does a terrible job of identifying customers who have to talk to an agent. In other words, no matter what the customer wants to do, the system tries to force him or her into a self-service system. Many customers will simply hang up in frustration, but that's OK because it improves the Call Containment metric.
Call Containment is an easy metric to pick on because it's so common, despite the fact that most people know that it has serious problems. Developing a better way to measure the performance of an IVR system takes some more thought, though, so I'll write about that next time.