The Customer Service Survey

Vocalabs' Blog

Peter Leppik's blog

It's Not Customer Feedback if it Didn't Come From a Customer

Getting good, actionable customer feedback is not easy. Wouldn't it be great if there were some technological solution that would just tell us how customers feel, and we could avoid all the mess and bother of actually going out there and talking to customers?

The world is not lacking for ideas of how to measure customer satisfaction without talking to customers. Back in the old days of call centers, a lot of people thought they could track a few metrics (average time to answer, abandon rate, etc.) and know that customers were satisfied.

These days I see more technological solutions like speech analytics and big data number crunching. But I have yet to see anything which comes even close to convincing me that it can replace direct customer feedback for understanding what opinions the customer has, and why.

The problem is that without getting your hands dirty and talking to customers, you are limited to information available on your own side of the customer experience. Customer experiences, like any other kind of story, have (at least) two sides: the customer's side and the company's side. Even with perfect records of the entire interaction, you only have the company's side of the story.

This becomes obvious when you put a customer interview next to the record of the customer experience. As human beings, customers carry a lot of invisible baggage into any interaction: they may be biased by outside factors, they want to avoid making waves, they don't want to embarrass themselves or the employee they are dealing with, and if the experience is going badly they don't want it to last any longer than it has to.

So what appears to be a calm, forgiving customer could be seething with rage on the inside. Or someone who seems to accept the company's resolution to a problem may be planning to take his business elsewhere at the next opportunity. And the customer is actively hiding his true feelings because of the social context of the customer interaction.

But when you approach the customer and ask for feedback in the right way (that is, in a way which communicates that you want honest feedback, care about the customer's opinion, and genuinely want to improve) you will get this side of the story. And what's more, you will also be able to get the customer to explain why he feels the way he feels, what could be done differently, and how any problems could be fixed.

None of this is available from the company's side of the story alone. The best technology in the world can't find information which simply is not there.

(That said, if there is a way, I want to be the one to invent it. Never say never.)

So that's why I say that it's not customer feedback if it didn't come from the customer. Metrics, analytics, and big data are all powerful tools, but they can't tell you what the customer is thinking.


This probably does not actually need to be said, but I'll say it anyway: Don't spill coffee on your laptop.

That was what I did first thing Tuesday morning. I cleaned it up as best I could right away, and everything seemed OK.

But when I unplugged from the external keyboard and monitor I normally use at my desk, I found that my laptop's keyboard was completely messed up. Some keys didn't work, the shift key was stuck on, and the top two rows would push two buttons at once. So trying to type lower-case "t" would give %T if it yielded anything at all.

I figured I would give it a day to dry out, and use my external keyboard in the meanwhile. But this morning, when it had not improved, it was time to take it in.

That, and, since the shift key was stuck on, the only way it would reboot was in "Safe" mode. "Safe" as in "Safe from getting any actual work done."

So I dutifully hauled it to the Genius Bar where the Genius took it apart and confirmed that it did, in fact, have coffee in the keyboard. Repairs will cost somewhere between $200 and $800, depending on whether just the keyboard needs to be replaced, or if they need to replace the entire main logic board. And at $800, I'm just getting a new laptop.

The rest of my day was spent mostly recovering files from backup onto an old spare laptop we have, so I have something to use while waiting for my repair or replacement, and handling what I can from my iPad.

All told, it has been a spectacularly unproductive day.

So don't spill coffee on your laptop.

Dunning-Kruger Effect

You know that thing everyone always talks about, how most people think they are above-average drivers?

As I learned today, it turns out this has a name. It's called the Dunning-Kruger Effect, after a pair of psychologists who first described it in 1999.

(Of course, the rest of us have been describing this effect for decades longer than that, but Dunning and Kruger were the first to describe it using actual, you know, research.)

The Dunning-Kruger Effect is a cognitive bias where people who are bad at something believe they are above average. For example, in one study, participants in the bottom quartile for a particular skill (that is, the people who were worse than at least 75% of the participants) estimated that they were, on average, in the 62nd percentile--well above average.

Anyone with any level of skill in anything has probably seen this in others. And--let's be honest here--probably every one of us is guilty of thinking that we're above-average in some skill we're actually really bad at.

The problem seems to be that if you're really bad at something you don't know how to evaluate your skill level. You don't know what it means to be good at it.

Dunning and Kruger also found that their effect can be reversed through some basic training. This training serves to both increase skill level and also help people recognize their mistakes. Ironically, the people who are the most skilled tend to underestimate how good they are--probably because they are both more sensitive to their own mistakes, and they are likely to be surrounded by other experts.

I'm convinced that the Dunning-Kruger Effect applies to companies just as much as it applies to people. In the Customer Experience world it seems that the companies with the worst customer experiences are also the ones which declare the loudest that they do a great job serving their customers.

(This is often followed up with a statement that they don't feel the need to improve their customer experience, because it's already so good.)

Stirring a little confirmation bias in with Dunning-Kruger gives a particularly toxic brew: a company which is bad at something, but thinks it's above average, and actively ignores evidence to the contrary.

I can name a few companies off the top of my head which provide terrible customer service, but think they're pretty good and refuse to listen to anyone who tells them otherwise. You can probably name a few, too.

So how do you overcome this? Fortunately, the Dunning-Kruger part is relatively easy: learn a little about what good service looks like, and improve the company's self-evaluation skills. It's getting to that first step which can be hard. Confirmation bias makes people think that little bit of information isn't needed and would be a waste of time or money. So here's a three-pronged strategy:

  1. Provide credible feedback. Make an effort to acknowledge all limitations of the feedback, and focus on individual customers' stories rather than statistics (at least initially).
  2. Be positive. The message should be "there's always room to improve," rather than "you stink."
  3. Keep at it. Over time people's opinions will shift with a persistent stream of contrary evidence.

Constructive Criticism

Back when dinosaurs roamed the earth, which is to say in my college days, I entertained the idea of fiction writing. For me it was more of a hobby than a career, but my wife also enjoyed writing and we had some friends who were of a similar mind. We formed a small writers' circle, which is how I learned the power and difficulty of constructive criticism.

Part of the challenge of becoming a good writer is getting the feedback to understand what good writing is. Since writing is both personal and subjective, it can be hard to understand what a reader thinks of your work. Also, since writing is both personal and subjective, very few people want to offer sincere criticism. And, since writing is both personal and subjective, it can be hard to hear negative feedback without taking it personally.

As it happens, good customer service is also both personal and subjective (how's that for a segue?). And it can be hard to both give and receive honest feedback about how a customer interaction should have been handled better.

Enter constructive criticism. Constructive criticism is a technique for providing feedback which relies on a basic contract between the person giving the feedback and the person receiving it:

  • The person providing the feedback agrees to offer criticism in a genuine attempt to help the receiver improve. The giver provides both positive feedback and well-reasoned negative feedback along with suggestions for improvement. The giver acknowledges that the feedback is just his or her own opinion.
  • The person receiving the feedback agrees to make a genuine effort to improve. The receiver does not need to agree with the feedback, but does have to give it honest consideration. The receiver recognizes that the feedback represents just one person's opinion.

Using the constructive criticism approach can allow people to be more open to change, but it doesn't come naturally. It takes a deliberate effort (and some practice) to give and receive honest feedback in a non-confrontational manner.

Unfortunately most customer feedback programs are structured to be almost the opposite of constructive criticism. For example:

  • Only asking very general questions without getting detailed enough feedback to show employees how to improve.
  • Treating each survey as the gospel, rather than one customer's opinion.
  • Setting punitive consequences for not meeting targets.
  • Ignoring flaws in the survey process.

These mistakes make it hard to make any use of customer feedback even if you want to. Combine them with the very human desire not to have to change, and the result is frustration and apathy about the feedback process. But these problems persist because it's easier to just tell people to improve than it is to help them improve.

Here are some ideas for bringing an attitude of constructive criticism into a customer feedback process:

  1. Treat every customer survey (and especially the negative ones) as a gift from the customer: an opportunity to coach, learn, and improve. Every time a survey is completed, look for both things you did well and things you could have done better.
  2. Be willing to disagree with the customer's opinion, as long as your disagreement is well-reasoned and based on what the customer actually said. For example: don't just assume that the customer gave you poor feedback because she's mad about a policy; but if the customer said she's mad about the policy, it's OK to recognize that a front-line CSR can't do anything about it.
  3. Actively look for ways to get better, more useful, and more accurate customer feedback. 
  4. Goals and metrics are important, but only when you have a statistically meaningful number of surveys. Don't make an employee accountable for one customer's grumpiness.
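To put a rough number on "statistically meaningful": the margin of error on a satisfaction score shrinks only with the square root of the number of surveys. Here's a back-of-the-envelope sketch using the standard normal approximation (the function name is my own):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p measured over n surveys."""
    return z * math.sqrt(p * (1 - p) / n)

# An employee with 10 surveys and an 80% satisfaction score:
print(round(margin_of_error(0.8, 10), 2))   # roughly +/- 25 points

# The same score over 400 surveys:
print(round(margin_of_error(0.8, 400), 2))  # roughly +/- 4 points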


The Right Process for the Right Purpose

A disturbing number of vendors in the customer survey arena (who should know better) promote a one-size-fits-all approach to customer feedback.

Naturally, each vendor claims that its own particular size fits everyone best.

The reality is that different customer feedback programs have different goals, and need to use different questions and methodologies.

For example: if a client is primarily looking to track performance on one or two key metrics, then an automated survey with just a few questions is likely to be sufficient and cost-effective. The disadvantage is that this gives very little feedback at the level of an individual employee--it's very difficult to get a statistically meaningful sample on a single customer service rep--and each survey doesn't have enough detail to be useful at that level.

At the other extreme, if a client is looking to use customer feedback for coaching and training, person-to-person interviews are extremely effective. That conversation with an individual customer has far more impact than months' worth of statistics in terms of changing employee behavior. The downside of this approach is, of course, cost. There are lots of good reasons to use interviews--but if those good reasons don't apply to a given project, there's no point in spending the money.

Another example: Net Promoter is a useful metric for measuring customers' overall level of engagement with a company, but it tells you very little about how good a particular customer experience was. Other factors, such as price, reputation, and earlier interactions all have a strong influence on Net Promoter.

For looking specifically at the customer service experience, Customer Effort is a much more targeted metric. But if you're trying to measure an individual employee's performance, Customer Effort also tends to be too general. Scores will often get dragged down by factors outside the employee's control, such as IVR problems, the wait to speak to someone, or (in a bricks-and-mortar setting) lack of parking.

So to get survey data specific to the employee you need to ask a question specifically about the employee, such as a customer satisfaction question or a resolution question. But those questions often aren't sensitive to important issues outside the employee's control--like price, reputation, or lack of parking.

In the end, then, the only way to make sure a customer feedback process is appropriate to its goals is to tailor it to the particular needs. A one-size-fits-all approach can work in some instances--in particular, the specific goals that vendor's process was designed to meet--but the real world is usually more complicated.

Is it that much trouble to proofread?

If you give an infinite number of monkeys an infinite number of typewriters, eventually they will bang out the collected works of Shakespeare.

The problem is that before you get there, someone needs to proofread it all to find the pearls of Hamlet.

As this screen capture shows, much the same thing applies to online surveys.

via Daily WTF

Vocalabs Newsletter #72 is published

Issue #72 of Quality Times, our newsletter, has been published. E-mail subscribers should be receiving their copies in the next few minutes.

In this month's newsletter we talk about the importance of credibility for a customer feedback process, and give some pointers as to must-have reporting capabilities for an effective program.

As always, I hope everyone finds this newsletter interesting and informative. I welcome any ideas or suggestions.

Accountability with no Authority

One of my favorite blogs, Consumer Reports' Consumerist, nails it again with a confession from a car salesperson who says that he gets penalized $100 (out of a measly $150 commission) for a less-than-perfect customer survey. The kicker is that two-thirds of the survey isn't even about the salesperson: it's about the finance office, the cleanliness of the dealership, and so forth.

Is it any wonder that he intercepts all the customer surveys and fills them out himself?

Needless to say, this survey process is entirely broken. I'm guessing that the manufacturer dings the dealer for subpar customer feedback, and the dealer simply takes it all from the salesperson. The dealership probably knows (or at least suspects) that the salespeople cheat the survey, but doesn't care because it works out better for the dealer that way.

The result forces "accountability" on people who have no authority to make needed changes, and creates a situation where the only way to win is to cheat.

This is also a classic trap for customer feedback programs stuck at Maturity Level Two ("Accountability"). They have created a system of rewards and punishment for getting good feedback, but don't have the tools to incorporate feedback into creating better customer experiences, nor do they have the sophistication to actively monitor the process for fraud.

The result is a feedback program which looks good on paper, but in practice doesn't get taken seriously. Everyone knows that everyone cheats, and therefore nobody believes anything about the system. In the unlikely event that actual customer feedback makes it through, it won't be taken seriously.

There is a way to get out of this, but it probably starts with ending the survey completely and rebuilding the program from scratch. A clean break from the past is sometimes the only way to end a bad process.

Loyalty and Forgiveness

One of the fastest ways to turn a loyal customer against you is with overly-aggressive collection techniques. Some of the angriest customer feedback you will ever read is in the general category of, "I was a loyal customer for ten years, but I was two days late paying my bill and they cut off my service."

So why do companies keep doing stunts like this, where BT interrupted the Internet service of UK publisher The Register with a collection notice? This burns up whatever goodwill the customer has for the company, not to mention the novice blunder of picking a fight with someone who buys ink by the barrel.

The answer, of course, is that it works. It works from the short-sighted perspective of increasing collections and reducing the chances that a customer will be late in the future. Much like touching an electric fence, most people won't want to go through that twice.

But while the CFO is celebrating improved collections, the company's marketing department is trying to figure out how to improve customer loyalty and retention.

The problem is that:

  1. Loyalty and forgiveness are two sides of the same coin. We forgive the minor mistakes and transgressions of those we are loyal to because those mistakes are less important to us than the overall relationship; and
  2. Loyalty is reciprocal. We are loyal to those who are loyal to us.

So when a company brings out the heavy artillery against a customer for an occasional and minor slip-up, it sends a very loud message to the customer that the company (a) will not forgive the occasional mistake, and (b) is not loyal to the customer, and therefore (c) is not deserving of the customer's loyalty.

In my personal experience this plays out very clearly. Only a handful of companies have truly earned my loyalty, and they are the companies which have demonstrated that they consider my business more important than the occasional mistake. I don't care if I wind up paying a little more, and I generally won't even bother checking out the competition. The peace of mind is worth any extra cost.

On the other hand, I am actively disloyal to my credit card companies and mobile phone carriers: whoever has the best deal at any moment gets my business. That's because those companies have proven they will take advantage of any mistake I make, and play "gotcha" games with their terms and conditions. Your payment is delayed by one day? That's $35 plus interest. Exceed your monthly bucket? Be prepared to pay through the nose for the overage.

While those strategies may have earned a small amount of extra revenue (especially in my younger days before I caught on), those companies are paying for it today through the discounts, rewards, and other programs they need to buy my "loyalty."

Just a few blocks away

There was a big crash just a few blocks from Vocalabs World Headquarters late this morning. We could see the smoke from just outside our building. According to news reports, a truck driver died when a van unexpectedly pulled in front of the truck at an intersection.

As much as we all want to find meaning in everything, sometimes stupid things happen and people die.

Stay safe, everyone.

Must-Have Survey Reporting Capabilities

It seems that when talking customer feedback programs, all the discussion is around what questions to ask and what technique to use.

Where the rubber meets the road, however, is often in the reporting. There are a lot of ways to get good data (OK, maybe not great data, but at least usable data). But without some basic reporting capabilities it's almost impossible to use that feedback to actually drive improvement.

I see a surprising number of customer feedback programs where the reporting is barely more sophisticated than an Excel spreadsheet. For that matter, I see a surprising number of programs where the reporting is an Excel spreadsheet. Let's be clear: Excel is not a reporting platform.

Here is what I consider the set of must-have reporting capabilities in any customer service survey. Without these, your program is going to be stuck at maturity level zero or one, and you will realize very little value from the effort.

  1. Real Time Alerts and Notifications: The reporting system needs to be able to send an immediate notification when a survey comes in, to the person (or people) who need to see that particular survey. In particular: if a customer was dissatisfied or asked for a follow-up, that survey should be sent to a service recovery team right away; and front-line managers should be notified when a survey is completed on one of their employees. This is a critical first step in closing the loop on the survey, and the faster it gets done the more effective it will be.
  2. Easy Drill Down to Individual Surveys: Statistics are great, but (most) people don't think in terms of percentages and margins of error. We think in terms of individual people and their stories. In order to make sense of the feedback we're getting, it's often necessary to read a customer's comments and responses in the context of that one customer's experience. If it takes more than a couple clicks to go from statistics to individual surveys, most people won't bother, and they'll have a hard time understanding what the data means.
  3. Survey Records Include Other Data About the Customer's Experience: Each survey record needs to include the data necessary to put the survey in the context of the customer's experience with the company. For example: who did the customer interact with, what was the transaction, where did it happen, how valuable is the customer to the company, and so forth. Every company has different customer data, so the reporting platform should be flexible enough to accommodate whatever data is available and potentially useful. Without this information it is almost impossible to figure out what (or who) needs to change to improve the customer experience.
  4. An Exact Record of Each Customer's Feedback, in the Original Format: Whatever format the original survey was conducted in, the reporting system must make available an exact record of the survey. That means:
    1. For online surveys: Each question on the survey, exactly as presented to the customer with the exact wording and showing the options available, and in the original order.
    2. For phone surveys: An audio recording of the original phone survey from the time the customer was asked to take the survey to the time the call ended.
    3. For pencil-and-paper surveys: An image of the survey form showing all marks the customer may have made on the paper.

    This is important because information is lost any time we present summary data or move a survey from one medium to another. Often what's lost turns out to be critical to understanding a customer's story and taking action: we lose the emotional impact of a customer's voice in an interview, or the complete context of a webform where the customer is trying to explain what happened. Without that original record in the original format, it may be difficult or impossible to fully understand what the customer was trying to communicate.

  5. Individual Reporting at Multiple Organizational Levels: Reports have to be available for individual employees, managers, functional areas, executives, regions, etc. Without this level of detail, there's simply no way to get anyone to take ownership of the customer feedback. It's human nature to believe that we're doing a good job, and if there's a problem it must be someone else's fault. So if the survey data winds up in a single statistical blob, nobody will ever believe it's up to them to improve things.
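As a sketch of the routing logic behind item 1, here's what real-time alert rules might look like in code. The field names and addresses are hypothetical; the point is simply that each incoming survey is dispatched immediately to whoever needs to see it:

```python
# Hypothetical routing rules for real-time survey alerts.
# All field names and addresses below are illustrative, not a real API.
SERVICE_RECOVERY = "recovery-team@example.com"

def alert_recipients(survey):
    """Decide who should be notified immediately when a survey comes in."""
    recipients = []
    # Dissatisfied customer, or one who asked for a call back:
    # route to the service recovery team right away.
    if survey["satisfaction"] <= 2 or survey.get("requested_followup"):
        recipients.append(SERVICE_RECOVERY)
    # Front-line managers see surveys completed on their own employees.
    if survey.get("agent_manager"):
        recipients.append(survey["agent_manager"])
    return recipients

print(alert_recipients({"satisfaction": 1, "agent_manager": "mgr-smith@example.com"}))
```

In a production system the same rules would feed an e-mail or messaging gateway; the design point is that routing happens per survey, at the moment it arrives, not in a weekly batch report.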

Remember, these items are not optional. This is not my list of nice-to-have reporting features. These are things without which any customer feedback program will have only limited impact and effectiveness.

Related Content

>> The Customer Service Survey Maturity Model

>> Metrics Are Less Important than Process

>> The Voice of the Customer--Literally

Channel Bias in Surveys

One of the key decisions in designing a survey is which channel to use: e-mail, IVR, interviews, pencil-and-paper, or something else. Often there are cost and practical reasons for choosing one channel over another. But it's also important to keep in mind that the choice of channel will bias the survey results in two important ways.

The first is that customers' responses will change somewhat depending on whether there's a human interviewer, and also whether the interviewer is perceived as a neutral third party. People naturally want to please the interviewer, and will tend to shade their responses towards what they think the interviewer wants to hear.

The other, which in my experience is much more important, is that the channel has a very strong effect on whether customers take the survey at all. The highest response tends to be with an interview; more automated and more annoying channels generally see a substantial drop in response.

When the overall response rate is lower, participants bias towards customers who have stronger opinions, and towards customers who are more engaged with the brand.
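To illustrate that point, here's a toy simulation (all the numbers are invented) of what happens when customers with strong opinions are, say, twice as likely to respond: at a high response rate the sample looks much like the population, but at a low response rate the extreme opinions are heavily over-represented.

```python
import random

random.seed(0)

# Hypothetical population: opinions on a 1-5 scale, uniformly distributed,
# so 40% of customers hold an "extreme" opinion (1 or 5).
population = [random.choice([1, 2, 3, 4, 5]) for _ in range(100_000)]

def respond_prob(score, base_rate):
    # Assumption for this sketch: strong opinions respond at twice
    # the base rate, capped at 100%.
    return min(1.0, base_rate * (2 if score in (1, 5) else 1))

def survey_sample(base_rate):
    # Simulate which customers actually complete the survey.
    return [s for s in population if random.random() < respond_prob(s, base_rate)]

def extreme_share(scores):
    # Fraction of responses that are strong opinions (1 or 5).
    return sum(1 for s in scores if s in (1, 5)) / len(scores)

print(f"population:           {extreme_share(population):.0%} extreme opinions")
print(f"high-response survey: {extreme_share(survey_sample(0.8)):.0%}")
print(f"low-response survey:  {extreme_share(survey_sample(0.05)):.0%}")
```

With these made-up numbers, the low-response sample ends up with well over half of responses coming from the extremes, versus about 40% in the population.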

Sometimes the channel itself will introduce a strong bias in who takes the survey. For example, many companies struggle to maintain accurate e-mail lists of their customers. If one segment of your customer base is less likely to provide an e-mail address, you will bias against those customers. One of my clients has a particular customer segment where they almost never manage to get a valid e-mail, and that segment is also most likely to churn--this makes the client's e-mail surveys a very poor measure of their overall performance.

Finally, the more automated the survey process, the more brittle it tends to be. A human interviewer will let you know if there's a problem in the script and can work around a technical glitch; you don't get that level of resilience with an e-mail or IVR survey. If the survey is badly written you may get customers abandoning it or giving nonsense answers, and if something breaks, customers will simply get errors. I've seen companies with broken surveys, oblivious to the fact that they are getting worthless responses. Automated surveys require constant monitoring; you can't just set and forget them.

360 View

Your customers do not perceive your customer experience the same way you do.

That's simply a fact. But most people are shocked and surprised the first time they discover just how true it is.

That's why it's important to get a 360-degree view of the customer experience. You want to understand how it looks from both the company's perspective and the customers'.

Unfortunately, while most companies get both sides of the story, they don't put the pieces together. There are customer surveys to get the customer's view, and tons of data and statistics to track what happened from the company's perspective. But it's rare that those two processes are merged into a coherent whole.

That's a huge lost opportunity, since getting that 360 view is one of the most powerful tools in improving the customer experience at all levels of the organization.

Here are some easy things to try:

  1. In a contact center, review customer feedback on a call before listening to the call recording. Most contact centers have their agents listen to calls as part of the coaching process. But before you do that, have the agent review customer feedback from the same call. Then, while listening to the call, ask "what on this call made the customer give that feedback?" This is most powerful when you have a recording of a live survey with the customer, since hearing the customer describe their experience has much more impact than reading numbers on a screen.
  2. Call a customer back and ask for some feedback. This works best right after a customer experience, and if you can select a customer who you think might have something interesting to tell you. For example: "My name is Mary, and I'm a supervisor here at ACME. I noticed when you called a little while ago, your call got transferred several times. I'm trying to figure out how to improve our service, and I was hoping you can share your experience with me." The goal is to listen and understand, not collect statistics. Most people won't answer the phone, but those who do will usually be happy to talk. Just remember to not be defensive, listen ten times more than you talk, and take good notes. You will learn more in five minutes of listening to the customer than five months of listening to recordings, and you will blow the customer's mind.
  3. Grab a bunch of your customer feedback data, and match it to data from your CRM, web site visitor logs, contact center, and any other data you have about how the customer interacted with the company. As a science project, this can probably be done in Excel in a day or two, and makes a good mini-project for an intern or The New Guy. Then start poking around for interesting trends and correlations. You're almost guaranteed to find something unexpected.
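As a sketch of what the "science project" in step 3 might look like outside of Excel, here's a minimal example using pandas. The column names and data are entirely hypothetical; the point is just the join between survey responses and company-side interaction records:

```python
import pandas as pd

# Hypothetical survey responses. Substitute whatever identifier your
# systems actually share (customer ID, call ID, timestamp, etc.).
surveys = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "satisfaction": [5, 2, 4],
    "comment": ["Great service", "Transferred three times", "Quick fix"],
})

# Hypothetical company-side data from the CRM / contact center.
crm = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "transfers": [0, 3, 1, 0],
    "handle_time_sec": [240, 1100, 310, 180],
})

# Left-join survey responses onto the company-side interaction data,
# keeping every survey even if CRM data is missing.
merged = surveys.merge(crm, on="customer_id", how="left")

# Then start poking around for trends: do transfers track with low scores?
print(merged[["satisfaction", "transfers"]].corr())
```

Once the two sides of the story sit in one table, the "poking around" step is just ordinary correlation and group-by analysis.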

Most people and organizations think they understand their customer experience, since they spend so much time and effort delivering it. Mapping what the company tracked and measured about the experience to how the customer felt about it takes some effort, but it is worth it.

Related Content

>> Pretty Good Practice: Listen to Interview Recordings

>> Agile Customer Feedback White Paper

>> Pretty Good Practice: Target Customer Feedback Using Analytics

This Is Why Nobody Is Taking Your Survey

It is a myth that people don't take surveys.

The truth is that people don't take stupid surveys.

People are usually willing--sometimes delighted--to provide honest feedback about how you're doing and ways to improve.

But most surveys are not that. More often, the survey communicates in many different ways that the company (a) doesn't really care that much, (b) isn't interested in hearing the customer's honest feedback, and (c) won't change anything because of what the customer said.

Case in point: last night we had this voicemail left on our company's switchboard (click the link for the audio file). I have no idea who is calling or what the purpose of this survey is supposed to be, but clearly very few people are going to start answering random questions from a robocall which can't even be bothered to identify itself. The fact that most robocalls are highly scammy (and this "survey" is probably not legit either) should lead any company to think very hard about what this survey technique will do to their brand image. Yet we continue to see outbound IVR surveys.

On the other hand, our experience proves that when you ask customers personally for feedback (that is, with a person), give them the opportunity to say what's on their mind, and respect their time and opinions, in most cases the majority of customers will agree to take the survey.

So if people aren't taking your survey, it's not because people don't take surveys. It's because they don't take your survey.

Issue #71 of Quality Times is published

We just published Issue #71 of our mostly-monthly newsletter, Quality Times.

In this issue we discuss the National Customer Service Survey results for the first half of 2013. We published midyear updates for both the Communications and Banking reports. The Communications report is the old mobile phone report, which we are in the process of expanding to include wireline phones, Internet service, and cable/satellite TV services. Included in this midyear update is some preliminary data on Comcast and DirecTV.

As always I hope you find this useful and informative. I welcome any comments and suggestions.

SpeechTEK 2013 Wrap-Up

SpeechTEK 2013 is over (except for a few die-hards). I presented in two sessions and spent a lot of time talking to clients (present and future). Here are some of my favorite moments, embellished and redacted as I see fit:

  1. A Tuesday morning panel about mobile devices, AI, and speech. The discussion had turned to the "always listening" model for cloud-based personal assistants when the moderator basically asked, "don't you think it's a little creepy that some big company is going to be listening to everything we say?" Maybe it's a sign of the moment, but there was actually a pretty good discussion of privacy and security implications, instead of just dismissing the concerns.
  2. A Wednesday session where a presenter bravely admitted that, because of an obscure glitch, their phone system had been hanging up on a certain population of callers. Every. Single. Time. Because the problem was outside the testing parameters, they didn't discover it until it was written up in the newspaper. Ouch.
  3. Tuesday's networking reception: the food didn't run out after the first hour (like it has in some prior years), and people stuck around and networked. Huzzah!


Tracking Bad Service with a Bad Survey

After my rant about Whirlpool a couple days ago, it seems only fitting that Whirlpool also provided me with an excellent case study in how not to do a customer service survey.

Friends, if you are looking for accurate, actionable, and meaningful customer feedback don't do the following:

  1. Call back 36 hours after a customer service call for a survey.... [To make sure I've forgotten important details]
  2. ....with an automated IVR survey.... [Are you trying to make me mad?]
  3. ....using an unconventional question scale.... [1-5 with 1 being best, 5 being worst, opposite of the usual]
  4. ....with no opportunity to provide free response feedback.... [I sure had things to say. Too bad I couldn't say them]
  5. ....and no follow-through with the customer. [I gave very low scores. What happened? Nothing]
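If you're ever stuck comparing responses from a reversed scale like the one in point 3 against a conventional survey, the remapping is simple arithmetic: on a 1-to-5 scale, the conventional score is 6 minus the reversed score. A quick sketch (the function name is mine, not anything Whirlpool uses):

```python
def reverse_scale(score: int, top: int = 5) -> int:
    """Map a reversed rating (1 = best) onto the conventional
    orientation (top = best). On a 1-5 scale: 1 -> 5, 5 -> 1."""
    if not 1 <= score <= top:
        raise ValueError(f"score must be between 1 and {top}")
    return top + 1 - score

# A batch of reversed-scale responses, remapped:
print([reverse_scale(s) for s in [1, 2, 5]])  # -> [5, 4, 1]
```

Of course, the remapping only rescues the analysis; it does nothing for the customers who were confused while answering.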

When I called Whirlpool to get my brand-new drier fixed, the company sent me the very clear and unambiguous message that they didn't care about my problem and would not do anything more than the absolute bare legally-required minimum to help me.

Whirlpool's customer service survey manages to reinforce that message. This is not listening to customers, it's a kabuki dance of customer feedback designed to create the form of a survey with none of the substance (all while, not incidentally, spending as little money as possible).

And I'm not alone. Read about the numbingly legalistic runaround a New York Times reader got over a defective microwave oven.

Postscript: After I tweeted my earlier rant to Whirlpool, they called me in response. Their message? That the earliest they could get a technician to look at the drier would be in a week. I'm starting to think that the only actual humans at Whirlpool are mind-controlled slaves of the SAP database which makes all the decisions. Evidently, nobody who works there is allowed empathy, good judgment, or common sense.

Dear Whirlpool, When I Spend A Couple Grand on Appliances, I Expect Prompt Repair Service When Something Breaks a Week Later

A week ago, I had a new Whirlpool refrigerator and clothes drier installed. The drier replaced a Whirlpool my wife and I bought when we were first married over 20 years ago.

This new drier has not been nearly as trouble-free as its predecessor, however. What started out as a slight rattle after a few days of use has developed into an awful grinding noise which sounds like a piece of industrial equipment trying to destroy itself.

Even if this noise is harmless (which I doubt), it was time to call for repair. This morning I called the appliance store where I bought it, hoping they might be able to come out today to look at it. My family is going to the North Woods in a few days, and as soon as we get back from that I'm going on a business trip. We really need to be able to get our laundry done. The store's service department was booked, but they gave me the number of another local Whirlpool specialist.

The other repair company was also booked for today, but they also told me that even if they had time available, they were not allowed to fix my drier unless I went through Whirlpool's main customer service line. This is apparently a new rule Whirlpool recently put in place. But since Whirlpool could dispatch any repair shop in the city, maybe they would be able to find an earlier appointment.

Unfortunately, Whirlpool did not live up to my expectations.

After navigating their phone menus and waiting on hold, Whirlpool's service line took my details and politely informed me that the earliest they can get someone out to look at my broken drier is next Tuesday. I explained that this really wasn't acceptable because of our upcoming vacation, and the friendly customer service representative told me that there was absolutely no way they could get anyone out any earlier. That extra mile is apparently just too far to go.

So I (reluctantly) accepted an appointment for after our Trip Up North, almost two weeks in the future. But like the cable guy, Whirlpool also apparently can't schedule with any more precision than "between 8 AM and noon." That will require my wife to take a half-day off work (I will be on a business trip). And there is absolutely no way they can give us the first slot of the day or do anything else to try to make it easier on us. We are expected to do whatever it takes to accommodate the whims of Whirlpool's dispatch system.

[As an aside--I wonder if Whirlpool's call center allows its agents to take a half-day off work to stay home for the appliance repair guy?]

I fully appreciate that this is a First World Problem, but when I spend well over $2,000 on new appliances, and one of them starts sounding like a drill press on "auto-destruct" mode after only a week, I expect the manufacturer to fix it.

Meaning today, or maybe tomorrow if they're backed up. Making me wait a week says that they're (a) too cheap to pay some overtime to clear the backlog, (b) too disorganized to hold standby and reserve slots in case of emergencies or cancellations, and (c) don't care enough about me as a customer to do more than the bare minimum.

NCSS Banking Midyear Update is published


We just published a midyear update for the National Customer Service Survey on Banking customer service. The Executive Summary is available from our website (along with lots of other research).

In addition to updating the key metrics for Bank of America, Chase, Citi, and Wells Fargo, we also took a closer look at two common issues in customer service: having to be transferred to a different agent, and spending too much time on hold.

Related Research

>> Get a copy of the National Customer Service Survey on Banking, midyear update 2013

>> Get a copy of the Agile Customer Feedback Whitepaper

>> Browse other Vocalabs Research


The power of a customer service survey comes from its credibility.

Credibility is what convinces customer-facing employees that they need to listen when a customer says he wasn't served well. Credibility is what gives executives the confidence to make decisions based on the customer feedback.

Unfortunately, many customer survey programs lack credibility. Without credibility, negative feedback (which is normally a gift!) is easy to ignore. Some common things which undermine credibility in a survey are:

  • Survey results aren't available until weeks or months after a customer interaction.
  • Obvious flaws in the survey process.
  • Manipulation of which customers take the survey, or even outright cheating and misreporting results.
  • No way to connect an individual employee's actions with the results of the survey.

I have personally observed: call center employees only allowing "good" calls to go to the survey, an executive who misreported survey scores in order to hit his target, survey scores delivered three months after a customer experience, and survey questions so confusing that nobody knew what they were supposed to mean. When employees see these things happening, it gives them license to ignore the feedback process because they perceive it as no longer meaningful or relevant.

Here are some tips for making a survey process more credible:

  1. Collect feedback as close to real-time as is appropriate and practical.
  2. Connect individual surveys to the customer experience which the survey is about.
  3. Deliver data in real time to all levels of the customer service operation.
  4. Update survey questions regularly to keep them relevant to current business needs.
  5. Get input from throughout the organization about how to improve the survey.
  6. Monitor for signs of manipulation and cheating.
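Tip 6 doesn't require anything fancy. One simple heuristic, sketched below with invented numbers, is to compare each agent's share of top-box survey scores against the group: an agent far above the pack isn't proof of gaming, but it's a prompt to go listen to their calls and check how surveys are being routed.

```python
from statistics import mean, pstdev

# Hypothetical data: per-agent share of surveyed calls rated "top box".
top_box_rate = {
    "alice": 0.42, "bob": 0.45, "carol": 0.40,
    "dave": 0.44, "eve": 0.93,   # eve stands out
}

rates = list(top_box_rate.values())
mu, sigma = mean(rates), pstdev(rates)

# Flag anyone well above the group mean (here, 1.5 standard deviations).
# Not proof of cheating, just a signal worth investigating.
flagged = [a for a, r in top_box_rate.items()
           if sigma and (r - mu) / sigma > 1.5]
print(flagged)  # -> ['eve']
```

The same pattern works for other warning signs: survey completion rates that differ wildly by agent, or scores that jump right after a target is announced.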

NCSS Communications Midyear Update


We just released the 2013 Midyear Update for the National Customer Service Survey report on Communications Services.

This is the report which we used to call "Mobile Phones," but beginning this year we are collecting data on customer service at other consumer communications services companies, like landline phones, cable/satellite TV, and high speed Internet service.

We don't yet have enough data to provide full statistics on the new companies, but we have included a "sneak peek" at some of our preliminary results. You can get a copy of the report for free on our website.

Related Resources

>> Get a copy of the NCSS Communications Services 2013 Midyear Update

>> Get a copy of the NCSS Mobile Phone 2012 Executive Summary

>> Read about the 2012 NCSS data

Customer Experience and Home Repair

It's not just big companies which provide bad customer service. Just ask any homeowner. Contractors are notorious for providing inconsistent-to-terrible customer service.

In my home, we just finished an insulation upgrade project. Of the first four contractors I reached out to last fall, two never got back to me at all. The other two came and looked at our project and promised to send us bids, but we never heard from either of them again (this is sales 101, guys, c'mon).

So we found two more contractors through Angie's List; they were both more responsive and did write estimates for us, but one of them didn't respond promptly when we had questions about his bid, and wasn't able to provide good answers.

The contractor we eventually chose did everything right from a customer service perspective: they scheduled the work well in advance, showed up right on time, explained what they were doing and answered questions, finished on time and cleaned up after themselves.

But the amazing thing is that five of the six contractors eliminated themselves not on the basis of price or the quality of their work, but because they couldn't handle basic communication with the customer (i.e. me). And I don't think my experience is all that unusual, either.

So why does this happen? And how do companies like this stay in business?

Some issues are just unique to the world of home repair and contracting: schedules slip, weather causes delays, employees are sometimes unreliable, etc.

But I think there's something else going on. Most home contractors are small businesses. Sometimes the owner has the customer service gene, but often he's a tradesman first and sales/service come far behind.

And there will always be customers who hire the first guy they can reach, or who have an emergency and will put up with almost anything to get the job done quickly. So without a brand to protect, a contractor can get by even if he doesn't respond to every sales inquiry or communicate well with customers. He may not build a big business, but that might not be what he wants.

That's why homeowners like me have to call a half-dozen different companies in order to find the one who is easy to do business with.

Survey Fail

This picture (which comes to me from Failblog) nicely encapsulates two of the key challenges in collecting customer feedback:

First, you need to make sure your customers can actually take the survey. Many survey methodologies systematically exclude some segment of your customer base. Unless you account for that, you risk overlooking something important.

Second, you need to make sure the survey itself doesn't bias the responses, either through poor questions, or because the process (like in this case) annoys your customers.

For what it's worth, I had my own problems with an AT&T text message survey last year.

Related Resources

>> This Three Question Survey Takes Three Hours (and really has four questions)

>> Process Bias

>> Agile Customer Feedback

Easy Ideas for Getting More from your Customer Service Survey

A few weeks ago we released the Customer Service Survey Maturity Model, and along with it, an online self-assessment tool to find out where your feedback program fits into the maturity spectrum.

Maybe I should have expected this, but I was surprised to see several people use the self-assessment tool as a way to generate some ideas for improving their survey programs. In the self-assessment we ask about a few dozen different "pretty good practices," and that got some mental gears turning.

The great thing about using the tool this way is that you can take the self-assessment as many times as you want. So you can preview what kind of feedback program you would have if you adopted certain practices.

So I want to encourage everyone to give this a try. It's quick, easy, and free, and you'll probably come out of it with some things you can be doing to get more from your customer feedback program.

>> Take the Customer Service Survey Maturity Self-Assessment

Three Ideas to Get Started with Customer Service Feedback

I often get asked how to get started with a customer feedback program, when the organization doesn't have any surveys or other feedback processes around the customer experience. Our Customer Service Survey Maturity Model is a useful tool for understanding what's possible and how to get the most out of a feedback program, but most companies don't want to dive right into a maturity level four program on day one. That's a lot of commitment (and culture change), and it usually makes sense to start smaller and work up.

So what's the best way to get started?

I see a lot of companies which decide they want to start a voice of the customer program, but don't give much consideration as to why or what they hope to accomplish.

In keeping with the principles of Agile Customer Feedback, even a "starter program" should address current business needs, respect and listen to customers, and be designed to tell you something you don't already know.

There are lots of ways to get your feet wet in customer feedback. Here are three ways to start collecting customer feedback in a customer service environment that are likely to be easy and to show a quick return on your investment of money and effort.

Idea 1: Start asking a "Question of the Week" on customer service calls.

Customer feedback can take many forms, and it doesn't have to be a formal survey program. You can start by having customer service reps ask a "question of the week" at the end of calls and making a note of the answer. For example:

  • "One last thing before we hang up. We're looking for ways to improve our customer service. Do you have any suggestions?"
  • "Before we go, we're looking for ideas to improve our website. Is there anything you wish our website did better?"
  • "One quick question. We're trying to improve the automated part of our calls. Did you have any trouble getting through to a person?"

This is not going to be a scientific survey with anything like statistically valid data. But it will do two things: give you some ideas of what your customers would like to see improved, and get your employees into the habit of listening to customer feedback.

A few tips for making this work:
  1. Don't have the customer service reps ask for feedback on themselves. You won't get honest responses, and it will be very awkward.
  2. Don't expect a random sample, since CSRs won't want to ask for feedback from the unhappy customers. This is an idea-generating exercise, not a science experiment.
  3. As a team activity, have the CSRs talk about the feedback they got and what they think it means. This helps build the idea that everyone should be listening to the customer.

Idea 2: Do a survey of your callers who don't talk to a person.

In an earlier article I talked about how the IVR is a big blind spot for many companies' customer feedback programs. So doing a survey of callers who stayed in the IVR is almost guaranteed to tell you something you don't know. These should be follow-up surveys (preferably phone interviews) conducted as soon as possible after the customer hangs up (but not on the same call, since that biases the sample). Getting 500-1,000 surveys will give you a good statistical sample and the ability to get a good understanding of your different caller populations, but even 100 surveys will be enough to generate some new ideas for improving the customer experience.
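The sample sizes above map onto margins of error you can sketch with the standard formula for a proportion, z * sqrt(p * (1 - p) / n), using p = 0.5 as the worst case:

```python
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p at sample size n."""
    return z * sqrt(p * (1 - p) / n)

for n in (100, 500, 1000):
    print(f"n={n}: +/-{margin_of_error(n):.1%}")
# n=100:  +/-9.8%
# n=500:  +/-4.4%
# n=1000: +/-3.1%
```

So 100 surveys gives roughly a ten-point margin of error, plenty for idea generation, while 500 to 1,000 tightens it enough to compare caller populations with some confidence.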

Some things to ask about are:

  • Did the caller actually self-serve in the IVR, or hang up in frustration?
  • What tasks should callers be able to do in the IVR but often struggle with? What tasks do callers want to do in the IVR but aren't supported?
  • What are the barriers to either self-service (when appropriate) or getting to the right person (when self-service isn't an option)?

Many companies have the general idea that their IVR systems don't work well for customers, based on the complaints and other feedback they get. This one-time survey will quantify that, help identify which customers are having what sorts of problems, and point out ways to make it work better. Often small changes, like updating the options in a menu, can have a large effect.

Idea 3: Add some feedback to your training program.

Hearing the voice of the customer can be a powerful training tool, and the most effective way to deliver this is through the literal voice of the customer--that is, the recording of a follow-up interview with the customer.

Many contact centers already do training sessions where supervisors listen to calls with the customer service reps and offer feedback. It's easy to add customer feedback to this process. You will want to call a small sample of customers back right after their call, and ask for general feedback and suggestions. Play the recording of this interview for the CSR when reviewing the same call for training purposes.

Some tips for implementing this:

  1. You can start very small and informal, and scale it up as appropriate. To begin with it may be as informal as having a supervisor call the customer back and ask, "I'm going to be training Alice in a few minutes, do you have any suggestions?" This can grow into a program with specific questions, defined metrics, and a statistical sample.
  2. Tempting as it may be, don't make the CSR part of the call back to the customer. This will just be awkward. Don't try to get feedback during the customer service call, since you'll only get the happy customers. Call back.
  3. Have the CSR listen to the interview recording first, and then listen to the original call. That puts the agent in the shoes of the customer, and makes him or her more sensitive to how the customer viewed the call.