The Customer Service Survey

Vocalabs' Blog


Issue #77 of Vocalabs' newsletter is published

We just published issue #77 of Quality Times, Vocalabs' newsletter. In this issue we have a pair of articles related to the design and interpretation of customer surveys. One is a few rules of thumb to follow when designing surveys; the other discusses how customers interpret satisfaction questions.

I hope you find this useful and informative, and welcome any comments and suggestions. 

Net Promoter and Customer Effort: Two Metrics Measuring Two Different Things

People often ask, "What's the right metric to use on a customer survey?"

The answer, of course, depends on what you're trying to measure. Often the survey has more than one goal, and this will require measuring more than one metric. Unfortunately, the people promoting the Net Promoter methodology have been promoting the idea that you only need to measure one thing (and, of course, that one thing is their metric).

As a case in point, we have a client currently asking both a recommendation question (similar to Net Promoter) and a customer effort question. Customer Effort is a metric designed to measure the roadblocks a customer experiences in trying to get service, and it's a good way to gauge how smoothly a particular transaction went. Net Promoter, in contrast, measures a customer's overall relationship with the brand and company.

In this survey we noticed a curious thing: a meaningful percentage of customers said they would recommend the company, but also said they had to go through a lot of effort to get what they wanted on the customer service call.

This should be surprising to anyone using Net Promoter to measure a particular customer experience--the theory being that customers who just had a bad experience will be less likely to recommend the company.

That theory may have some truth on average, but when it comes to individual customers there's clearly something else going on.
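
As a rough sketch of how one might spot that segment in raw survey data (the field names and score cutoffs below are illustrative assumptions, not our client's actual schema or results):

```python
# Hypothetical survey records: a 0-10 "would you recommend us?" score
# and a 1-5 customer effort score (5 = a lot of effort).
responses = [
    {"recommend": 10, "effort": 1},
    {"recommend": 9,  "effort": 5},
    {"recommend": 3,  "effort": 4},
    {"recommend": 9,  "effort": 4},
    {"recommend": 7,  "effort": 2},
]

# "Promoters" in Net Promoter terms score 9 or 10; "high effort" here
# means 4 or 5. Both cutoffs are assumptions for illustration.
loyal_but_high_effort = [
    r for r in responses if r["recommend"] >= 9 and r["effort"] >= 4
]

share = len(loyal_but_high_effort) / len(responses)
print(f"{share:.0%} of respondents would recommend us, "
      "but reported a high-effort service call")
```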

So we listened to a number of the interview recordings to better understand what the customers were saying. And the message was loud and clear: These customers had a bad customer service experience, but were loyal to the company for completely unrelated reasons.

The recommendation question was doing exactly what it was supposed to do: measure the customer's overall relationship with the company. And the customer effort question was also doing exactly what it was supposed to do: find the ways the company made it hard for customers to get the service they expected.

The lesson is simple, but often needs to be repeated. Ask the question about what you want to know. Don't expect a survey question designed to tell you one thing to measure something else.

Net Promoter and Customer Effort are two different questions which measure two different things.

How long did you wait?

One of the oldest complaints about customer service is having to wait on hold to talk to a person. It's still a problem from time to time in many companies, and we published some research on hold times as part of the mid-2013 NCSS Banking report (see page 3 of the PDF report).

We had a recent opportunity with a client to explore how well customers estimate their wait on hold. Anecdotally, we all know the customer who said he waited ten minutes but actually spent 30 seconds in queue. This client was able to supply us with the actual time in queue for each customer who completed a survey, which we compared to the customer's estimate of the wait for an agent.

The results were interesting and surprising. It turns out that an individual customer's estimate of the time spent waiting bears almost no relationship to the actual queue time for that customer. There were plenty of instances of dramatic over- and under-estimates of the wait time. I'm talking about people who claimed they had to wait ten minutes but actually spent less than a minute in queue--or, conversely, people who said it was under a minute when it was actually several.

However, on average, customers' estimates of the wait time were astonishingly accurate. For example, if you take all the people who said their wait time was "about two minutes" and average their actual queue time, the result is surprisingly close to 120 seconds.

We also found that both actual and perceived wait time correlated to IVR and call satisfaction, but the perceived wait time had the stronger relationship. I suspect this may have to do with the customer's emotional state: the more annoyed he is with the call, the less satisfied he is, and the longer he thinks he had to wait to speak to someone.

Finally, there's a significant minority of customers (I'm guessing around 20%) who apparently are including the time spent navigating the IVR in their estimates of the wait to speak to someone. So even if the actual queue time was short, a long and complicated IVR makes some people feel like they're waiting for an agent.
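
For the curious, here is a minimal sketch of the kind of comparison described above, using made-up numbers rather than the client's data. It contrasts the individual-level correlation with a bucketed average (statistics.correlation requires Python 3.10 or later):

```python
from statistics import correlation, mean  # correlation requires Python 3.10+

# Made-up (estimate, actual) wait times in seconds, for illustration only.
waits = [(600, 45), (120, 130), (60, 240), (120, 110), (30, 200), (120, 125)]

estimates = [e for e, _ in waits]
actuals = [a for _, a in waits]

# Pearson correlation between individual estimates and the clock: in the
# client data described above, this relationship was close to nonexistent.
print("per-customer correlation:", round(correlation(estimates, actuals), 2))

# Averaged within an answer bucket, though, estimates can land close to reality.
two_minute_answers = [a for e, a in waits if e == 120]
print("average actual wait for 'about two minutes' answers:",
      round(mean(two_minute_answers)), "seconds")
```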

So the lessons we learned are:

  • Queue time still matters in customer service. It feels a little old-school in this age of social media and natural language IVR, but make sure you're answering the phones promptly.
  • The actual queue time and what the customer thought the queue time was are two different things. You're probably measuring the former but not the latter, yet it's the customer's perception which counts.
  • Making customers go through extra steps to reach a person makes them think it took longer to reach someone, and makes customers less satisfied with the service.

We're Watching You, Comcast!


Comcast and Time Warner have launched a PR offensive to try to convince people that they're going to improve their customer service in advance of their pending merger, as evidenced by a pair of puff pieces in USA Today and Marketwatch today.

Comcast, of course, is the company which was far behind its peers for customer service in the recent National Customer Service Survey results. Time Warner did better than Comcast, but is still below most of the others.

Speaking as a Comcast customer myself, I truly hope the company is mending its ways in customer service. But I'm also very skeptical. It takes more than good intentions and noise from the executive suite to make this kind of change: it requires changing the way thousands of individual employees interact with customers on a daily basis, it requires fixing broken processes which prevent resolution of customer issues, and most of all it requires time and hard work.

Many customer service initiatives fail because, while the leadership is willing to talk a good game, they aren't willing to devote the effort and resources.

Fortunately, though, we won't have to take Comcast's word on whether their customer service is improving. We will see soon enough, through the ongoing customer feedback in the National Customer Service Survey, whether they are actually making any improvements. I look forward to seeing the results over the coming months.

So Comcast, it's great that you're talking about improving service. But we're watching you.

Customers Don't Give Points, They Take Them Away

What does "Very Satisfied" mean? Does it mean "Outstanding job, above and beyond expectations?" or does it mean "I don't have any complaints?"

Many people who receive customer feedback think it means the former. But in most cases, the data suggests it actually means the latter. In other words, if a customer gives you the top score in a survey, often it just means you did your job.

Case in point: for one of our clients, we are following up on one of the core satisfaction questions by asking the customer to explain the reason for his or her rating. Because this is an interview format, we are getting a response over 90% of the time.

When the customer gave the top rating, "Very Satisfied," 99% of the reasons given were positive (and the small number of negative comments were mostly about unrelated things). This isn't surprising.

But when the customer gave anything other than that top score, even the mostly-OK-sounding "Somewhat Satisfied," 96% of the reasons the customers gave for their rating were negative.

In other words: If the customer didn't give the best possible score, there was almost always a specific complaint.

We see a similar pattern in most questions where we ask the customer to rate the company or the interaction. For another client, which uses a 0-10 point "recommendation" question (aka "Net Promoter"), over half the people who gave an 8 out of 10 had some specific complaint (and nearly everyone who gave a 6 or below had one).
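
As an illustration of how this kind of tabulation works (hypothetical data and field names, not either client's actual results), the cross-tab is straightforward once the free-text reasons have been coded as positive or negative:

```python
from collections import defaultdict

# Hypothetical coded follow-ups: the rating the customer gave, plus whether
# the reason they volunteered for it was coded positive or negative.
followups = [
    {"rating": "Very Satisfied",        "reason": "positive"},
    {"rating": "Very Satisfied",        "reason": "positive"},
    {"rating": "Somewhat Satisfied",    "reason": "negative"},
    {"rating": "Somewhat Satisfied",    "reason": "negative"},
    {"rating": "Somewhat Dissatisfied", "reason": "negative"},
]

tallies = defaultdict(lambda: {"negative": 0, "total": 0})
for f in followups:
    tallies[f["rating"]]["total"] += 1
    if f["reason"] == "negative":
        tallies[f["rating"]]["negative"] += 1

for rating, t in tallies.items():
    print(f"{rating}: {t['negative'] / t['total']:.0%} of reasons were negative")
```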

The notion that the middle point on the scale is somehow "neutral" (even if we call it "Neutral") is simply not consistent with how people really answer these kinds of questions.

Instead, most people start near the top of the scale and mark you down for specific reasons. If the customer has nothing to complain about, you get something at or near the best possible score.

So in most cases, customers don't give you a better rating for better service and a worse rating for worse service. Instead, they give you a good rating and take away points for things you did wrong.

Working Backwards to the Technology

Via Daring Fireball, I found an amazing video of Steve Jobs from 1997 talking about his philosophy of business. This was after he returned to Apple, but before any of the iProducts (iMac, iPod, iPhone, iPad) which would make Apple what it is today.

The whole thing is worth watching, but the core lesson is captured in this quote:

...you’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try to sell it.

The remarkable thing about this quote (other than the fact that he used "customer experience" over a decade before it became a hot buzzphrase) is that this is exactly backwards from the way almost every other technology company develops its products.

You could substitute the word "capabilities" for "technology," and Jobs' lesson would still be true, and it would still be backwards from the way many other companies develop their products and services.

Most companies (even startups) begin product or service development with the capabilities they have (or could quickly acquire), and try to find ways to attract customers by packaging those capabilities up at an attractive price. That favors inertia over developing a good customer experience. It can also lead to feature clutter, since the temptation is always there to include something "because we can."

The Apple approach, on the other hand, puts the desired end result front and center. Along the way, some compromises will inevitably have to be made where there are things which can't be done at a reasonable price. But by starting with the customer experience and working backwards, you keep the experience front-and-center during the whole development process.

So why don't more companies follow the experience-first strategy? My sense is that it's just hard. It's hard to stay focused on the end product, it's hard to say "no" to cool capabilities which don't enhance the overall customer experience, and it's hard to accept that a longer development cycle may be required if a critical piece of the puzzle doesn't exist yet.

This Survey IS a Test

Last week, Bank of America invited me to take a customer survey.

The survey was nothing special, but at the end it gave me this message (click to embiggen):

In case you can't view the image, at the end of the survey it displayed:

Results:
Elapsed time: 366
Pass/Fail: Pass

I was unaware that my opinions about Bank of America were being graded. But as long as they were, I'm glad to see that I passed.

However, the more competitive side of me wants to know what my letter grade would have been, and whether this was graded on a curve.

Who Thought This was a Good Idea?

Via Consumerist comes the story of a guy named Guy. Guy shops at Staples and is a member of Staples' rewards program, and Staples recently invited Guy to participate in a market research survey. In exchange, Guy would get a $5 check for his trouble.

Instead of a check, though, at the end of the survey Guy got this message:

Your opinions are extremely important to us. Unfortunately, we have reached the target number of completes from your group today. However, your time and efforts are greatly appreciated.

Thanks again for your support and participation.

Apparently, sometime between when Guy started the survey and when he finished it, Staples filled its quota of responses. And rather than spend one penny more than necessary on survey incentives, they gave Guy the "Sorry, Sucka!" message.

The survey invitation claims that the survey was being hosted by an "independent research company," but any market research company engaged in this sort of amateurish behavior deserves to be drummed out of the Survey Corps.

I get that surveys sometimes go over quota, but any responsible professional will plan and budget for that, rather than risk angering the very people who are helping you collect your data. And in the bigger picture, five bucks (even five bucks times a hundred) is chump change.

In the end, when reporters from Consumerist asked Staples for their comment, Staples did the right thing and sent Guy his five bucks. But you have to wonder how many other Staples customers got the same message and decided to seethe quietly rather than complain to the media.

I'm guessing those customers are going to cost Staples a lot more than five bucks in the long run.

Vocalabs Newsletter Published: NCSS 2013 Data

We just published Issue 76 of our newsletter, Quality Times. In this issue we announce the availability of 2013 survey data for the National Customer Service Survey in both Communications Services (formerly Mobile Phones) and Banking. We are adding five new companies to the Communications Services report: CenturyLink, Comcast, DirecTV, Dish Network, and Time Warner Cable.

As always, I hope you find this useful and informative, and welcome any comments and suggestions.

New NCSS Reports Available


We just published updated NCSS reports, including 2013 data for both banking and communications companies.

We've added coverage of five new companies in the communications sector, bringing it to nine companies: AT&T, CenturyLink, Comcast, DirecTV, Dish Network, Sprint, T-Mobile, Time Warner Cable, and Verizon.

The banking report covers Bank of America, Chase, Citi, and Wells Fargo.

In addition to the latest reports, you can still download our older research if you want to view earlier reports.

Well-Oiled Company = Well-Oiled Experience?

I've worked with a lot of client companies over the years at Vocalabs. Some companies are easy to work with: they are well-organized, people are focused on getting the job done, decisions are made easily, and the lines of communication work well.

Other companies are not so much fun to work with, for a variety of reasons: internal politics, disorganization, people more worried about keeping their jobs than doing their jobs, muddy lines of authority, and dispirited employees, to name a few problems.

As a consumer, some companies are much easier and more enjoyable to do business with than others. And given the choice, I would rather take my business to a company which provides the more positive experience.

I've noticed over the years that the companies I enjoy doing business with as a consumer also tend to be easier to work with professionally. That is to say, if I like buying from the company, I'll probably find that the company is organized and efficient when it comes to hiring Vocalabs to work on a project.

The opposite also seems to be true: if the company is disorganized internally, chances are I won't like doing business with them as a consumer. On the other hand, just because I don't like buying from a company, it doesn't always follow that they're going to be hard to do business with.

It seems that having a well-oiled corporate machine is a necessary condition for a good customer experience. That makes sense, since a disorganized company almost can't help but screw things up as far as the customer is concerned. Many common customer complaints (mistakes, inconsistency, indifferent service, lost records, etc.) are the inevitable outcome when a company can't get all its internal resources lined up and marching in the same direction.

It's not true, however, that an organized and efficient company will always have a good customer experience. The company may be well-organized, but simply not focused on the customer. In those cases, though, the company often knows its competitive edge isn't coming from the customer experience (maybe it's the low-cost leader instead), and there's a deliberate choice not to invest in improving the experience.

So an important prerequisite to delivering a positive customer experience is having a company which generally runs smoothly. But that, by itself, won't lead to a great customer experience. The company must also deliberately choose to go down the path of customer experience.

Vocalabs Newsletter #75: Customer Experience Journey Map of Christmas

Somehow, a confidential e-mail from the North Pole was recently misdirected to me. I thought the contents would be interesting to readers of this newsletter, so I am taking the risk of coal in my stocking to reveal its contents.

You can read the e-mail in the most recent edition of our newsletter, Quality Times. As always, I hope you find it interesting and informative.

Fooling the Customer

Time had an amusing article this week about The Telemarketing Robot who Denies She's a Robot (complete with recordings!). A company offering health insurance had, for a time, a phone number answered by a cheerful-sounding woman who would ask several questions about the caller's insurance and respond to conversational questions.

Ask if she was a robot, and she would laugh and insist that she was a real person.

Except that she wasn't, as would quickly become apparent by the awkward pauses before she would reply, her clearly limited set of responses to questions, and the way she would speak the exact same phrase (words, tone, timing) to different callers. Most likely, "Samantha" was a set of prerecorded messages triggered by someone listening to the caller and selecting the best response from a list. This would allow the use of really cheap overseas employees with poor English skills.

In the speech recognition industry about ten years ago there was a debate about how "human" an automated system should be. On the one hand, some designers believed that the best speech systems were the most natural and conversational--in the ideal world, you would call your bank and never know whether you were talking to a person or a machine.

The other view--which I hold along with people like Bruce Balentine (author of It's Better to be a Good Machine than a Bad Person)--is that you should always make it clear to the caller whether they're interacting with a machine or a person. Leaving aside the fact that the technology is nowhere near advanced enough to allow for a true conversational experience, people just don't like to be fooled.

People care very deeply whether they're talking to a machine or not. It's not that talking to a machine is bad (witness the willingness to use Apple's Siri service). It's because talking to a person carries social context and talking to a machine doesn't, and you can interact with a machine in ways you wouldn't interact with another human.

We observe this in people's willingness to play with the machine and explore its capabilities. While sometimes people will observe social norms when talking to a computer (for example, saying "Please" and "Thank you"), they also feel free to break outside the box. The recordings of the Time reporters taunting "Samantha" are a great example (they can't get her to repeat the exact phrase "I am not a robot"). Balentine called this the problem of the "Monkey-butt user," after someone in a usability test who randomly said "Monkey-butt" to the computer to see how it would react.

(As an aside: Bruce's book is the only one I know which has "Monkey-Butt" in the index. Look it up yourself.)

It turns out that humans care so much about whether we're talking to a machine or not that, if we suspect a machine is trying to fool us, we will spontaneously begin a series of Turing tests to find out whether it's really a machine or a person. Current technology has a long way to go before it can get past someone who is determined to discover the truth.

So the lesson is simply this: Don't try to fool your customers. It won't work, and they won't like it.

Most Depressing Survey Ever

From Tumblr user pupismyname.

The Limits of Metrics

Andy Beaumont recently wrote an article, The Value of Content, which is (at its core) a screed against the annoying web design trend of making readers click through overlays in order to get to the actual page they want to read. Beaumont struck a nerve with his "Tab Closed; Didn't Read" collection of examples.

Beaumont's article is an excellent argument for all the reasons against this technique, and one point in particular struck me. In responding to the argument that overlays get used because they work, he writes:

This is what happens when analytics make decisions for you...Analytics will tell you that you got more “conversions”. Analytics will show you rising graphs and bigger numbers. You will show these to your boss or your client. They will falsely conclude that people love these modal overlays.

But they don’t. Nobody likes them. Conversions are not people. If you want the whole story here you should also be sat in a room testing this modal overlay with real people. Ask them questions:

  • “Do you like that overlay asking you to sign up for the newsletter?”
  • “Do you understand what will happen if you do sign up for it?”
  • “Do you know that there is content behind it?”
  • “Do you know how to close it to get to the content?”

This gets to the heart of the difference between customer feedback and other metrics and analytics: there are two sides to every story, and if you're not collecting customer feedback then you're only getting one side of the story. In this case, the overlay may be effective at getting newsletter signups, Facebook likes, etc., but not at all effective in generating actual customer engagement or useful sales leads.

The key is to recognize that the signups and likes are not, in themselves, the true business goal. The true business goal is the customer engagement or the sales lead. And it's important to recognize that the reason behind the signups and the likes is more important than the actual signup or like.

Put another way, if someone signs up for your newsletter because they want to receive your newsletter, there's a good chance that's a useful sales lead. But if someone signs up for your newsletter because they thought it was the only way to close an annoying window and get to the article they wanted to read, that's just an annoyed web surfer.

We see this same thing play out all the time in the customer service world when companies get overly focused on a particular set of metrics and forget to ask what's really going on for the customer. The classic example is using containment to measure how well an automated customer service system is working, and assuming that anyone who hangs up has successfully self-served.

We even see this problem when it comes to customer feedback surveys. Many companies track their survey scores and assume that if the number is going up things are getting better, and vice-versa. But it's also possible that employees are gaming the survey, some customers are being blocked from taking the survey, or customers' expectations are changing.

The lesson here is that any given metric is, at best, an approximation for the real business goal. It's important to always keep that in mind, and constantly ask not just "what" is happening but also "why." And the most powerful tool for doing this is using a truly closed-loop process, where you close the loop with the customer, the business, and also your metrics and feedback.

Comcast CEO: Delusional or Just Spinning?


Brian Roberts, Comcast's CEO, was interviewed yesterday for the radio show Marketplace. The company has a reputation for poor customer service, and so at one point the host asked Roberts to respond.

His answer to why Comcast has such a bad rap: "...we have about...350 million interactions with customers a year, between phone calls and truck rolls....You get one-tenth of one percent bad experience, that's a lot of people. Unacceptable. We have to be the best service provider or in the end, this company won't be what I want it to be."

(You can listen to the full interview on the Marketplace website, without my edits for brevity.)

So his explanation for Comcast's reputation is that they have so many customers that there's always going to be some tiny fraction who get bad service.

Actually...that's not quite what he said. What he said was that if they had one customer in a thousand get bad service, it would be a lot of people (one in a thousand of 350 million interactions is still 350,000 bad experiences a year). He sort of implied that's the level Comcast is operating at, without actually saying it.

Which is good in a way, because if Roberts actually believes that only one in a thousand Comcast customer interactions is a bad experience, this represents an astonishing degree of executive delusion, bordering on clinical madness. Having 999 of every 1,000 customer experiences be positive represents world-class customer service, at a level probably not attainable by any company with millions of customers.

So how is Comcast actually doing? It so happens that we have some data on that topic, since we have been interviewing Comcast customers since earlier this year as part of the National Customer Service Survey. These interviews happen a few minutes after a customer calls Comcast, and can be fairly extensive. We published some preliminary results a few months ago which showed that (based on very early data with a big margin of error) Comcast's customer satisfaction with its customer service is well below industry peers.

And those peers are not companies famous for good service, either: AT&T, Verizon, Sprint, and T-Mobile. And this isn't just a survey about reputation, like a J.D. Power study. We are asking about a specific call to Comcast which happened immediately before the interview.

I have to assume that the CEO of Comcast has at least some idea that his company really doesn't provide good service. But maybe not. It's possible this is another example of the corporate Dunning-Kruger Effect. But more likely, he's been well-trained by media handlers to spin the question really effectively.

But whether he's delusional or just offering his spin, it's disappointing that Roberts can't seem to acknowledge what is apparent to so many people outside the company. Because the first step to improving your service is to admit that it needs improvement.

Issue #74 of Vocalabs Newsletter

We just published Issue #74 of Vocalabs' newsletter, Quality Times. We're calling this The Listicle Issue, since we are featuring two "Listicles" on developing a good customer feedback process: 8 Rules for Writing a Not-Awful Survey, and 5 Things Your Survey Should Trigger.

As always, I hope you find this interesting and informative. I welcome any comments or suggestions.

8 Rules for Writing a Not-Awful Survey

Writing a good survey isn't hard, but there are some gotchas if you've never done it before. Here are a few rules of thumb to help avoid the biggest mistakes:

  1. Keep the survey short:
    • Phone interviews should be under 5 minutes
    • Online surveys should fit on a single screen without scrolling
    • IVR surveys should be 5 questions or fewer
  2. Keep the questions short and use simple language. Avoid jargon or brand names, since there's a good chance customers won't recognize them.
  3. Always begin by asking the customer to rate the company as a whole, even if that's not what the survey is about. This gives customers who have a problem with the company a chance to get it off their chest so they won't penalize the representative.
  4. Put the most important questions (usually your tracking metrics) near the beginning. That way they are less likely to be biased by other questions and more likely to be answered.
  5. Be as consistent as possible with your rating scale. For example, don't switch from a 0-10 scale to a 1-5 scale.
  6. In the U.S., it's conventional for higher numbers to be better. Don't make "1" best and "10" worst as it's likely to confuse people. (This rule may differ in other cultures).
  7. Always have at least one free response question.
  8. Plan on making regular changes to the survey. You won't get it perfect the first try.

Following these rules won't necessarily give you a great survey, but breaking these rules will almost always make it worse.
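
If your surveys are defined in software, some of these rules can even be checked automatically. Here is a small hypothetical sketch of that idea; the schema, field names, and length limits are assumptions, not a real Vocalabs tool, and the judgment-based rules (plain language, question order, regular revision) still need a human:

```python
# Hypothetical survey definition; the schema is made up for illustration.
survey = {
    "channel": "ivr",
    "questions": [
        {"text": "Overall, how satisfied are you with Acme?",
         "type": "rating", "scale": (1, 5), "high_is_best": True},
        {"text": "How satisfied were you with today's call?",
         "type": "rating", "scale": (1, 5), "high_is_best": True},
        {"text": "Why did you give that rating?", "type": "free_response"},
    ],
}

MAX_QUESTIONS = {"ivr": 5, "online": 10, "phone": 12}  # rough length limits

def rule_of_thumb_problems(s):
    problems = []
    ratings = [q for q in s["questions"] if q["type"] == "rating"]
    if len(s["questions"]) > MAX_QUESTIONS[s["channel"]]:              # Rule 1
        problems.append("too many questions for this channel")
    if len({q["scale"] for q in ratings}) > 1:                         # Rule 5
        problems.append("rating scales are inconsistent")
    if any(not q["high_is_best"] for q in ratings):                    # Rule 6
        problems.append("a rating scale puts 'best' at the low end")
    if not any(q["type"] == "free_response" for q in s["questions"]):  # Rule 7
        problems.append("no free-response question")
    return problems

print(rule_of_thumb_problems(survey) or "no rule-of-thumb violations found")
```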

Five Things Your Survey Should Trigger

A customer feedback process doesn't end when the survey is done and the report is generated. In order to be useful, the survey has to start other wheels in motion. Here are five other processes you should be triggering with your customer surveys:

  1. Service Recovery: When a customer has a problem which hasn't been solved, this needs to start a service recovery process to make things right. Usually this involves having a high-level supervisor or someone from a Service Recovery team reach out to the customer, find out the root cause of the customer's problem, and offer whatever resolution is appropriate.
  2. Coaching and Training: Customer feedback can be a powerful tool for coaching and training customer-facing employees if it's deployed properly. The ideal is to get the feedback in real time, coach on the same day as the customer interaction, and use a combination of the customer survey and a record of the original customer experience (i.e. call recording, store video, chat log, etc.) to provide a 360-degree view of the event.
  3. Process Improvement: Survey data should be reviewed regularly to look for roadblocks to good customer experiences. Responses to open-ended questions are a great place to start, and tracking how those responses change over time can lead to great insight into what's becoming more or less of an issue.
  4. Quality Review: Quality review in a contact center (i.e. listening to call recordings and scoring them) complements customer feedback. The quality review tells you what happened and the survey tells you how the customer felt about it. Whenever possible, surveys and quality review should be performed on the same call, so that specific actions by the customer service rep can be correlated to higher or lower customer satisfaction.
  5. Survey Improvement: The customer feedback process itself needs to be continually evaluated. Decide which questions are useful, which are not useful, and what new things might need to be added. The survey needs to change over time to match the changes in the business needs and customer expectations.

All five of these are important for an effective customer feedback program, though the implementation will depend on your particular organization. Some companies have very structured programs, for example tracking all service recovery events and their root causes and resolution. This is very valuable data, but a smaller organization often can make an informal process work just as well. The important thing is that you do them.
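
To make the idea concrete, here is a hypothetical sketch of a handler that fires when a survey is completed and starts each of these five processes. The record fields, thresholds, and queues are illustrative assumptions, not a description of any particular system:

```python
# Hypothetical handler run whenever a completed survey comes in. Everything
# here (field names, queue topics) is made up to illustrate the five triggers.

def handle_completed_survey(response, queue):
    # 1. Service recovery: an unresolved problem means someone reaches out.
    if response["problem_reported"] and not response["problem_resolved"]:
        queue("service_recovery", response["customer_id"])

    # 2. Coaching: route the survey and the matching call recording to the
    #    rep's supervisor, ideally the same day.
    queue("coaching", {"agent_id": response["agent_id"],
                       "call_id": response["call_id"]})

    # 3. Process improvement: collect open-ended comments for regular review.
    if response.get("comments"):
        queue("comment_review", response["comments"])

    # 4. Quality review: score the same call the customer just rated.
    queue("quality_review", response["call_id"])

    # 5. Survey improvement: note skipped questions so the survey itself
    #    gets evaluated and revised over time.
    skipped = [q for q, answer in response["answers"].items() if answer is None]
    if skipped:
        queue("survey_review", skipped)

# Example with a trivial in-memory "queue":
events = []
handle_completed_survey(
    {"customer_id": "C123", "agent_id": "A7", "call_id": "X42",
     "problem_reported": True, "problem_resolved": False,
     "comments": "Had to repeat my account number three times.",
     "answers": {"overall_sat": 2, "call_sat": None}},
    lambda topic, payload: events.append((topic, payload)),
)
print(events)
```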

Related Content

>> Agile Customer Feedback Whitepaper

>> Customer Service Survey Maturity Model Whitepaper

>> Customer Service Survey Maturity Model Self-Assessment

Electronic Arts: A Case Study of the Dunning-Kruger Effect in a Corporation

A couple weeks ago I wrote about the Dunning-Kruger Effect. That's the fancy name for that thing where the majority of people think they're above-average drivers. It's obviously impossible for the majority of people to all be above average at the same thing, and it turns out this sort of delusion is pretty common. Dunning-Kruger is when someone thinks they're good at something that they're actually very bad at.

I believe Dunning-Kruger also applies to companies: there are a lot of companies which think they provide a good customer experience but actually suck. I didn't name names before since all the obvious (to me) examples are companies I'm personally mad at. That's perhaps not the most objective position to be writing from.

But thanks to Consumerist, I have a nice case study of the corporate version of the Dunning-Kruger effect.

Consumerist runs an annual "contest" for the Worst Company in America, and Electronic Arts won not once, but twice in a row. The Worst Company in America is organized like a championship bracket, with two companies facing off against each other and reader votes deciding which is more evil. Eventually the initial 32 companies are narrowed down to a single "winner" which gets a "golden poo" award.

Amazingly enough, this contest sometimes manages to bring out the worst in companies which already have built up piles of customer ill-will. In 2011, for example, Comcast was caught trying to stuff the ballot box to avoid being voted Worst Company two years in a row.

All of which brings us around to Electronic Arts. EA actually did take home the Golden Poo two years in a row, so clearly it has some issues. Customers are apparently upset over the company's nickel-and-diming pricing strategy, and over the fact that some of its most high-profile recent games were apparently really bad, really broken, or both.

In response, EA acknowledged that it could have done some things better, but then blamed its "victory" on customers whining about the cover art on a game, and an organized effort by homophobes to smear the company.

Wait, what?

So Electronic Arts is clearly doing a very bad job of creating a customer experience, as evidenced very strongly by this online poll. I'll go a step further and suggest that it's reasonable to assume that Consumerist's demographic is fairly close to EA's.

And EA seems to think that it actually is doing a good job, as evidenced by the company's response which not only included the weird statements about cover art and bigotry, but also the deep-in-denial claim that "Every day, millions of people across globe play and love our games – literally, hundreds of millions more than will vote in this contest." [As an aside: it is extremely dangerous, but also very common, to assume that having a lot of customers is evidence that your customers like you. You can probably think of companies you hate to do business with but have to buy from anyway. Chances are those companies think you love them.]

That seems to complete the Dunning-Kruger checklist pretty definitively for Electronic Arts. But there is hope for them, in the form of a new CEO who said that the Worst Company in America award should be a "wake-up call" for the company.

Sometimes a fresh perspective is the only way out.

Mistakes

Even though nobody's perfect, there's a weird thing in American business culture of never admitting mistakes.

The usual reason is that admitting a mistake could open the doors to a lawsuit--but this is bunk, as has been shown in medical research (tl;dr version: admitting mistakes and engaging in honest communication with patients cut malpractice costs by over 50%).

The truth is that most people and companies just don't like to admit mistakes. It's hard, it's embarrassing, and if the mistake was highly visible or expensive, it can feel like an end-of-the-world moment.

But from the other side--the perspective of the victim of the error--things look completely different. Admitting a mistake (especially when the stakes are high) makes you look responsible, honest, and trustworthy.

Mistakes are also opportunities. As the old saying goes (I heard it in flight school), "Experience is just a series of nonfatal mistakes." Taking ownership of a mistake empowers you to learn, change processes, and improve. Even taking ownership of someone else's mistake can be highly empowering: often there are things you can change to make a mistake less likely or less damaging.

When the excrement hits the whirling blades, it's natural to hope it wasn't your mistake. And it's natural to be relieved if it wasn't. That's my reaction, too.

But if you screwed up, admit it. It hurts, but it's the best way to turn a mistake into experience.

Newsletter #73 is published

We just published issue #73 of Quality Times, our monthly newsletter. In this issue there are two articles on the theme of getting both sides of the story when measuring the customer experience.

As always, I hope you find this useful and informative.

It's Not Customer Feedback if it Didn't Come From a Customer

Getting good, actionable customer feedback is not easy. Wouldn't it be great if there was some technological solution which would just tell us how customers feel, and we could avoid all the mess and bother of actually going out there and talking to customers?

The world is not lacking for ideas of how to measure customer satisfaction without talking to customers. Back in the old days of call centers, a lot of people thought they could track a few metrics (average time to answer, abandon rate, etc.) and know that customers were satisfied.

These days I see more technological solutions like speech analytics and big data number crunching. But I have yet to see anything which comes even close to convincing me that it can replace direct customer feedback for understanding what opinions the customer has, and why.

The problem is that without getting your hands dirty and talking to customers, you are limited to information available on your own side of the customer experience. Customer experiences, like any other kind of story, have (at least) two sides: the customer's side and the company's side. Even with perfect records of the entire interaction, you only have the company's side of the story.

This becomes obvious when you put a customer interview next to the record of the customer experience. As human beings, customers carry a lot of invisible baggage into any interaction: they may be biased by outside factors, they want to avoid making waves, they don't want to embarrass themselves or the employee they are dealing with, and if the experience is going badly they don't want it to last any longer than it has to.

So what appears to be a calm, forgiving customer could be seething with rage on the inside. Or someone who seems to accept the company's resolution to a problem may be planning to take his business elsewhere at the next opportunity. And the customer is actively hiding his true feelings because of the social context of the customer interaction.

But when you approach the customer and ask for feedback in the right way (that is, in a way which communicates that you want honest feedback, care about the customer's opinion, and genuinely want to improve) you will get this side of the story. And what's more, you will also be able to get the customer to explain why he feels the way he feels, what could be done differently, and how any problems could be fixed.

None of this is available using just the data available with the company's side of the story. The best technology in the world can't find information which simply is not there.

(That said, if there is a way, I want to be the one to invent it. Never say never.)

So that's why I say that it's not customer feedback if it didn't come from the customer. Metrics, analytics, and big data are all powerful tools, but they can't tell you what the customer is thinking.

Unproductive

This probably does not actually need to be said, but I'll say it anyway: Don't spill coffee on your laptop.

That was what I did first thing Tuesday morning. I cleaned it up as best I could right away, and everything seemed OK.

But when I unplugged from the external keyboard and monitor I normally use at my desk, I found that my laptop's keyboard was completely messed up. Some keys didn't work, the shift key was stuck on, and the top two rows would push two buttons at once. So trying to type lower-case "t" would give %T if it yielded anything at all.

I figured I would give it a day to dry out, and use my external keyboard in the meanwhile. But this morning, when it had not improved, it was time to take it in.

That, and, since the shift key was stuck on, the only way it would reboot was in "Safe" mode. "Safe" as in "Safe from getting any actual work done."

So I dutifully hauled it to the Genius Bar where the Genius took it apart and confirmed that it did, in fact, have coffee in the keyboard. Repairs will cost somewhere between $200 and $800, depending on whether just the keyboard needs to be replaced, or if they need to replace the entire main logic board. And at $800, I'm just getting a new laptop.

The rest of my day was spent mostly recovering files from backup onto an old spare laptop we have, so I have something to use while waiting for my repair or replacement, and handling what I can from my iPad.

All told, it has been a spectacularly unproductive day.

So don't spill coffee on your laptop.

Dunning-Kruger Effect

You know that thing everyone always talks about how most people think they are above-average drivers?

As I learned today, it turns out this has a name. It's called the Dunning-Kruger Effect, after a pair of psychologists who first described it in 1999.

(Of course, the rest of us have been describing this effect for decades longer than that, but Dunning and Kruger were the first to describe it using actual, you know, research.)

The Dunning-Kruger Effect is a cognitive bias where people who are bad at something believe they are above average. For example, in one study, participants in the bottom quartile for a particular skill (that is, the people who were worse than at least 75% of the participants) estimated that they were, on average, in the 62nd percentile--well above average.

Anyone with any level of skill in anything has probably seen this in others. And--let's be honest here--probably every one of us is guilty of thinking that we're above-average in some skill we're actually really bad at.

The problem seems to be that if you're really bad at something you don't know how to evaluate your skill level. You don't know what it means to be good at it.

Dunning and Kruger also found that their effect can be reversed through some basic training. This training serves to both increase skill level and also help people recognize their mistakes. Ironically, the people who are the most skilled tend to underestimate how good they are--probably because they are both more sensitive to their own mistakes, and they are likely to be surrounded by other experts.

I'm convinced that the Dunning-Kruger Effect applies to companies just as much as it applies to people. In the Customer Experience world it seems that the companies with the worst customer experiences are also the ones which declare the loudest that they do a great job serving their customers.

(This is often followed up with a statement that they don't feel the need to improve their customer experience, because it's already so good.)

Stirring a little confirmation bias in with Dunning-Kruger gives a particularly toxic brew: a company which is bad at something, but thinks it's above average, and actively ignores evidence to the contrary.

I can name a few companies off the top of my head which provide terrible customer service, but think they're pretty good and refuse to listen to anyone who tells them otherwise. You can probably name a few, too.

So how do you overcome this? Fortunately, the Dunning-Kruger part is relatively easy: you learn a little about what good service looks like and improve the company's self-evaluation skills. It's getting to that first step which can be hard. Confirmation bias makes people think that little bit of information isn't needed and would be a waste of time or money. So here's a three-pronged strategy:

  1. Provide credible feedback. Make an effort to acknowledge all limitations of the feedback, and focus on individual customers' stories rather than statistics (at least initially).
  2. Be positive. The message should be "there's always room to improve," rather than "you stink."
  3. Keep at it. Over time people's opinions will shift with a persistent stream of contrary evidence.