The Customer Service Survey

Vocalabs' Blog

Rexthor, the Dog-Bearer

Statistical analysis is a powerful tool, and like any other power tool, it can cause a lot of damage if not used properly.

So without further comment, here is today's XKCD comic:

"Smart" Products vs. Services

It used to be that when you opened up your wallet, you knew whether you were buying a product or a service. New pair of shoes? Product. Haircut? Service. Trip to Disney World? Service. Mickey Mouse T-Shirt? Product.

Knowing what you're getting is important for setting your expectations as a customer. If you're buying a product you don't expect the seller to do much other than deliver it and fix it if it breaks, and you get to continue enjoying the product until it's either used up or worn out.

When you buy a service you expect the seller to do something for you. This could be one time or an ongoing basis, but you generally expect to keep paying as long as the seller is providing the service. Once you stop buying the service (or the seller has fulfilled their obligation) you aren't entitled to keep getting the service.

One of the joys of living in the 21st century is that we now have consumer products that are both products and services, thanks to the Internet and smart devices. That lightbulb you just bought from Home Depot might actually be a "smart" lightbulb which requires ongoing services provided by the manufacturer in order to deliver all the features listed on the box. And if the manufacturer decides to stop providing the services, the lightbulb stops working.

That may sound like a made-up example, but it literally happened this summer. "Connected by TCP" brand lightbulbs, sold by Home Depot among other places, needed to connect to an online service provided by the manufacturer in order to do all the cool things the box said they would do. TCP decided it was no longer worth providing the service, so they shut it down on June 1st, and all those expensive "smart" lightbulbs (as of today, still for sale for $137/pair) instantly became a whole lot dumber.

This is a customer experience nightmare. The company has done a terrible job of setting expectations about what exactly customers are buying. It sits on the shelf at Home Depot just like any other lightbulb, looking and acting like a consumer product, but what customers are actually getting is a product plus an embedded service, and neither the product nor the service is fully functional without the other.

This is also not the first time a "smart" product was crippled or completely disabled when the manufacturer decided to stop providing the hidden service that made it work. Nor will it be the last. As these incidents keep happening, I expect the negative customer experiences to taint the whole market for adding smarts to ordinary consumer products. Most consumers will not be willing to buy "smart" lightbulbs, thermostats, refrigerators, and other devices when they know the manufacturer might not be willing to keep supporting the product for its entire expected lifespan.

To get past this, manufacturers of "smart" products need to make sure that customers are guaranteed a good customer experience even if the company isn't around to provide it. That probably means a combination of setting customer expectations ("Comes with five years of service included!" makes it clear that the service is part of the product), and making better choices about how to deliver the embedded services.

Because the only thing more disappointing than buying a product that breaks is buying a product that breaks because the manufacturer broke it.

The Tool is No Better Than the Hand That Wields It

Customer experience failures have many causes: poor employee training and morale, rigid adherence to policy, broken processes, understaffing, bad design, a culture of indifference, and occasionally--very occasionally--the lack of some critical piece of IT infrastructure.

But even when lack of technology isn't the problem, often the first solution a company will reach for is technology.

I think this is just human nature. Internal problems are hard for organizations to solve. Root causes can be buried deep under years of corporate politics and history that nobody wants to unearth. It's much easier to leave the skeletons in the ground and look for the technological quick-fix.

And so the company reaches for the latest state-of-the-art buzzwords and implements Big Data, EFM, Analytics, or (in an earlier era) CRM, ERP, WFM, or some other technology to solve what is fundamentally a problem with execution.

There's no doubt that these technologies bring value and have an important role in any company's infrastructure. But the technology can't solve a problem where the root cause is people and process. Technology is just a tool, and like any tool can be used well or poorly.

For example, if you are delivering a poor customer experience because your employees are not empowered to solve customers' problems, implementing Enterprise Feedback Management will not solve that problem. At best, it might make it more obvious that there's an issue with corporate policies, but you're still going to have to drain that swamp to fix things.

On the other hand, EFM can be a valuable tool when your organization is ready and able to make better use of customer feedback from top to bottom.

The mistake is in thinking that the tool will, by itself, drive the needed organizational changes. Instead, implementing the technology is something companies often do because it's easier than addressing the real problems.

Technology vendors are happy to encourage this thinking: it's easier to sell a product that the customer thinks will solve all their problems. But the net result is disappointment when the big technology projects fail to deliver the hoped-for results.

We saw this a generation ago, when CRM was the hot new thing. Failed CRM implementations were so common that some pundits went so far as to predict that CRM itself would prove to be just a fad. Of course CRM wasn't a fad: eventually we figured out what CRM is useful for (keeping track of customers), and what CRM couldn't do all by itself (increase sales, make your customers happier and more loyal). Today nobody questions the value of CRM, but we also have much more realistic expectations and nobody begins a CRM project thinking it will fix deep-seated organizational problems.

In the Customer Experience world, we need to keep in mind that our problems are often not solvable by technology. Technology can help, but the root causes are usually leadership, culture, people, and processes.

The good news is that, while the work may be difficult and slow, these problems are solvable with the right commitment.

And you might need to dig up a few skeletons along the way.

Dark Patterns in Customer Experience

Harry Brignull has spent some time collecting and contemplating "Dark Patterns" in online commerce: the unethical, manipulative, and sometimes outright deceptive things companies do to push customers into choices they probably didn't want to make.

For example: the form with a "sign me up for email marketing" checkbox in tiny type at the bottom, which you need to find and uncheck if you don't actually want to sign up for spam. Brignull has examples that take this to an appalling (and hilarious) extreme.

Brignull's full 30-minute talk shows the evolution of Dark Patterns and offers some insights into why companies persist in these customer-unfriendly (and sometimes illegal) tactics. He also maintains a website with a catalog of Dark Patterns he's collected over the years, and it's well worth browsing.

Coming from a Customer Experience perspective, I think it's especially useful to think about how Dark Patterns come to be, and how they illustrate some of the challenges in trying to design and implement outstanding CX. Dark Patterns almost always involve either misleading the customer or making it hard for the customer to do something (like cancel a subscription). The end result is likely to be an upset customer (or former customer) and very poor CX.

By the time the customer gets upset--when she discovers she's been fooled or trapped--the company already has what it wants. It's too easy for the company to think that CX doesn't matter, because the short-term metrics (sales, newsletter subscriptions, etc.) don't adequately capture the damage that's being done to the customer relationship.

So while you may be able to boost the percentage of customers who buy travel insurance over the short term, those customers aren't going to be happy when they discover the extra charge. They probably won't be fooled a second time, and may take their business elsewhere or warn their friends. Some may call and demand refunds.

In extreme cases, government regulators may get involved.

In most cases, CX professionals who are doing their jobs properly will be working against Dark Patterns. The challenge, as it so often is in the Customer Experience world, is to help the rest of the organization understand that while manipulation, deception, and intentional barriers may sometimes improve short-term metrics, they rarely pay off in the long run.

Newsletter #99: Errors About Margin of Error

We just published the 99th issue of our newsletter, Quality Times. This month I discuss what the Margin of Error in a survey means (and more importantly, what it doesn't mean). You can also read about what happened when one company's survey left a very poor brand impression for some of its customers.

This newsletter is one of the ways we get to know prospective clients. So if you find this useful and informative, please help us out by forwarding this to other people who might also enjoy it, and encourage them to subscribe. You can subscribe to this newsletter on our website.

We Want Every Customer to be 10% Satisfied!

It's a typo to be sure, but sometimes mistakes can hold a kernel of truth.

I can think of a couple of companies that probably really do strive for 10% satisfaction.

Falling Through the Cracks

Most companies can handle the ordinary customer service issues just fine. Often, the difference between a company with terrible service and great service is what happens when things go a little wrong. Do customers get their problems resolved quickly and painlessly, or do they fall through the cracks and get ground up by the machine?

Last week I had the experience of being ground up by the Verizon machine.

I have a prepaid Verizon SIM for my iPad, and I was on vacation in a rustic camp on an island in Northern Minnesota. The iPad was my only reliable Internet service, and when I used up my data allotment I needed to refill my account.

At first I tried to use the "Manage my Verizon Account" feature on the iPad, but that gave me an error message and instructed me to call a toll-free customer service number. I called the number and explained my problem to the customer service rep, who told me that I needed to log on to my Verizon account through a web browser to refill it.

This presented a problem, since the iPad's data allotment was used up and there was no other reliable Internet connection available. I explained this to the Verizon rep, who gave me a verbal shrug and told me there was no other way.

So I sat at the end of the dock, where I could just barely get an Internet connection on my T-Mobile phone, to slowly and painfully navigate the Verizon website. After probably a half hour of this, I was told that I could not reactivate my account online, and I was given a different customer service number to call.

I called that number, explained my problem to the rep, who forwarded me to a different department and another rep, who forwarded me to another rep in another department, who forwarded me to another department. Of course I had to explain my problem all over again with each rep and read off the same IMEI and ICCID numbers each time.

For those playing along at home, the IMEI is 15 digits and the ICCID is 19 digits. You can imagine how much fun it was reading 34 digits to each of four different Verizon reps, all while sitting at the end of a dock with the wind blowing across the microphone of my phone.

The final rep, who was apparently in the same department as the very first person I spoke to, chastised me for calling back after I'd already been told I needed to go online (seriously!). When I explained to her that I had managed to go online and that hadn't worked either, she offered to transfer me into an IVR where I could refill my account through the phone.

After punching in the same 34 digits into the IVR, the prerecorded voice pleasantly informed me that I could not refill my account through the IVR and I would need to call customer service.

It was at that point (after two hours, five reps, and three different self-service channels had yielded nothing) that I gave up on trying to give Verizon any of my money. There is a happy-ish ending, though: I managed to find a particular spot on the island where I could leave my T-Mobile phone, activate the hotspot, and get a solid Internet connection in the cabin. T-Mobile was a lot cheaper, too.

I'm not sure what crack I fell through at Verizon that day, and I don't think I want to find out. Whatever the situation, it was clear that none of the normal service channels were able or willing to help me.

Verizon likes to talk a lot about how they have the best coverage, and there's no question that Verizon's coverage on the island was much better than T-Mobile's. But in the end, Verizon's coverage didn't matter because T-Mobile made it so much easier to be their customer.

The lesson from that day is that it doesn't matter how great your core product or service is if your overall customer experience is bad enough.

Surveys Leave Brand Impressions

Surveys don't just collect data from participants. Surveys also give the participants insights into what your priorities are, and this can impact your brand image.

Computer game company Ubisoft learned this the hard way recently, when it sent a survey to its customers. The first question asked for the customer's gender. Customers who selected "Female" were immediately told that their feedback was not wanted for this survey.

While I'm sure this was not the intended message, it definitely came across to some Ubisoft customers as insensitive to women who enjoy playing games like Assassin's Creed (such people do exist). The company quickly took the survey down and claimed it was a mistake in the setup of the survey.

Whether this was a genuine mistake or an amazingly bad decision by a market researcher who got a little too enthusiastic about demographic screening, it definitely reinforces the image of the game industry as sexist and uninterested in the half of the market with two X chromosomes.

While this might be a particularly egregious example, it's important to remember that customer feedback really is a two-way street. While your customers are telling you how they feel about you, you are also telling your customers a lot about your attitudes towards them. For example:

  • Do you respect the customer's time by keeping the survey short and relevant?
  • Do you genuinely want to improve by following up and following through on feedback?
  • Do you care about things that are relevant to the customer?
  • Do you listen to the customer's individual story?

The lesson is that you should always think about a survey from the customer's perspective, since the survey is leaving a brand impression on your customers. While your mistakes might not be as embarrassing as Ubisoft's, you do want to make sure the impression you leave is a positive one.

Mistakes about Margin of Error

Pop quiz time!

Suppose a company measures its customer satisfaction using a survey. In May, 80% of the customers in the survey said they were "Very Satisfied." In June 90% of the customers in the survey said they were "Very Satisfied." The margin of error for each month's survey is 5 percentage points. Which of the following statements is true:

  1. If the current trend continues, in August 110% of customers will be Very Satisfied.
  2. Something changed from May to June to improve customer satisfaction.
  3. More customers were Very Satisfied in June than in May.

Answer: We can't say with certainty that any of the statements is true.

The first statement can't be true, of course, since outside of sports metaphors you don't ever get more than 100% of anything. And the second statement seems like it might be true, but we don't have enough information to know whether the survey is being manipulated.

But what about the third statement?

Since the survey score changed by more than the margin of error, it would seem that the third statement should be true. But that's not what the margin of error is telling you.

As it's conventionally defined for survey research, the margin of error means that if you repeated the exact same survey a whole bunch of times but with a different random sample each time, there's an approximately 95% chance that the difference between the results of the original survey and the average of all the other surveys would be less than the margin of error.

That's a fairly wordy description, but what it boils down to is that the margin of error is an estimate of how wrong the survey might be solely because you used a random sample.
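
For readers who want to see the arithmetic, here is a minimal sketch in Python, assuming the textbook normal approximation for a proportion drawn from a simple random sample; the 250-respondent sample size is an invented example chosen to roughly reproduce the 5-point margin from the quiz above:

    import math

    def margin_of_error(p, n, z=1.96):
        """Approximate margin of error for a surveyed proportion.

        p -- observed proportion (e.g. 0.80 for 80% "Very Satisfied")
        n -- number of respondents in the random sample
        z -- z-score for the confidence level (1.96 is roughly 95%)

        This estimates only the error from random sampling.
        """
        return z * math.sqrt(p * (1 - p) / n)

    # A sample of about 250 respondents at 80% "Very Satisfied"
    # gives a margin of error of roughly 5 percentage points.
    print(round(100 * margin_of_error(0.80, 250), 1))  # -> 5.0

The square root in the formula is also why margins of error shrink so slowly: quadrupling the sample size only cuts the margin of error in half.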

But you need to keep in mind two important things about the margin of error: First, it's only an estimate. There is a probability (about 5%) that the survey is wrong by more than the margin of error.

Second, the margin of error only looks at the error caused by random sampling. The survey can be wrong for other reasons, such as a bias in the sample, poorly designed questions, active survey manipulation, and many, many others.

Margin of Error Mistakes

I see two very common mistakes when trying to understand the Margin of Error in a survey.

First, many people forget that the Margin of Error is only an estimate and doesn't represent some magical threshold beyond which the survey is accurate and precise. I've had clients ask me to calculate the Margin of Error to two decimal places, as though it really mattered whether it was 4.97 points or 5.02 points. I've actually stopped talking in terms of whether something is more or less than the margin of error, instead using phrases like "probably noise" if it's much less than the margin of error, "suggestive" for things that are close to the margin of error, and "probably real" for things that are bigger than the margin of error and I don't have any reason to disbelieve them. This intentionally vague terminology is actually a lot more faithful to what the data is saying than the usual binary statements about whether something is statistically significant or not.
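
To make that vocabulary concrete, here is a toy sketch in Python; the cutoffs of one-half and one-and-a-quarter times the margin of error are invented placeholders for illustration, not statistical constants:

    def interpret_change(change, margin_of_error):
        """Translate a score change into intentionally vague language.

        The cutoffs below are illustrative assumptions, not standards.
        """
        ratio = abs(change) / margin_of_error
        if ratio < 0.5:
            return "probably noise"
        elif ratio < 1.25:
            return "suggestive"
        return "probably real, absent reasons to disbelieve it"

    # The quiz scenario: a 10-point jump against a 5-point margin.
    print(interpret_change(10, 5))  # -> probably real, absent ...

Even a "probably real" change deserves scrutiny, which brings us to the second mistake.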

Second, many people forget that there are lots of things that can change survey scores other than what the survey was intended to measure, and the Margin of Error doesn't provide any insight into what else might be going on. Intentional survey manipulation is the one we always worry about (for good reason: it's common and sometimes hard to detect), but there are many other things that can push survey scores one way or another.

It's important to keep in mind what the Margin of Error does and does not tell you. Do not assume that just because you have a small margin of error the survey is automatically giving accurate results.

One Picture that Captures the Essence of the CX Challenge

The challenge of trying to promote good Customer Experience practices can be summed up by one picture I ran across today. If you can't read the image, it's a sign from a clinic that reads, "You are not a number to us. Our goal is to ensure you have the best experience possible. Please take a number to help us serve you better."

So, um, yeah, about that...

CX is often in tension with other parts of an organization, which can make it challenging to go from saying the right things to doing the right things. No matter how much effort and research the Customer Experience team puts into creating a better experience, there's always someone else who thinks some other way will be better.

The result can be, like the sign, a jarringly obvious reminder that the organization doesn't really believe its own customer-friendly hype. The person who made that sign probably didn't see the obvious disconnect; chances are it seemed like a perfectly reasonable message for customers.

Sometimes it takes that customer's outside perspective to break through the company's internal blinders.

(By the way, I tried to find the original source of this picture and couldn't, but it has been posted in several other places. Here's a reverse image search if you want to see where it's been.)

Spurious errors

I consider it something of a professional responsibility to take surveys when they're offered. I don't do every single survey (I tried that once a few years ago, and it wound up consuming way too much of my time), but I try to do them when I can.

A distressing theme is the number of bugs and errors I see in customer surveys. I haven't tracked it formally, but I would guess that at least one in ten online surveys I attempt is broken in some way. It's alarming how many companies are apparently unaware that their customer surveys don't work.

Today's example comes from MicroCenter, where the survey form told me, "The errors below must be corrected before you can proceed." Which would be all well and good if there had been any errors in the form, but there weren't.

So I guess MicroCenter won't benefit from my important perceptions and observations.

It's Your Duty to Complain

I talk a lot in this blog about how companies need to listen to their customers, collect fair and accurate feedback, and take action to improve. But do customers have a responsibility, too?

In a thought-provoking and well-reasoned article, Cory Doctorow argues that customers have a duty to complain, because that's how companies will know what's wrong and how to fix it.

Most people don't want to complain. Complaining is confrontational. Complaining risks being labeled a "complainer," like those crazy people everyone who works in retail or customer service knows so well. And there are, unfortunately, a lot of companies where complaints tend to get lost in the machinery.

So rather than complain, many customers quietly take their business elsewhere, or suffer in silence if switching isn't an option.

As uncomfortable as it may be, complaining before taking your business elsewhere is almost always a better option for both the customer and the company. Complaining gives the company a chance to fix the problem, giving the customer better service and saving the company a customer. This is practically the definition of win-win.

In many cases companies will be grateful to get thoughtful and meaningful customer complaints. To make an effective complaint you should:

  1. Be calm. Remember that the person you are complaining to is another human being and is probably not personally responsible for your problem. Nobody likes dealing with angry customers.
  2. Be reasonable. You should ask the company to fix the problem and deliver the product or service you paid for, but a minor inconvenience does not usually entitle you to financial compensation (though many companies will do something to apologize). Don't make threats.
  3. Remember not everyone can fix your problem. Most people in any given company have only limited authority, and if your complaint is complex, the first person you complain to might not be able to help. If that's the case, ask to talk to someone who can help. If you're complaining about a general company policy, it may be a slow process (with lots of complaints from lots of customers) to make a change.
  4. Be prepared. Have the facts and your story straight. Be ready to explain in clear, concise terms exactly what happened, why you felt this was wrong, what impact it had on you, and what the company needs to do.

As a Customer Experience professional, I know that a rational and well-presented customer complaint is a valuable gift. The worst thing that can happen (from a CX perspective) is when a customer leaves without complaining and then tells friends (in person or through social media) about how terrible the experience was.

So while I wouldn't say that customers have a duty to complain, I do think everyone would be much better off if customers did it more often.

Issue #98 of Vocalabs' Newsletter

We just published the 98th edition of Quality Times, our newsletter about customer experience and customer feedback.

This newsletter is one of the ways we get to know prospective clients. So if you find this useful and informative, please help us out by forwarding this to other people who might also enjoy it, and encourage them to subscribe. You can subscribe to this newsletter on our website.

This month's newsletter is all about the "why" of collecting customer feedback. My first article talks about the tradeoff between getting positive feedback and honest feedback, and making sure you know which you prefer. The second article is about that too-often overlooked piece of survey design, understanding the goals of the program.

As always, I hope you find this useful and informative.

Sweating the Big Stuff

"Don't sweat the small stuff" as the old saying goes. Meaning: don't waste your time on things that are unimportant.

Unfortunately there's a lot of sweat-soaked small stuff in the world of customer feedback. Here are some things that often suck up a lot of time and effort relative to how important they are:

  • The exact wording of survey questions.
  • The graphic design and layout of reports and dashboards.
  • The precise details of how metrics are calculated (spending too much time on this is a warning sign that you might have fallen into the Metric-Centric Trap).
  • Deciding whether individual surveys should be thrown out (this is another warning sign of the Metric-Centric Trap).
  • Presenting survey trend data to senior leadership.

Instead, you should be focusing on some of these things that don't get nearly as much attention for the amount of benefit they bring:

  • Closing the loop with customers.
  • Providing immediate coaching of employees based on customer feedback.
  • Regularly updating the survey to keep it relevant to the business.
  • Doing root cause analysis of common customer problems.
  • Keeping senior leadership actively engaged in advancing a customer-centric culture.

The difference between the small stuff and the big stuff is that the small stuff tends to be (relatively) easy and feels important. The big stuff often requires sustained effort and commitment but pays off in the end. It's the difference between an exercise bicycle and a real bike: both take work, but only one will get you anywhere.

Vocalabs' Newsletter #97 is Published

We just published the latest edition of Quality Times, Vocalabs' newsletter about measuring customer service quality.

This month's newsletter features our hot-off-the-presses whitepaper, The Metric-Centric Trap, which I wrote in collaboration with Raj Sivasubramanian of eBay. I also write about the use of incentive-based pay in the customer experience world (spoiler: I'm not convinced it's a good idea).

This newsletter is one of the ways we get to know prospective clients. So if you find this useful and informative, please help us out by forwarding this to other people who might also enjoy it, and encourage them to subscribe. You can subscribe to this newsletter on our website to automatically get each new edition when it is published.

As always, I hope you find this useful and informative.

The Metric-Centric Trap

In my work I've found a lot of companies that genuinely want to improve their customer experience and invest a certain amount of time and resources into the effort. Often they go to great lengths to track and measure the voice of the customer, yet the customer experience never seems to improve much.

The problem is that these companies are investing their time and energy in relatively low-value activities like tracking survey metrics, and ignoring things that can move the needle like closed-loop feedback and actively training employees with the voice of the customer. It's an easy trap to fall into, since senior leadership teams often like to see numbers and metrics, but it can make it hard to really understand and improve the customer experience because the statistics obscure all the individual customers' stories.

I've had the great pleasure of collaborating with Raj Sivasubramanian, the Senior Manager of Global Customer Insights at eBay, to develop these ideas into a new whitepaper: Avoiding the Metric-Centric Trap on the Journey Towards Becoming Customer-Centric.

In this whitepaper we explore our experiences with companies that have wanted to be customer-centric but wound up metric-centric instead, discuss what's so bad about being metric-centric, and share some ways to break out of the trap and make more effective use of customer feedback.

You can download the whitepaper from our website.

Bruce Temkin on Qualitative Data

Bruce Temkin posted a short video recently making the case that we need more qualitative research in the customer experience world.

The problem is that boiling down customer feedback to statistical metrics obscures the fact that there are thousands of individual customers behind those numbers. Each customer has their own story and their own experience, and it's important to not lose those individual experiences in the analysis.

The way to not lose sight of customers is through qualitative data that captures their stories. That doesn't mean not calculating metrics. It means making sure you are including that qualitative feedback as part of the process and then using it.

Too often when companies collect qualitative feedback they either don't use it at all, or they try to boil down the customers' stories into statistics using analytics tools.

What needs to happen instead is someone needs to read or listen to individual customers' feedback to develop understanding for customers' emotions and experiences.

Statistics have their place of course. But statistics don't readily lead to empathy and understanding.

Do You Want Positive Feedback or Honest Feedback?

Which would you rather have: Positive customer feedback, or honest customer feedback?

Most people would probably say "both," but it's not always possible to have both. If the honest customer feedback isn't positive, then you can only get one or the other.

So when forced to choose, I would generally prefer honest feedback over positive feedback. As long as the person giving me feedback isn't being cruel or demeaning, I would rather hear about ways I can improve than get my ego stroked. At least that's what I say, and what the rational part of my mind thinks. In reality, hearing negative feedback can be hard, even though it's also much more valuable.

I think most business leaders would probably agree with me: it's better to get honest feedback from customers than positive feedback.

But the real-world incentives at most companies don't support this. Incentives based on customer feedback are always designed to encourage positive feedback, not honest feedback. That's because it's easy to measure how positive the customer feedback is, but almost impossible to measure how honest it is.

Where companies base bonuses and compensation on customer surveys, this leads to a perverse incentive to get customers to give higher scores no matter the customer's actual opinion. Is it any wonder that survey manipulation is so common?

The problem is that when customers don't give you the straight story, the feedback has no value. You might as well not do the survey at all if you can't get honest feedback.

So what's the solution?

The first step has to be to stop undermining yourself. If you give employees incentives that reward positive feedback rather than honest feedback, stop.

That probably means you shouldn't be using survey scores for employee bonuses at all, since it's difficult-to-impossible to create an incentive system that encourages honest feedback.

The next steps are much harder. The goal is to create a culture where customer feedback is seen as constructive criticism and an opportunity to improve. That will require significant effort coaching employees in how to listen to customers without becoming defensive or negative.

But if your customer surveys are biased towards giving you positive, rather than honest, feedback, then you're not getting any value anyway.

Join the CXPA

I don't often plug other organizations in this blog, but I'm going to make an exception for the Customer Experience Professionals Association (CXPA). We have an amazing local chapter here in Minneapolis that holds meetings every other month, and the CXPA's annual conference is coming up in early May in Atlanta.

What makes CXPA different from other groups and events in this space is that CXPA is a non-profit professional association run by and for CX practitioners. That means that it stays focused on helping CX leaders connect with each other to share ideas and best practices. Since Customer Experience is much more an art than a science, finding out what's worked for others in the field can have tremendous value.

Any time someone asks me the best way to learn more about Customer Experience, I always tell them to join the CXPA and become active in their local and online communities. And now I'm telling you.

Vocalabs Newsletter: NCSS Results for Communications

We just published issue #96 of Vocalabs' Newsletter, Quality Times. This month we focus on our recent NCSS results for the Communications industry, showing that T-Mobile has continued its improvements since 2012, while Comcast still brings up the rear. The complete Executive Summary is available for download.

As usual, I hope you find our newsletter interesting and enjoyable. If you like it, please feel free to subscribe via email. Subscribers are the first to get new issues when they are published.

Incentives

A few weeks ago the Harvard Business Review published an article with the provocative title, Stop Paying Executives for Performance. The main thesis was, as you might expect, that companies should stop paying executives for performance. The authors give a number of reasons:

  • Performance-based pay only really improves performance for routine tasks, not activities which require creativity and out-of-the-box thinking.
  • The best performance tends to come when creative professionals focus on improving their skills rather than focusing on outcomes.
  • Intrinsic motivation is more powerful than extrinsic motivation, and performance-based pay tends to only drive extrinsic motivation.
  • Performance-based pay encourages people to cheat to hit bonus thresholds.
  • No performance measurement is perfect.

You can argue about whether the authors' conclusion is right (and in the comment section of the article several people argue that they are wrong), but there's no question they make a strong case for an interesting and counterintuitive idea.

To me as a Customer Experience professional, what's most interesting about this is that the same arguments can be made about performance-based pay for many customer-facing employees, since delivering a great customer experience often requires employees to go beyond the rote skills and find creative solutions to the customers' problems.

A great example of this is the declining use of Average Handle Time (AHT) in many call centers. Over the past ten years I've spoken with a lot of call center managers who have stopped evaluating (and compensating) their customer service reps based on how long they spend on the average call. Every single one has described significant improvements in the customer experience. Several also said that their AHT didn't even go up as a result.

Removing AHT from the employee's compensation meant that reps were freed from having to worry about the mechanical part of their job--how quickly they could get someone off the phone--and instead think about the best ways to solve the customer's problem. The other problems with performance-based pay also show up in customer service, since focusing on AHT can lead employees to cheat (for example, by occasionally "accidentally" hanging up on a customer at the beginning of a call). AHT turns out to not be as tightly tied to operational cost as most managers expect, because customers will often call multiple times if they don't get the service they need the first time.

Moving away from performance-based pay would be an enormous cultural change in today's business environment, where the implicit assumption is that money is the best, or even the only, way to motivate employees. But there are decades of research in behavioral economics showing that money is only a mediocre way to drive performance in many circumstances.

If you're using performance-based pay to try to improve your customer experience, it's worth thinking about whether you're really driving the behavior you want.

NCSS Results: T-Mobile Keeps Getting Better, Comcast Still Trails

We just published the 2015 results for the National Customer Service Survey (NCSS) of Communications companies. This ongoing survey tracks customer service quality at AT&T, CenturyLink, Comcast, DirecTV, Dish Network, Sprint, T-Mobile, Time Warner, and Verizon.

In this year's data we find that T-Mobile has extended its record of improving customer service going back to 2012, making substantial gains in many of our survey metrics; the company now leads the pack in eight of the nine scores we track.

Back in 2012, T-Mobile was performing poorly on our survey. T-Mobile had just come out of a failed attempt to merge with AT&T (abandoned in December 2011) and its scores had been sinking. Since then, however, T-Mobile has rebranded itself as the "un-carrier" and made deliberate efforts to be more consumer-friendly. This has been successful, as shown by the fact that T-Mobile's moves to abandon such hated industry practices as two-year contracts and overage charges have now become the norm in the mobile phone industry.

Our data shows that T-Mobile's efforts have extended to improving its customer service more generally, with sustained multi-year improvements across the board in our customer-service metrics. While we don't have any insight into T-Mobile's internal operations, from our data it appears that the company has been making a significant and sustained effort to improve its customer service operations.

Comcast, meanwhile, has posted some small gains over the past two years (CenturyLink, Comcast, DirecTV, Dish, and Time Warner were added to our survey in 2013), but not enough to pull it out of last place in our survey rankings. In 2015 Comcast held the bottom slot in six of our nine metrics: better than 2013 when Comcast was last place in eight of nine metrics, but still not a stellar performance.

The complete survey data is available to NCSS subscribers, and the Executive Summary can be emailed to you through our website:

Download NCSS Communications 2015 Executive Summary

You're Not Managing If You Only Measure

"You can't manage what you don't measure" is the old business saw.

But measurement is only the first step. Once you have the data, you need to put it into action.

A lot of customer feedback programs fail to go from measurement to management. They collect data and metrics but don't follow through with the actions needed to improve the customer experience.

The problem is that it's too easy to think that survey metrics are telling the whole story, and that improving them is the goal.

But neither of those statements is true. Survey metrics do not tell the whole story; they give a statistical aggregation of hundreds or thousands of unique customers, each with their own experiences and problems. And improving survey scores should never be the goal, but rather an expected side effect of the true goal: making thousands (or millions) of individual customers have better opinions about the company.

Using survey data creates a dilemma: on the one hand it's necessary to boil down the data into a handful of numbers in order to make sense of the thousands of individual customers' opinions and get a high level view of how you're performing. On the other hand, those statistical measurements also hide the individual customers' experiences and opinions, making it hard to understand what the numbers actually mean.

The solution is to bring individual customers back into the management part of the equation. Survey scores are always going to be necessary to track progress, but when it comes to actually making changes you should focus on individual customers.

That means using feedback from individual customers (especially the open-ended parts of the survey) for coaching and training, spending time reading (or listening to) survey comments when looking at process changes or improvements, and making individual customers' stories (not statistics) the core of survey reporting.

So while it's true that you can't manage what you don't measure, you also have to be careful not to let the measurement get in the way of the management.

Vocalabs Newsletter #95

We just sent out the latest issue of Quality Times, our mostly-monthly newsletter about customer feedback and the customer experience. Featured in this month's newsletter are the National Customer Service Survey (NCSS) results for the banking industry in 2015. Next month we plan to publish the results for Communications, so stay tuned for those.

As always, I hope you find the newsletter useful and informative. You can subscribe to receive future issues via email.

NCSS Results: Chase on top, Bank of America Improves

We just published the results for the National Customer Service Survey (NCSS) on Banking for 2015. This is our ongoing syndicated research program comparing the customer service at four major consumer banks: Bank of America, Chase, Citi, and Wells Fargo.

For the NCSS we interview customers within a few minutes of a customer service call to one of the companies we track. This is very different from other published research, where participants are asked to recall a customer service experience which may have happened months ago. As a result, we are able to get very reliable, detailed survey data about what's actually happening when a customer picks up the phone.

In 2015 we saw Bank of America make significant improvements in our survey. In one year, BofA's score for Call Resolution went up 13 points, its score for Ease of Reaching an Agent went up 11 points, and overall satisfaction with the customer service call was up 13 points over the past two years.

Chase took the honors for best scores overall, even though it didn't have as dramatic an improvement as Bank of America. Chase had the highest scores in seven of the nine key metrics we track in our report, and generally continued the upward trajectory it has been on since we started our survey in 2011.

Meanwhile Citi took a beating, losing 13 points in overall satisfaction with the company, 12 points in satisfaction with the customer service call, and claiming the bottom slot in eight of the nine metrics.

The 2015 results represent a reversal for both Bank of America and Citi. When we started the survey in 2011, Chase and Citi were posting lower survey scores than Wells Fargo and Bank of America. Chase and Citi then posted several years of improvements, while Bank of America's scores stayed generally flat. This year, though, Bank of America's gains have put it back in the middle of the pack, and Citi's scores have fallen behind its competitors.

It's hard to know what's driving these major changes this year, so we can only speculate. Improving the customer experience is a process, not a project, and it's possible that Citi has been distracted by other priorities.

You can get a copy of the Executive Summary sent to you through our website:

National Customer Service Survey on Banking, 2015 Executive Summary
