The Customer Service Survey

Vocalabs' Blog

Vocalabs Newsletter #95

We just sent out the latest issue of Quality Times, our mostly-monthly newsletter about customer feedback and the customer experience. Featured in this month's newsletter are the National Customer Service Survey (NCSS) results for the banking industry in 2015. Next month we plan to publish the results for Communications, so stay tuned for those.

As always, I hope you find the newsletter useful and informative. You can subscribe to receive future issues via email.

NCSS Results: Chase on Top, Bank of America Improves

We just published the results for the National Customer Service Survey (NCSS) on Banking for 2015. This is our ongoing syndicated research program comparing the customer service at four major consumer banks: Bank of America, Chase, Citi, and Wells Fargo.

For the NCSS we interview customers within a few minutes of a customer service call to one of the companies we track. This is very different from other published research, where participants are asked to recall a customer service experience which may have happened months ago. As a result, we are able to get very reliable, detailed survey data about what's actually happening when a customer picks up the phone.

In 2015 we saw Bank of America make significant improvements in our survey. In one year, BofA's score for Call Resolution went up 13 points, its score for Ease of Reaching an Agent went up 11 points, and overall satisfaction with the customer service call was up 13 points over the past two years.

Chase took the honors for best scores overall, even though it didn't have as dramatic an improvement as Bank of America. Chase had the highest scores in seven of the nine key metrics we track in our report, and generally continued the upward trajectory it has been on since we started our survey in 2011.

Meanwhile Citi took a beating, losing 13 points in overall satisfaction with the company, 12 points in satisfaction with the customer service call, and claiming the bottom slot in eight of the nine metrics.

The 2015 results represent a reversal for both Bank of America and Citi. When we started the survey in 2011, Chase and Citi were posting lower survey scores than Wells Fargo and Bank of America. But Chase and Citi made several years of improvement, while Bank of America's scores were generally flat. This year, though, Bank of America is back in the middle of the pack with its gains, and Citi's scores are behind its competitors.

We can only speculate about what might be driving these major changes this year. Improving the customer experience is a process, not a project, and it's possible that Citi has been distracted by other priorities.

You can get a copy of the Executive Summary sent to you through our website:

National Customer Service Survey on Banking, 2015 Executive Summary

Sharing Feedback, Constructively

Back when I was in college I took a creative writing class, and part of this class was to critique each other's work: read what the other students wrote, and offer constructive criticism and feedback. It turned out that was one of the hardest things to do effectively, because most people instinctively get defensive about any negative feedback. They're just too emotionally invested in their work to accept even mild criticism dispassionately.

The same thing can happen when you share negative customer feedback. Often, an employee's intense and emotional reaction is that someone is trying to tell them that they're bad at their job, and they react defensively. At the company level, most people take a lot of pride in the organization they work for (even when it's not justified) and have a hard time hearing that something might be broken. Breaking through this takes a lot of finesse and you have to be careful about how you present and frame the feedback.

I did eventually get pretty good at giving and receiving constructive criticism, and that's turned out to be really helpful professionally. Here are my suggestions for making negative feedback a positive experience:

  1. Most important, always have the attitude of constructive criticism. This is about problem solving, not assigning blame. A customer having a bad experience does not mean the company is bad at CX (even if you think it actually is, don't let that be part of the message). Everyone makes mistakes, and the goal is to identify the mistakes so they are less likely in the future.
  2. Present positive feedback along with negative, and lead with the positive. This helps set the tone of, "We're generally doing a good job and we'd like to find ways to do even better."
  3. Focus on the customer's perceptions. For example, if a customer complains about a late shipment, this should be framed as "A customer felt his delivery expectations were not met. Let's try to figure out why the customer felt this way," rather than, "We're really dropping the ball on deliveries!" There can be a lot of reasons for a negative perception, not all of them related to what actually happened.
  4. Select the feedback you choose to present carefully. Not all negative feedback is credible, but you should reinforce the customer feedback with other data that supports that this is a problem worth paying attention to (for example, "We're seeing more complaints about late deliveries this quarter. This customer's experience is similar to a lot of other complaints"). Share feedback that's articulate, believable, and relatable. Don't share the crazies, as entertaining as they may be.

Sharing customer feedback, both with individual employees and the organization as a whole, is a powerful way to motivate action but needs to be done carefully to inspire the right action and avoid negativity.

Newsletter #94 Is Published

We've published issue #94 of Quality Times, our newsletter about customer experience and customer feedback programs. 

This month, rather than doing the usual thing of writing about the industry trends for the new year, I wrote about the Non-Trends. These are the basic truths of Customer Experience work which were true last year, will be true this year, and will still be true in 2017. Most of these are much more important than the hot trends for 2016.

This newsletter is one of the ways we get to know prospective clients. So if you find this useful and informative, please help us out by forwarding this to other people who might also enjoy it, and encourage them to subscribe. You can subscribe to this newsletter on our website.

Goodhart's Law

In the field of macroeconomics, Goodhart's Law states that "When a measure becomes a target, it ceases to be a good measure."

As an economic theory this is the rough equivalent of Murphy's Law, though with a kernel of deep truth at the core. Macroeconomic measurements distill an enormously complex system down into a handful of simple numbers that require considerable effort to measure. For example, in the mid-20th century in the United States, we had a problem with inflation. Low inflation is desirable because it tends to correlate with economic stability and predictability and encourages the middle class to save and invest for the future. But when policymakers initially tried to slow inflation through wage and price controls rather than addressing the underlying problems in the economy, the result was an unbalanced economy and (eventually) the stagflation of the 1970s. Of course this is a grossly oversimplified summary of 20th century economic history, but the point is that by trying to force inflation to hit a target, the inflation rate stopped being a good proxy for economic stability and predictability.

Goodhart's Law in Customer Experience

Goodhart's Law applies in the world of Customer Experience, too.

Most of the core metrics in any CX effort (for example, survey scores like Net Promoter or Customer Satisfaction; or internal metrics like Delivery Time) are used because they are strongly correlated with customers' future behavior, positive word-of-mouth, and long-term growth of the company.

But if you try to turn a CX metric into a target, it may no longer be useful as a measure of the customer experience. That's because the things you really want to change (customers' future purchases, positive word-of-mouth, long-term growth, etc.) are the result of many complex interactions inside the company and between the company and its customers. And it's often easier to hit a goal by gaming the system than it is to fix the underlying problems.

For example, in the case of ABRA Auto Body I blogged about a couple days ago, the company almost certainly did not set out to create a survey which would yield inflated, meaningless scores. Instead, they most likely determined that high survey scores were often strongly correlated to repeat business and new customers through recommendations.

But rather than explore the root causes of high (or low) customer satisfaction and address those, the company probably decided to simply give managers an incentive to hit a certain survey score and let them figure out how to do it.

The result is that it's much easier for a manager to print off a bunch of fliers instructing customers on how to answer the survey, than it is for them to think about how the customer journey might be improved. (It's possible that ABRA doesn't even give managers the authority or budget to change the things that might matter, in which case the manager may have no choice but to try to game the survey.)

The lesson should be obvious: If you want your CX metrics to be useful measurements of your customer experience, then you need to be very wary of how incentives invite manipulation.

ABRA Is Not Subtle About Survey Manipulation

Fellow CX professional Jason Kapel told me about a recent experience at ABRA Auto Body. He had his car repaired, and while the experience itself was fine, he found the flier to the right attached to his receipt.

If there was a Hall of Fame for the Hall of Shame, this would have a good chance of winning Most Blatant Example of Survey Manipulation. Not only does it tell the customer exactly how they want each question answered, at the bottom of the flier it instructs the customer not to take the survey if there was some problem with the repair.

Needless to say, this survey is not likely to get much honest and unbiased feedback from customers, nor is it going to identify ways to improve the customer experience. Pretty much the only thing this survey will do is allow the manager to boast about his great survey scores and claim whatever reward (or avoid whatever punishment) results from hitting his numbers.

All of which raises the question: what's the point of doing this survey?

I have to assume that either ABRA is unaware that its survey is being blatantly manipulated, or it doesn't care. Neither possibility speaks well of the commitment and attention the company is paying to improving its customer experience.

Customer Experience Non-Trends for 2016

It's the beginning of a new year, which means it's time for pundits and prognosticators to pull out their crystal balls and make predictions about the twelve months to come.

Bruce Temkin, for example, has published his 11 Customer Experience Trends for 2016 (why 11? Presumably because it's one better than ten). He has identified such things as Journey Designing, Empathy Training, and Predictive Analytics as areas to watch, and declared that 2016 will be The Year of Emotion.

Who am I to disagree?

But in my view, such trend articles miss the bigger picture, which is that the important facts of the Customer Experience profession will be pretty much the same in 2016 as they were in 2015 and earlier years. These are the non-trends, the things that don't change, and most of them are more important than the trends.

So here I present my Customer Experience Non-Trends for 2016. Not only are most of these non-trends more important to the average CX professional than the Trends, you can read these safe in the knowledge that in January 2017 I can just republish the same article with a different date.

Non-Trend 1: Engaged Leadership Is The Single Most Important Element in CX

The companies delivering a great customer experience almost always have leadership actively engaged in continuously trying to deliver a better experience. Conversely, companies where leadership views CX as a one-time project, or something to delegate, generally don't succeed in delivering a superior experience.

The lesson here is simple: if you want to improve the customer experience in your organization, the most important thing you can do is get the senior leadership to care and make it a personal priority.

Non-Trend 2: Great CX Is About Getting a Thousand Things Right

Sweat the details. A grand strategy or a new piece of technology will not, by themselves, move the needle on your customer experience (though the right strategy and tools definitely make the job easier).

Unfortunately, "sweat the details" is not a sexy message and it doesn't help sell software and services. Many vendors make the empty promise that their solution will, by itself, transform your CX effort. Don't believe it. There is no magic bullet.

Non-Trend 3: Customer Experience Professionals Often Have a Tough Job

The field of Customer Experience has made great strides over the last decade or so, but it's still not easy. We've finally gotten to the point where most companies will at least say that the Customer Experience is a priority, but many of them have yet to internalize it. The leadership doesn't yet care enough to dedicate the needed resources, or they think that because they have a CX team the problem is solved and they can mostly ignore it.

So in a lot of places, the role of the CX professional will continue to revolve around getting leadership attention, finding the easy wins, and internal evangelism. This, unfortunately, is not likely to change any time soon.

Non-Trend 4: Great CX Drives Customer and Employee Passion, Which Creates Better CX

The sweet spot of customer experience is when your whole organization is focused on creating a better experience for customers, which makes customers want to do more business with you, and that makes employees want to help customers even more. Customer Experience becomes a positive feedback loop.

The unacknowledged truth is that most employees genuinely want to do a good job and have a positive impact on their customers. It's one of the most satisfying things we can do in our careers. A strong focus on CX creates not just more satisfied customers but also more satisfied employees.

Here's hoping for a terrific 2016!

What Are Your Goals?

Before you get into the nuts and bolts of designing a survey program, spend some time sharpening up what you hope to accomplish. A good understanding of the business goals of the survey will really help figure out the right sampling, questions, channel, and reporting. A lot of the time when I hear companies say they want to do a survey for the purpose of collecting customer feedback, it really means that they haven't thought a lot about what they plan to do with the feedback once it's collected. It's like saying you want to do a survey for the purpose of conducting a survey.

The basic ingredients are straightforward. Most surveys have as their goals some combination of:

  • Tracking metrics: Requires using a very consistent set of survey questions with a random sample selected to give an acceptable margin of error for calculating metrics. 
  • Improving the performance of individual employees: Requires targeting the survey sample to collect adequate feedback on each individual employee, asking open-ended questions about the experience, and delivering the feedback to front-line supervisors in real time. Recorded customer interviews are particularly valuable.
  • Identifying customer pain points: Requires a lot of open-ended questions and potentially additional follow-ups. Customers should be invited to tell their stories.
  • Testing or validating changes to the customer experience: Requires careful attention to test and control group samples, and a consistent set of metrics for the different test cases (see A/B Testing for Customer Experience).
  • Persuading the organization/leadership to make a change to the customer experience: Requires collecting a valid statistical sample that supports the proposed change, as well as persuasive customer stories which will carry emotional weight with others in the organization. Recorded customer interviews are particularly valuable.
  • Providing individual customers a chance to be heard: Requires offering the survey very broadly, even if that means a low response rate or far more completed surveys than would otherwise be needed. A robust closed-loop process is not optional.

So for example, if you've never done any transactional feedback before, your goal is probably going to be mostly about identifying customer pain points (i.e. trying to find out what you don't know) with a dash of tracking metrics thrown in. That probably means asking a couple of tracking questions and a lot of open-ended questions, and a random sample in the range of 400 completed surveys per reporting segment (enough to get a 5-point margin of error).
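The 400-surveys-per-segment figure comes from the standard margin-of-error formula for a proportion at 95% confidence, using the worst case p = 0.5. A minimal sketch (the function name is mine, purely for illustration):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a proportion measured from n responses.

    Uses the normal approximation; z=1.96 corresponds to 95% confidence,
    and p=0.5 is the worst case (widest interval).
    """
    return z * math.sqrt(p * (1 - p) / n)

# About 400 completed surveys yields roughly a 5-point margin of error:
print(round(100 * margin_of_error(400), 1))  # prints 4.9
```

Note the diminishing returns: quadrupling the sample to 1,600 only halves the margin of error, which is why 400 per segment is a common practical target.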

But if your goal is more directed to improving employee performance, things will be different. You will want to bias the survey sample to ensure each employee gets enough feedback to be useful (which also means un-biasing the sample to calculate metrics). You will probably also want to use customer interviews rather than automated surveys, since a recorded interview with the customer is much more effective at changing behavior than written comments and statistics.
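Un-biasing an oversampled survey usually means weighting each response by the inverse of its sampling rate before computing metrics, so that heavily sampled employees don't dominate the aggregate numbers. A minimal sketch (the scores and sampling rates here are hypothetical):

```python
def weighted_mean(scores, sampling_rates):
    """Inverse-probability-weighted mean.

    Responses drawn at a high sampling rate are down-weighted, restoring
    an estimate representative of the overall customer population.
    """
    weights = [1.0 / r for r in sampling_rates]
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Example: agent A's calls were sampled at 50%, agent B's at only 10%.
scores = [9, 8, 6, 7]
rates = [0.5, 0.5, 0.1, 0.1]
print(round(weighted_mean(scores, rates), 2))  # prints 6.83
```

The unweighted mean of these scores would be 7.5; weighting pulls the estimate toward the under-sampled agent's customers, who represent a larger share of the real population.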

Whatever your goals are, the most important thing is to have them. Surveys done for the sake of doing surveys tend to not be very useful.

Customer Survey Mistakes Almost All Companies Make

It's easy to do a survey, but it's hard to run an effective customer feedback program that leads to changes in a company's actions and improved customer experience. There are a number of common mistakes: so common that nearly all companies make at least one of these mistakes, and a lot of companies manage to hit the entire list:

Not Understanding the Purpose of the Customer Survey

If you don't know what you expect to accomplish through a customer feedback program, it's hard to structure it in a way that will meet your goals. For example, a survey designed to help improve the performance of customer-facing employees will be very different than one merely intended to track metrics. When I ask companies why they are running a survey, often I hear answers like, "To collect customer feedback," or "Because it's a best practice." Answers like that tell me that they don't have a clear sense of why they need a survey, other than for the sake of having a survey.

Asking Too Many Questions

Long surveys generally have a poorer response rate than shorter surveys, can leave the customer with a bad feeling about the survey, and often don't produce any more useful feedback than shorter surveys. In many cases, there is no good reason to ask a lot of questions, other than a need to appease a large group of internal stakeholders each of whom is overly attached to his or her favorite question or metric. It's easy to find the questions you don't need on your survey: go through all the questions and ask yourself, "Have we ever actually taken any action based on this question?" If the answer is no, the question should go.

Focusing on Metrics, Not Customers

Metrics are easy to fit into a numbers-driven business culture, but metrics are not customers. At best, metrics are grossly oversimplified measurements of your aggregate performance across thousands (or millions) of customer interactions. But behind those numbers are thousands (or millions) of actual human beings, each of whom had their own experience. Many companies focus solely on the metrics and forget the customers behind them. Metrics make sense as a progress marker, but the goal is not to improve metrics but to improve customer experiences.

Not Pushing Useful Data to the Front Lines Fast Enough

In many cases, creating a great customer experience isn't about installing the right platform or systems, it's making sure that thousands of little decisions all across the company are made the right way. The people making those decisions need to know how their individual performance contributes to the overall customer experience, and the best way to do that is to give them access to immediate, impactful feedback from customers. Too often, though, customer feedback gets filtered through a centralized reporting team, or boiled down to dry statistics, or delivered in a way that masks the individual employee's contribution to the whole.

Not Closing the Loop

Closed-loop feedback is one of the most powerful tools for making sure a customer survey inspires action in the company, yet even today most companies do not have a formal system in place to close the loop with customers. There are actually three loops that need to be closed: you need to close the loop with the customer, with the business, and with the survey. If you're not closing all three loops, then your survey is not providing the value you should be expecting.

Always Using the Same Survey

Companies change and evolve. Markets shift. Customers' expectations are not static. Entire industries transform themselves in just a few years. So why do so many customer surveys remain unchanged for years (or decades)? Surveys should be structured to respond to changing business needs and evolve over time, otherwise you're not collecting feedback that's relevant to current business problems. Surveys that never change quickly become irrelevant.

Not Appreciating Customers For Their Feedback

Finally, a lot of companies forget that when they do a survey they are asking a customer--a human being--to take time out of their day to help out. And they're asking for hundreds or thousands of these favors on an ongoing basis. But when the reports come out and the statistics are compiled, all those individual bits of human helpfulness are lost in the data machine. I know it's not practical to individually and personally thank thousands of customers for doing a survey, but it's not that hard to let customers know that you're listening to them and taking their feedback seriously. All too often the customer experience of completing a survey involves taking several minutes to answer a lot of questions and provide thoughtful feedback, and then it disappears into a black hole. You don't need to pay customers for taking a survey (in fact, that's often a bad idea), but you should at least stop and think about how helpful your customers are being and appreciate their efforts.

Issue #93 of Quality Times is Published

We just published the 93rd issue of Quality Times, our newsletter about measuring the customer experience. Email subscribers should be receiving their copies shortly, and you can read it on our website.

This month's theme is making sure you're putting your customer service efforts in the right places. Our first article is about how collecting more data isn't always a useful activity if it isn't the right data. Then we have an article about which customer experience efforts actually make a difference and why so many companies seem to focus on the low-value ones.

As always, I hope you find this useful and informative.

Treating Customers as Human

In a nice counterpoint to AT&T's "cease and desist" approach to customer suggestions, Consumerist has the aww-cute story of Delta's response to an 8-year-old's mailed suggestion. Rather than trot out the lawyers, Delta sent a friendly, personalized letter from an executive along with some Delta swag.

So instead of annoying a loyal customer and generating a slug of bad PR the way AT&T did, Delta gets some goodwill with a future customer and his entire family and just the sort of heartwarming spirit-of-Christmas story the media love to run this time of year.

All because Delta chose to respond to a customer's suggestion at a human level rather than as a legal threat to be promptly squashed.

I think the lesson is clear.

Apple Gets 998 Things Right and Two Things Wrong

Over the past few days both my wife and I managed to drop our iPhones and shatter the screens. I headed to Apple's website and clicked through the "Support" pages and easily found a price list for iPhone screen replacement showing that fixing our phones would cost only $109 for each phone.

Relieved that the price wasn't going to be $200 or $300 each, I clicked the "Start a Service Request" button. Apple's website stepped me through a series of easy-to-understand and beautifully-designed web pages asking for more information about my phone. Then came the page with the Repair Estimate: over $300 including tax and shipping to fix each phone.

Wait, Whaaaa?

I went back to the beginning to make sure I didn't miss any fine-print. No fine print visible, and the page still said $109 to fix a screen. It also clearly stated that broken screens are not covered by warranty, so that's not the issue. So I tried again, started a new service request, put in my info again...over $300 again.

Back to the beginning but this time I clicked the "text chat" button. It popped a new browser window which gave a mysterious error. Tried a couple more times with that, and got the same result each time.

Finally I clicked the "Have someone call me" button. My phone rang immediately and within a minute I was talking to a technician. I explained that I had to replace the shattered screen on my iPhone and asked how much it would cost.

"Let me look that up," he said. "One hundred nine dollars plus tax."

"That's what the website says, but every time I try to start a service request it says the price will be three hundred bucks."

"Yes, that's confusing and I get a lot of questions about that," the technician explained. "The way it works is that if your phone is repairable we will fix any problem for no more than $299 plus tax and shipping. If it costs less to repair we refund the difference. So if the only thing wrong with your phone is the screen is broken it will be $109 and you would get a refund of a little under $200. But if something else needs to be fixed the technician will repair that too and it will be more expensive."

This process was explained nowhere (that I could find) on Apple's website or as part of the service request, and from what the support technician said I was far from the only confused customer. But because of my confusion I spent far more time on this than I had to, and Apple paid for a support call that should have been avoided. I also came away feeling that this experience, while maybe not awful, wasn't as smooth as it should have been.

998 Things Right, Two Things Wrong

Apple, of course, is famous for its outstanding customer experience. But as I'm fond of saying, good customer experience is about doing a thousand little things right.

Here are some of the many things Apple did right in my support experience:

  • It was easy to find the price for replacing an iPhone screen.
  • The price was very reasonable and less than I expected.
  • It was simple to start the repair process from the website.
  • Apple collected my phone information easily with minimal effort on my part.
  • I was given multiple support options, and Apple didn't try to bury the option for phone support.
  • I was able to talk to a technician within a minute or two.
  • The technician was personable, had good phone skills, was patient, and was knowledgeable.
  • The technician was able to give me different support options and solve my problem quickly.
  • I am confident my phone will be fixed promptly at the price I was quoted.

Here's what went wrong:

  • The price prominently displayed on the website did not match what I was asked to pay and no explanation was offered.
  • Text chat wasn't working.

But even though almost everything about this support experience went right, those couple of support misfires wound up costing me time, costing Apple money, and adding up to a meaningful amount of un-Apple-like frustration.

What's more, it was clear from the technician's comments that at least some people inside Apple know that this is a problem. So it's a little mysterious to me why Apple, again famous for its customer experience and attention to detail, hasn't noticed that the support center is fielding calls from customers confused about the cost to fix an iPhone screen and done something to relieve the confusion.

Given my general satisfaction with Apple, as a customer I'm not going to hold this against them for long (unless similar things keep happening in the future).

But even a company like Apple doesn't always get all the details right. And as my experience shows, getting even a couple of details wrong can be costly in time, money, and customer frustration.

Cease And Desist From Your Customer Feedback

A couple weeks ago David Lazarus of the LA Times wrote a column about an AT&T customer who emailed the company president with a couple of suggestions (one being unlimited data for DSL customers, the other being a 1,000 text message bundle for $10). AT&T responded with a legalistic letter from the Chief Intellectual Property Counsel which, while not technically rude, didn't really match the spirit of the customer's suggestions:

AT&T has a policy of not entertaining unsolicited offers to adopt, analyze, develop, license or purchase third-party intellectual property ... from members of the general public.

Therefore, we respectfully decline to consider your suggestion.

When contacted by Lazarus, an AT&T spokesperson doubled down on this customer-hostile response, stating, "In the past, we've had customers send us unsolicited ideas and then later threaten to take legal action, claiming we stole their ideas. That's why our responses have been a bit formal and legalistic. It's so we can protect ourselves."

In other words, it's policy. Send a suggestion to the president, get a hostile response from the lawyer.

I have no doubt that some small number of mildly deranged AT&T customers have in fact threatened legal action in this kind of scenario. A company with as many millions of customers as AT&T has gets legal threats on a daily basis. But a legal threat is a long way from an actual lawsuit, and filing a lawsuit is a long way from actually winning damages.

But AT&T's response tells us a lot about the company's culture. From the outside it appears that AT&T management is so focused on the slight chance that a customer might file a frivolous lawsuit that they're willing to annoy or anger a lot of customers to mitigate the risk. Remember that these are customers who are trying to be helpful. And a company spokesperson--someone specifically given the job of communicating to the media--apparently didn't see anything wrong with this policy.

These actions seem to indicate a culture where customers are viewed as potential liabilities, not assets.

I'm sure that from inside AT&T the company views this entirely differently. AT&T leadership and employees probably genuinely believe that they value customers and manage them as assets, and that this kind of customer-hostile policy is a reasonable response to some bad things that happened in the past.

That just highlights why being a customer-centric organization is so hard. Remember that Managing Customers as Assets is one of the five key competencies required to be customer-centric. But it can be hard when you're steeped in a company's culture and constantly exposed to the internal logic that drives customer-hostile decision making.

Vocalabs Newsletter #92 is Published

The October issue of our newsletter, Quality Times, has been published and sent to email subscribers.

We lead off this month with an introduction to A/B testing in the context of customer experience, and some things to keep in mind if you want to use this powerful tool to help improve your Customer Experience (CX). Our second article is about an interesting attitude we've noted in a few of our clients, that no customer problem is unimportant or unfixable. Of course in the real world some problems really are more important or easier to solve than others, but approaching CX from that direction leads to some interesting places.

As always, I hope you find this useful and informative.

Big Data is the Industrial Byproduct of the 21st Century

I read a thought-provoking and contrarian perspective on Big Data a few days ago by Maciej Cegłowski, Haunted by Data. Maciej argues that data is like radioactive waste, in that it's extremely persistent and dangerous if leaked. He draws parallels between the hype and promises of big data today and the hype and promises of radioactivity a hundred years ago when people sold products like radium cigarettes and radioactive underwear.

Personally, I think this analogy is extreme. A more apt metaphor is that Big Data is the industrial byproduct of the 21st century. Like the sludge that spewed from factories in the 20th century, vast quantities of data are produced by almost every commercial activity today. Some of this data is valuable, but the overwhelming majority is worthless and potentially dangerous. And we are only beginning to appreciate the risks of these storehouses of data.

Unlike the physical kind of toxic goo, data is cheap to store and easy to destroy (as long as it remains contained). So there's a strong temptation to hold on to all data just in case some value is discovered in the future, but in many cases the responsible thing to do is get rid of it.

The problem with having all this data lying around is that, while any single piece of information may be fairly innocuous, we're finding out more and more often that it's possible to piece together lots of bits of data to learn remarkably personal things. Anyone who knows your recent purchases can figure out not just your hobbies and interests, but also details of your medical condition, including whether you or your partner is pregnant or whether you suffer from a particular illness. Anyone who has your list of Facebook friends can likely infer your sexual orientation and marital status, and can probably figure out how faithful you are.

And let's not even think about what someone can figure out from your search history, the websites you've visited over the years, or the GPS tracking of your phone.

Fortunately there is a middle ground that lets companies find the value in their customer data and dramatically mitigate the risk of uncontrolled leakage: statistical sampling. We use it all the time in customer feedback, since it's usually not practical to try to survey every single customer.

It only requires a surprisingly small random sample of data to find a result that's remarkably close to what you would get if you look at all the data. Sampling 10,000 customers out of a population of a hundred million--looking at only 0.01% of the data--will almost always get within 1% of the result of looking at all the data. That means you can throw out 99.99% of the data and not get a meaningfully different analysis.
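To make that claim concrete, here's a minimal simulation sketch. All the numbers are hypothetical: a population of 100 million customers with an assumed 72% "true" satisfaction rate, estimated from a random sample of just 10,000.

```python
import random
from math import sqrt

random.seed(42)  # fixed seed so the sketch is repeatable

true_rate = 0.72      # assumed "true" satisfaction rate (unknown in practice)
sample_size = 10_000  # 0.01% of a hypothetical 100-million-customer population

# Each sampled customer is satisfied with probability true_rate, which is
# statistically equivalent to drawing at random from the full population.
sample = [random.random() < true_rate for _ in range(sample_size)]
estimate = sum(sample) / sample_size

# Standard 95% margin of error for a proportion: 1.96 * sqrt(p*(1-p)/n)
margin = 1.96 * sqrt(estimate * (1 - estimate) / sample_size)

print(f"estimate: {estimate:.3f} +/- {margin:.3f}")
```

With n = 10,000 the margin of error comes out under one percentage point, and notice that the formula doesn't depend on the population size at all: the sample is just as accurate whether the population is one hundred thousand or one hundred million. That's why throwing out 99.99% of the data barely changes the analysis.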

Of course the details of the statistical sampling matter, and it needs to be designed to meet the requirements of the particular analysis. But the key point remains that companies which keep all data just in case it might be useful someday are holding far more than they actually need, and creating a lot of risk to themselves and their customers in the process.

So think before you hold on to data. If it doesn't have a well-defined reason to be kept, you are probably just creating industrial waste.

And don't come back!

Another report of a car dealership's bad behavior in a customer survey: this time, a Ford dealer banned a customer because the customer gave honest but negative feedback.

It's as though the dealer doesn't actually care about providing a good experience, just getting a good survey.

Sadly, stories like this aren't even surprising anymore. The auto industry's survey processes are so heavily abused that it's almost more surprising if a dealer doesn't try to manipulate the survey.

I've written in the past that in situations like this, the company should just stop doing surveys rather than trying to fix such a broken process. The survey clearly isn't providing an honest measure of customer satisfaction, and all the cheating and manipulation is actively making the customer experience worse.

The Ford dealer who banned a customer for a bad survey is a great example of a company which has fallen into the "Metric-Centric Trap." The Metric-Centric Trap catches companies which, in an effort to become more customer-focused, become so caught up in measuring the customer experience that they lose sight of the actual customer.

Companies caught in the Metric-Centric Trap tend to focus their energies on gathering and improving their metrics rather than trying to understand and improve the customer experience. The problem with being Metric-Centric is that people are extremely complicated, and there is no way to directly measure all aspects of the customer experience. So any set of metrics is, at best, an approximate measurement of what's really going on.

Metric-Centric companies also tend to put heavy incentives on employees to improve their metrics. That can have the perverse incentive of encouraging employees to focus on the specific activities which are being measured, and ignore the things which aren't measured. And if it happens to be easier to "fire" an upset customer than train employees to do a better job, you get the situation with the Ford dealership.

Breaking out of the Metric-Centric Trap is not easy, and requires a significant cultural change. But companies caught in this situation often waste considerable time and money spinning their wheels on customer experience, and may even be making things worse as a result of the effort.

Vocalabs Newsletter #91 is Published

We just published the 91st issue of Quality Times, Vocalabs' newsletter about customer experience and surveys.

This month the theme is making the right decisions in Customer Experience. I discuss how good CX is about enabling and encouraging people throughout the organization to make a lot of little decisions the right way, as opposed to making a handful of big strategic decisions. That leads into one of the big strategic decisions many organizations get wrong: collecting survey data for the sake of collecting data.

This newsletter is one of the ways we get to know prospective clients. So if you find this useful and informative, please help us out by forwarding this to other people who might also enjoy it, and encourage them to subscribe. You can subscribe to this newsletter on our website.

Retention Departments Should Die

Retention departments can be some of the worst cesspits of customer experience, and it's not hard to see why. By definition, the role of a retention agent is to take a customer who has already decided to leave and make that customer not leave.

Rather than serving customers, a retention center tries to stymie the customers' express wishes.

And when retention agents aren't empowered to give customers what they want and are incentivized to maximize retention anyway, the result can be a toxic brew.

Witness, for example, the letter published in the Dallas Morning News from someone claiming to be an AT&T retention agent (which I heard about via Consumerist). The letter claims, among other things, that:

  • Each rep is only allowed to offer a limited number of promotions per week, which are generally gone by the end of Monday.
  • Reps are expected to not only retain customers but upsell them, too.
  • The only way to meet retention quotas is to lie to customers and be a sleaze.
  • The culture promotes promising to do things for customers (such as cancel their service or provide a discount) and simply not doing it in order to meet retention targets.
  • Senior managers turn a blind eye to these practices.

AT&T's response to these allegations is basically that they don't know whether the writer is really an AT&T employee, and that they train all their call center agents. Which is not really a response.

But to me, the claims seem entirely credible. I've heard of enough instances like this that none of the claims surprise me. The toxic combination of impossible retention targets, inadequate agent empowerment, aggressive accountability, and indifferent management is going to lead to problems every time.

This goes beyond just a bad experience. The reporter asked Daniel Lyons, a law professor with experience in telecommunications, to offer an opinion. Said Lyons, "if a company promises a customer incentives to either sign up for service or renew an existing contract and those incentives are not delivered, in many cases, that’s fraud."

So what's the alternative?

The key is that trying to increase market share by aggressive (or possibly even fraudulent) customer retention efforts is almost certainly counterproductive in the long run. That's because today's former customer might be tomorrow's new customer, but customers who feel they were mistreated are far less likely to ever want to come back.

For proof, just look at the experience of the mobile phone industry in the U.S. when number portability was introduced in 2003. Suddenly it was much easier for customers to change carriers, but the mobile phone companies quickly adapted with new ways to reduce churn (like the much-hated two-year contract). In the end, customers could leave without talking to a retention specialist and the industry did just fine.

Customers have lots of reasons for wanting to take their business elsewhere: it's not always about price. So if a customer wants (or needs) to leave, isn't it better to make that a positive experience rather than a negative one?

Bigger Data Is Not Always Better Data

"When in doubt, collect more data."

That could easily be the guiding principle of business in the year 2015. Collecting data is easy and storing it is cheap. You never know what insights might be gained from just a bit more data.

But like any simple idea, reality turns out to be more complicated. Not all data is useful, and while storing data is cheap, the tools and expertise to find those hidden insights turn out to be fairly expensive. And, as companies occasionally discover to their regret, data can be a liability as well as an asset.

I see this attitude in the customer experience world, too. Often it's a lot easier to just do more customer tracking or conduct more surveys than take action. There's a lot of data collection for the sake of data collection going on.

To be effective, customer surveys should have an underlying purpose. For example: to answer a specific question (e.g. "How many customers call us after logging in to the website?"), or to support a specific business activity (such as coaching employees, or tracking customers' satisfaction with their purchases).

Often, however, I see surveys designed backwards. Rather than starting with the goal of the survey, people will begin by thinking of all the things they might possibly find interesting and add all of them to the survey.

The result is usually a mess: a long survey where most questions are never really used for anything. This tends to drive the response rate down and make it harder to take action based on customer feedback.

So before you collect more data--whether that's enlarging a survey sample or adding more questions--take a few minutes to ask yourself:

  1. Is the new data likely to tell me something I don't already know?
  2. Do I know what I'm going to use the additional data for?
  3. When I consider all the costs of collecting additional data, including reduced survey response, customer goodwill, and the effort to analyze the results, is it worth the expected benefit?

If you can answer Yes to all three questions, not only is the data probably worth collecting but you've also got a good start on taking action based on the results. But if you answered No, it may be that you're collecting data for the sake of collecting data.

Does Your Effort Have Value?

There are a lot of different activities that go into an effective customer feedback program, but not all of those activities have equal value.

Based on my experience, some of the high-value activities are:

  • Closing the loop with individual customers
  • Coaching and training individual employees using voice-of-the-customer data
  • Discovering and improving customer pain points and broken processes
  • Disseminating customer feedback throughout the organization in a way that's relevant to each business user
  • A/B testing of different ideas for business processes

Some of the activities which tend to have less value include:

  • Calculating and tracking survey metrics
  • Paying bonuses based on survey scores
  • Disciplining employees for poor survey scores
  • Building high-level survey dashboards
  • Collecting survey data with little or no free-response feedback from customers

Keep in mind that less value does not mean no value. There is certainly some value in tracking survey metrics. But there's a lot more value in having a closed-loop process--and ideally a feedback process should have both.

And yet many companies seem to spend almost all their effort in low-value activities, completely ignoring the things which are most likely to lead to better customer experiences and a more efficient business. These companies have little to show for the effort they put forth to improve their customer experience.

The common thread among the low-value list is that these activities are all centered on improving customer survey scores as a goal in itself. A company that focuses on improving metrics but ignores the underlying customer experiences has fallen into the trap of becoming metric-centric instead of customer-centric.

It's easy to fall into this trap. Survey scores are concrete, quantitative, and measurable. Executives and managers can make the mistake of thinking that improved survey scores should be the goal, instead of a side-effect of achieving the true goal of becoming customer-centric.

In contrast, the activities on the high-value list are all focused on using specific customer feedback to directly improve the customer experience. Some activities, like closed-loop processes and coaching to the voice of the customer, work at the level of the individual employee or customer. Others, like A/B testing and identifying pain points, are higher level.

So just because your organization puts a lot of effort into the customer experience does not mean anything is likely to improve. You need to ask whether your activities are truly helping to understand customers' stories and improve the experience, or whether you've fallen into the trap of becoming metric-centric and spending a lot of effort on low-value activities.

What does it mean to be customer-centric?

Business writers like to talk about the benefits of being customer-centric. But what does it mean, and how do you know whether a company is customer-centric or not?

Like any other aspect of organizational culture, being customer-centric can be hard to define. There's no simple test or checklist that says you're customer-centric.

Being customer-centric is about considering the impact on customers in every decision the company makes. A customer-centric organization will:

  • Prioritize efforts that remove pain points for its customers.
  • Consider the impact on customers in decision-making throughout the organization, not just in the traditional areas of customer service and sales.
  • Train employees in all departments that the decisions they make can affect customers, including back-office functions.
  • Have leadership that takes an active interest in customer issues, both in aggregate and also individually.

These are all organizational outcomes: the things that come naturally to a customer-centric organization as part of its culture.

Getting there is another matter. That's where the five competencies Jeanne Bliss talks about in Chief Customer Officer 2.0 come into play.

Amateurs Talk Strategy, Professionals Talk Execution

Amateurs Talk Strategy, Professionals Talk Logistics

That's an old military quote that sometimes gets pulled out at business leadership conferences. Strategy is the easy part. The hard part, the stuff the pros worry about, is the nuts and bolts of getting everything lined up and in the right place at the right time so the strategy can work.

It's an important message for customer feedback programs, too.

Developing a survey strategy is easy, and a lot of people have a lot of opinions on how to do it (some better than others).

But actually building an effective feedback program requires a lot of attention to detail. You need to:

  • Determine who to ask to participate in the survey
  • Decide what questions to ask
  • Determine the right time and channel to invite the customer to take the survey
  • Offer the survey to the customer in a way that makes the customer want to help
  • Route the survey responses to service recovery teams in real time when appropriate
  • Coach front-line employees based on their individual survey responses
  • Deliver data to business users throughout the organization in a way that's timely and tailored to their individual needs
  • Monitor the survey process for signs of manipulation or gaps in the process
  • Adjust all aspects of the process on an ongoing basis as business needs change
  • Focus the entire organization on using customer feedback as an important tool to support both operational and strategic decision making

(As an aside: one thing not on this list is "Track your metrics and set goals," because tracking metrics is both easy and low-value. Everyone does it, but many organizations stop at that point in the mistaken belief that improved customer experience will magically follow.)

So just as military pros understand that wars are won and lost in the unglamorous details of moving people and supplies to the right place at the right time, survey pros understand that the effectiveness of a feedback program is built on the nitty-gritty of collecting and delivering the right data to the right people at the right time to help them do a better job.

What the amateurs don't recognize is that you can't just move an army on a whim, or improve customer experience by throwing some survey metrics at it.

So we circulated a Word doc...

So someone emailed around a Word doc with the survey design, and someone else edited it, then forwarded it to another person who copy-pasted it into the survey software, and the first person said it was good to launch so a fourth person uploaded the customer list and sent the invitations, and... wait, wasn't the Word doc already signed off? Why do we need to proofread it again?

Via The Daily WTF

No Customer Problem is Unimportant or Unfixable

In a couple of my clients, I've noticed an uncommon attitude towards the customer experience.

Where most companies often push back on trying to solve customer problems, these unusual companies take the opposite approach. They assume that No customer problem is unimportant or unfixable.

Compare that to the litany of reasons most companies give for not fixing their customer experience problems:

  • "Only a few customers are complaining about that."
  • "It would be very expensive to provide that level of service."
  • "That would require major investment of IT resources."
  • "That customer is just trying to get something for free."
  • "If we did that our customers would scam us."
  • "The way we're doing it now is better."
  • "You can't please every customer all the time."

What makes these excuses so insidious is that they are, very occasionally, true. Some problems really do arise from freak circumstances (but usually if one customer complains, there are many others who have the same problem and aren't complaining). Sometimes systems are so big and outdated that it would be uneconomical to fix them (but at some point they will have to be replaced, and next time around you shouldn't let your systems get so far behind the curve). Some customers really are trying to scam you (but the overwhelming majority of customers are honest). And it is true that some customers will never be satisfied no matter what you do, but those customers are very rare.

Often one (or more) of those reasons is trotted out as a way to avoid taking a serious look at fixing some issue with the customer experience:

"What are we going to do about the complaints about how we verify customers' identities over the phone?"

"Only a few customers are complaining about that. Plus, if we changed the authentication then people would scam us."

"Oh, then I guess we shouldn't change that."

But if you take the attitude that No customer problem is unimportant or unfixable, then the conversation becomes completely different:

"What are we going to do about the complaints about how we verify customers' identities over the phone?"

"Only a few customers are complaining about that. Plus, if we changed the authentication then people would scam us."

"You might be right. But No customer problem is unimportant or unfixable, and this is definitely important enough to some of our customers that they took the time to complain. So we should at least explore some options and see if there's a better way to do things."

This attitude, that No customer problem is unimportant or unfixable, can dramatically shift a culture towards being customer-centric, especially when it comes straight from the top.

It's not an easy change, because it directly attacks the deep resistance to change in many organizations. But try making this your catch-phrase and see how it changes the discussion.

Doing a Thousand Things Right

Creating a good customer experience is often about doing a thousand little things right.

It's easy to lose sight of that fact when you're trying to think strategically about process improvement and engineering a better customer experience for your organization. Statistics can conceal the fact that behind every data point is a customer, and that customer received either a good experience or a bad one.

So while it's important to make sure the right processes are in place to enable a good customer experience, it's more important to make sure that the people who are part of those processes have the tools they need to make those thousand little decisions in the right way.

Every employee of every company is pulled in different directions by competing priorities. You have to balance things like working faster vs. more carefully; satisfying an upset customer vs. saving money; or solving a problem yourself vs. calling for help.

Even if a company says it cares about customer experience, what really matters is how employees are making those decisions on a day to day basis. To make the right decisions, a company needs to ensure:

  • Employees understand what customers expect and how to deliver it (you need good training)
  • Employees get regular, specific, and detailed feedback about how customers perceive the experience (you need a well-designed closed-loop survey)
  • Employees aren't pressured to make bad decisions (compensation needs to align with customer experience, or at least not pull the wrong way)
  • Employees know the leadership cares (customer experience needs to be an ongoing effort, not a one-time project)

This holds true for employees throughout the organization, not just the ones who deal directly with customers. A website designer or billing specialist can be subject to the same negative forces (work faster, save money, ignore the complaints) as a contact center rep or salesperson. If anything, back-office employees may be more susceptible to taking customer experience shortcuts since they don't have to deal with customers directly.

The good news is that most people genuinely want to do a good job, and if given the right tools and training and if shown that the company cares, they will be highly motivated to make the right decisions about customer experience.

If the leadership can just do a few big things right, it's not that hard for everyone else to do a thousand little things right.
