The Customer Service Survey

Issue 104 of Quality Times

by Peter Leppik on Fri, 2017-03-31 13:04

We've published the 104th edition of Quality Times, Vocalabs' mostly-monthly newsletter about customer feedback and related topics. In this month's issue we discuss how it could be that the American Customer Satisfaction Index is showing increased scores for several retailers who are closing stores and losing customers.

As always, I hope you find our newsletter interesting and informative. Subscribers should have already received their emailed copies, and if you want to subscribe too, you can sign up here.

Wells Fargo Isn't Alone

by vocalabs on Wed, 2017-03-15 14:21

Back when news broke of Wells Fargo's fraudulent account scandal there was some speculation that we might be hearing of similar problems at other large banks. Wells Fargo is not the only large bank with aggressive sales goals and a punitive approach to employees who miss their targets.

Similar problems--though perhaps not as severe--are now coming to light among Canada's five largest banks. A CBC News investigation has led to thousands of reports from current and former employees detailing aggressive sales targets, punitive management practices, and pressure to sell customers products they didn't want or that weren't in their best interests. A few laws may have been broken along the way.

Tending the Land

It may seem strange that banks, traditionally stewards of their customers' assets, have succumbed to high-pressure sales tactics more at home on a used car lot. But this is a natural consequence of viewing the customer relationship as something to extract value from, rather than something to nurture and grow over time.

It's similar to a farmer deciding what to grow each year. Not all crops are created equal: some are more valuable, some extract nutrients from the soil, some require more pesticides, and so forth. A good farmer won't just plant the most profitable crop every single year, since that's likely to deplete the soil and encourage pests. It's better for both the farmer's bottom line and the health of the land to balance what's profitable now against what will keep the farm sustainable. You could make a little more money this year, but it comes at the expense of future earnings.

Banking relationships are often measured in decades, so it's natural to think that they should be carefully tended like a farmer's fields. But we are long past the time when a bank's owners were local civic leaders interested in the health of the community and willing to take a long-term view. Today's large banks are publicly-traded corporations owned by financial investors who are mostly interested in quarterly growth and are perfectly willing to replace management teams who don't deliver. The pressure to risk long-term customer relationships for short-term profitability can be hard to resist.

If It Feels Wrong, It Probably Is Wrong

Chances are that plenty of people have known for a long time that something was wrong in the five Canadian banks in the CBC report. I don't imagine that managers enjoy berating longtime employees over sales targets, any more than longtime employees enjoy being beaten up over unrealistic goals.

It's likely that these banks will be paying the price for a long time, both in short-term loss of business and bad publicity, and in long-term customer distrust and regulatory oversight.

I don't have any magic prescription for avoiding these kinds of situations, other than to say that both employees and customers probably would have told the companies that things were going wrong if only they had been asked and the leadership had sincerely listened.

Instead, five major Canadian banks today find themselves issuing bland PR statements about employee codes of conduct and gearing up for government investigations.

Random CX Thoughts for a Wednesday

by vocalabs on Wed, 2017-03-08 15:41

Just some random Customer Experience musings on a Wednesday afternoon:

  • Airlines won't improve their customer experience as long as customers are willing to pay extra for upgrades.
  • Should Facebook be able to enforce its "Terms of Service" agreement even if it knows that almost none of its users actually read the "Terms of Service" before clicking the box that says they read it?
  • Most business processes work to hide the fact that each customer is unique and individual.
  • Online "Terms of Service" agreements just teach people that sometimes the easiest way to get what you want is to lie.
  • In my pocket I carry a supercomputer wirelessly connected to most of human knowledge and society, and the battery only lasts one day. To me this is annoying, not miraculous.
  • Humans share 98% of our DNA with primates who shake tree branches and fling poo at each other to establish their dominance hierarchy. I'm not sure what this has to do with CX, but it does seem to explain a lot.
  • Almost no company wants to create a bad customer experience, and yet almost all of them do.
  • The less contact you have with customers, the harder it is to know what they want.

Circling the Drain with Happy Customers

by vocalabs on Fri, 2017-03-03 14:38

American Customer Satisfaction Index (ACSI) data for the retail industry was released this week, and the folks over at Consumerist noticed something, well, odd. Scores for Sears, JCPenney, and Macy's took huge leaps in 2016--despite the fact that those traditional department stores have been closing stores in the face of sustained sales declines and changing consumer tastes. In addition, Abercrombie & Fitch, a specialty clothing retailer, also posted a big ACSI gain despite struggling to actually sell stuff.

The idea that customer satisfaction scores would jump as the companies are losing customers seems counterintuitive to say the least. The ACSI analyst gamely suggested that shorter lines and less-crowded stores are leading to higher customer satisfaction and, yeah, I'm not buying it.

I'd like to suggest some alternate hypotheses to explain why these failing retailers are posting improved ACSI scores:

Theory 1: There's No There There

Before trying to explain why ACSI scores might be up, it's worth asking whether these companies' scores are actually improving, or whether it might just be a statistical blip.

Unfortunately, ACSI doesn't provide much help in trying to answer this question. In their report (at least the one you can download for free) there's no indication of margin of error or the statistical significance of any changes. They do disclose that a total of about 12,000 consumers completed their survey, but that's not helpful given that we don't know how many consumers answered the questions about Sears, JCPenney, etc.

With this kind of research there's always a temptation to exaggerate the statistical significance of any findings--after all, you don't want to go through all the effort just to publish a report that says, "nothing changed since last year." So I'm always skeptical when the report doesn't even make a passing reference to whether a change is meaningful or not.

It could be that these four companies saw big fluctuations in their scores simply because they don't have many customers anymore and the sample size for those companies is very small. There's nothing in the report to rule this possibility out.
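To get a feel for how small per-company samples produce big score swings, here's a rough back-of-the-envelope sketch. The respondent-level standard deviation and the sample sizes below are assumptions for illustration only; the public ACSI report discloses neither:

```python
import math

def margin_of_error(std_dev, n, z=1.96):
    """Approximate 95% margin of error for a sample mean."""
    return z * std_dev / math.sqrt(n)

# ACSI scores run 0-100; assume a plausible per-respondent
# standard deviation of 15 points (not disclosed in the report).
for n in (50, 100, 250, 1000):
    print(f"n={n}: +/- {margin_of_error(15, n):.1f} points")
```

Under these assumptions, a company with only 100 or so respondents has a margin of error around three points, so a year-over-year jump of that size could be nothing but sampling noise.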

Theory 2: Die-Hards Exit Last

We know that even though surveys like ACSI purport to collect opinions about specific aspects of customer satisfaction, consumer responses are strongly colored by their overall brand loyalty and affinity.

So as these shrinking brands lose customers, we expect the least-loyal customers to leave first. That means the remaining customers will, on the whole, be more loyal and more likely to give higher customer satisfaction scores than the ones who left.

In other words, these companies' survey scores are going up not because the customer experience is any better, but because only the true die-hard fans are still shopping there.

If this is the case, then the improved ACSI scores are real but not very helpful to the companies. They are circling the drain with an ever-smaller core group of more and more loyal customers.

This is a hard theory to test. If ACSI has longitudinal data (i.e. they survey some of the same customers each year) then it might be possible to tease out changes in customer populations from changes in customer experience.
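Even without longitudinal data, a toy simulation shows how the arithmetic of Theory 2 works. All the numbers here are invented for illustration:

```python
# Toy illustration of Theory 2: a shrinking brand's customer base,
# with each customer's satisfaction score. Assume the least-satisfied
# customers are also the least loyal and defect first.
customers = sorted([3, 4, 5, 5, 6, 7, 7, 8, 9, 9, 10, 10])

def avg(scores):
    return sum(scores) / len(scores)

print(avg(customers))      # average across the full base: ~6.9
print(avg(customers[4:]))  # after the four least-loyal defect: 8.25
```

No individual customer's experience improved, yet the average score jumped by more than a point simply because the unhappy customers left the sample.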

Theory 3: ACSI Has Outlived its Usefulness

Finally, it's worth asking whether the ACSI is simply no longer relevant. The theory behind ACSI is that more-satisfied customers will lead to more customer loyalty and higher sales, all else being equal. But the details are important, and the specific methodology of ACSI was developed over 20 years ago, based on research originally done in the 1980's.

I know that ACSI has made some changes over the years (for example, they now collect data through email), but I don't know if they've evolved the survey questions and scoring to keep up with changes in customer expectations and technology. Back in 1994 when ACSI launched, Facebook and Twitter didn't exist and most people didn't even have access to the Internet.

So if the index hasn't kept up enough, it's possible that ACSI is putting too much weight on things that don't matter to a 21st century consumer, and missing new things that are important.

Interpreting Survey Data Is Hard

I'm only picking on ACSI because their report is fresh. The fact is that interpreting survey data is hard, and it's important to explore alternate explanations for the results. Even when the data perfectly fits your prior assumptions you may be missing something important without looking at competing theories.

It's entirely possible that ACSI did exactly that, tested all three of my alternate theories and others, and they have some internal data that supports their explanation that, "Fewer customers can lead to shorter lines, faster checkout, and more attention from the sales staff." But if they went through that analysis there's no evidence of it in their published report.

When the survey results are unexpected, you really need to explore what's going on and not just reach for the first explanation that's remotely plausible.

Issue 103 of our Newsletter

by Peter Leppik on Tue, 2017-02-28 15:43

We published the 103rd issue of our newsletter, Quality Times, today. We've published the 2016 results for the National Customer Service Survey for Banking, and this month's newsletter focuses on some of the changes we've seen this year. We also discuss some of the unique challenges in trying to apply the Net Promoter Score to a business-to-business survey.

This newsletter is one of the ways we get to know prospective clients. So if you find this useful and informative, please subscribe and pass it along to other people who might also enjoy it.

2016 NCSS Banking Results

by Peter Leppik on Wed, 2017-02-15 14:23

We've published the 2016 results of the National Customer Service Survey on Banking. This ongoing research tracks the customer service quality at Bank of America, Chase, Citi, and Wells Fargo in a continuous data set going back to 2011.

The most striking change in our survey scores in 2016 is the continued decline in Citi's metrics, which have declined substantially over the past two years. Citi now ranks last in eight of the nine survey scores we track.

We don't know why Citi has seen such a drop in our survey, but we speculate that it might be related to Citi's partnership with Costco. It's been widely reported in the media that onboarding the Costco customers took longer and was more of a challenge than had been expected. If Citi had been diverting resources from its regular customer service to manage that transition, that would explain the results we're seeing. If that's the case, we would also expect to see Citi's numbers rebound now that the Costco transition is complete.

The other interesting result this year is something we didn't see: no decline in survey scores at Wells Fargo despite the company's fraudulent account scandal. We think this is because of two factors. First, news of the fraudulent accounts broke midway through 2016, meaning that about half our survey sample was collected before it became widely known. Second, our survey is intended to measure the quality of customer service, not reputation. Unless a specific Wells Fargo customer in our survey had been personally impacted by the company's fraud, he or she isn't likely to rate the customer service any lower as a result of the news.

The executive summary of this report is available for free download, and the full data including customer comments and interview recordings is available by subscription.


Nerdview

by Peter Leppik on Fri, 2017-02-10 14:40

Language Log has coined the word "Nerdview" to describe the common situation of writing in technical terms for a non-technical audience. Nerdviews are all around us, to the point where we hardly even notice them anymore.

For example, asking you to input the "CVV2" on your credit card to process an online payment is a classic nerdview, since unless you are in the payment industry you would have no reason to know what a CVV2 is or how to find it. This is so common that most people have figured out that the CVV2 is the three-digit security code on the back of a credit card--even though most cards don't label the code "CVV2" or give any other indication of what it's for.

Nerdviews almost always lead to a worse customer experience, and should be avoided whenever possible. But that can be a challenge because oftentimes the people who have to decide how to communicate important information to the general public are the same ones who are experts in their own narrow field. It can be hard to step outside your expertise and think like a novice.

My personal favorite nerdview was one I encountered many years ago at Vocalabs. One of our clients was an insurance company, and in their automated customer service system one of the prompts asked, "Do you want to know your withdrawal value or your redemption value?" Unless you are in the insurance business, chances are you don't know that a life insurance policy can have two different values, much less what they mean.

Finding and avoiding nerdviews is almost always worthwhile, but it can be a challenge. Nerdviews tend to become invisible to us once we figure out what they mean. Humans are adaptable, and even if you were mystified the first time a government form asked you for your "DOB", chances are the next time you remembered that it meant "Date of Birth" and hardly even noticed.

Here are some tips for uncovering nerdviews:

  • Don't assume your audience thinks like you, or that you can put yourself in their shoes. Get feedback from actual customers or users.
  • Remember that people adapt to nerdviews quickly, but may ignore or overlook important information if it's not clear. Don't assume that you're communicating well just because your customers or users seem to have figured things out.
  • Plain language is usually better than technical precision. If you feel compelled to be technically precise when communicating to a non-technical audience, you may be trapped in a nerdview.

And remember that even if you don't think you're a nerd, we are all nerds about something. Remember your audience, and stay away from the nerdview.

Helpful Feedback

by Peter Leppik on Wed, 2017-02-08 15:58

Imagine taking a college class, and at the beginning of the semester the professor announces, "For this class, we're not going to hand back any of your papers or exams, and we won't tell you any of your grades on individual assignments and tests. The only grade you'll get is your final grade at the end of the semester, which will be an average of all your work."

You wouldn't expect to learn much in this class. In order to improve, you would want to know what you were doing well at and where you needed to improve throughout the semester. You would want specific feedback about specific things you had done.

And yet many customer feedback programs are structured just like this insane professor's class. Somehow we expect employees to know how to improve despite only getting an average survey score every month or every quarter.

In order to make a survey program helpful, we need to give people the chance to connect specific customer feedback to specific things the employee did to garner that feedback. We also need to help employees think about the feedback as constructive criticism so they have the tools to apply the feedback to their daily customer interactions.

Here are some tips to help make this happen:

  • Deliver feedback directly to front-line managers and supervisors as soon as it comes in. Managers and supervisors should discuss the feedback with employees as soon as is practical, either for encouragement or for ways to improve.
  • Don't make the survey process so high-stakes that employees feel they must get good scores or else. This inhibits learning, and can also lead to survey manipulation.
  • Treat negative surveys as opportunities to improve, not mistakes to be punished. Always remember that each survey is only one customer's opinion, and while you want customers to have good opinions it's also not possible to please everyone.
  • Don't just ask customers for a rating, ask them to explain what happened and why they feel the way they feel. We learn more from stories than from statistics.

There are, of course, real concerns about managing the delivery of customer feedback to employees. But the solution is better coaching and supervision, not giving people so little feedback that it becomes useless.

Net Promoter Score (NPS) in B2B Surveys

by Peter Leppik on Fri, 2017-02-03 13:42

If you're thinking about using the Net Promoter Score (NPS) on a business-to-business survey, there are some extra factors you should consider before committing to this metric.

Net Promoter Score is a common, and somewhat controversial, measurement of a customer's relationship to a company. It's based on responses to the question, "How likely are you to recommend us to a friend or colleague?", and is calculated by subtracting the percentage of low responses from the percentage of high responses. It has the advantage of being a simple and standardized metric, but also gets heavily criticized for being too simplistic and oversold. NPS also tends to get shoehorned into situations where it doesn't make any sense, an unfortunate side-effect of being heavily (and wrongly, in my opinion) promoted as "the only survey score that matters."

What is NPS Measuring?

The Net Promoter question asks the customer to imagine a scenario where they might be asked to make a recommendation about some product or service. The idea is that telling a friend to buy from a particular company is something most people will only do if they feel strongly about that company. Customers willing to make that commitment are valuable and potentially a source of word-of-mouth marketing (in the jargon of NPS, they are "Promoters"); whereas customers who feel the opposite can actively damage a brand (these are called "Detractors"). Subtracting the Detractors from the Promoters gives you a single simple number, while emphasizing the fact that you want to have a lot more Promoters than Detractors.
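As a concrete sketch, the standard scoring buckets responses on a 0-to-10 scale: 9s and 10s are Promoters, 0 through 6 are Detractors, and 7s and 8s are Passives, who count toward the total but are otherwise ignored. A minimal Python version:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Promoters score 9-10, Detractors 0-6; Passives (7-8) are
    counted in the total but otherwise ignored.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# 5 Promoters, 3 Passives, and 2 Detractors out of 10 responses
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 3, 6]))  # 30.0
```

Note that the same NPS of 30 can come from very different distributions (for example, 30% Promoters and 0% Detractors, or 65% Promoters and 35% Detractors), which is one reason the metric gets criticized as too simplistic.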

In the Business-to-Consumer world this all seems sensible. But in the Business-to-Business world there's additional complexity.

B2B NPS Challenges

A number of common situations challenge the validity of NPS in the B2B world:

  • Some companies prohibit their employees from making vendor recommendations to third parties. We regularly see B2B customers give "0" on the Net Promoter question, and give the reason as "I'm not allowed to make recommendations."
  • Buying decisions are much more complex in the B2B environment than B2C, and the recommendations of any one person might matter a lot or not at all. It doesn't matter if nine out of ten people are Promoters if the one Detractor is the CEO.
  • Relationships are much more complex in the B2B environment, and where many different people at the customer company interact with the vendor company, most of those people probably have only a limited view into the entire relationship.

In short, NPS is already simplistic in the B2C world, and trying to apply it to more complex B2B relationships is a challenge.

Fortunately, Most Customers Know What You Meant

This doesn't mean that NPS is useless in the B2B world. Fortunately, we've found that in the real world most people don't answer the Net Promoter question literally (though some do)--as demonstrated in the story from a few years ago about what happens when you try to get Promoters to actually recommend the product to a friend.

Instead, most people answer the Net Promoter question in a more generic way, providing a high level view of how they feel about the company overall. The question is being interpreted as something closer to, "How would you rate the company?" This is confirmed by the fact that on surveys where we ask both a Net Promoter question and a more generic Customer Satisfaction question, the answers tend to correlate almost perfectly.

This means we can get away with using Net Promoter even in situations where it might not strictly apply, because most customers will know what we're trying to get at and answer the question behind the question. We still get some people who try to respond to the literal question (the "I'm not allowed to make recommendations" crowd), but they are in the minority.

Practical B2B NPS

I would not generally recommend using NPS in a B2B survey--instead, I would ask a more tailored question that gets at what you're really trying to understand in the customer relationship. But if you do choose to use NPS for B2B (or if you're required to use it), keep these things in mind:

  1. Understand that in B2B, there are reasons not to make recommendations that have nothing to do with your business relationship. Look beneath the metric at the reasons customers give for their scores.
  2. Be aware of the fact that NPS will not capture the full complexity of the business relationship. It may be useful to look at separate scores by title or role; and a simple average NPS score could be very misleading.
  3. Account for the fact that individuals might not have any visibility at all into important aspects of the buying decision (price, for example, or backend integration). Make an effort to understand what factors do and do not influence each respondent's survey rating.

Paying attention to these will help you get the most out of NPS in your B2B survey.
