The Customer Service Survey

Vocalabs' Blog


Vocalabs Newsletter #89

We published the 89th issue of Quality Times, our mostly-monthly newsletter. Email subscribers should already have their copies.

This month is a deep dive into survey methodology and process. In my first article I discuss a recent academic paper comparing the popular metrics NPS, CSAT, and Customer Effort and find that the research isn't all it's cracked up to be. Then I get into the subject of whether and when to pay an incentive to take a survey. My answer: "almost never."

As always, I hope you find this useful and informative.

Don't Treat Your Customers Like Mushrooms

"They treat me like a mushroom," as the old joke goes. "They keep me in the dark and feed me ****."

Nobody likes being treated like a mushroom, but some companies do that to their customers anyway. This week Spirit Airlines learned why this might not be a great idea.

Spirit, for those not familiar with the company, is the low-cost airline which has elevated poor customer experience to part of its brand image. In an industry where calibrated misery is an art form, this is saying something.

Over the past week, Spirit has been plagued with delays and cancelations. Customers didn't always buy the "bad weather" explanations offered by the gate agents. And let's face it, "weather" doesn't sound like a credible reason for a flight to be canceled when the plane is at the gate and both the departure and arrival airports are clear.

But Spirit makes it very difficult to talk to anyone at the company, and doesn't monitor social media (all part of that low-cost brand!). In the absence of any official word from the airline, rumors started to spread that there was some sort of pilot strike going on. Even when contacted by the media, at first Spirit didn't respond, leading some outlets to publish articles about the rumored labor action. "Pilot strike" is probably second only to "maintenance problems" on the list of things which will make passengers choose a different airline.

I have to think that this article (even when it was updated several hours later, once Spirit finally gave an official explanation for the delays) did substantial damage to Spirit's bottom line. It creates the impression that the company doesn't care about its passengers even when stranding them far from home, and that the airline won't make an effort to keep customers informed when the schedule is massively disrupted.

Surely it would have been cheaper just to communicate clearly in the first place.

Underpaid, Inexperienced Workforce

Bruce Temkin wrote a blog post recently about Walmart's plans to give a pay raise to many of its lowest-paid employees. Bruce's conclusion, after looking at this decision through the lens of customer experience, is that in the long run Walmart is likely to save money through lower staff turnover leading to better customer experience and more loyal customers.

I think Bruce is spot-on. The only part I disagree with is the idea that this is surprising or counter-intuitive.

The simple fact is that employee turnover is expensive, inexperienced employees are more likely to make mistakes, and underpaid employees aren't likely to go out of their way for customers. People want to be valued in their jobs, and underpaying them to do menial work is a great way to communicate that they are not valued.

My guess is that by paying employees more, Walmart will save money through reduced turnover alone, never mind any customer experience benefits. Over the longer term, better employee morale could pay further dividends through improved customer experience.

We don't have to guess, though. Compare Sam's Club, a warehouse store run by Walmart, to Costco, a direct competitor known for relatively generous pay and benefits and low employee turnover.

Which has the better customer experience, higher customer loyalty, stronger reputation, and better financial results?

Dear Xcel Energy: Here's Why Nobody Takes Your Survey

Browsing the Xcel Energy website recently I was accosted by one of those ubiquitous popup surveys. You know the kind: one which asks if you'll take a survey about your browsing experience.

These surveys typically have an abominable response rate, and it's not hard to see why. Should you agree to take the survey, you'll be asked to answer screen after screen after screen after screen after screen of questions. Thirty-two questions in all, of which 25 are required.

Clearly someone didn't get the memo about survey length.

What if you happen to notice a problem with Xcel's website, maybe a broken link somewhere, and you want to be a good Samaritan and point it out to them?

Good luck with that. Because if you just write a short note in the comment box about the broken link and hit "Submit," the survey rejects your submission and demands answers to every required question you skipped.

So much for being helpful (this is one of the reasons why Mandatory Survey Questions are Evil). If you're not interested in rating the "balance of text and graphics" on the site, or providing no less than three different ratings of the search function, Xcel doesn't want to hear from you.

Not that it would have mattered anyway: notice the text at the very bottom of the survey, "Please note you will not receive a response from us based on your survey comments."

To the customer that means, "Nobody will read the comments you write."

Xcel might as well save the bother of putting up a survey and just have a page that says, "We don't actually want your feedback but thanks anyway." It seems like a lot less trouble.

Shockingly Good CX from T-Mobile

It's often easier to find things to complain about in Customer Experience than examples of things that went well. But yesterday I had a customer service experience with T-Mobile that went shockingly smoothly.

Here's the setup: I needed to make a small change to my T-Mobile plan. I logged in to my account online and tried to make the change, but got an error message that said I needed to call customer service for help.

So I called customer service. Here's what happened:

  1. The speech recognition system offered "Representative" as the very first option.
  2. I was not told to go to the website for faster service.
  3. I was not asked to play Twenty Questions before being routed to a person.
  4. I was asked to authenticate myself by entering the last four digits of my SSN. When I was connected to the representative, she already knew I was authenticated and didn't ask me to authenticate again.
  5. The CSR was pleasant, friendly, and relaxed. She did not try to rush me, and took a moment to chat.
  6. The representative immediately understood my problem, fixed it, and explained how long it would take for the change to show up in my online account. She even saved me a couple bucks by tweaking the effective date of my plan change (something that's not possible online).
  7. A few hours later, as promised, my plan change was visible when I logged in online.

None of this is rocket science, but it's amazing what a difference paying a little bit of attention to the customer can make.

Customer Obsession is Mutual

We all know someone who is obsessed with a certain company or brand.

Some companies seem to attract this. I'm thinking of Apple, Zappos, Nike, and the like.

Those same companies that generate customer obsession are themselves obsessed. It turns out that obsession is a mutual thing. Customers are drawn to brands which really care about the same things they do.

Other emotions can also be mutual: apathy, contempt, hatred.

So how does your company feel about your customers?


Customer Experience Doesn't Matter...Until it Does

Consumer Reports is out with a survey today about cable TV companies. They had a couple of not-surprising findings:

  • Consumers hate their cable companies, and
  • More and more consumers are finding alternatives to cable TV

Both of these are well-known trends, and it's hard not to conclude that they're deeply related. For decades cable TV companies have enjoyed monopoly status in most markets, and since revenue growth was mostly a matter of selling bigger packages to existing customers and buying up smaller companies, cable companies have largely ignored their customer experience.

The result is a horrible service experience (which is not only bad for customers, but is also shockingly inefficient and expensive) and years of pent-up ill-will.

It's no wonder that when viable alternatives to traditional cable TV became available, customers started cutting the cord. This is especially true of younger consumers, who never got into the cable habit to begin with and don't see why they should spend hundreds of dollars a month for TV. For them, Netflix is much better and much cheaper.

Customer experience doesn't always drive revenue growth, as there are many factors which go into buying decisions. But customer experience does drive goodwill and loyalty.

In those industries where customers' choices are limited by things like the lack of viable alternatives and difficulty switching vendors (such as cable TV, airlines, banks, and mobile phone companies) it can be very tempting to under-invest in customer experience.

But that's a dangerous approach. If your customers don't like you, they'll head for the exits when an alternative becomes available. And it doesn't matter how deeply entrenched your business is: sooner or later there will be an alternative.

It's extremely difficult to turn around a bad reputation. So even if you think customer experience doesn't matter to your business, someday it will. When that day comes, will your customers be loyal? Or will they flee?

Newsletter #88 is published

We just published the 88th issue of Quality Times, our newsletter about customer experience and customer feedback. In this issue I discuss survey strategy with two articles, one about journey surveys, and another about doing transactional surveys in a B2B environment.

As always I hope you find this interesting and informative.

Survey Incentives

Incentive payments are a standard technique for increasing survey response rate. Whether a straight-up payment ("Get $5 off your next order for taking our survey") or entry into a drawing ("Take our survey for a chance to win $5,000!"), this pitch will be familiar to almost everyone.

The problem is that in many cases, incentives are deployed as a lazy and expensive way to "fix" a broken process without addressing the underlying issues.

If a survey has a low response rate, there's usually some underlying cause. For example:

  • The survey takes too long.
  • The survey isn't being offered to customers in a way that's convenient.
  • The process relies on customers remembering to take the survey (especially surveys printed on cash-register tapes or at the end of a call).
  • The company doesn't communicate that it takes the feedback seriously.
  • The survey is broken (for example, a web survey which returns errors).
  • The survey invitation looks too much like spam or a scam.
  • The survey gets in the way of something else the customer wants to do (especially pop-up surveys on web pages).
  • The survey doesn't respect whatever genuine desire to give feedback the customer may have.

Rather than trying to identify the underlying issue and fix it, often it's easier to just throw money at customers to try to boost response. What's wrong with that? Here are a few things:

  • Incentives can be expensive. I know of companies which spend more on the survey incentives than on the survey itself.
  • Incentives motivate the customer in the wrong way. Feedback given out of a genuine desire to help is more likely to be sincere and detailed than feedback given to earn a few bucks.
  • Incentives are almost never necessary in a transactional feedback program. A well-designed process will normally give a high enough response rate without the use of incentives.

But the biggest sin of survey incentives is that they're often used to hide deeper problems with the survey, problems which make the entire process much less effective. Designing an effective transactional feedback program involves some tradeoffs, but those tradeoffs help ensure that the survey design is carefully focused.

For example, transactional surveys need to be reasonably short to get a good response rate. That means some (possibly difficult) decisions need to be made about which questions to ask and which questions not to ask. But that process also forces the company to carefully consider what the purpose of the survey really is, and what's important to ask. The result is almost always a better survey, precisely because it doesn't include all the things that aren't as useful.

All that said, there are some situations where incentives may be appropriate, especially when you get out of transactional surveys and into the realm of market research. If you're asking the participant to spend a lot of time, participate in a focus group, or otherwise do something more than just a quick favor to the company, then you should be offering some compensation.

But for ordinary transactional surveys, incentives are usually a sign of a broken process.

Journey Surveys

Journey surveys provide a different approach to a customer feedback program, one which examines the overall customer experience rather than individual customer touchpoints.

Journey surveys may look a lot like the familiar transactional surveys, but there are some important differences. The journey survey happens after a customer reaches a point in a specific customer journey, and is focused on the entire journey rather than just the customer's most recent interaction with the company.

For example, let's look at the customer journey of opening a new bank account. To open a bank account, a new customer may have to make several contacts with the bank. The customer may research the bank online, visit a branch to fill out the paperwork, call to verify that funds have transferred correctly, and so forth. These different touchpoints can happen through different channels over an extended period of time.

A traditional transactional survey process would gather feedback from each individual channel independently, without any attention to the customer's larger journey. For example, there may be a web intercept survey on the website, a paper survey handed out in the branch, and a post-call survey after the customer calls on the phone.

But since customers use all these channels for a variety of purposes, data about the specific journey of opening a new account is scattered across multiple surveys with no unified view. From the customer's perspective, though, it's all part of the process of opening an account.

In contrast, a journey survey would happen after the customer has finished opening an account. The journey survey would ask about all the channels the customer used, and ask questions specifically about opening an account. For customers who are on different journeys, there would be different surveys: a bank could have surveys for getting a loan, fraud reports, paying off a mortgage, and so forth.

The result is a unified, customer-centric view that tells the whole story and not just one piece.

Whether to use journey surveys or transactional surveys depends on the goals, and both types of survey have their place. Transactional surveys are important when you need to make sure a particular customer contact went well. For example, coaching and training employees requires specific and detailed feedback about a specific customer interaction.

Journey surveys are better for understanding the overall customer experience. Journey surveys let you see where customers experience broken processes, and make decisions about how to allocate resources to improve.

It's important to keep both kinds of feedback program in your toolbox, and make sure you're using the right tool for your specific goals.

Product Reviews Should Include Customer Service

Last week, Brian Chen wrote in the New York Times' Personal Tech column that product reviews are broken.

The reason? Everybody focuses on the product, and ignores the service experience.

Which is absolutely spot-on.

The challenge for technology journalists is that, while it's relatively easy to use a product and evaluate how well it performs, it's much harder to review the company's service levels and response to customer problems.

But for a consumer, if the product develops an issue, then service is pretty much the only thing that matters.

For a journalist, though, relatively few products are obliging enough to break within the review period. Reporters work on deadline, after all. And even if something does break, many professional reviewers are well-known to the industry and get a special level of service most real customers could only dream of.

But maybe there are a few baby steps reviewers can take towards remedying this problem. For example, as part of a product review why not try evaluating some of these service-related factors:

  • How easy is it to find documentation and basic product information on the company's website?
  • How easy is it to call the company's technical support number and reach a human being?
  • Are there indirect support channels like online forums? If so, how easy are they to use and how good is the information found there?
  • What is the company's repair or return policy?

I have personal experience with products that received glowing media reviews while, at the same time, customers were complaining loudly online about poor quality and lousy support. Had some of those reviewers spent even a few minutes looking into reliability and customer support, the reviews would have come out very differently.

So it's long past time for product reviewers to start paying attention to customer service.

Your bad service gave me a heart attack

I'm very skeptical of this claim, but just can't resist.

A Virginia woman is claiming that the customer service she received from Verizon Wireless was so bad that it caused a heart attack and sent her to the hospital. She had called Verizon to clear up a billing mistake, but says the CSR was so rude to her that the stress of the incident caused a heart attack and led to $60,000 in medical bills. She's suing Verizon for $2.35 million for intentional infliction of emotional distress.

Whatever the legal and medical merits of this case (and I suspect they are few), it did make me stop and think. We all know that dealing with bad customer service can be very stressful, and too much stress is unhealthy. Stress can even, in rare instances, trigger hidden medical conditions.

So add to the list of reasons to provide good customer service: "Look out for the health and well-being of our customers."

Comcast's Ten Point Plan

In the wake of Comcast's acquisition of Time Warner being sunk by bad customer experience, Comcast has apparently come out with a ten-point plan for improving customer service.

The more cynical among us will recognize that when a politician says, "I have a ten-point plan," that's really code for "I will pretend to do something about this issue."

Should we be so skeptical of Comcast's own efforts? Or is it possible that the company has finally decided to get serious about improving its customer experience?

Comcast's Action Plan, as leaked to The Consumerist website, is:

  1. Never being satisfied with good enough
  2. Investing in training, tools, and technology
  3. Hiring more people ... Thousands of people
  4. Being on time, every time
  5. Get it right the first time
  6. Keeping bills simple and transparent
  7. Service on demand
  8. Rethinking policies and fees
  9. Reimagining the retail experience
  10. Keeping score

So this isn't a bad list. It's not a great list either. For example, I would have included, "Empower all employees to solve customers' problems," and, "Fix broken processes." But that's just quibbling.

The real question is: Will Comcast actually commit resources and executive support to improving customer experience on an ongoing basis?

Because it's easy to write a ten-point plan. It's also easy to spend money to hire people or buy new software. But actually changing the culture of a company takes hard work, leadership, and years of time.

Personally, I'm skeptical. As a Comcast customer I would love to see this company change its stripes. But as a Customer Experience professional, I've seen too many of these sorts of initiatives fail.

Usually what happens is that after the initial hoopla and flurry of memos, nothing actually changes. Or if the leadership is serious there may be some significant improvements for a time, but then the company declares "mission accomplished" and things go right back to the way they were.

Actual, sustained change at a company like Comcast takes sustained commitment. That's a lot harder than writing a few memos.

Newsletter #87

I just published the 87th issue of our newsletter, Quality Times. This issue talks about how bad customer experience helped sink Comcast's merger with Time Warner, and I provide some things to look at when troubleshooting an underperforming survey process.

As always I hope you find this interesting and informative.

Abusive Customer Experience

When does customer service cross the line from bad to abusive?

Here's one example which I think is well across that line: British satellite TV provider Sky (not to be confused with Skynet) has a policy that customers can only cancel by phone. It's not possible to cancel through the company's website, by email, or even by registered letter or court summons.

Not that they made that phone cancelation easy. The whole point of forcing customers through this process is to make it hard--and ensure that customers have to talk to a "retention specialist" who can try to talk them out of it. Customer complaints and horror stories about the difficulty in canceling Sky service are easy to find.

That's abusive enough on its own. But what really elevates Sky into its own special circle of consumer hell is that until May 2014 (when it came under regulatory and media fire), Sky's own contract on its website explicitly said that its customers could cancel "by phone or by writing to us," even though written cancelation requests were ignored.

When this problem (a problem some would describe as "breach of contract" and "generally horrible abusive behavior") was publicized, the company solved the problem simply by updating its terms to clarify that customers could only cancel by phone.

And yet this codification of customer abuse was deemed a "victory" by The Telegraph, one of the newspapers which publicized the problems. 

I think we should all be wary of winning too many "victories" like this one.

Bad CX Sinks $45B Deal

"How do you measure the ROI on Customer Experience?"

That's a common discussion topic any time customer experience professionals gather. Everyone knows that there's a payoff to having a better customer experience, but much of the benefit comes in soft forms like increased customer loyalty, brand reputation, word-of-mouth marketing, and similar categories.

Those are inherently hard things to measure, and many in the CX world come from an operational background where costs and benefits are just columns in a spreadsheet. So figuring out the ROI of customer experience can be uncomfortably squishy at times.

But every now and then there's an example where the cost of bad customer experience is so overwhelming it just can't be ignored. I wrote about one case a couple years ago, where Time Warner Cable committed $50M in marketing to try to erase the damage done by years of terrible customer service (spoiler alert: it didn't work).

Today we have an even more eye-popping example with the cancellation of the proposed $45 Billion merger between Comcast and Time Warner.

Clearly, the infamously bad customer service at Comcast and Time Warner was not the only factor leading to the deal being killed. But the poor reputations both companies have earned over the past several years played a big role.

Right at the time when Comcast needed approval from federal regulators, it found itself in an extremely hostile media environment. "Customer abused by big monopoly company" stories are like catnip to the media, and Comcast provided mountains of raw material. The company's own statements about their customer service only fed the fire, making executives sound ignorant or delusional or both.

What's more, all those unhappy Comcast customers made it easy to mobilize political opposition. It's easy to get an upset customer to write a letter to the FCC, the FTC, or their senator. All of this created the impression that the only people standing with Comcast were either paid by the company or afraid of it.

There was no way regulators were going to rubber-stamp this deal. There was too much grass-roots opposition. In the face of what would probably be a lengthy investigation and onerous conditions on approval, Comcast decided to call the whole thing off.

Would a company less loathed than Comcast have been able to pull off this deal? Quite possibly. There have been lots of corporate mergers larger than Comcast/Time Warner, including some which raised similar antitrust concerns. Any deal this size can get dragged into politics, and success in politics means getting more people on your side than your opponent's side. Comcast simply didn't have enough friends.

I'm sure there's going to be plenty of analysis and Monday-morning quarterbacking. But in the end, this $45 billion deal died because the company couldn't rally enough support, and it couldn't rally enough support in large part because of its reputation for mistreating customers.

Bad customer experience killed the Comcast merger.

Shots Fired in the Metrics War

A recent academic paper, The Predictive Ability of Different Customer Feedback Metrics for Retention, is likely to stir things up in the debate about which is the right metric to use for customer feedback.

The paper concludes that old-fashioned customer satisfaction and Net Promoter are statistically almost identical in their ability to predict customer retention, and Customer Effort performs somewhat worse.

Already, I've seen one NPS promoter claim that this "vindicates" NPS, which is not true. If anything, this research vindicates Customer Satisfaction, which NPS proponents often claim is less predictive than NPS.

But that aside, there are some important limitations to this research:

  • The study was conducted using only Dutch participants. Given the fact that survey questions in different languages are literally different questions, the research isn't applicable to English surveys.
  • The overall sample size was respectable (over 8,000 ratings), but the follow-up survey to determine retention got about a 15% response rate. That means that a little over 1,000 responses were available for determining the statistical relationships between the metrics and retention. That's sufficient (but not great), except that only about 20% of the respondents answered the Customer Effort question. So conclusions about the predictive value of Customer Effort are based on fewer than 300 responses total, a minuscule sample for this kind of study (the arithmetic is sketched after this list).
  • The study authors also tried to see if there were differences between how the metrics performed in different industries, so they segmented the results into 18 (!) different industries. At the industry level, the sample is incredibly thin and the differences between the metrics are generally slight, and I don't see how the authors can justify trying to draw conclusions at this level.
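
For anyone who wants to trace the sample-size arithmetic in the second bullet, here's a minimal sketch using the approximate figures cited above. The exact numbers come from the paper itself, so treat these as rough estimates rather than the authors' own calculations:

```python
# Back-of-the-envelope arithmetic behind the sample-size critique.
# These are the approximate figures cited in the post, not exact
# values taken from the paper.
initial_ratings = 8000           # ratings collected in the study
retention_response_rate = 0.15   # response rate on the follow-up survey
effort_answer_rate = 0.20        # share of respondents answering Customer Effort

usable_for_retention = initial_ratings * retention_response_rate
usable_for_effort = usable_for_retention * effort_answer_rate

print(f"Responses usable for retention analysis: ~{usable_for_retention:.0f}")    # ~1200
print(f"Responses behind the Customer Effort findings: ~{usable_for_effort:.0f}") # ~240
```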

Those critiques aside, this is an interesting paper and the authors did more right than wrong. I fear, though, that people who want to promote a particular survey metric are going to misuse and misunderstand this research.

My own view is that way too much time and effort is spent arguing about what's the "right" metric, when it's far more important to have a robust process. If you get the process right, even a mediocre metric will give better results than a great metric with a terrible process.

So by all means read the research and understand the value of different survey metrics. But when you go to build your own program, spend your time making sure you follow the principles of Agile Customer Feedback rather than trying to find the perfect survey question.

Too Small for Surveys?

When is the right time in a company's growth to put in place a customer feedback program?

Not every company needs or would benefit from surveys. Very small businesses may be very intimate with their customers and wouldn't learn anything new from a survey process. But as organizations grow in size and complexity, the need for a survey program becomes greater.

The role of a survey program is to provide the organization with visibility into how it is performing from the customer's perspective. So the right time to think about a survey program is when the company no longer gets that visibility in the ordinary course of business.

Here are two questions to ask when deciding if it's time to start surveying your customers:

  1. Is there any one person in the company who personally interacts with a significant fraction of the customers? In a smaller company, there are usually people who directly touch a lot of the customers. For example, a B2B consultancy where the CEO meets with all customers, or a pharmacy where the pharmacist personally talks to a large percentage of the people who walk through the door. This direct personal contact gives a lot of customer insight and intimacy, and probably means that you won't learn anything new from a survey. But if your business is big enough that no one person has the time or ability to touch more than a few percent of the customer base, then the only way you're going to get the big picture of how your customers feel about you is through a survey. This is a good application for a relationship survey, where you can take the temperature of your customers from time to time and make sure everything is on the right track.
  2. Is there any one person in the company who personally oversees all customer-facing employees? Just as a larger customer base makes it harder to see the big picture of your customer relationships, having a larger number of customer-facing employees makes it harder to see the big picture of how your employees are relating to your customers. If you're big enough to require at least a couple of supervisors, you should also have a transactional survey in place to collect data on specific customer interactions. This will provide more visibility and insight into how well the employees are dealing with customers, give better opportunities for coaching, and minimize the chances that a poor employee will slip through the cracks.

Smaller companies with fewer customers and simpler operations often don't need to perform customer surveys.

The time to think about a survey process is when you get big enough that you lose that customer and operational intimacy. When there's nobody who personally knows your customers or is personally responsible for supervising all the employees who deal with your customers, a formal feedback process gives you back some of that visibility you had as a smaller organization.

B2B Customer Feedback

Customer surveys are just as important a tool in business-to-business relationships as in business-to-consumer relationships, and we see a lot of interest from B2B companies in launching or improving their feedback programs. Most of the basic principles of survey design apply just as well in the B2B world as in the B2C world, but there are some important considerations to keep in mind.

Business-to-Business relationships are usually more complicated than consumer relationships, and have much higher lifetime value. There are often multiple decision-makers and decision-influencers, making it hard to get a definitive read on the overall strength of the relationship at any given time. However, we've found that it's often not hard to get customers in a business relationship to provide some feedback, since the relationship is often very important to the customer, too.

Here are some things to keep in mind when setting up a survey program for business customers:

  1. Consider the entire customer journey. Because B2B relationships usually have many different people involved in different aspects of the relationship, you want to try to capture feedback throughout the customer journey. Experiences like customer service calls and closing trouble tickets are obvious times to offer a survey, but you should also be asking for feedback after new orders, deliveries, invoices, training sessions, and any other point where the customer interacts directly with you.
  2. Respect the customer. "Respect the customer" is the first principle of Agile Customer Feedback, and it's even more important for B2B relationships because of the number of people involved and the value of the relationship. In practice, this means:
    1. Have strong exclusion rules in place. The same person should not get asked to take a survey over and over. I generally recommend that if a customer is asked to provide feedback, the same person won't get asked again for at least 30 days for any survey, even if it's about a different experience (a minimal code sketch of this rule appears after the list). And by all means, if a customer asks not to be surveyed, respect that.
    2. Be on the ball with closing the loop. If a customer had a bad experience or needs attention, get to it right away. Communicate back to your customers the importance of their feedback and anything you're doing differently because of it.
    3. Respect the customer's time. Keep transactional surveys short and relevant, and schedule time for longer relationship surveys. Don't call out of the blue and ask for more than five minutes.
    4. Make it personal. Having a real person call communicates that you take the relationship seriously; sending an email communicates that you don't want to spend money listening to your customers.
  3. Have a customer-centric view. Make sure you have the ability to pull together different surveys completed by different people at the same customer. Each person is going to have a different perspective on the relationship, and you want to be able to place all those pieces of feedback into context with each other. The goal is to see both the forest and the trees.
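
As a concrete illustration of the 30-day exclusion rule in point 2.1 above, here's a minimal sketch. The contact record and its opted_out and last_surveyed fields are hypothetical and not tied to any particular survey platform:

```python
from datetime import datetime, timedelta

# Minimal sketch of a 30-day exclusion rule. The contact record fields
# (opted_out, last_surveyed) are illustrative assumptions.
EXCLUSION_WINDOW = timedelta(days=30)

def eligible_for_survey(contact, now=None):
    """Return True if this person may be invited to take any survey."""
    now = now or datetime.now()
    if contact.get("opted_out"):
        # The customer asked not to be surveyed: always respect that.
        return False
    last = contact.get("last_surveyed")
    if last is not None and now - last < EXCLUSION_WINDOW:
        # Invited within the last 30 days (for any survey), so skip.
        return False
    return True

# Example: a customer surveyed on April 1 is skipped on April 20
# but becomes eligible again once the 30-day window has passed.
contact = {"opted_out": False, "last_surveyed": datetime(2015, 4, 1)}
print(eligible_for_survey(contact, now=datetime(2015, 4, 20)))  # False
print(eligible_for_survey(contact, now=datetime(2015, 6, 1)))   # True
```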

Building an effective feedback program in a business-to-business relationship isn't any harder than in a consumer relationship. Pay attention to the basics, respect your customers, and take into account the complexity of B2B, and your program will be off to a strong start.

Can you spot the survey mistakes?

Here's an amusing/horrifying story about a customer survey process gone horribly wrong:

Me: “Sir. Why are you giving us bad grades on the survey? You said everything was good.”

Customer: “Oh. Everything was good. I just didn’t like the movie. It was confusing.”

Me: “Sir, the surveys are not for the film itself. They’re for the theater and our staff.”

Customer: “Oh, but I want the studios to know I didn’t like the movie.”

Me: “That’s not how these surveys work. We don’t make the films; we just show them. The surveys are for customers to give feedback on how we performed. It’s a common mistake people make, but I’m going to strongly encourage you not to submit that survey.”

Customer: “Why not?”

Read the full story. Can you spot all the things this company is doing wrong in its survey process? Here's a partial list of mistakes I saw:

  1. The customer has to ask the staff for a survey form.
  2. The survey is designed in a way that it doesn't deal with the (apparently common) problem of customers reviewing the movie, not the theater.
  3. At least some customers think the survey goes to the studio, not the theater chain.
  4. Customers can fill out the form with staff watching, and the staff can apparently try to talk the customer out of the survey.
  5. Despite the flaws in the process, the survey is apparently used to fire and promote people.
  6. Even a single bad survey is enough to cause serious problems for the theater staff.

For extra credit: how would you design a feedback process for a movie theater which actually works for its intended purpose?

New Case Study Posted

We've just posted a new case study on one of our clients, a B2B medical technology company where we are conducting customer interviews after a technical service call. You can download it, and please contact us if you have any questions.

What's Effective?

I use the word "effective" a lot in the context of building a customer feedback program.

As in, "to build an effective survey you should..." or "effective customer feedback programs usually have..." or "that's not an effective survey technique."

"Effective" is something we all want our surveys to be, but how do you know if your survey is effective or not?

"Effective" just means that something has the desired outcome or effect. So an effective survey is simply one which achieves its goals.

It seems like stating the obvious, but if you don't have a good handle on why you're conducting a customer survey, it's unlikely you're going to get much out of the process.

So the first step in trying to understand whether your survey is effective is clearly stating the goals of the process. Some common goals are:

  • To track opinions about the customer service level month-to-month (a common, if not very ambitious, goal)
  • To coach and train employees based on customer feedback
  • To identify customer pain points and broken business processes
  • To validate changes or improvements to the customer experience

Once you know what the goals of the program are, it's usually pretty easy to determine whether the survey is effective or not. What to do about an ineffective survey is a different issue, though often if the goals are well understood, it's also pretty clear why a survey isn't meeting those goals (for example: not enough data, not enough detail, data that isn't timely enough, or surveys that can't be connected to specific experiences).

But more often than not, the root cause of an ineffective survey is simply that it's not clear what the survey was supposed to accomplish in the first place.

So if you're trying to build an effective survey program, the first step is to make sure everyone understands what the goals are. Without that, you don't even know what "effective" is.

Latest issue of Vocalabs' newsletter has been published

We just published the latest issue of our newsletter, Quality Times.

In this issue we announce the availability of 2014 Executive Summary reports for the National Customer Service Survey in Communications Services and Banking. This is our unique syndicated research product where we interview customers immediately after a customer service call to one of the companies we follow, allowing us to collect detailed and in-depth research about specific customer experiences.

As always, I hope you find our newsletter interesting and informative. Email subscribers should be receiving their copies shortly.

Whatever happened to that study?

This particular story comes to us from the Department of Homeland Security, probably one of the most dysfunctional federal agencies (and that's truly saying something). But it will probably be familiar, in lesser form, to many people in large organizations struggling to build an effective feedback program.

You see, DHS has a problem. Its particular problem is having the lowest morale of any federal agency. So they commissioned an employee survey, which pointed to several changes management could make to improve things.

But nothing happened after that study was completed. So they paid for another survey, which pretty much said the same thing.

Still nothing happened. Nothing happened after the third study, either. Or the fourth.

Now, though, a new factor has emerged to weigh on the depressed morale of DHS workers: too many internal surveys.

The problem is that surveys are just a tool, and like many tools, they can be used for many different purposes. The same hammer which can be used to build a house can also be used to smash the windows. It all comes down to the intent of the wielder.

Surveys can be used very effectively to gain insights, identify root causes of problems, and support a program of continuous improvement. Surveys can also be used to delay and hinder change, and create the appearance of action where none exists. It all comes down to the intent of the wielder.

For a dysfunctional bureaucracy like DHS, which apparently does not have the organizational will to face its problems and make real changes, the employee survey is a very effective tool for resisting change. "We need to study the problem" is followed by "we need to finish the study before we do anything," then "we need to do another study," and finally, "whatever happened to that study?"

The lesson is that a survey, by itself, can't change anything. The organization and its leadership have to be committed to improvement before the tool can be used as it should be used.

Download the latest NCSS reports


We just published the 2014 Executive Summary reports for the National Customer Service Survey (NCSS). This is our ongoing syndicated research project where we track the quality of customer service at selected large companies, by conducting phone interviews immediately after a customer service call.

Our Communications Services report covers AT&T, CenturyLink, Comcast, DirecTV, Dish Network, Sprint, Time Warner Cable, T-Mobile, and Verizon. This report is based on over 12,000 customer interviews from 2009 through 2014, and we include nine key metrics in our Executive Summary.

The Banking report includes Bank of America, Chase, Citi, and Wells Fargo. For these four companies we have data based on over 4,000 interviews between 2011 and 2014. The Executive Summary includes the same nine key metrics, and historical trends for all nine metrics.

Both reports are available from our website; in addition to the Executive Summary reports, subscribers receive real-time access to survey data as it comes in throughout the year, full responses to our 30-question interviews, and audio recordings of our customer interviews.

>> Download Communications Services Executive Summary

>> Download Banking Executive Summary
