The Customer Service Survey

Vocalabs' Blog


What does it mean to be customer-centric?

Business writers like to talk about the benefits of being customer-centric. But what does it mean, and how do you know whether a company is customer-centric or not?

Like any other aspect of organizational culture, being customer-centric can be hard to define. There's no simple test or checklist that says you're customer-centric.

Being customer-centric is about considering the impact on customers in every decision the company makes. A customer-centric organization will:

  • Prioritize efforts that remove pain points for its customers.
  • Consider the impact on customers in decision-making throughout the organization, not just in the traditional areas of customer service and sales.
  • Train employees in all departments that the decisions they make can affect customers, including back-office functions.
  • Have leadership that takes an active interest in customer issues, both in aggregate and also individually.

These are all organizational outcomes: the things that come naturally to a customer-centric organization as part of its culture.

Getting there is another matter. That's where the five competencies Jeanne Bliss talks about in Chief Customer Officer 2.0 come into play.

Amateurs Talk Strategy, Professionals Talk Execution

Amateurs Talk Strategy, Professionals Talk Logistics

That's an old military quote that sometimes gets pulled out at business leadership conferences. Strategy is the easy part. The hard part, the stuff the pros worry about, is the nuts and bolts of getting everything lined up and in the right place at the right time so the strategy can work.

It's an important message for customer feedback programs, too.

Developing a survey strategy is easy, and a lot of people have a lot of opinions on how to do it (some better than others).

But actually building an effective feedback program requires a lot of attention to detail. You need to:

  • Determine who to ask to participate in the survey
  • Decide what questions to ask
  • Determine the right time and channel to invite the customer to take the survey
  • Offer the survey to the customer in a way that makes the customer want to help
  • Route the survey responses to service recovery teams in real time when appropriate
  • Coach front-line employees based on their individual survey responses
  • Deliver data to business users throughout the organization in a way that's timely and tailored to their individual needs
  • Monitor the survey process for signs of manipulation or gaps in the process
  • Adjust all aspects of the process on an ongoing basis as business needs change
  • Focus the entire organization on using customer feedback as an important tool to support both operational and strategic decision making

(As an aside: one thing not on this list is "Track your metrics and set goals," because tracking metrics is both easy and low-value. Everyone does it, but many organizations stop at that point in the mistaken belief that improved customer experience will magically follow.)

So just as military pros understand that wars are won and lost in the unglamorous details of moving people and supplies to the right place at the right time, survey pros understand that the effectiveness of a feedback program is built on the nitty-gritty of collecting and delivering the right data to the right people at the right time to help them do a better job.

What the amateurs don't recognize is that you can't just move an army on a whim, or improve customer experience by throwing some survey metrics at it.

So we circulated a Word doc...

So someone emailed around a Word doc with the survey design, and someone else edited it, then forwarded it to another person who copy-pasted it into the survey software, and the first person said it was good to launch so a fourth person uploaded the customer list and sent the invitations, and....wait, wasn't the Word doc already signed off? Why do we need to proofread it again?

Via The Daily WTF

No Customer Problem is Unimportant or Unfixable

At a couple of my clients, I've noticed an uncommon attitude toward the customer experience.

Where most companies often push back on trying to solve customer problems, these unusual companies take the opposite approach. They assume that No customer problem is unimportant or unfixable.

Compare that to the litany of reasons most companies give for not fixing their customer experience problems:

  • "Only a few customers are complaining about that."
  • "It would be very expensive to provide that level of service."
  • "That would require major investment of IT resources."
  • "That customer is just trying to get something for free."
  • "If we did that our customers would scam us."
  • "The way we're doing it now is better."
  • "You can't please every customer all the time."

What makes these excuses so insidious is that they are, very occasionally, true. Some problems really do arise from freak circumstances (but usually if one customer complains, there are many others who have the same problem and aren't complaining). Sometimes systems are so big and outdated that it would be uneconomical to fix them (but at some point they will have to be replaced, and next time around you shouldn't let your systems get so far behind the curve). Some customers really are trying to scam you (but the overwhelming majority of customers are honest). And it is true that some customers will never be satisfied no matter what you do, but those customers are very rare.

Often one (or more) of those reasons is trotted out as a way to avoid taking a serious look at fixing some issue with the customer experience:

"What are we going to do about the complaints about how we verify customers' identities over the phone?"

"Only a few customers are complaining about that. Plus, if we changed the authentication then people would scam us."

"Oh, then I guess we shouldn't change that."

But if you take the attitude that No customer problem is unimportant or unfixable, then the conversation becomes completely different:

"What are we going to do about the complaints about how we verify customers' identities over the phone?"

"Only a few customers are complaining about that. Plus, if we changed the authentication then people would scam us."

"You might be right. But No customer problem is unimportant or unfixable, and this is definitely important enough to some of our customers that they took the time to complain. So we should at least explore some options and see if there's a better way to do things."

This attitude, that No customer problem is unimportant or unfixable, can dramatically shift a culture towards being customer-centric, especially when it comes straight from the top.

It's not an easy change, because it directly attacks the deep resistance to change in many organizations. But try making this your catch-phrase and see how it changes the discussion.

Doing a Thousand Things Right

Creating a good customer experience is often about doing a thousand little things right.

It's easy to lose sight of that fact when you're trying to think strategically about process improvement and engineering a better customer experience for your organization. Statistics can conceal the fact that behind every data point is a customer, and that customer received either a good experience or a bad one.

So while it's important to make sure the right processes are in place to enable a good customer experience, it's more important to make sure that the people who are part of those processes have the tools they need to make those thousand little decisions in the right way.

Every employee of every company is pulled in different directions by competing priorities. You have to balance things like working faster vs. more carefully; satisfying an upset customer vs. saving money; or solving a problem yourself vs. calling for help.

Even if a company says it cares about customer experience, what really matters is how employees are making those decisions on a day-to-day basis. To make the right decisions, a company needs to ensure:

  • Employees understand what customers expect and how to deliver it (you need good training)
  • Employees get regular, specific, and detailed feedback about how customers perceive the experience (you need a well-designed closed-loop survey)
  • Employees aren't pressured to make bad decisions (compensation needs to align with customer experience, or at least not pull the wrong way)
  • Employees know the leadership cares (customer experience needs to be an ongoing effort, not a one-time project)

This holds true for employees throughout the organization, not just the ones who deal directly with customers. A website designer or billing specialist can be subject to the same negative forces (work faster, save money, ignore the complaints) as a contact center rep or salesperson. If anything, back-office employees may be more susceptible to taking customer experience shortcuts since they don't have to deal with customers directly.

The good news is that most people genuinely want to do a good job, and if given the right tools and training and if shown that the company cares, they will be highly motivated to make the right decisions about customer experience.

If the leadership can just do a few big things right, it's not that hard for everyone else to do a thousand little things right.

Who has time to proofread?

Today's gem of a survey mistake comes to us via The Daily WTF.

I wish I knew where this was from, but it's maybe just as well that it remains anonymous.

A/B Testing for Customer Experience

A/B testing is one of the most powerful tools for determining which of two (or more) ways to design a customer experience is better. It can be used for almost any customer experience, and provides definitive data on which design is better based on almost any set of criteria.

Stripping off the jargon, an A/B test is really just a controlled experiment like what we all learned about in 8th grade science class. "A" and "B" are two different versions of whatever you're trying to evaluate: it might be two different website designs, two different speech applications, or two different training programs for contact center agents. The test happens when you randomly assign customers to either "A" or "B" for some period of time and collect data about their performance.

Conducting a proper A/B test isn't difficult but it does require some attention to detail. A good test must have:

  • Proper Controls: You want the "A" and "B" test cases to be as similar as possible except for the thing you are actually testing, and you want to make sure customers are being assigned as randomly as possible to one case or the other.
  • Good Measurements: You should have a good way to measure whatever you're using for the decision criteria. For example, if the goal of the test is to see which option yields the highest customer satisfaction, make sure you're actually measuring customer satisfaction properly (through a survey, as opposed to trying to infer satisfaction levels from some other metric).
  • Enough Data: As with any statistical sampling technique, the accuracy goes up as you get data from more customers. I recommend at least 400 customers in each test case (400 customers experience version A and 400 experience version B, for 800 total if you are testing two options). Smaller samples can be used, but the test will be less accurate and that needs to be taken into consideration when analyzing the results.
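The sample-size guidance above can be sanity-checked with a little arithmetic. As a rough sketch (assuming a 95% confidence level and a simple pass/fail satisfaction measure, both of which are my illustrative assumptions rather than anything prescribed here), this shows how 400 customers per test cell translates into a margin of error, and one standard way to compare the two cells:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a proportion measured on n respondents.

    p=0.5 is the conservative (largest-error) assumption; z=1.96 is the
    critical value for a 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

def two_proportion_z(satisfied_a, n_a, satisfied_b, n_b):
    """z statistic for comparing satisfaction rates between cells A and B.

    A value beyond roughly +/-1.96 suggests a real difference at the
    95% confidence level (two-sided, pooled standard error).
    """
    p_a = satisfied_a / n_a
    p_b = satisfied_b / n_b
    p_pool = (satisfied_a + satisfied_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# With 400 customers per cell, the worst-case margin of error:
print(round(margin_of_error(400), 3))  # about +/-4.9 percentage points

# Hypothetical results: 320/400 satisfied in A vs. 290/400 in B
print(round(two_proportion_z(320, 400, 290, 400), 2))
```

With n = 400 the worst-case margin of error works out to about ±4.9 percentage points, which is why smaller samples make it harder to tell a real difference between A and B apart from noise.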

In the real world it's not always possible to do everything exactly right. Technical limitations, project timetables, and limited budgets can all force compromises in the study design. Sometimes these compromises are OK and don't significantly affect the outcome, but sometimes they can cause problems.

For example, if you're testing two different website designs and your content management system doesn't make it easy to randomly assign visitors to one version or the other, you may be forced to do something like switch to one design for a week, then switch to the other for a week. This is probably going to work, but if one of the test weeks also happens to be during a major promotion, then the two weeks aren't really comparable and the test data isn't very helpful.

But as long as you pay attention to the details, A/B testing will give you the best possible data to decide which customer experience ideas are worth adopting and which should be discarded. This is a tool which belongs in every CX professional's kit.

Vocalabs Newsletter #90

I just published the 90th issue of Quality Times, Vocalabs' newsletter. This month I use the trend towards cord-cutting in cable TV to discuss what can happen in an industry where customer experience "doesn't matter" because of a monopoly or because customers have a hard time switching. I also offer my first-ever book review, of Jeanne Bliss' Chief Customer Officer 2.0.

Email subscribers should receive the newsletter shortly. As always I hope you find it interesting and informative.

Book Review: Chief Customer Officer 2.0

Longtime readers will know that I don't normally write book reviews on this blog. In fact, this will be my first. But when I was asked if I would review Jeanne Bliss' new book, Chief Customer Officer 2.0, it was easy to say Yes. Jeanne is a bona fide guru of customer experience, and this update to her 2006 Chief Customer Officer is a book I was probably going to have to read anyway.

Chief Customer Officer 2.0 is a good introduction to the principles and practices of customer experience, aimed at the executive who needs to make it work (or the manager who needs to work with the executive to make it work). Jeanne lays out the five major competencies of a customer-centric organization: managing customers as assets, aligning around experience, building a customer listening path, being proactive in the experience, and one-company leadership/accountability/culture.

For each of these competencies, the book provides a description of what's involved, the benefits to be gained by achieving competence, a few anecdotes from various organizations' journeys, and a few ideas to get you going.

What Chief Customer Officer 2.0 won't provide you is all the nuts and bolts of how to execute each of these competencies. You will learn the importance of managing customers as assets and a general sense of what that means, but there's no accounting formula for establishing what that means to your company. You will understand the value of listening to customers as they travel along your customer journey, but this book doesn't instruct you on how to write a good survey.

Instead, Chief Customer Officer 2.0 sticks to the big picture. This is, in my view, the right approach. Each of the five competencies requires a lot of experience, and getting a lot of details right, in order to reach maturity. Trying to include all that in a single book would bog it down in minutiae and make the larger picture hard to grasp.

If I have one complaint about this book, it's that Jeanne presents the five competencies as roughly equal in importance. While all five are important, I would argue that the fifth competency, Leadership, Accountability, and Culture, is the most important of the group. In my observation, companies with strong leadership, accountability, and culture usually develop enough competence in the other four areas to drive a customer-centric organization (even if they never embark on a formal Customer Experience program). But if that leadership element is lacking, even a high level of maturity in other areas can go to waste because the organization doesn't care.

At the end of the day, if you or your organization want to become more customer-centric, this is a book you want to read. It won't give you all the answers, but it will give you the framework to start asking questions.

Vocalabs Newsletter #89

We published the 89th issue of Quality Times, our mostly-monthly newsletter. Email subscribers should already have their copies.

This month is a deep dive into survey methodology and process. In my first article I discuss a recent academic paper comparing the popular metrics NPS, CSAT, and Customer Effort and find that the research isn't all it's cracked up to be. Then I get into the subject of whether and when to pay an incentive to take a survey. My answer: "almost never."

As always, I hope you find this useful and informative.

Don't Treat Your Customers Like Mushrooms

"They treat me like a mushroom," as the old joke goes. "They keep me in the dark and feed me ****."

Nobody likes being treated like a mushroom, but some companies do that to their customers anyway. This week Spirit Airlines learned why this might not be a great idea.

Spirit, for those not familiar with the company, is the low-cost airline which has elevated poor customer experience to part of its brand image. In an industry where calibrated misery is an art form, this is saying something.

Over the past week, Spirit has been plagued with delays and cancellations. Customers didn't always buy the "bad weather" explanations offered by the gate agents. And let's face it, "weather" doesn't sound like a credible reason for a flight to be canceled when the plane is at the gate and both the departure and arrival airports are clear.

But Spirit makes it very difficult to talk to anyone at the company, and doesn't monitor social media (all part of that low-cost brand!). In the absence of any official word from the airline, rumors started to spread that there was some sort of pilot strike going on. Even when contacted by the media, at first Spirit didn't respond, leading some outlets to publish articles about the rumored labor action. "Pilot strike" is probably second only to "maintenance problems" on the list of things which will make passengers choose a different airline.

I have to think that this article (even when updated several hours later when Spirit finally gave an official explanation for the delays) did substantial damage to Spirit's bottom line. It creates the impression that the company doesn't care about its passengers even when stranding them far from home, and that the airline won't make an effort to keep customers informed when the schedule is becoming massively disrupted.

Surely it would have been cheaper just to communicate clearly in the first place.

Underpaid, Inexperienced Workforce

Bruce Temkin wrote a blog post recently about Walmart's plans to give a pay raise to many of its lowest-paid employees. Bruce's conclusion, after looking at this decision through the lens of customer experience, is that in the long run Walmart is likely to save money through lower staff turnover leading to better customer experience and more loyal customers.

I think Bruce is spot-on. The only part I disagree with is that this is surprising or counter-intuitive.

The simple fact is that employee turnover is expensive, inexperienced employees are more likely to make mistakes, and underpaid employees aren't likely to go out of their way for customers. People want to be valued in their jobs, and underpaying them to do menial work is a great way to communicate that they are not valued.

My guess is that by paying employees more, Walmart will save money through reduced turnover alone, never mind any customer experience benefits. Over the longer term, better morale among employees could also pay dividends through better customer experience as a bonus.

We don't have to guess, though. Compare Sam's Club, a warehouse store run by Walmart, to Costco, a direct competitor known for relatively generous pay and benefits and low employee turnover.

Which has the better customer experience, higher customer loyalty, stronger reputation, and better financial results?

Dear Xcel Energy: Here's Why Nobody Takes Your Survey

Browsing the Xcel Energy website recently I was accosted by one of those ubiquitous popup surveys. You know the kind: one which asks if you'll take a survey about your browsing experience.

These surveys typically have an abominable response rate, and it's not hard to see why. Should you agree to take the survey you'll be asked to answer screen:

after screen:

after screen:

after screen:

after screen:

of questions. Thirty-two questions in all, of which 25 are required.

Clearly someone didn't get the memo about survey length.

What if you happen to notice a problem with Xcel's website, maybe a broken link somewhere, and you want to be a good samaritan and point it out to them?

Good luck with that. Because if you just write a short note in the comment box about the broken link and hit "Submit," here's what you get:

So much for being helpful (this is one of the reasons why Mandatory Survey Questions are Evil). If you're not interested in rating the "balance of text and graphics" on the site, or providing no less than three different ratings of the search function, Xcel doesn't want to hear from you.

Not that it would have mattered anyway: notice the text at the very bottom of the survey, "Please note you will not receive a response from us based on your survey comments."

To the customer that means, "Nobody will read the comments you write."

Xcel might as well save the bother of putting up a survey and just have a page that says, "We don't actually want your feedback but thanks anyway." It seems like a lot less trouble.

Shockingly Good CX from T-Mobile

It's often easier to find things to complain about in Customer Experience than examples of things that went well. But yesterday I had a customer service experience with T-Mobile that went shockingly smoothly.

Here's the setup: I needed to make a small change to my T-Mobile plan. I logged in to my account online and tried to make the change, but got an error message that said I needed to call customer service for help.

So I called customer service. Here's what happened:

  1. The speech recognition system offered "Representative" as the very first option.
  2. I was not told to go to the website for faster service.
  3. I was not asked to play Twenty Questions before being routed to a person.
  4. I was asked to authenticate myself by entering the last four digits of my SSN. When I was connected to the representative, she already knew I was authenticated and didn't ask me to authenticate again.
  5. The CSR was pleasant, friendly, and relaxed. She did not try to rush me, and took a moment to chat.
  6. The representative immediately understood my problem, fixed it, and explained how long it would take for the change to show up in my online account. She even saved me a couple bucks by tweaking the effective date of my plan change (something that's not possible online).
  7. A few hours later, as promised, my plan change was visible when I logged in online.

None of this is rocket science, but it's amazing what a difference paying a little bit of attention to the customer can make.

Customer Obsession is Mutual

We all know someone who is obsessed with a certain company or brand.

Some companies seem to attract this. I'm thinking of Apple, Zappos, Nike, and the like.

Those same companies that generate customer obsession are themselves obsessed. It turns out that obsession is a mutual thing. Customers are drawn to brands which really care about the same things they do.

Other emotions can also be mutual: apathy, contempt, hatred.

So how does your company feel about your customers?


Customer Experience Doesn't Matter...Until it Does

Consumer Reports is out with a survey today about cable TV companies. They had a couple of not-surprising findings:

  • Consumers hate their cable companies, and
  • More and more consumers are finding alternatives to cable TV

Both of these are well-known trends, and it's hard not to conclude that they're deeply related. For decades cable TV companies have enjoyed monopoly status in most markets, and since revenue growth was mostly a matter of selling bigger packages to existing customers and buying up smaller companies, cable companies have largely ignored their customer experience.

The result is a horrible service experience (which is not only bad for customers, but is also shockingly inefficient and expensive) and years of pent-up ill-will.

It's no wonder that when viable alternatives to traditional cable TV became available, customers started cutting the cord. This is especially true of younger consumers, who never got into the cable habit to begin with and don't see why they should spend hundreds of dollars a month for TV. For them, Netflix is much better and much cheaper.

Customer experience doesn't always drive revenue growth, as there are many factors which go into buying decisions. But customer experience does drive goodwill and loyalty.

In those industries where customers' choices are limited by things like the lack of viable alternatives and difficulty switching vendors (such as cable TV, airlines, banks, and mobile phone companies) it can be very tempting to under-invest in customer experience.

But that's a dangerous approach. If your customers don't like you, they'll head for the exits when an alternative becomes available. And it doesn't matter how deeply entrenched your business is: sooner or later there will be an alternative.

It's extremely difficult to turn around a bad reputation. So even if you think customer experience doesn't matter to your business, someday it will. When that day comes, will your customers be loyal? Or will they flee?

Newsletter #88 is published

We just published the 88th issue of Quality Times, our newsletter about customer experience and customer feedback. In this issue I discuss survey strategy with two articles, one about journey surveys, and another about doing transactional surveys in a B2B environment.

As always I hope you find this interesting and informative.

Survey Incentives

Incentive payments are a standard technique for increasing survey response rate. Whether a straight-up payment ("Get $5 off your next order for taking our survey") or entry into a drawing ("Take our survey for a chance to win $5,000!"), this pitch will be familiar to almost everyone.

The problem is that in many cases, incentives are deployed as a lazy and expensive way to "fix" a broken process without addressing the underlying issues.

If a survey has a low response rate, there's usually some underlying cause. For example:

  • The survey takes too long.
  • The survey isn't being offered to customers in a way that's convenient.
  • The process relies on customers remembering to take the survey (especially surveys printed on cash-register tapes or at the end of a call).
  • The company doesn't communicate that it takes the feedback seriously.
  • The survey is broken (for example, a web survey which returns errors).
  • The survey invitation looks too much like spam or a scam.
  • The survey gets in the way of something else the customer wants to do (especially pop-up surveys on web pages).
  • The survey doesn't respect whatever genuine desire to give feedback the customer may have.

Rather than trying to identify the underlying issue and fix it, often it's easier to just throw money at customers to try to boost response. What's wrong with that? Here are a few things:

  • Incentives can be expensive. I know of companies which spend more on the survey incentives than on the survey itself.
  • Incentives motivate the customer in the wrong way. Feedback given out of a genuine desire to help is more likely to be sincere and detailed than feedback given to earn a few bucks.
  • Incentives are almost never necessary in a transactional feedback program. A well-designed process will normally give a high enough response rate without the use of incentives.

But the biggest sin of survey incentives is that they're often used to hide deeper problems with the survey, problems which make the entire process much less effective. Designing an effective transactional feedback program involves some tradeoffs, but those tradeoffs help ensure that the survey design is carefully focused.

For example, transactional surveys need to be reasonably short to get a good response rate. That means some (possibly difficult) decisions need to be made about which questions to ask and which questions not to ask. But that process also forces the company to carefully consider what the purpose of the survey really is, and what's important to ask. The result is almost always a better survey, precisely because it doesn't include all the things that aren't as useful.

All that said, there are some situations where incentives may be appropriate, especially when you get out of transactional surveys and into the realm of market research. If you're asking the participant to spend a lot of time, participate in a focus group, or otherwise do something more than just a quick favor to the company then you should be offering some compensation.

But for ordinary transactional surveys, incentives are usually a sign of a broken process.

Journey Surveys

Journey surveys provide a different approach to a customer feedback program, one which examines the overall customer experience rather than individual customer touchpoints.

Journey surveys may look a lot like the familiar transactional surveys, but there are some important differences. The journey survey happens after a customer reaches a point in a specific customer journey, and is focused on the entire journey rather than just the customer's most recent interaction with the company.

For example, let's look at the customer journey of opening a new bank account. To open a bank account, a new customer may have to make several contacts with the bank. The customer may research the bank online, visit a branch to fill out the paperwork, call to verify that funds have transferred correctly, and so forth. These different touchpoints can happen through different channels over an extended period of time.

A traditional transactional survey process would gather feedback from each individual channel independently, without any attention to the customer's larger journey. For example, there may be a web intercept survey on the website, a paper survey handed out in the branch, and a post-call survey after the customer calls on the phone.

But since customers use all these channels for a variety of purposes, data about the specific journey of opening a new account is scattered across multiple surveys with no unified view. From the customer's perspective, though, it's all part of the process of opening an account.

In contrast, a journey survey would happen after the customer has finished opening an account. The journey survey would ask about all the channels the customer used, and ask questions specifically about opening an account. For customers who are on different journeys, there would be different surveys: a bank could have surveys for getting a loan, fraud reports, paying off a mortgage, and so forth.

The result is a unified, customer-centric view that tells the whole story and not just one piece.

Whether to use journey surveys or transactional surveys depends on the goals, and both types of survey have their place. Transactional surveys are important when you need to make sure a particular customer contact went well. For example, coaching and training employees requires specific and detailed feedback about a specific customer interaction.

Journey surveys are better for understanding the overall customer experience. Journey surveys let you see where customers experience broken processes, and make decisions about how to allocate resources to improve.

It's important to keep both kinds of feedback program in your toolbox, and make sure you're using the right tool for your specific goals.

Product Reviews Should Include Customer Service

Last week, Brian Chen wrote in the New York Times' Personal Tech column that product reviews are broken.

The reason? Everybody focuses on the product, and ignores the service experience.

Which is absolutely spot-on.

The challenge for technology journalists is that, while it's relatively easy to use a product and evaluate how well it performs, it's much harder to review the company's service levels and response to customer problems.

But for a consumer, if the product develops an issue, then service is pretty much the only thing that matters.

For a journalist, though, relatively few products are obliging enough to break within the review period. Reporters work on deadline, after all. And even if something does break, many professional reviewers are well-known to the industry and get a special level of service most real customers could only dream of.

But maybe there's a few baby steps reviewers can take towards remedying this problem. For example, as part of a product review why not try evaluating some of these service-related factors:

  • How easy is it to find documentation and basic product information on the company's website?
  • How easy is it to call the company's technical support number and reach a human being?
  • Are there indirect support channels like online forums? If so, how easy are they to use and how good is the information found there?
  • What is the company's repair or return policy?

I have personal experience with products that received glowing media reviews while, at the same time, customers were complaining loudly online about poor quality and lousy support. Had some of those reviewers spent even a few minutes looking into reliability and customer support, the reviews would have come out very differently.

So it's long past time for product reviewers to start paying attention to customer service.

Your bad service gave me a heart attack

I'm very skeptical of this claim, but just can't resist.

A Virginia woman is claiming that the customer service she received from Verizon Wireless was so bad that it caused a heart attack and sent her to the hospital. She had called Verizon to clear up a billing mistake, but says the CSR was so rude to her that the stress of the incident caused a heart attack and led to $60,000 in medical bills. She's suing Verizon for $2.35 million for intentional infliction of emotional distress.

Whatever the legal and medical merits of this case (and I suspect they are few), it did make me stop and think. We all know that dealing with bad customer service can be very stressful, and too much stress is unhealthy. Stress can even, in rare instances, trigger hidden medical conditions.

So add to the list of reasons to provide good customer service: "Look out for the health and well-being of our customers."

Comcast's Ten Point Plan

In the wake of Comcast's acquisition of Time Warner being sunk by bad customer experience, Comcast has apparently come out with a ten-point plan for improving customer service.

The more cynical among us will recognize that when a politician says, "I have a ten-point plan," that's really code for "I will pretend to do something about this issue."

Should we be so skeptical of Comcast's own efforts? Or is it possible that the company has finally decided to get serious about improving its customer experience?

Comcast's Action Plan, as leaked to The Consumerist website, is:

  1. Never being satisfied with good enough
  2. Investing in training, tools, and technology
  3. Hiring more people ... Thousands of people
  4. Being on time, every time
  5. Get it right the first time
  6. Keeping bills simple and transparent
  7. Service on demand
  8. Rethinking policies and fees
  9. Reimagining the retail experience
  10. Keeping score

So this isn't a bad list. It's not a great list either. For example, I would have included, "Empower all employees to solve customers' problems," and, "Fix broken processes." But that's just quibbling.

The real question is: Will Comcast actually commit resources and executive support to improving customer experience on an ongoing basis?

Because it's easy to write a ten-point plan. It's also easy to spend money to hire people or buy new software. But actually changing the culture of a company takes hard work, leadership, and years of time.

Personally, I'm skeptical. As a Comcast customer I would love to see this company change its stripes. But as a Customer Experience professional, I've seen too many of these sorts of initiatives fail.

Usually what happens is that after the initial hoopla and flurry of memos, nothing actually changes. Or if the leadership is serious there may be some significant improvements for a time, but then the company declares "mission accomplished" and things go right back to the way they were.

Actual, sustained change at a company like Comcast takes sustained commitment. That's a lot harder than writing a few memos.

Newsletter #87

I just published the 87th issue of our newsletter, Quality Times. This issue talks about how bad customer experience helped sink Comcast's merger with Time Warner, and I provide some things to look at when troubleshooting an underperforming survey process.

As always I hope you find this interesting and informative.

Abusive Customer Experience

When does customer service cross the line from bad to abusive?

Here's one example which I think is well across that line: British satellite TV provider Sky (not to be confused with Skynet) has a policy that customers can only cancel by phone. It's not possible to cancel through the company's website, by email, or even by registered letter or court summons.

Not that they made that phone cancelation easy. The whole point of forcing customers through this process is to make it hard--and ensure that customers have to talk to a "retention specialist" who can try to talk them out of it. Customer complaints and horror stories about the difficulty in canceling Sky service are easy to find.

That's abusive enough on its own. But what really elevates Sky into its own special circle of consumer hell is that for some period of time (until coming under regulatory and media fire in May 2014), Sky's own contract on its website explicitly said that its customers could cancel "by phone or by writing to us," even though written cancelation requests were ignored.

When this problem (a problem some would describe as "breach of contract" and "generally horrible abusive behavior") was publicized, the company solved the problem simply by updating its terms to clarify that customers could only cancel by phone.

And yet this codification of customer abuse was deemed a "victory" by The Telegraph, one of the newspapers that publicized the problem.

I think we should all be wary of winning too many "victories" like this one.

Bad CX Sinks $45B Deal

"How do you measure the ROI on Customer Experience?"

That's a common discussion topic any time customer experience professionals gather. Everyone knows that there's a payoff to having a better customer experience, but much of the benefit comes in soft forms like increased customer loyalty, brand reputation, word-of-mouth marketing, and similar categories.

Those are inherently hard things to measure, and many in the CX world come from an operational background where costs and benefits are just columns in a spreadsheet. So figuring out the ROI of customer experience can be uncomfortably squishy at times.

But every now and then there's an example where the cost of bad customer experience is so overwhelming it just can't be ignored. I wrote about one case a couple years ago, where Time Warner Cable committed $50M in marketing to try to erase the damage done by years of terrible customer service (spoiler alert: it didn't work).

Today we have an even more eye-popping example with the cancellation of the proposed $45 Billion merger between Comcast and Time Warner.

Clearly, the infamously bad customer service at Comcast and Time Warner was not the only factor leading to the deal being killed. But the poor reputations both companies have earned over the past several years played a big role.

Right at the time when Comcast needed approval from federal regulators, it found itself in an extremely hostile media environment. "Customer abused by big monopoly company" stories are like catnip to the media, and Comcast provided mountains of raw material. The company's own statements about their customer service only fed the fire, making executives sound ignorant or delusional or both.

What's more, all those unhappy Comcast customers made it easy to mobilize political opposition. It's easy to get an upset customer to write a letter to the FCC, FTC, or their senator. That grass-roots pressure created the impression that the only people standing with Comcast were either paid by the company or afraid of it.

There was no way regulators were going to rubber-stamp this deal. There was too much grass-roots opposition. In the face of what would probably be a lengthy investigation and onerous conditions on approval, Comcast decided to call the whole thing off.

Would a company less loathed than Comcast have been able to pull off this deal? Quite possibly. There have been lots of corporate mergers larger than Comcast/Time Warner, including some which raised similar antitrust concerns. Any deal this size can get dragged into politics, and success in politics means getting more people on your side than your opponent's side. Comcast simply didn't have enough friends.

I'm sure there's going to be plenty of analysis and Monday-morning quarterbacking. But in the end, this $45 billion deal died because the company couldn't rally enough support, and it couldn't rally enough support in large part because of its reputation for mistreating customers.

Bad customer experience killed the Comcast merger.
