The Customer Service Survey

"Smart" Products vs. Services

by vocalabs on Fri, 2016-08-19 14:43

It used to be that when you opened up your wallet, you knew whether you were buying a product or a service. New pair of shoes? Product. Haircut? Service. Trip to Disney World? Service. Mickey Mouse T-Shirt? Product.

Knowing what you're getting is important for setting your expectations as a customer. If you're buying a product you don't expect the seller to do much other than deliver it and fix it if it breaks, and you get to continue enjoying the product until it's either used up or worn out.

When you buy a service you expect the seller to do something for you. This could be one time or an ongoing basis, but you generally expect to keep paying as long as the seller is providing the service. Once you stop buying the service (or the seller has fulfilled their obligation) you aren't entitled to keep getting the service.

One of the joys of living in the 21st century is that we now have consumer products that are both products and services, thanks to the Internet and smart devices. That lightbulb you just bought from Home Depot might actually be a "smart" lightbulb which requires ongoing services provided by the manufacturer in order to deliver all the features listed on the box. And if the manufacturer decides to stop providing the services, the lightbulb stops working.

That may sound like a made-up example, but it literally happened this summer. "Connected by TCP" brand lightbulbs, sold by Home Depot among other places, needed to connect to an online service provided by the manufacturer in order to do all the cool things the box said they would do. TCP decided it was no longer worth providing the service, so they shut it down on June 1st, and all those expensive "smart" lightbulbs (as of today, still for sale for $137/pair) instantly became a whole lot dumber.

This is a customer experience nightmare. The company has done a terrible job of setting expectations about what exactly customers are buying. The bulb sits on the shelf at Home Depot just like any other lightbulb, looking and acting like a consumer product, but what customers are actually getting is a product plus an embedded service, and neither the product nor the service is fully functional without the other.

This is also not the first time a "smart" product has been crippled or completely disabled because the manufacturer decided to stop providing the hidden service that made it work. Nor will it be the last. As these incidents keep happening, I expect the negative customer experiences to taint the whole market for adding smarts to ordinary consumer products. Most consumers will not be willing to buy "smart" lightbulbs, thermostats, refrigerators, and other devices if they know the manufacturer might not keep supporting the product for its entire expected lifespan.

To get past this, manufacturers of "smart" products need to make sure that customers are guaranteed a good customer experience even if the company isn't around to provide it. That probably means a combination of setting customer expectations ("Comes with five years of service included!" makes it clear that the service is part of the product), and making better choices about how to deliver the embedded services.

Because the only thing more disappointing than buying a product that breaks is buying a product that breaks because the manufacturer broke it.

The Tool is No Better Than the Hand That Wields It

by vocalabs on Wed, 2016-08-10 14:46

Customer experience failures have many causes. Poor employee training and morale, rigid adherence to policy, broken processes, understaffing, bad design, a culture of indifference, and occasionally--very occasionally--lack of some critical piece of IT infrastructure.

But even when lack of technology isn't the problem, often the first solution a company will reach for is technology.

I think this is just human nature. Internal problems are hard for organizations to solve. Root causes can be buried deep under years of corporate politics and history that nobody wants to unearth. It's much easier to leave the skeletons in the ground and look for the technological quick-fix.

And so the company reaches for the latest state-of-the-art buzzwords and implements Big Data, EFM, Analytics, or (in an earlier era) CRM, ERP, WFM, or some other technology to solve what is fundamentally a problem with execution.

There's no doubt that these technologies bring value and have an important role in any company's infrastructure. But the technology can't solve a problem where the root cause is people and process. Technology is just a tool, and like any tool, it can be used well or poorly.

For example, if you are delivering poor customer experience because your employees are not empowered to solve customers' problems, implementing Enterprise Feedback Management will not solve that problem. At best, it might make it more obvious that there's an issue with corporate policies, but you're still going to have to drain that swamp to fix things.

On the other hand, EFM can be a valuable tool when your organization is ready and able to make better use of customer feedback from top to bottom.

The mistake is in thinking that the tool will, by itself, drive the needed organizational changes. Instead, implementing the technology is something companies often do because it's easier than addressing the real problems.

Technology vendors are happy to encourage this thinking: it's easier to sell a product that the customer thinks will solve all their problems. But the net result is disappointment when the big technology projects fail to deliver the hoped-for results.

We saw this a generation ago, when CRM was the hot new thing. Failed CRM implementations were so common that some pundits went so far as to predict that CRM itself would prove to be just a fad. Of course CRM wasn't a fad: eventually we figured out what CRM is useful for (keeping track of customers), and what CRM couldn't do all by itself (increase sales, make your customers happier and more loyal). Today nobody questions the value of CRM, but we also have much more realistic expectations and nobody begins a CRM project thinking it will fix deep-seated organizational problems.

In the Customer Experience world, we need to keep in mind that our problems are often not solvable by technology. Technology can help, but the root causes are usually leadership, culture, people, and processes.

The good news is that while the work may be difficult and slow, these problems are solvable with the right commitment.

And you might need to dig up a few skeletons along the way.

Dark Patterns in Customer Experience

by Peter Leppik on Fri, 2016-07-29 14:50

Harry Brignull has spent some time collecting and contemplating "Dark Patterns" in online commerce: the unethical, manipulative, and sometimes outright deceptive tricks companies use to steer customers into doing things they probably didn't want to do.

For example: the form with a "sign me up for email marketing" checkbox in tiny type at the bottom, which you need to find and uncheck if you don't actually want to sign up for spam. Brignull has examples that take this to an appalling (and hilarious) extreme.

The full 30-minute video of Brignull's talk shows the evolution of Dark Patterns and offers some insights into why companies persist in these customer-unfriendly (and sometimes illegal) tactics. Brignull also maintains a website with a catalog of Dark Patterns he's collected over the years, and it's well worth browsing.

Coming from a Customer Experience perspective, I think it's especially useful to think about how Dark Patterns come to be, and how they illustrate some of the challenges in trying to design and implement outstanding CX. Dark Patterns almost always involve either misleading the customer or making it hard for the customer to do something (like cancel a subscription). The end result is likely to be an upset customer (or former customer) and very poor CX.

By the time the customer gets upset--when she discovers she's been fooled or trapped--the company already has what it wants. It's too easy for the company to think that CX doesn't matter, because the short-term metrics (sales, newsletter subscriptions, etc.) don't adequately capture the damage that's being done to the customer relationship.

So while a tactic like pre-checked travel insurance may boost the percentage of customers who buy it over the short term, those customers aren't going to be happy when they discover the extra charge. They probably won't be fooled a second time, and may take their business elsewhere or warn their friends. Some may call and demand refunds.

In extreme cases, government regulators may get involved.

In most cases, CX professionals who are doing their jobs properly will be working against Dark Patterns. The challenge, as it so often is in the Customer Experience world, is to help the rest of the organization understand that while manipulation, deception, and intentional barriers may sometimes improve short-term metrics, they rarely pay off in the long run.

Newsletter #99: Errors About Margin of Error

by Peter Leppik on Wed, 2016-07-27 14:56

We just published the 99th issue of our newsletter, Quality Times. This month I discuss what the Margin of Error in a survey means (and more importantly, what it doesn't mean). You can also read about what happened when one company's survey left a very poor brand impression for some of its customers.

This newsletter is one of the ways we get to know prospective clients. So if you find this useful and informative, please help us out by forwarding this to other people who might also enjoy it, and encourage them to subscribe. You can subscribe to this newsletter on our website.

 

Falling Through the Cracks

by Peter Leppik on Thu, 2016-07-21 15:01

Most companies can handle ordinary customer service issues just fine. Often, the difference between terrible service and great service is what happens when things go a little wrong. Do customers get their problems resolved quickly and painlessly, or do they fall through the cracks and get ground up by the machine?

Last week I had the experience of being ground up by the Verizon machine.

I have a prepaid Verizon SIM for my iPad, and I was on vacation in a rustic camp on an island in Northern Minnesota. The iPad was my only reliable Internet service, and when I used up my data allotment I needed to refill my account.

At first I tried to use the "Manage my Verizon Account" feature on the iPad, but that gave me an error message and instructed me to call a toll-free customer service number. I called the number and explained my problem to the customer service rep, who told me that I needed to log on to my Verizon account through a web browser to refill it.

This presented a problem, since the iPad's data allotment was used up and there was no other reliable Internet connection available. I explained this to the Verizon rep, who gave me a verbal shrug and told me there was no other way.

So I sat at the end of the dock, where I could just barely get an Internet connection on my T-Mobile phone, to slowly and painfully navigate the Verizon website. After probably a half hour of this, I was told that I could not reactivate my account online, and I was given a different customer service number to call.

I called that number and explained my problem to the rep, who forwarded me to a different department and another rep, who forwarded me to another rep in another department, who forwarded me to yet another department. Of course I had to explain my problem all over again to each rep and read off the same IMEI and ICCID numbers each time.

For those playing along at home, the IMEI is 15 digits and the ICCID is 19 digits. You can imagine how much fun it was reading 34 digits to each of four different Verizon reps, all while sitting at the end of a dock with the wind blowing across the microphone of my phone.

The final rep, who was apparently in the same department as the very first person I spoke to, chastised me for calling back after I'd already been told I needed to go online (seriously!). When I explained to her that I had managed to go online and that hadn't worked either, she offered to transfer me into an IVR where I could refill my account through the phone.

After I punched the same 34 digits into the IVR, a prerecorded voice pleasantly informed me that I could not refill my account through the IVR and would need to call customer service.

It was at that point (after two hours, five reps, and three different self-service channels had yielded nothing) that I gave up on trying to give Verizon any of my money. There is a happy-ish ending, though, in that I managed to find a particular spot on the island where I could leave my T-Mobile phone, activate the hotspot, and get a solid Internet connection in the cabin. T-Mobile was a lot cheaper, too.

I'm not sure what crack I fell through at Verizon that day, and I don't think I want to find out. Whatever the situation, it was clear that none of the normal service channels were able or willing to help me.

Verizon likes to talk a lot about how they have the best coverage, and there's no question that Verizon's coverage on the island was much better than T-Mobile's. But in the end, Verizon's coverage didn't matter because T-Mobile made it so much easier to be their customer.

The lesson from that day is that it doesn't matter how great your core product or service is if your overall customer experience is bad enough.

Surveys Leave Brand Impressions

by Peter Leppik on Wed, 2016-07-06 16:30

Surveys don't just collect data from participants. Surveys also give the participants insights into what your priorities are, and this can impact your brand image.

Computer game company Ubisoft learned this the hard way recently, when they sent a survey to their customers. The first question asked the customer's gender, and customers who selected "Female" were immediately told that their feedback was not wanted for this survey.

While I'm sure this was not the intended message, it definitely came across to some Ubisoft customers as insensitive to women who enjoy playing games like Assassin's Creed (such people do exist). The company quickly took the survey down and claimed it was a mistake in the setup of the survey.

Whether this was a genuine mistake or an amazingly bad decision by a market researcher who got a little too enthusiastic about demographic screening, it definitely reinforces the image of the game industry as sexist and uninterested in the half of the market with two X chromosomes.

While this might be a particularly egregious example, it's important to remember that customer feedback really is a two-way street. While your customers are telling you how they feel about you, you are also telling your customers a lot about your attitudes towards them. For example:

  • Do you respect the customer's time by keeping the survey short and relevant?
  • Do you genuinely want to improve by following up and following through on feedback?
  • Do you care about things that are relevant to the customer?
  • Do you listen to the customer's individual story?

The lesson is that you should always think about a survey from the customer's perspective, since the survey is leaving a brand impression on your customers. While your mistakes might not be as embarrassing as Ubisoft's, you do want to make sure the impression you leave is a positive one.

Mistakes about Margin of Error

by Peter Leppik on Wed, 2016-06-29 16:01

Pop quiz time!

Suppose a company measures its customer satisfaction using a survey. In May, 80% of the customers in the survey said they were "Very Satisfied." In June 90% of the customers in the survey said they were "Very Satisfied." The margin of error for each month's survey is 5 percentage points. Which of the following statements is true:

  1. If the current trend continues, in August 110% of customers will be Very Satisfied.
  2. Something changed from May to June to improve customer satisfaction.
  3. More customers were Very Satisfied in June than in May.

Answer: We can't say with certainty that any of the statements is true.

The first statement can't be true, of course, since outside of sports metaphors you don't ever get more than 100% of anything. And the second statement seems like it might be true, but we don't have enough information to know whether the survey is being manipulated.

But what about the third statement?

Since the survey score changed by more than the margin of error, it would seem that the third statement should be true. But that's not what the margin of error is telling you.

As it's conventionally defined for survey research, the margin of error means that if you repeated the exact same survey a whole bunch of times but with a different random sample each time, there's an approximately 95% chance that the difference between the results of the original survey and the average of all the other surveys would be less than the margin of error.

That's a fairly wordy description, but what it boils down to is that the margin of error is an estimate of how wrong the survey might be solely because you used a random sample.
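For reference, the conventional calculation behind that description (for a simple percentage from a random sample) is z times the square root of p(1-p)/n, where z is about 1.96 at the usual 95% confidence level. Here's a minimal Python sketch; the sample size of 246 is an assumed number, chosen so the result lines up with the 5-point margin of error in the quiz:

    import math

    def margin_of_error(p, n, z=1.96):
        """Conventional margin of error for a surveyed proportion p,
        from a simple random sample of n responses (z=1.96 for 95%)."""
        return z * math.sqrt(p * (1 - p) / n)

    # Assumed inputs: an 80% "Very Satisfied" score from 246 responses
    # gives almost exactly a 5-point margin of error.
    print(margin_of_error(0.80, 246))  # ~0.050, i.e. 5 percentage points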

But you need to keep in mind two important things about the margin of error: First, it's only an estimate. There is a probability (about 5%) that the survey is wrong by more than the margin of error.

Second, the margin of error only looks at the error caused by random sampling. The survey can be wrong for other reasons, such as a bias in the sample, poorly designed questions, active survey manipulation, and many, many others.
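Both the definition and that "about 5%" figure are easy to check with a quick simulation. This sketch (same assumed numbers as above: a true satisfaction rate of 80% and 246 responses per survey) draws thousands of random samples and counts how often a survey's score misses the truth by more than the 5-point margin of error:

    import random

    random.seed(1)
    true_p, n, moe = 0.80, 246, 0.05  # assumed numbers from the example above
    trials, misses = 10_000, 0
    for _ in range(trials):
        # One simulated survey: n random customers, each "Very Satisfied"
        # with probability true_p.
        sample_p = sum(random.random() < true_p for _ in range(n)) / n
        if abs(sample_p - true_p) > moe:
            misses += 1
    print(misses / trials)  # roughly 0.05: about one survey in twenty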

Margin of Error Mistakes

I see two very common mistakes when people try to interpret the Margin of Error in a survey.

First, many people forget that the Margin of Error is only an estimate and doesn't represent some magical threshold beyond which the survey is accurate and precise. I've had clients ask me to calculate the Margin of Error to two decimal places, as though it really mattered whether it was 4.97 points or 5.02 points. I've actually stopped talking in terms of whether something is more or less than the margin of error, instead using phrases like "probably noise" if it's much less than the margin of error, "suggestive" for things that are close to the margin of error, and "probably real" for things that are bigger than the margin of error and I don't have any reason to disbelieve them. This intentionally vague terminology is actually a lot more faithful to what the data is saying than the usual binary statements about whether something is statistically significant or not.
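As a toy illustration of that terminology (the cutoffs below are my own made-up assumptions; the whole point of the vague phrasing is that no fixed threshold is rigorous):

    def describe_change(delta, moe):
        """Illustrative only: maps a score change to the hedged labels
        above using assumed, arbitrary cutoffs."""
        if abs(delta) < 0.5 * moe:
            return "probably noise"
        if abs(delta) <= moe:
            return "suggestive"
        return "probably real (absent other reasons for doubt)"

    print(describe_change(0.10, 0.05))  # the quiz's May-to-June jump: "probably real..."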

Second, many people forget that there are lots of things that can change survey scores other than what the survey was intended to measure, and the Margin of Error doesn't provide any insight into what else might be going on. Intentional survey manipulation is the one we always worry about (for good reason: it's common and sometimes hard to detect), but there are many other things that can push survey scores one way or another.

It's important to keep in mind what the Margin of Error does and does not tell you. Do not assume that just because you have a small margin of error the survey is automatically giving accurate results.

One Picture that Captures the Essence of the CX Challenge

by Peter Leppik on Wed, 2016-06-22 17:23

The challenge of trying to promote good Customer Experience practices can be summed up by one picture I ran across today. If you can't read the image, it's a sign from a clinic that reads, "You are not a number to us. Our goal is to ensure you have the best experience possible. Please take a number to help us serve you better."

So, um, yeah, about that...

CX is often in tension with other parts of an organization, which can make it challenging to go from saying the right things to doing the right things. No matter how much effort and research the Customer Experience team puts into creating a better experience, there's always someone else who thinks some other way will be better.

The result can be, like the sign, a jarringly obvious reminder that the organization doesn't really believe its own customer-friendly hype. The person who made that sign probably didn't see the obvious disconnect; chances are it seemed like a perfectly reasonable message for customers.

Sometimes it takes that customer's outside perspective to break through the company's internal blinders.

(By the way, I tried to find the original source of this picture and couldn't, but it has been posted in several other places. Here's a reverse image search if you want to see where it's been.)

Spurious Errors

by Peter Leppik on Wed, 2016-06-15 12:35

I consider it something of a professional responsibility to take surveys when they're offered. I don't do every single survey (I tried that once a few years ago, and it wound up consuming way too much of my time), but I try to do them when I can.

A distressing theme is the number of bugs and errors I see in customer surveys. I haven't tracked it formally, but I would guess that at least one in ten online surveys I attempt is broken in some way. It's alarming how many companies are apparently unaware that their customer surveys don't work.

Today's example comes from MicroCenter, where the survey form told me, "The errors below must be corrected before you can proceed." That would be all well and good if there had been any errors in the form, but there weren't.

So I guess MicroCenter won't benefit from my important perceptions and observations.
