The Customer Service Survey

Vocalabs' Blog

We Value Your Feedback!

Via the Daily WTF, it seems that Adobe has a unique way to show customers just how seriously the company takes its customer surveys. (The screenshot of the survey itself is in the Daily WTF post.)

I mean, c'mon guys, I'm sure your QA department is pretty busy testing all your software products, but would it kill you to proofread the survey once in a while?

Be Thankful for Negative Feedback

It's that time of year again: airports are filling up, frozen turkeys are on sale at the supermarket, and writers are searching for a seasonal hook for their articles.

So, right.

Most people don't like getting negative customer feedback. But a bad survey response from a customer is actually something to be thankful for (see what I did there?). Here are three reasons why:

  1. Negative Feedback Helps You Improve: Can you imagine a basketball coach who gives nothing but positive feedback to his players? I can't, and if such a person existed, I doubt he would be very effective. Of course, everyone would rather hear the good news than the bad news, but it's the negative feedback which gives us the ability to improve. So if you want to improve your customer experience, a customer who gives negative feedback is like a coach. Sometimes coaches are angry, rude, insulting, and hard to listen to. But without that coach to point out your mistakes, you're simply not going to get any better.
  2. Negative Feedback is More Honest: Not only would most people rather hear positive feedback, most people would also rather give positive feedback. It's uncomfortable to criticize other people (even through a faceless web page), and that's why so many customers give top scores on surveys. So when a customer makes the effort to criticize instead of just saying things were fine, you're probably getting the straight dope.
  3. Negative Feedback Keeps You Grounded: The Dunning-Kruger Effect is a big problem in the customer experience world: many, many companies think they provide an above-average experience when in truth they stink. A healthy dose of criticism from your own customers is one way to keep your self-perceptions grounded in reality.

So next time a survey comes back with bad scores and withering criticism, don't get upset or defensive.

Instead, take a deep breath and be thankful that you have customers willing to help you in this way.

And pass the cranberry sauce.

Money vs. Attention

Techdirt ran another article about Comcast's reputation for poor customer service today. In it, the author repeated a common conclusion which I think is probably wrong:
 
...Comcast has no meaningful competitive incentive to change, and therefore simply refuses to spend the money necessary to fix the problem.
 
He's right that Comcast has no competitive incentive to change, but wrong in assuming it's all about not spending money.
 
Here's the thing: it's possible to spend money on customer service efficiently, and it's possible to spend money on customer service inefficiently. If you have bad service but spend the money well, then spending more money is likely to improve the service.
 
But if you're spending the money poorly, then spending more won't get you better service, just more bad service.
 
The kinds of complaints you see about Comcast have all the hallmarks of money being spent very badly.
 
For example, take the complaints about very long hold times. While occasional long hold times can be caused by a surge in calls, the hold queue costs money. All those people sitting on hold are tying up phone ports and running up long distance charges. Sure it's fractions of a cent per person per minute, but it adds up. And if those people don't get through today, they're calling back tomorrow, and if they don't call back tomorrow they're probably taking their business elsewhere. Persistently long hold times are a symptom of a company digging itself ever deeper into a hole of unresolved customer problems.
 
And it's very expensive when a customer talks to a rep but doesn't get his problem fixed on the first call, because that customer is calling back and it's going to cost twice as much. Complaints about poor resolution rates and having to make many calls to solve a problem mean that the company is wasting money by not taking the time to fix problems properly the first time.
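 
To put rough numbers on that call-back math, here's a back-of-the-envelope sketch in Python. Every figure in it is invented purely for illustration; I have no actual Comcast (or industry) cost data.
 
```python
# Model of how poor first-call resolution compounds cost.
# All numbers here are invented for illustration only.

COST_PER_CALL = 6.00   # assumed fully-loaded cost of handling one call

def expected_calls_per_problem(fcr: float) -> float:
    """If each call independently resolves the problem with probability
    fcr, the expected number of calls per problem is 1/fcr."""
    return 1.0 / fcr

for fcr in (0.90, 0.75, 0.60):
    calls = expected_calls_per_problem(fcr)
    print(f"First-call resolution {fcr:.0%}: {calls:.2f} calls per problem, "
          f"${COST_PER_CALL * calls:.2f} to resolve each problem")
```

The point is simply that a falling resolution rate quietly raises the cost of every single problem, before you even count the customers who give up and leave.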
 
But even that's nothing compared to the money Comcast wastes when they have to send a truck to the customer's house multiple times for the same problem (or when they send a truck for something the customer could have fixed herself).
 
I have no inside information, but my gut tells me that Comcast is actually spending far more on customer service than they should have to. The problem is they are wasting most of the money by delivering very poor service.
 
My guess is that if Comcast got its act together, it could deliver much better service and save a ton of money. But that would require an upfront effort to (a) train agents better, (b) allow support reps to spend more time with each customer to solve the problem on the first call, (c) empower reps to solve customer problems, and (d) allow different parts of the company to coordinate better.
 
The irony is that this investment would primarily be in the form of management time and attention, not money. The savings would probably start rolling in pretty quickly.
 
So to me (as an outsider looking in), the real problem is not that Comcast doesn't want to spend the money to fix its service. The real problem is that the senior leadership doesn't want to pay enough attention to the problem to get it fixed.

Gamification vs. Incentives and Recognition

Call me a curmudgeon, but I have a hard time getting behind "gamification" in the workplace.

For those not familiar with gamification, it basically means using the principles of video game mechanics in a real-life situation, like a call center, to motivate people and change their behavior. Gamification, as a buzzword, has been around long enough to develop both hype and backlash.

My problem is that, at its core, gamification in the workplace is really nothing more (or less) than the systematic use of employee recognition, rewards, and achievement as a way to motivate employees. But people have been doing that for as long as there have been workplaces. So the "new" thing in gamification is just the idea of being deliberate and systematic about employee motivation.

But by calling it "gamification" instead of (for example) "employee recognition program," you're implying that a cleverly-crafted set of achievements can somehow transform a dreary workplace into something fun like Mario Kart. At the end of the day, though, a boring job will still be a boring job.

It can also be insulting if done wrong, implying that handing out meaningless "achievements" is just as good as giving employees raises or bonuses.

So while I'm all on board for the idea of rewarding and recognizing employees, and I think the game industry may have a few things to teach us about what motivates people, can we please stop pretending that work, for most people, is anything other than work?

So much for my life's dream

Sadly, it seems I will never be a Mad Scientist. At least not if I stay in the survey business.

Troubleshooting a Survey: What Can Go Wrong

A lot of things have to happen to build an effective customer feedback program.

The flip side of this is that if you have a customer feedback program which isn't effective, there are a lot of potential reasons. Using a systematic approach to troubleshooting the feedback process can help avoid wasting time implementing the wrong solutions.

So to help with an ineffective survey process, here's a short troubleshooting guide for common survey problems.

Problem: Low Survey Response

General Troubleshooting Questions:

  • Are you getting accurate contact information for customers?
  • Does the survey work (no errors, database problems, etc.)?
  • Is the survey a reasonable length (one page with no scrolling for online surveys, five minutes or less for phone interviews)?
  • Does the survey appear to come from a legitimate source?
  • Are you ensuring that customers don't get over-surveyed?
  • Can the customer take the survey immediately when asked, or does the customer need to remember to do it at a later time?
  • Does the survey require the customer to go through extra steps (copy a code from a receipt, call a phone number, etc.)?
  • Does the survey have mandatory questions?
  • Is the customer asked to take the survey a long time (days or even weeks) after the transaction?

Troubleshooting Questions for Email/Online Surveys:

  • Are survey invitations being marked as spam?
  • Does the invitation look professional and legitimate?
  • Does the invitation explain why you want the customer's feedback?
  • Does the invitation promise that the survey will be short (note: the survey must actually be short)?

Troubleshooting Questions for Phone Interviews:

  • Do the phone interviewers sound polite, empathetic, and professional on the phone?
  • Do the phone interviewers have noticeable foreign accents?
  • Is the Caller ID set to a real phone number which customers can call back to verify the survey is legitimate?
  • Does the interview script give an honest estimate of the survey time?
  • Do interviewers identify themselves and the sponsor of the survey?

Problem: No Follow-Through With Customers

Troubleshooting Questions:

  • Do you have a closed-loop process for customers who may want or need extra attention?
  • Is there tracking to ensure customers who need follow-up are actually contacted?
  • Are follow-up calls conducted by someone empowered to solve the customer's problem?
  • Do you capture and track the root causes of customers' issues?
  • Are follow-up calls conducted by someone other than the person who may have caused the customer's problem?

Problem: Survey Responses Are Not Relevant to the Business

Troubleshooting Questions:

  • Has the survey been updated recently?
  • Have you reviewed the performance of each survey question, and removed questions which are not yielding useful information?
  • Have you experimented with new survey questions relevant to current business issues?
  • Are you asking follow-up questions when customers have negative feedback?
  • Do you ask business stakeholders to provide feedback on what questions are relevant to them?
  • Do you regularly update the survey as the business needs evolve?
  • Do front-line employees have access to raw customer feedback in real time?

Problem: The Business Does Not Fix Known Problems in the Customer Experience

  • Is there a leadership commitment to improve the customer experience?
  • Do other parts of the organization get data to show how they impact the customer experience?
  • Are you using individual customer stories to persuade the organization that these issues are important?
  • Is the customer survey perceived as credible?
  • Does the company culture encourage listening to customer feedback?
  • Can you connect poor customer experience to financial metrics (through churn, increased operational expense, higher customer acquisition cost, etc.)?

Problem: Too Much Survey Data and Not Enough Useful Information

  • Have you reviewed the performance of each survey question and removed questions which are not yielding useful information?
  • Are you giving customer-facing employees direct and real-time access to their customer feedback?
  • Are you asking follow-up questions when the customer gives negative feedback?
  • Do you have a reporting tool which allows easy filtering of customer feedback?
  • Are you tracking general categories of customer comments in free response questions?
  • Do your categories evolve as the business needs change?
  • Do you keep the number of categories manageable, so you don't have categories which are either irrelevant to the business or statistically insignificant?

Problem: Survey Reports Are Ignored

  • Is there a leadership commitment to improve the customer experience?
  • Does the company have a culture of listening to customer feedback?
  • Are survey reports tailored to the needs of the individual recipient, or does everyone get the same reports?
  • Can recipients of survey reports modify the reports (filter the data, calculate new metrics, read customer comments, etc.)?
  • Have you asked for feedback on survey reports from the people who receive them?
  • Do recipients of reports feel they have a stake in the customer feedback process?

These questions get to a lot of common underlying problems we see with customer feedback processes. This doesn't cover everything that can go wrong, but it's a good place to start if you don't think you're getting the results you should.

Vocalabs Newsletter #83 is published

We just published issue 83 of Quality Times, our newsletter about measuring the customer experience.

In this issue I have two articles: one about the difference between "insightful" and "useful" data (spoiler alert: they are not the same!), and another about what makes a customer experience brittle.

As always I hope you find this interesting and informative. You can also subscribe via email to get each month's issue as soon as it's published.

Explosion of Really Bad Surveys

Local newspaper columnist James Lileks takes some well-deserved (and hilarious) potshots today at bad surveys.

He also reveals that, back in college, he did a turn as a telephone interviewer. So he at least has some sympathy for what it's like to be in the survey biz.

It does seem like there's been an explosion of really bad surveys over the past several years. Personally I blame the confluence of several factors:

  1. Online surveys have gotten really cheap and easy. This means organizations do more surveys but at the same time put less care and effort into designing the survey. Gone are the days when doing anything but the smallest survey meant hiring a market research company (for a minimum of $50K). It's distressingly common to see surveys riddled with typos, nonsensical questions, and other problems which make it clear nobody could be bothered to do a good job.
  2. Yet the long-form survey style somehow persists. When surveys were rare and expensive it made sense to ask every imaginable question because you needed to squeeze every possible insight from each participant. Today this mindset continues, even though surveys are cheap and common, and it's not unusual for a consumer to be asked to respond to literally hundreds of questions about a single three-day trip.
  3. And consumers are refusing to respond to bad surveys. Across the industry you hear people complaining that response rates are down on email surveys. But instead of asking the sensible question ("Why don't people want to take our survey?"), many companies respond by simply sending more survey invitations. To the same badly-designed, overly-long survey that 98% (or more!) of their customers won't fill out.

These problems won't be easy to solve, mostly because the root cause is that most organizations don't care as much about the customer experience as they say they do. This has always been the case--when it comes to customer service most companies talk the talk much better than they walk the walk--but the difference is that today it's easy to just do a survey instead of doing something.

When Does Bad Customer Service Become Evil?

There's a story making the rounds the past few days of a customer who had tried for over a year to get Comcast to correct a series of billing mistakes. Eventually he got fed up with ongoing mistakes and incompetence, and called the office of Comcast's controller to lodge a complaint.

That didn't work either, and that's when things got weird and evil. This customer happens to be an accountant, and mentioned in one of his complaints that he thought Comcast's billing problems should be investigated by the Public Company Accounting Oversight Board.

Which, to me, seems like a reasonable thing to say if you're an angry customer subjected to over a year's worth of overcharges and billing mistakes. I'm not an accountant, but persistent billing mistakes sure sound like an accounting problem to me.

But apparently Comcast thought otherwise: the company contacted the customer's employer and apparently said something that got the customer fired. Did I mention that the customer's employer happens to do a lot of business with Comcast?

Needless to say there's a lawsuit now, and a lot of he-said-she-said. The Consumerist has a good summary in a pair of articles: one about the customer getting fired, and another about Comcast apologizing for the bad service while denying it got the customer fired.

So it's easy to take the angle of tsk-tsking Comcast for another horrific example of bad customer service. No, make that evil customer service, since this has gone beyond the realm of incompetence into malice.

But what I want to know is what the heck is going on at Comcast? Someone, somewhere inside Comcast at some point thought it was OK to call a customer's employer and say something that got the customer fired because of a billing complaint.

(A complaint which, by the way, Comcast has acknowledged was legitimate.)

Someone in the company had to have known how this would play in the media, to say nothing of the courts. Yet it happened anyway. I can think of a few explanations:

  1. Comcast thinks it's above the law and public opinion and can act with impunity. This is probably not strictly true, but the company has been persistently successful despite its poor reputation, so maybe people think they can get away with stuff.
  2. Someone panicked. Part of me thinks that it's really plausible that someone in Comcast might panic over the prospect of an accounting review of its billing systems. Going by the company's reputation for mistakes, it seems fair to assume that Comcast's systems aren't really ready for their close-up.
  3. Internally, Comcast is just out of control. Media reports over the past few months have painted a picture of a disorganized company divided into fiefdoms, so it's possible that there just isn't enough adult supervision going on.

I don't know which of these theories, if any, is right. But it's clear that something weird is going on at Comcast.

Too bad the ATM at my bank doesn't do this

This video is old, but it's new to me. Take 45 seconds to watch it and see a customer experience that practically defines "delight."

Apparently in Japan, if you press the "help" button on a train ticket kiosk, a guy actually pops out from the wall behind the kiosk to lend assistance--to the delight and befuddlement of foreign tourists.

Doing a little research, I discovered that the guy isn't just sitting behind the kiosk all day waiting for Americans to push the help button. His main job is keeping the machines stocked with blank tickets, which is done from behind the kiosk so it doesn't disrupt normal operations. But as long as he's there, he can lend a hand as needed.

Of course, to the Japanese this is just normal and not the least bit delightful. That's the problem with the treadmill of customer delight.

I'm also reminded of the New York subway (and those of many other large American cities), where attendants are available at many stations. But in America, we tend to put the attendants inside glass booths instead of having them magically step out of the wall when needed.

The Japanese way somehow seems so much more...delightful.

You Had Good Runs

Given the circumstances, 1-800-Flowers can't be happy to see their name in the New York Times over an epic miscommunication.

The flower arrangement for Grandma's funeral was supposed to read, "Farewell Grammy, you had a good run."

What it actually said, as transcribed by the phone agent who, according to the customer, "spoke English as a second language," was "FAR WELL GRANDMY YOU HAD A GOOD RUNS".

When the customer sent an e-mail complaint he received no response--probably because he sent it to a "do not reply" address, which is where his order confirmation came from.

Someday I'll write an article about why using "do not reply to this email" email addresses is a dumb idea from a customer experience perspective. But this is not that article.

Instead, I'd like to pose the question of why 1-800-Flowers--a company whose entire existence is based on people's desire to be sensitive, communicate, and do what's culturally expected--can't hire employees with the sensitivity, communication skills, and cultural background to understand that "you had a good run" is an appropriate (if cheeky) message for a funeral while "you had good runs" is not.

Of course we all know the reason: it costs more money to hire good employees than bad ones.

But, as the Times columnist notes, 1-800-Flowers has attracted hundreds and hundreds and hundreds of complaints about botched orders.

Apparently the company has decided (for now at least) the negative publicity and bad word-of-mouth is a reasonable price to pay. I'll be curious to see how long that lasts.

Newsletter #82 is Published

The 82nd edition of our newsletter, Quality Times, has been published. If you are an e-mail subscriber you should be seeing it in your inbox shortly.

This month, the theme is cross-channel customer experiences. There are some very real challenges in providing a good customer experience when customers cross organizational silos, but some of the results I've seen make me think that this is one of the best opportunities for improving overall customer satisfaction. I think this is going to be an important area in the coming years.

As always I hope you find this interesting and informative.

Brittle Experiences

Think about what happens to a piece of glass when you hit it too hard: it shatters into a million pieces. We say that glass is brittle because it breaks before it bends.

Not all materials do this. Steel, for example, is likely to bend (maybe a lot) before it actually breaks. This is why we build bridges out of steel and not glass.

It's useful to apply the concept of brittleness to the world of business. Customer experiences, like bridges, are designed to handle a certain amount of strain before they start to fail.

When things start to go wrong, a brittle experience is likely to go catastrophically wrong for the customer or the company (or both). On the other hand, if the process is flexible enough to bend a bit and handle the unusual situation, it may not be that big of a deal.

For example, air travel today is very often a brittle experience. When everything goes well (as it usually does), you get to your destination on time and with at least some dignity intact.

But if your travel plans go even slightly awry, the airline experience quickly goes from smooth to a stressful mess which could extend longer than the original trip. A brief thunderstorm at your departure airport means there's a long line of planes waiting to take off, and you sit on the ground for an hour or two. That departure delay means you miss your connecting flight. The next flight to your destination is overbooked, so you wind up spending the night at your connecting city waiting for a flight with an open seat to take you to your destination.

What started out as a minor hiccup (the brief thunderstorm) quickly turned into a stressful multi-day experience because the airports and airlines are too overloaded and too inflexible to handle even minor disruptions without it spiraling out of control. That's brittle: small problems become big problems and the whole thing goes very wrong for some passengers.

It's worth examining all elements of the customer experience under the lens of brittleness. Of course we expect that most of the time things will go smoothly for most customers, so the "normal" experience needs the most attention. But even the best-designed system won't be able to handle every situation.

So what happens when a customer has a problem? Are you flexible enough to deal with it gracefully? Or does the customer experience shatter into a million pieces like a piece of glass?

People Hate us on Yelp!

Lots of small businesses don't care much for Yelp, the online review site which can have an outsize influence on driving traffic to or from a small restaurant or shop. It doesn't help that Yelp's business model revolves around selling advertising and promotional services to those same businesses--a practice which can feel a little corrupt at times. Nor does it help that some Yelp reviewers these days seem to feel entitled to special treatment because of all the reviews they post.

One California restaurant, Botto Bistro, has come up with a creatively subversive way to market itself using Yelp: they are campaigning to be the restaurant with the lowest score on Yelp, rewarding customers who post one-star reviews with coupons and freebies. Amusingly, Yelp has responded by taking down hundreds of one-star reviews of the restaurant, thus boosting its Yelp score.

I love this idea for a lot of different reasons. First, it plays well into the restaurant's image of "Italian cooking with an attitude," as other parts of their website mock clueless customer questions and aggressively demonstrate that they don't think the customer is always right.

I also like the way they turn the whole concept of Yelp on its head, rendering powerless a big company which can sometimes seem like a bully to the small businesses dependent on Yelp for new customers.

The data nerd in me also really loves the way they demonstrate that you can't take metrics and customer feedback at face value--you always need to ask what's behind the numbers. Here's a case where people who love the restaurant are giving terrible reviews because that's part of their brand image and shtick.

And finally, I like the way they've found to increase customer engagement by asking customers to do something silly and subversive and unique. Customers who are "in the know" can read the Yelp reviews with a completely different understanding than everyone else.

Botto Bistro has demonstrated once again that customer experience isn't about providing the "best" customer experience, but providing the experience which best matches what your particular customers will appreciate.

Speech vs. DTMF

Most big companies have moved to speech recognition for their phone systems, but that doesn't mean old-fashioned button pushing is dead. Here are some rules of thumb I've developed about when it's a good idea to use DTMF (aka Touch-Tone) in a speech system:

  1. If you're asking the customer to input a bunch of numbers (e.g., credit card number, order number, account number) you should ALWAYS allow DTMF input. A substantial percentage of callers will try to dial the number even if you tell them to speak it. Plus it works better.
  2. "Press or say one" style application design should be avoided--it has the expense of speech with none of the advantages.
  3. Whenever possible, speech prompts should allow a DTMF fallback. There will always be situations where speech doesn't work, but you don't need to tell the caller about the DTMF option unless there's an error. For example, start with "Do you want sales, customer service, or technical support?" to prompt a spoken response, but if that fails, offer "What department do you want? You can say the name of the department or press one for sales, two for customer service, or three for technical support." (A sketch of this pattern follows the list.)
  4. For the love of all that is beautiful and innocent in this world, please don't disable the "zero" option to reach a live person! It doesn't work, and is the most effective way to really make your customers mad. 
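 
Here's a minimal sketch of the speech-first, DTMF-fallback pattern from item 3. The get_input function is a hypothetical stand-in for whatever your IVR platform actually provides; only the control flow is the point.
 
```python
# Sketch of item 3: speech-first prompting with a DTMF fallback.
# get_input(prompt) is a hypothetical IVR-platform stand-in returning
# (kind, value), where kind is 'speech' or 'dtmf'.

DEPARTMENTS = {"sales": "1", "customer service": "2", "technical support": "3"}

def route_call(get_input):
    kind, value = get_input(
        "Do you want sales, customer service, or technical support?")
    if kind == "speech" and value in DEPARTMENTS:
        return value
    # Speech failed or was ambiguous: re-prompt, now advertising DTMF.
    kind, value = get_input(
        "What department do you want? You can say the name of the "
        "department, or press 1 for sales, 2 for customer service, "
        "or 3 for technical support.")
    if kind == "dtmf":
        for name, digit in DEPARTMENTS.items():
            if value == digit:
                return name
    elif value in DEPARTMENTS:
        return value
    return "agent"   # and per item 4: always leave a path to a live person

# Simulated caller whose first utterance isn't recognized:
answers = iter([("speech", "uh, billing?"), ("dtmf", "2")])
print(route_call(lambda prompt: next(answers)))   # -> customer service
```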

Cross-Channel Service Continuity

I'm going to predict that one of the most exciting areas in customer experience over the next decade will be cross-channel service continuity.

A few forward-thinking companies are starting to pay attention to this, but it's mostly off the radar right now. It's so far off the radar that it doesn't even have a snappy name or acronym yet (CCSC, anyone?). But it's exciting because the early evidence is that this is one of the most powerful ways to improve the overall customer experience in many big organizations.

So what the heck am I talking about? In concept, the idea is simple: when a customer contacts a company more than once about some issue, the company treats those contacts as part of the same experience. Even if the contacts are through different channels.

This makes perfect sense, since to the customer those multiple contacts are all part of the same experience. But nearly every large company has them siloed off into different parts of the organization which don't talk to each other. Often, they can't talk to each other even if they want to.

And that's what makes CCSC (I really need a better name!) hard: there's a lot of infrastructure which needs to be in place to make it work. The call center needs to know that you just visited the website, and vice-versa. Building this technology will keep companies like IBM, Accenture, and a host of new startups very happy for a long time.

But at SpeechTEK last month, USAA and Nuance presented the results of exactly this sort of initiative. Here's the session description:

Consider Becky, a USAA member who is looking at homeowner’s insurance options online, but has a question and decides to call. After she authenticates, the IVR notes that Becky was logged into the website and asks if she is calling for a homeowner’s insurance quote. Becky happily confirms that is indeed her intention. Many businesses see such proactive, cross-channel scenarios as a pipe dream, but this presentation reviews the quantitative and qualitative methods used to understand customer cross-channel behavior and create user interface designs that support them.

In the SpeechTEK session, USAA shared that this simple piece of cross-channel service continuity--routing the customer straight to the right department based on a recent online experience--had a powerful effect on customer satisfaction and other key metrics. Imagine what we could do with true service continuity, where the customer would not only be routed to the right department but could also resume the same transaction.
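 
The session description doesn't include any implementation details, but the core routing idea can be sketched in a few lines. Everything below--the data layout, the 30-minute window, the queue names--is my own assumption for illustration, not anything USAA or Nuance disclosed.
 
```python
# Sketch of the cross-channel routing idea from the USAA example.
# All data structures and the 30-minute window are assumptions.

from datetime import datetime, timedelta

# Pretend this came from the web analytics silo:
RECENT_WEB_SESSIONS = {
    "becky": {"last_page": "homeowners_insurance_quote",
              "timestamp": datetime.now() - timedelta(minutes=10)},
}

PAGE_TO_QUEUE = {"homeowners_insurance_quote": "home_insurance_sales"}

def route_authenticated_caller(member_id: str) -> str:
    """After IVR authentication, check for a recent web session and
    route the caller straight to the matching department."""
    session = RECENT_WEB_SESSIONS.get(member_id)
    if session and datetime.now() - session["timestamp"] < timedelta(minutes=30):
        queue = PAGE_TO_QUEUE.get(session["last_page"])
        if queue:
            # A real IVR would confirm first: "Are you calling about...?"
            return queue
    return "main_menu"

print(route_authenticated_caller("becky"))   # -> home_insurance_sales
```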

This is consistent with research that we published a couple years ago where we found that service continuity across multiple calls to a call center completely eliminated the dissatisfaction normally associated with having to call more than once. In other words, customers didn't mind having to make more than one call, as long as they didn't have to start over (see page 4 of this report).

In my view, the almost complete isolation of most customer service channels from each other is one of the most badly broken pieces of the customer experience at many large companies. But that means it's also one of the biggest opportunities to generally improve customer experiences.

And that's why CCSC (ugh, that name!) is likely to be one of the hottest ideas in customer experience in years to come.

Another Reason to Write Relevant Surveys

I'm constantly making the point that customer surveys need to be well-written, meaningful, and relevant to the customer.

This is just a matter of respect: someone is doing you a favor by taking your survey, so don't waste their time.

But if that's not enough, here's another reason. If you ask dumb questions, someone may mock you on TV, like Keith Olbermann mocked a Minnesota Twins marketing survey yesterday.

Somehow I don't think this survey succeeded in promoting the Twins' brand image.

Insights Aren't Enough

Anyone who has done any sort of data collection or analysis in the business world has almost certainly been asked to produce insights. "We're looking for insightful data" is a typical statement I hear from clients on a regular basis.

But for some reason, people don't talk much about getting useful data. There's an implicit assumption that "insightful data" and "useful data" are the same thing.

They aren't, and it's important to understand why.

  • "Insightful" data yields new knowledge or understanding about something. It tells you something you didn't already know.
  • "Useful" data can be applied towards achieving some goal. It moves you closer to your business objective.

Data can be either "insightful" or "useful," or both, or neither. Insightfulness and usefulness are completely different things.

For example, if you discover as part of your customer research that a surprisingly high percentage of your customers are left-handed, that may be insightful but it's probably not useful (unless you're planning to market specifically to southpaws).

Or if your survey data shows that some of your customer service reps have consistently higher customer satisfaction than others, that's very useful information, but it's probably not insightful (you probably expected some reps to score higher than others).

The best data is both insightful and useful, but that's rare. Most companies have enough of an understanding of how their business works that true insights are unusual, and true insights which can be immediately applied towards a business goal are even less common.

And of course data which is neither useful nor insightful serves no purpose. Nevertheless, this sort of research is distressingly common.

When it comes down to useful data vs. insightful data, I tend to prefer usefulness over insightfulness. Data which is useful, even if it doesn't reveal any new insights, still helps advance the goals of the company. That's not to imply that insights have no value: even a useless insight can be filed away in case it becomes important in the future.

But whether you're looking for insights or usefulness, remember that they are not the same thing.

Cross-channel customer feedback

If you check your bank balance online and then call customer service because you discovered a mistake, chances are that you think of that as two parts of a single customer experience.

But in almost every case, your bank sees that as two (or more) completely unrelated interactions. So what the company thinks are several routine customer touchpoints could easily be a frustrating mess to you, the customer.

This is why getting customer feedback about cross-channel experiences is so very important. Collecting this data will identify the service gaps and inconsistencies that are often completely invisible to companies.

There are two strategies for collecting cross-channel feedback:

  1. Target customers who have multiple interactions: If a customer contacts a company more than once within a short period of time, it's a good bet that those contacts are related. So we can target customers for a survey based on this specific behavior. For example, any customer who logs in to the website and then calls on the phone within two hours would be called shortly afterwards by an interviewer to find out why. The advantage of this approach is that it's efficient, and you are specifically targeting your survey towards customers who are likely to have valuable feedback. It can be challenging, though, to match the records from different silos of the organization quickly enough to make this happen. (A sketch of this matching step follows the list.)
  2. Ask about cross-channel experiences as part of the normal feedback process: If it's not possible to specifically target customers who crossed service channels, a reasonable strategy is to add questions about cross-channel experiences to an existing survey. When we've done this for our clients, it's common for us to find that a high percentage (20% or more) of the customers we survey after a customer service call had tried the website before calling. This high incidence lets us collect some hard data about what's driving customers to pick up the phone instead of sticking to the online channel.
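 
As a concrete illustration of the first strategy, here's a minimal sketch of the matching step: find customers whose call followed a web login within two hours. The record layouts are assumptions; real web and call-center logs would need cleaning and identity matching first.
 
```python
# Sketch of strategy 1: find customers who logged in to the website and
# then called within two hours, so they can be targeted for an interview.
# The record layouts below are assumptions for illustration.

from datetime import datetime, timedelta

web_logins = [   # (customer_id, login_time) from the web silo
    ("c1001", datetime(2014, 10, 1, 9, 15)),
    ("c1002", datetime(2014, 10, 1, 10, 5)),
]
calls = [        # (customer_id, call_time) from the call-center silo
    ("c1001", datetime(2014, 10, 1, 10, 30)),   # called 75 min after login
    ("c1003", datetime(2014, 10, 1, 11, 0)),
]

def survey_targets(web_logins, calls, window=timedelta(hours=2)):
    """Return customers whose call followed a web login within `window`."""
    last_login = {}
    for cust, t in web_logins:
        last_login[cust] = max(t, last_login.get(cust, t))
    return [cust for cust, t in calls
            if cust in last_login
            and timedelta(0) <= t - last_login[cust] <= window]

print(survey_targets(web_logins, calls))   # -> ['c1001']
```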

Cross-channel behavior is one of the biggest and most universal blind spots in most companies' customer feedback programs. Most companies simply have no idea how often customers are crossing organizational silos, what's driving that behavior, and what effect it has on the overall customer experience.

We've also found that when customers have to start over each time they contact a company about the same problem, it's a major driver of dissatisfaction. But service continuity is often overlooked because the company isn't equipped to deal with multiple touchpoints as a single experience.

Collecting some feedback about cross-channel experiences is a good place to start in fixing what is likely a major service problem.

Issue #81 of Quality Times

We published Issue #81 of our newsletter, Quality Times.

In this issue I write about business dashboards: the good, the bad, but usually the ugly. As always, I hope you find this useful and informative, and welcome any comments and suggestions. 

Naughty, Naughty Radisson

I came across something new while doing a customer survey about a recent stay at the Radisson Blu in Chicago.

Near the end of the survey, they inserted a page which wasn't part of the actual customer survey, but rather a TripAdvisor feedback form.

Now, I understand that getting a lot of reviews on TripAdvisor is really important to hotels these days. But this practice strikes me as nothing short of abusive. That's because before the Radisson asked me to rate them on TripAdvisor, they already knew my answers to the customer survey.

Is the Radisson being honest and asking everyone to fill out the TripAdvisor form? Or are they being sneaky and only asking customers who had a good experience for a review? I don't know, and there's no way for me to know.

But what I do know is that this makes all the feedback on this hotel on TripAdvisor immediately suspect. Even if the Radisson is being honest today, I don't trust that they (and all other hotels which may do this) will continue to be honest. The stakes are simply too high, and the temptation too great.

So caveat emptor as always.

Happiness is Driven By Expectations

In the news today is some research on what drives people's happiness moment to moment. Using data from 18,000 participants, researchers found that people's reported happiness is driven not simply by what's going on in their lives, but by what's going on relative to their expectations.

For example, how happy (or upset) you are about getting a $250 car repair bill depends on whether you expected the bill to be $50 or $1,000.

On one level this is obvious.

On another level it's very important to understand that creating a positive customer experience is equal parts delivering a good experience and making sure the customer's expectations are properly managed.

In other words, under-promise and over-deliver.

Sometimes this is straightforward: Disney is famous for telling park visitors that the line to get into a ride will take longer than it actually will.

Other times the expectations may be outside your control. If you are an online retailer and Amazon.com starts offering free overnight shipping, then it's likely some of your customers will be disappointed if you don't offer the same.

In these cases it's important to understand not just what customers' expectations are but where they are coming from. That way you can be on top of shifting expectations and respond appropriately.

Case in point: for years in the mobile phone industry, customers on traditional plans expected to be locked into a two-year contract. Customers didn't want this, but there were no other options, so a mobile phone company could keep customers happy despite locking them into a contract. But when T-Mobile unilaterally decided to eliminate the two-year contract, that put T-Mobile in the position of setting customers' expectations for the whole industry. It also made T-Mobile the only player actually meeting those expectations, and as a result T-Mobile is capturing a lot of subscribers.

In customer experience, it's important to pay as much attention to expectations as to delivery.

Weekend Read: The Philosophy of Great Customer Service

Here's a great article from the founder of CD Baby, Derek Sivers, on The Philosophy of Great Customer Service.

Derek attributes his success with CD Baby to great customer service. And he attributes CD Baby's great customer service to a philosophy which can be summed up as genuine engagement with customers.

Uncorrelated Data

A few months ago I wrote about the Spurious Correlation Generator, a fun web page where you could discover pointless facts like the divorce rate in Maine is correlated to per-capita margarine consumption (who knew!).

The other side of the correlation coin is when there's a complete lack of any correlation whatsoever. Today, for example, I learned that in a sample of 200 large corporations, there is zero correlation between the relative CEO pay and the relative stock market return. None, nada, zippo.

(The statistician in me insists that I restate that as, "any correlation in the data is much smaller than the margin of error and is statistically indistinguishable from zero." But that's why I don't let my inner statistician go to any of the fun parties.)

Presumably, though, the boards of directors of these companies must believe there's some relationship between stock performance and CEO pay. Otherwise why on Earth would they pay, for example, Larry Ellison of Oracle $78 million? Or $12 million to Ken Frazier, CEO of Merck? What's more, since CEOs are often paid mostly in stock, the lack of any correlation between stock price and pay is surprising.

It's easy to conclude that these big companies are being very foolish and paying huge amounts of money to get no more value than they would have gotten had they hired a competent chief executive who didn't happen to be a rock star. And this explanation could well be right.

On the other hand, the data doesn't prove it. Just as a strong correlation doesn't prove that two things are related to each other, the lack of a correlation doesn't prove they aren't related.

It's also possible that the analysis was flawed. Or that they are related but in some more complicated way than a simple correlation.

In this case, here are a few things I'd examine about the data and the analysis before concluding that CEO pay isn't related to stock performance:

  1. Sample Bias: The data for this analysis consists of 200 large public companies in the U.S. Since there are thousands of public companies, and easily 500 which could be considered "large," it's important to ask how these 200 companies were chosen and what happens if you include a larger sample. It appears that the people who did the analysis chose the 200 companies with the highest CEO pay, which is a clearly biased sample. So the analysis needs to be re-done with a larger sample including companies with low CEO pay, or ideally, all public companies above some size (for example, all companies in the S&P 500).
  2. Analysis Choices: In addition to choosing a biased sample, the people who did the analysis also chose a weird way to try to correlate the variables. Rather than the obvious analysis correlating CEO pay in dollars against stock performance in percent, this analysis was done using the relative rank in CEO pay (i.e. 1 to 200) and relative rank in stock performance. That flattens any bell curve distribution and eliminates any clustering which, depending on the details of the source data, could either eliminate or enhance any linear correlation. (The simulation after this list shows how the two approaches can disagree on the same data.)
  3. Input Data: Finally there's the question of what input data is being used for the analysis. Big public companies usually pay their CEOs mostly in stock, so you would normally expect a very strong relationship between stock price and CEO pay. But there's a quirk in how CEO compensation is reported to shareholders: in any given year, the reported CEO pay includes only what the CEO got for actually selling shares in that year. A chief executive could hang on to his (or too rarely, her) stock for many years and then sell it all in one big block. So in reality the CEO is collecting many years' worth of pay all at once, but the stock performance data used in this analysis probably only includes the last year. The analysis really should include CEO pay and stock performance for multiple years, possibly the CEO's entire tenure.
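 
To make the second point concrete, here's a small simulation comparing a correlation of raw values (Pearson) against a correlation of ranks (effectively Spearman). The data is synthetic and skewed, the way pay data tends to be; the only point is that the two methods can give noticeably different answers on the same data.
 
```python
# Point 2 illustrated: correlating raw values (Pearson) versus correlating
# ranks (Spearman-style). The synthetic data is invented; the point is
# only that the two methods can disagree on the same data.

import random
import statistics

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def ranks(values):
    """Replace each value with its rank, as the pay study did."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

random.seed(1)
# Skewed "pay" data: a few rock-star outliers dominate, like CEO pay.
pay = [random.lognormvariate(0, 1) for _ in range(200)]
# Returns loosely tied to pay, plus a lot of noise:
returns = [0.1 * p + random.gauss(0, 2) for p in pay]

print(f"raw values (Pearson): {pearson(pay, returns):+.2f}")
print(f"ranked (Spearman):    {pearson(ranks(pay), ranks(returns)):+.2f}")
```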

So the lack of correlation in a data analysis doesn't mean there's no relationship in the data. It might just mean you need to look harder or in a different place.

My Dashboard Pet Peeve

I have a pet peeve about business dashboards.

Dashboards are great in theory. The idea is to present the most important information about the business in a single display so you can see at a glance how it's performing and whether action is required. Besides, jet planes and sports cars have dashboards, and those things are fast and cool. Everyone wants to be fast and cool!

In reality, though, most business dashboards are a mess. A quick Google search for business dashboard designs reveals very few which clearly communicate critical information at a glance.

Instead, you find example after example after example after example after example which is too cluttered, fails to communicate useful information, and doesn't differentiate between urgent, important, and irrelevant information. I didn't have to look far for those bad examples, either: I literally just took the top search results.

Based on what I've seen, the typical business dashboard looks like the company's Access database got drunk and vomited PowerPoint all over the screen.

As I see it, there are two key problems with the way business dashboards are implemented in practice:

First, there's not enough attention given to what's most important. As a result, most dashboards have too much information displayed and it becomes difficult to figure out what to pay attention to.

This data-minimization problem is hard. Even a modest-sized company has dozens, perhaps hundreds, of pieces of information which are important to the day-to-day management of the business. While not everyone cares about everything, everything is important to someone. So the impulse to consolidate everything into a single view inevitably leads to a display which includes a dizzying array of numbers, charts, and graphical blobs.

Second, the concept of a "dashboard" isn't actually all that relevant to most parts of a business. The whole purpose of making critical information available at a glance is to enable immediate action, meaning within a few seconds. In the business world, "extremely urgent" usually means a decision is needed within a few hours, not seconds. You have time to pause and digest the information before taking action.

That said, there are a few places where immediate action is required. For example, a contact center has to ensure enough people are on the phones at all times to keep the wait time down. In these situations, a dashboard is entirely appropriate.

But the idea of an executive watching every tick of a company dashboard and steering the company second-by-second is absurd. I get that driving a sports car or flying a jet is fun and work is, well, work. But you will never manage a company the way you drive a car. Not going to happen.

But for better or worse, the idea of a business dashboard has resonance and dashboards are likely to be around for a while.

To make a dashboard useful and effective, probably the most important thing is to severely restrict what's included. Think about your car. Your car's dashboard probably displays just a few pieces of information: speed, fuel, the time, miles traveled, and maybe temperature and oil pressure. Plus there's a bunch of lights which turn on if something goes wrong. A business dashboard should be limited to just a handful (3-4) pieces of information which are most important, and maybe some alerts for other things which need urgent attention. This might require having different dashboards for different functions within the company--after all, it would be silly to give the pilot and the flight attendants the same flight instruments.

The other element in useful dashboards is timing. If the data doesn't require minute-by-minute action, then having real-time displays serves little purpose. In fact, it might become a distraction if people get too focused on every little blip and wobble. Instead, match the pace of data delivery to the actions required. For example, a daily dashboard pushed out via e-mail, with alerts and notifications if something needs attention during the day. 
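 
Here's a minimal sketch of that daily-digest-plus-alerts pattern. The metric names and thresholds are placeholders I made up, and the actual delivery mechanism (e-mail, chat, whatever) is left out.
 
```python
# Sketch of the "daily digest plus urgent alerts" pattern described above.
# Metric names and thresholds are placeholders, not recommendations.

DASHBOARD_METRICS = ["customer_satisfaction", "first_call_resolution",
                     "average_hold_time_sec"]      # keep it to a handful

ALERT_THRESHOLDS = {"average_hold_time_sec": 300}  # the "warning lights"

def build_digest(today: dict) -> str:
    """The once-a-day view, pushed out via e-mail."""
    lines = [f"{name}: {today[name]}" for name in DASHBOARD_METRICS]
    return "Daily dashboard\n" + "\n".join(lines)

def urgent_alerts(now: dict):
    """Intraday alerts only for metrics that truly need same-day action."""
    return [f"ALERT: {name} = {now[name]} (threshold {limit})"
            for name, limit in ALERT_THRESHOLDS.items()
            if now.get(name, 0) > limit]

today = {"customer_satisfaction": 8.4, "first_call_resolution": 0.72,
         "average_hold_time_sec": 340}

print(build_digest(today))          # pushed out once a day
for alert in urgent_alerts(today):  # pushed immediately when tripped
    print(alert)
```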
