The Customer Service Survey

Vocalabs' Blog


Issue #81 of Quality Times

We published Issue #81 of our newsletter, Quality Times.

In this issue I write about business dashboards: the good, the bad, but usually the ugly. As always, I hope you find this useful and informative, and welcome any comments and suggestions. 

Naughty, Naughty Radisson

I came across something new while doing a customer survey about a recent stay at the Radisson Blu in Chicago.

Near the end of the survey, they inserted a page which wasn't part of the actual customer survey, but rather a TripAdvisor feedback form.

Now, I understand that getting a lot of reviews on TripAdvisor is really important to hotels these days. But this practice strikes me as nothing short of abusive. That's because before the Radisson asked me to rate them on TripAdvisor, they already knew my answers to the customer survey.

Is the Radisson being honest and asking everyone to fill out the TripAdvisor form? Or are they being sneaky and asking only customers who had a good experience for a review? I don't know, and there's no way for me to know.

But what I do know is that this makes all the feedback on this hotel on TripAdvisor immediately suspect. Even if the Radisson is being honest today, I don't trust that they (and all other hotels which may do this) will continue to be honest. The stakes are simply too high, and the temptation too great.

So caveat emptor as always.

Happiness is Driven By Expectations

In the news today is some research on what drives people's happiness moment to moment. Using data from 18,000 participants, researchers found that people's reported happiness is driven not simply by what's going on in their lives, but by what's going on relative to their expectations.

For example, how happy (or upset) you are about getting a $250 car repair bill depends on whether you expected the bill to be $50 or $1,000.

On one level this is obvious.

On another level it's very important to understand that creating a positive customer experience is equal parts delivering a good experience and making sure the customer's expectations are properly managed.

In other words, under-promise and over-deliver.

Sometimes this is straightforward: Disney is famous for telling park visitors that the line to get into a ride will take longer than it actually will.

Other times the expectations may be outside your control. If you are an online retailer and a competitor starts offering free overnight shipping, it's likely some of your customers will be disappointed if you don't offer the same.

In these cases it's important to understand not just what customers' expectations are but where they are coming from. That way you can be on top of shifting expectations and respond appropriately.

Case in point: For years in the mobile phone industry, customers on traditional plans expected to be locked into a two-year contract. Customers didn't want this, but there were no other options, so a mobile phone company could keep customers happy despite locking them into a contract. But when T-Mobile unilaterally decided to eliminate the two-year contract, that put T-Mobile in the position of setting customers' expectations for the whole industry. It also made T-Mobile the only player actually meeting those expectations, and as a result T-Mobile is capturing a lot of subscribers.

In customer experience, it's important to pay as much attention to expectations as to delivery.

Weekend Read: The Philosophy of Great Customer Service

Here's a great article from the founder of CD Baby, Derek Sivers, on The Philosophy of Great Customer Service.

Derek attributes his success with CD Baby to great customer service. And he attributes CD Baby's great customer service to a philosophy which can be summed up as genuine engagement with customers.

Uncorrelated Data

A few months ago I wrote about the Spurious Correlation Generator, a fun web page where you could discover pointless facts like the divorce rate in Maine is correlated to per-capita margarine consumption (who knew!).

The other side of the correlation coin is when there's a complete lack of any correlation whatsoever. Today, for example, I learned that in a sample of 200 large corporations, there is zero correlation between the relative CEO pay and the relative stock market return. None, nada, zippo.

(The statistician in me insists that I restate that as, "any correlation in the data is much smaller than the margin of error and is statistically indistinguishable from zero." But that's why I don't let my inner statistician go to any of the fun parties.)

Presumably, though, the boards of directors of these companies must believe there's some relationship between stock performance and CEO pay. Otherwise why on Earth would they pay, for example, Larry Ellison of Oracle $78 million? Or $12 million to Ken Frazier, CEO of Merck? What's more, since CEOs are often paid mostly in stock, the lack of any correlation between stock price and pay is surprising.

It's easy to conclude that these big companies are being very foolish and paying huge amounts of money to get no more value than they would have gotten had they hired a competent chief executive who didn't happen to be a rock star. And this explanation could well be right.

On the other hand, the data doesn't prove it. Just as a strong correlation doesn't prove that two things are related to each other, the lack of a correlation doesn't prove they aren't related.

It's also possible that the analysis was flawed. Or that they are related but in some more complicated way than a simple correlation.

In this case, here are a few things I'd examine about the data and the analysis before concluding that CEO pay isn't related to stock performance:

  1. Sample Bias: The data for this analysis consists of 200 large public companies in the U.S. Since there are thousands of public companies, and easily 500 which could be considered "large," it's important to ask how these 200 companies were chosen and what happens if you include a larger sample. It appears that the people who did the analysis chose the 200 companies with the highest CEO pay, which is a clearly biased sample. So the analysis needs to be re-done with a larger sample including companies with low CEO pay, or ideally, all public companies above some size (for example, all companies in the S&P 500).
  2. Analysis Choices: In addition to choosing a biased sample, the people who did the analysis also chose a weird way to try to correlate the variables. Rather than the obvious analysis correlating CEO pay in dollars against stock performance in percent, this analysis was done using the relative rank in CEO pay (i.e. 1 to 200) and relative rank in stock performance. That flattens any bell curve distribution and eliminates any clustering which, depending on the details of the source data, could either eliminate or enhance any linear correlation.
  3. Input Data: Finally there's the question of what input data is being used for the analysis. Big public companies usually pay their CEOs mostly in stock, so you would normally expect a very strong relationship between stock price and CEO pay. But there's a quirk in how CEO compensation is reported to shareholders: in any given year, the reported CEO pay includes only what the CEO got for actually selling shares in that year. A chief executive could hang on to his (or too rarely, her) stock for many years and then sell it all in one big block. So in reality the CEO is collecting many years' worth of pay all at once, but the stock performance data used in this analysis probably only includes the last year. The analysis really should include CEO pay and stock performance for multiple years, possibly the CEO's entire tenure.
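To see how the first problem alone can hide a real relationship, here is a minimal simulation (in Python, with invented numbers) of a population where CEO pay genuinely tracks stock performance. Sampling only the 200 highest-paid CEOs restricts the range of the pay variable, and the measured correlation shrinks dramatically:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: 3,000 public companies where CEO pay
# genuinely tracks stock performance (all numbers are invented).
n = 3000
performance = rng.normal(0.0, 1.0, n)                    # standardized stock return
pay = 0.6 * performance + 0.8 * rng.normal(0.0, 1.0, n)  # true correlation ~0.6

def pearson(x, y):
    return np.corrcoef(x, y)[0, 1]

full_r = pearson(pay, performance)

# Repeat the study's sampling choice: keep only the 200 highest-paid
# CEOs. Selecting on pay restricts its range and attenuates the
# measured correlation (classic restriction-of-range bias).
top200 = np.argsort(pay)[-200:]
biased_r = pearson(pay[top200], performance[top200])

print(f"full-sample correlation:  {full_r:.2f}")   # close to 0.6
print(f"top-200-only correlation: {biased_r:.2f}")  # far weaker
```

The underlying relationship never changed; only the sampling did. That's why re-running the analysis on a broader sample is the first thing to check.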

So the lack of correlation in a data analysis doesn't mean there's no relationship in the data. It might just mean you need to look harder or in a different place.

My Dashboard Pet Peeve

I have a pet peeve about business dashboards.

Dashboards are great in theory. The idea is to present the most important information about the business in a single display so you can see at a glance how it's performing and whether action is required. Besides, jet planes and sports cars have dashboards, and those things are fast and cool. Everyone wants to be fast and cool!

In reality, though, most business dashboards are a mess. A quick Google search for business dashboard designs reveals very few which clearly communicate critical information at a glance.

Instead, you find example after example which is too cluttered, fails to communicate useful information, and doesn't differentiate between urgent, important, and irrelevant information. I didn't have to look far for those bad examples, either: I literally just took the top search results.

Based on what I've seen, the typical business dashboard looks like the company's Access database got drunk and vomited PowerPoint all over the screen.

As I see it, there are two key problems with the way business dashboards are implemented in practice:

First, there's not enough attention given to what's most important. As a result, most dashboards have too much information displayed and it becomes difficult to figure out what to pay attention to.

This data-minimization problem is hard. Even a modest size company has dozens, perhaps hundreds, of pieces of information which are important to the day-to-day management of the business. While not everyone cares about everything, everything is important to someone. So the impulse to consolidate everything into a single view inevitably leads to a display which includes a dizzying array of numbers, charts, and graphical blobs.

Second, the concept of a "dashboard" isn't actually all that relevant to most parts of a business. The whole purpose of making critical information available at a glance is to enable immediate action, meaning within a few seconds. In the business world, "extremely urgent" usually means a decision is needed within a few hours, not seconds. You have time to pause and digest the information before taking action.

That said, there are a few places where immediate action is required. For example, a contact center has to ensure enough people are on the phones at all times to keep the wait time down. In these situations, a dashboard is entirely appropriate.

But the idea of an executive watching every tick of a company dashboard and steering the company second-by-second is absurd. I get that driving a sports car or flying a jet is fun and work is, well, work. But you will never manage a company the way you drive a car. Not going to happen.

But for better or worse, the idea of a business dashboard has resonance and dashboards are likely to be around for a while.

To make a dashboard useful and effective, probably the most important thing is to severely restrict what's included. Think about your car. Your car's dashboard probably displays just a few pieces of information: speed, fuel, the time, miles traveled, and maybe temperature and oil pressure. Plus there's a bunch of lights which turn on if something goes wrong. A business dashboard should be limited to just a handful (3-4) of the pieces of information which are most important, and maybe some alerts for other things which need urgent attention. This might require having different dashboards for different functions within the company--after all, it would be silly to give the pilot and the flight attendants the same flight instruments.

The other element in useful dashboards is timing. If the data doesn't require minute-by-minute action, then having real-time displays serves little purpose. In fact, it might become a distraction if people get too focused on every little blip and wobble. Instead, match the pace of data delivery to the actions required: for example, a daily dashboard pushed out via e-mail, with alerts and notifications if something needs attention during the day.

Policy vs. Culture

Yesterday, Comcast had to endure an online PR nightmare when a customer posted a recording of what happened to him when he tried to cancel his Comcast service. This NPR article has the recording, along with Comcast's stating-the-obvious response that "We are very embarrassed" by what happened.

Comcast went on to say that, "The way in which our representative communicated with them is unacceptable and not consistent with how we train our customer service representatives." But reading through the hundreds of comments on the NPR article and other sites, it's clear that this one call was not an isolated incident.

At the moment I imagine Comcast's PR and marketing teams are in damage control mode, trying to limit the spread and fallout of this incident.

While they do that, I encourage Comcast's executives to spend some time meditating on the difference between policy and culture.

Policy is what a company says it will do, through training, written procedures, and executives' public statements.

Culture, on the other hand, is what a company actually encourages its employees to do, through formal and informal incentives, subtle messages about which policies are more important, decisions about hiring and promotion, and where executives focus their time and attention.

I have no doubt that Comcast trains its retention agents that they shouldn't annoy customers who call to cancel. There's probably even a statement somewhere in the training manual to the effect that customer satisfaction is very important.

But who gets the bonuses and promotions: the agent who quickly and efficiently processes customer requests, or the one who convinces the most customers to not cancel?

Comcast already has a reputation for very poor customer service (which, by our survey data, is deserved). But as long as the company treats incidents like this as PR problems rather than indicators of an underlying cultural problem, Comcast's service levels and reputation are unlikely to improve.

Newsletter #80 Published

We just published issue #80 of Quality Times, Vocalabs' newsletter about measuring and improving the customer experience.

In this issue I discuss the fact that it's impossible to delight customers all the time, since they will soon come to expect that exceptional experience. This creates a treadmill of customer delight where a great customer experience raises expectations, making it harder to exceed expectations in the future. I hope you find this useful and informative, and welcome any comments and suggestions.

The Kano Model and Customer Feedback

In the 1980's, Noriaki Kano developed a conceptual model of how customer preferences drive satisfaction. This is now known as the Kano Model, and the basic idea is that attributes of a product or service can be classified by the effect they have on customers' opinions:

  • Must Have attributes are things which customers expect. Customers will be much less satisfied if the Must Have attribute isn't present, but are indifferent when it is present. For example, a car must have a steering wheel.
  • Key Drivers are those things which make customers more satisfied when they are present, and less satisfied when they aren't. These are the attributes which account for the most difference in customer satisfaction, and as a result are often points of competitive differentiation. For example, extra legroom on an airliner is likely to make you more satisfied, and reduced legroom is likely to make you less satisfied.
  • Delighters are things which customers don't expect but which leave a positive impression. Customers will often be more satisfied if given a Delighter, but not getting one does not make customers less satisfied. For example, free overnight shipping with an online order is likely to make customers very happy, but they won't be less satisfied without it.
  • Indifferent attributes don't drive satisfaction to a significant degree. For example, most customers probably don't care much if a vending machine accepts $1 coins, since few consumers in the U.S. actually use $1 coins.

Part of the purpose of a customer feedback program is usually to make sure a company is meeting customers' expectations. In the context of the Kano Model, you want to make sure you always deliver the Must Haves and the Key Drivers, and provide Delighters where possible.

This suggests that as part of the customer feedback process you should be asking customers about whether the experience they received delivered the various elements which could be drivers of customer satisfaction. Some of this is obvious and common. For example, in a call center, Call Resolution is almost always a key driver, and most customer surveys ask about this.

But other things may not be so obvious or common. Few companies include a question about whether the employee was polite and professional, even though this is usually a significant Must Have attribute of any customer service interaction. Most companies simply assume that their employees are normally polite and professional, whereas the Kano Model suggests this should be tracked because you can't afford an outbreak of rudeness.

In the Kano Model, customer expectations can also shift over time. Things which used to be Delighters can become Key Drivers if customers come to expect them. Similarly, if an industry provides poor experiences over an extended period of time, what had once been a Must Have could become a Key Driver or even a Delighter (for example, free meals in coach on an airplane). What's more, one customer's Must Have might be another customer's Delighter: not all customers have the same expectations.

Fortunately, the customer feedback program can also help track changes in customer expectations. If a customer survey asks about Kano attributes, that data can be used to correlate each attribute against a top-level metric like Customer Satisfaction. Attributes which correlate highly to the top-level metric are Key Drivers. Attributes which can drag the metric down but not push it up are Must Haves, and attributes which tend to push the metric up but not drag it down are Delighters.
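As a sketch of that analysis: using fabricated survey data (the attribute names, delivery rates, and effect sizes below are all invented for illustration), each attribute can be classified by whether its presence pushes the satisfaction metric up, its absence drags the metric down, or both:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical survey responses: binary "was this attribute delivered?"
# flags, plus an overall satisfaction score on a roughly 1-10 scale.
attrs = {
    "employee polite":     rng.random(n) < 0.95,  # nearly always delivered
    "extra legroom":       rng.random(n) < 0.50,  # delivered about half the time
    "free overnight ship": rng.random(n) < 0.10,  # rarely delivered
}
sat = rng.normal(7.0, 1.0, n)
sat -= 4.0 * ~attrs["employee polite"]                         # absence drags scores down hard
sat += 2.0 * (attrs["extra legroom"].astype(float) - 0.5)      # helps when present, hurts when absent
sat += 1.5 * attrs["free overnight ship"]                      # pure bonus when present

def kano_class(present, score, t=0.5):
    """Classify an attribute by how its presence/absence moves the metric."""
    lift = score[present].mean() - score.mean()    # push up when delivered
    drag = score.mean() - score[~present].mean()   # pull down when missed
    if lift > t and drag > t:
        return "Key Driver"
    if drag > t:
        return "Must Have"
    if lift > t:
        return "Delighter"
    return "Indifferent"

for name, present in attrs.items():
    print(f"{name}: {kano_class(present, sat)}")
```

On simulated data like this, "employee polite" comes out as a Must Have (its absence hurts far more than its presence helps), "extra legroom" as a Key Driver, and "free overnight ship" as a Delighter. On real survey data the thresholds and effect sizes would need to be calibrated, but the logic is the same.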

Armed with this information, it's possible to track how customer expectations shift over time and adjust the product and service delivery accordingly.

The Treadmill of Customer Delight

There is no such thing as a company which provides a consistently exceptional customer experience.

The problem is that customers' expectations are set, in large part, by the experiences they have already had. And so the exceptional experience can quickly become ordinary, and the ordinary experience soon becomes expected. And if you don't deliver the expected experience, the customer is disappointed.

This treadmill of rising customer expectations can seem futile if your goal is to always delight the customer. But it's impossible to delight the customer every time, because delight requires surprise, and you can't surprise people with the expected level of service.

But the other side of the coin is that customers are reluctant to give up a level of service they have come to expect, and it's better to be the company setting the expectations than to have your competitors decide where the bar is.

For example, before Apple introduced the iPhone no company offered a touchscreen phone without a keyboard. But once the iPhone hit the market, customers' expectations of what a smartphone looked like and could do changed very quickly. This left the other mobile phone makers scrambling to offer a competitive product, and put Apple in the enviable (and profitable) position of being able to stay ahead of the competition for several years just by raising the bar a little bit each year on the iPhone.

Another example is the shipping industry. FedEx and UPS have set customers' expectations that shipping a package delivers a lot more service than just moving a box from point A to point B. Today customers expect to be able to get quotes online, request a same-day pickup, get regular updates on the location of the package, and receive a notification when it's delivered. FedEx and UPS did not have to add these services to their portfolios, but when they did, customers quickly came to expect them. Even the stodgy U.S. Post Office now offers these services.

If you're setting out to deliver a great customer experience your goal should not be to delight your customers.

Instead, you should strive to make your customers come to expect an experience which used to delight them.

Know Your Customer

Zappos, the online shoe company owned by Amazon, rightly has a reputation for providing a great customer experience.

They also know their typical customer well, and provide the kind of fun, quirky experience she (and it is mostly "she") wants.

But this can sometimes go a little awry, as it did with an order I recently placed with Zappos. They sent me a cheeky e-mail confirmation telling me:

We've received your order and can't help but notice your impeccable taste! Your order details are listed below, and they look fabulous!

Which probably would have been just fine, had I been ordering anything other than a pair of steel-toed boots.

Compliance and Customer Opinions

Compliance and customer opinions often get lumped together when trying to measure the customer experience, since they are both generally related to the quality of the experience.

This is a mistake because even though compliance and customer opinions are related, they are looking at different elements of the customer experience, and require different tools to measure. Many programs go astray when they try to measure customer opinions using techniques for compliance, and vice-versa.

Compliance relates to what actually happened during a customer experience: were the necessary steps followed, did the employee's actions conform to the requirements of the job, and so forth. Compliance items are usually based on objective reality.

Customer opinions are more typically related to the desired outcomes: was the customer's problem solved, was the customer satisfied, did the customer feel like it took too much effort, etc. Customer opinions live inside the customer's head, and generally can't be measured without getting direct feedback from the customer.

For example, if you want to track whether a customer service representative uses the customer's name on a call (an objective compliance-related question), it's a mistake to use a customer survey for that purpose. Likewise, if you want to measure whether customers think the wait on hold is too long (a matter of opinion), you need to actually ask customers for their feedback rather than assuming a certain length of time is OK for everyone.

To help make the right decision about how to measure different parts of the customer experience, here's a quick-reference guide:

Customer Opinion

  • Related to the customer's perception of the experience
  • Drives business outcomes
  • Measure using direct feedback from the customer:
    • Customer surveys
    • Complaints
    • Social media
  • Example metrics:
    • Net promoter
    • Customer satisfaction
    • Problem resolution
    • Customer effort

Compliance

  • Related to the specific events which took place during the customer experience
  • Drives customer opinion
  • Measure using objective records of what happened during the customer experience:
    • Call recordings
    • Video surveillance
    • CRM entries
    • Mystery shopping
    • Analytics
  • Example metrics:
    • Customer was promptly greeted
    • Customer was given correct information
    • Salesperson provided all required disclosures
    • CSR used customer's name
It's tempting to ask whether customer opinions or compliance is more important. I believe it's important to track both, since otherwise you're only getting one side of the story.

But even more important is to recognize whether a given metric is related to compliance or opinion, and track it using the right tool for the job.

Vocalabs Newsletter Published: Survey Overload

We just published issue #79 of Quality Times, our newsletter about measuring and improving the customer experience. This issue talks about survey overload, and why I think correlation analysis is an over-used tool in the business world.

E-mail subscribers should be receiving their copies shortly. As always, I hope you find this newsletter interesting and informative.

Customer Experience Isn't the Only Thing

Customer experience is an important part of how companies differentiate themselves competitively. But it's not the only thing. There are some circumstances where it makes sense for a company to not care about how its customers are treated. For example:

  • In some customers' minds, poor customer experience means low prices. Our favorite example these days is Spirit Airlines, a company which seems to delight in inflicting fees and inconvenience on its customers (though recently Spirit seems to be having a change of heart, possibly because of bad publicity). So if you are trying to stake out the market position of low-price leader, it may actually help you to make your customer experience worse.
  • If customers don't have a choice, then customer experience doesn't matter. Some of the lowest customer satisfaction scores are for cable TV companies, and it's not hard to see why: in most places you have only one real choice if you want cable (and increasingly, if you want high-speed Internet service), since the competitors don't have access to the physical infrastructure. So the cable incumbent doesn't need to invest in improving the customer experience to maintain its customer base.
  • When it's hard to switch to a competitor, it's possible to under-invest in the customer experience without losing too much business. Banks and mobile phone carriers are both industries where customers will often put up with terrible service because it's just too hard to change (or too expensive).

In all these cases, the company which provides a poor experience is relying on something else to keep customers coming: price, an effective monopoly, contractual commitments, etc.

But this can be a risky strategy. Subjecting customers to poor service builds up a reservoir of customer ill-will over time. If the market changes--or the company develops a bad enough reputation--it can be very expensive to repair the damage.

Pro Tip: Actually Listen to your Call Recordings Before Responding to a Complaint

Most companies of any size routinely record calls to customer service, and in some industries it's a legal requirement.

So when a customer complains that customer service told him one thing but the company did another, it might be a good idea to listen to the recording and find out what the customer was actually told.

Otherwise you might wind up like Anthem Blue Cross, featured (in a bad way) in the LA Times. You might also wind up featured in a lawsuit.

The story is that an Anthem member needed a fairly expensive medical procedure. Like a good and careful consumer, he called Anthem to make sure his doctor was in-network and the procedure would be covered.

Upon getting a "yes" on both counts, he went ahead and got the treatment.

Experienced readers will know what happens next, because we've seen this movie before: After the treatment, Anthem denied the claim saying the doctor was out of network.

But this time there's a plot twist, since the patient had recorded his earlier call to Anthem. You know, the one where customer service assured him the doctor was in-network and the procedure was covered. He appealed Anthem's decision and included a recording of the call with the appeal.

Incredibly, Anthem denied the appeal, claiming the member had never been told the doctor was in-network or the operation would be covered--points directly contradicted by the member's recording of the call.

This is about the point where the LA Times (and the lawsuit) come in. I assume that this will now be settled promptly in the patient's favor, since even faceless bureaucracies have their limits.

There are two morals to this story: For companies, do your homework and use the tools at your disposal. Doing so can avoid an expensive and embarrassing situation.

For consumers, it's a really good idea to record calls to companies you do business with. After all, they're recording you; you should feel no hesitation about recording them.

Combatting Survey Overload

I took a three-day trip this week to the CXPA conference in Atlanta, and was asked to take a survey about almost every single part of the trip.

If you're wondering why it's getting hard to get customers to respond to e-mail surveys, this is why. I was asked to take surveys by the airline, the hotel, the conference, and even the hamburger joint where I grabbed lunch after arriving.

Most of these surveys were quite lengthy (only the survey about my visit to the airline club had a reasonable number of questions), and more than one of them had over 75 questions. Do I even need to add the fact that the overwhelming majority of the questions were completely irrelevant to my personal experience? Or the fact that some of the questions were so badly written that I couldn't even figure out what they were asking?

All told, I was asked to answer somewhere between 200 and 300 questions about this short business trip. Most of the questions were irrelevant, and some were incomprehensible. I felt like my time had been wasted and my patience abused, all for an exercise which the companies clearly didn't care enough about to do properly.

And that's why it's getting harder and harder to get customers to respond to e-mail surveys.

So how do you do it right? Here's my advice:

  1. Keep it short and focused. For an online survey my mantra is "one page, no scrolling." If you can't fit the questions on a single screen, then your survey is too long. And while you may think there are 75 different things you need to ask about, the truth is that if you're trying to focus on 75 things then you are focused on nothing.
  2. Proofread, test, and test again. There is no excuse for bad questions, but an e-mail survey can go to thousands of customers before anyone spots the mistake. Have both a customer and an outside survey professional provide feedback (not someone who helped write the survey).
  3. Communicate to customers that you take the survey seriously and respect and value their feedback. Don't just say it, do it (following #1 and #2 above are a good start). Other things you should be doing include personally responding to customers who had complaints, and updating survey questions regularly to ask about current issues identified on the survey.

While these are all important things to do to build an effective online survey, the unfortunate truth is that the well is still being poisoned by the overwhelming number of bad online surveys. Customers have been trained to expect that any e-mail invitation to take a survey is likely to be more trouble than it's worth, and so I expect response rates to continue to go down.

More on Correlation

Just in case you need more convincing that correlation is not causation, spend some time browsing the Spurious Correlation Generator.

Every day it finds a new (but almost invariably bogus) statistical correlation between two unrelated data sets. Today, for example, we learn that U.S. spending on science and technology is very strongly correlated (0.992) with suicides by hanging.

In the past we've learned that the divorce rate in Maine is correlated to per-capita margarine consumption, and that the number of beehives is inversely correlated with arrests for juvenile pot possession.

These correlations are completely bogus, of course. The point is to illustrate the fact that if you look at enough different data points you will find lots of spurious statistical relationships. With computers and big data, it's trivially simple to generate thousands of correlations with very high statistical significance which also happen to be utterly meaningless.
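This is easy to demonstrate yourself. The sketch below (Python, with arbitrary parameters) generates a few hundred unrelated random walks, a toy stand-in for the trendy real-world time series these generators scan, and counts how many pairs show a strong correlation despite having no relationship at all:

```python
import numpy as np

rng = np.random.default_rng(7)

# 500 completely independent random walks, 20 "yearly" data points each.
n_series, n_years = 500, 20
walks = rng.normal(size=(n_series, n_years)).cumsum(axis=1)

# All pairwise correlations; keep each pair once (upper triangle).
corr = np.corrcoef(walks)
upper = np.triu_indices(n_series, k=1)
strong = np.abs(corr[upper]) > 0.9

print(f"pairs checked: {strong.size}")
print(f"|r| > 0.9 despite zero relationship: {strong.sum()}")
```

Trending series correlate with other trending series almost by definition, so scanning enough of them is guaranteed to turn up impressive-looking, meaningless matches.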

Getting It Wrong

A little over a year ago, Forrester Research issued a report called 2013 Mobile Workforce Adoption Trends. In this report they did a survey of a few thousand IT workers worldwide and asked a bunch of questions about what kinds of gadgets they wanted. Based on those survey answers they tried to make some predictions about what sorts of gadgets people would be buying.

One of the much-hyped predictions was that worldwide about 200 million technology workers wanted a Microsoft Surface tablet.

Since then, Microsoft went on to sell a whopping 1-2 million tablets in the holiday selling season (the seasonal peak for tablet sales), capturing just a few percent of the market.

At first blush, one would be tempted to conclude that Forrester blew it.

Upon further reflection, it becomes clearer that Forrester blew it.

So what happened? With the strong disclaimer that I have not read the actual Forrester report (I'm not going to spend money to buy the full report), here are a few mistakes I think Forrester made:

  1. Forrester was motivated to generate attention-grabbing headlines. It worked, too. Ignoring the fact that there have been Windows-based tablets since 2002 and none of them set the world on fire, Forrester's seeming discovery of a vast unmet demand for Windows tablets generated a huge amount of publicity. Forrester might also have been trying to win business from Microsoft, creating a gigantic conflict of interest.
  2. Forrester oversold the conclusions. The survey (as far as I can tell) only asked IT workers what sort of tablet they would prefer to use, at a time when the Microsoft Surface had only recently entered the market and almost nobody had actually used one. That right there makes the extrapolation to "what people want to buy" highly suspect, since answers will be based more on marketing and brand name than the actual product. Furthermore, since this was a "global" survey, there was probably a substantial fraction of the population outside the U.S., Canada, and the E.U. who are unlikely to buy (or be issued) a tablet of any sort in the near future.
  3. Forrester let the hype cycle get carried away. I found many, many articles quoting the "200 million Microsoft Surface Tablets" headline without any indication that Forrester did anything to tamp this down. Forrester's actual data basically said that about a third of IT workers surveyed would prefer a Microsoft-based tablet rather than Android, Apple, or some other brand, and if you believe there are about 600 million information workers worldwide (which Forrester apparently does), that's 200 million people. When that morphed into "Forrester predicts sales of 200 million Surface tablets," they did nothing to bring that back to reality.
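
For what it's worth, the extrapolation behind the headline is a one-liner, which is part of why it spread so easily. The population figure and the one-third share come from the discussion above; the sales figure is the high end of the 1-2 million quoted earlier:

```python
# Forrester's extrapolation: share of survey respondents preferring a
# Windows tablet, applied to an assumed global population of IT workers.
information_workers = 600_000_000        # Forrester's estimate
implied_demand = information_workers // 3  # "about a third" preferred Microsoft

print(f"implied demand: {implied_demand:,}")

# Actual Surface sales in the peak holiday quarter: roughly 1-2 million.
actual_sales_high = 2_000_000
print(f"prediction vs. reality: {implied_demand / actual_sales_high:,.0f}x")
```

Even giving Microsoft the generous end of the sales range, the headline number was off by a factor of about 100.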

All this is assuming that Forrester actually did the survey right, and they got a random sample, asked properly designed questions, and so forth.

At the end of the day, anyone who built a business plan and spent money on the assumption that Microsoft would sell 200 million Surface tablets any time in the next decade has probably realized by now that they made a huge mistake.

As the old saw goes, making predictions is hard, especially about the future.

Vocalabs Newsletter #78: Net Promoter and Customer Effort

We've published issue #78 of our newsletter. This issue has two articles: one about Net Promoter and Customer Effort, and the other about the importance of queue time in a customer service environment.

E-mail subscribers should have received their copies by now. As always, I hope you find our newsletter interesting and informative.

Correlation Is Not Causation

Anyone who works with statistics has heard the phrase "correlation is not causation."

What it means is that just because A is correlated with B you can't conclude that A causes B. It's also possible that B causes A, or that A and B are both caused by C, or that A and B are mutually dependent, or that it's all just a big coincidence.

Similarly, you can't assume that lack of correlation means lack of causation. Just because A isn't correlated with B doesn't mean that A does not cause B. It's possible that A causes B but with a time delay, or through some more complex relationship than the simple linear formula most correlation analysis assumes. It's also possible that B is caused by many different factors, including A, C, D, E, F, G, and the rest of the alphabet.
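
A tiny, artificial illustration of that last point: in the sketch below, B is completely determined by A, yet the linear correlation between them is exactly zero, because the relationship is U-shaped rather than a straight line.

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# B is fully caused by A (B = A squared), but the relationship is
# U-shaped, so a straight-line fit sees nothing at all.
a = [-3, -2, -1, 0, 1, 2, 3]
b = [x * x for x in a]

print(round(pearson(a, b), 6))  # 0.0
```

Perfect causation, zero linear correlation: the correlation coefficient simply isn't built to see this shape of relationship.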

In reality, a linear correlation analysis mostly tells you the degree to which A and B are measuring the same thing. That's useful information but it doesn't necessarily tell you how to drive improvement in B.

I'm always a little disappointed when, in a business setting, someone does a linear correlation of a bunch of different variables against some key metric and then assumes that the things with the highest correlation coefficient are the ones to focus on. Correlation analysis can't actually tell you what causes the metric to go up or down: it's the wrong tool for the job. At best, it's a simple way to get a few hints about what might be worth a deeper look.

Actually understanding the drivers for a business metric requires a more sophisticated set of tools. A/B testing (where you actually perform an experiment) is the gold standard, but you can also learn a lot from natural experiments (taking advantage of events which normally happen in the course of business), and also from the basic exercise of formulating theories about what causes the metric to change and testing those theories against existing data. 
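
As a sketch of what the analysis of an A/B test can look like (the numbers and the "first-call resolution" framing here are invented for illustration), this is the standard two-proportion z statistic in plain Python:

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Normal-approximation z statistic comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 1,000 customers get the old call flow (A),
# 1,000 get the new one (B); "conversion" = issue resolved on first call.
z = two_proportion_z(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(f"z = {z:.2f}")  # roughly 1.96, right at the usual 5% threshold
```

Because you actually changed one thing and randomized who saw it, a significant result here supports a causal claim in a way no correlation coefficient can.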

Issue #77 of Vocalabs' newsletter is published

We just published issue #77 of Quality Times, Vocalabs' newsletter. In this issue we have a pair of articles related to the design and interpretation of customer surveys. One is a few rules of thumb to follow when designing surveys; the other discusses how customers interpret satisfaction questions.

I hope you find this useful and informative, and welcome any comments and suggestions. 

Net Promoter and Customer Effort: Two Metrics Measuring Two Different Things

People often ask, "What's the right metric to use on a customer survey?"

The answer, of course, depends on what you're trying to measure. Often the survey has more than one goal, and this will require measuring more than one metric. Unfortunately, the people promoting the Net Promoter methodology have been promoting the idea that you only need to measure one thing (and, of course, that one thing is their metric).

As a case in point, we have a client currently asking both a recommendation question (similar to Net Promoter) and a customer effort question. Customer Effort is a metric designed to measure the roadblocks a customer experiences in trying to get service, and it's a good way to gauge how smoothly a particular transaction went. Net Promoter, in contrast, measures a customer's overall relationship with the brand and company.

In this survey we noticed a curious thing: a meaningful percentage of customers said they would recommend the company, but also said they had to go through a lot of effort to get what they wanted on the customer service call.

This should be surprising to anyone using Net Promoter to measure a particular customer experience--the theory being that customers who just had a bad experience will be less likely to recommend the company.

That theory may have some truth on average, but when it comes to individual customers there's clearly something else going on.

So we listened to a number of the interview recordings to better understand what the customers were saying. And the message was loud and clear: These customers had a bad customer service experience, but were loyal to the company for completely unrelated reasons.

The recommendation question was doing exactly what it was supposed to do: measure the customer's overall relationship with the company. And the customer effort question was also doing exactly what it was supposed to do: find the ways the company made it hard for customers to get the service they expected.

The lesson is simple, but often needs to be repeated. Ask the question about what you want to know. Don't expect a survey question designed to tell you one thing to measure something else.

Net Promoter and Customer Effort are two different questions which measure two different things.

How long did you wait?

One of the oldest complaints about customer service is having to wait on hold to talk to a person. It's still a problem from time to time in many companies, and we published some research on hold times as part of the mid-2013 NCSS Banking report (see page 3 of the PDF report).

We had a recent opportunity with a client to explore how well customers estimate their wait on hold. Anecdotally, we all know the customer who said he waited ten minutes but actually spent 30 seconds in queue. This client was able to supply us with the actual time in queue for each customer who completed a survey, which we compared to the customer's estimate of the wait for an agent.

The results were interesting and surprising. It turns out that an individual customer's estimate of the time spent waiting bears almost no relationship to the actual queue time for that customer. There were plenty of instances of dramatic over- and under-estimates of the wait time. I'm talking about people who claimed they had to wait ten minutes but actually spent less than a minute in queue--or, conversely, people who said it was under a minute when it was actually several.

However, on average, customers' estimates of the wait time were astonishingly accurate. For example, taking all the people who said their wait time was "about two minutes," their average actual queue time came out surprisingly close to 120 seconds.
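
A toy simulation (not the client's data; the noise level is an assumption) shows how both findings can be true at once: individual estimates that are wildly wrong, and a group average that is nearly spot-on.

```python
import random

random.seed(7)

# Hypothetical model: each customer's estimate of the wait is an
# unbiased but very noisy reading of the true queue time (in seconds).
true_waits = [random.uniform(30, 300) for _ in range(2000)]
estimates = [t + random.gauss(0, 90) for t in true_waits]

worst_error = max(abs(e - t) for e, t in zip(estimates, true_waits))
mean_gap = abs(sum(estimates) / 2000 - sum(true_waits) / 2000)

print(f"worst individual error: {worst_error:.0f} seconds")
print(f"average estimate vs. average actual, gap: {mean_gap:.1f} seconds")
```

Because the errors are large but roughly symmetric, they mostly cancel in the aggregate, which is exactly the pattern we saw in the real data.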

We also found that both actual and perceived wait time correlated to IVR and call satisfaction, but the perceived wait time was a stronger relationship. I suspect this may have to do with the customer's emotional state: the more annoyed he is with the call, the less satisfied, and the longer he thinks he had to wait to speak to someone.

Finally, there's a significant minority of customers (I'm guessing around 20%) who apparently are including the time spent navigating the IVR in their estimates of the wait to speak to someone. So even if the actual queue time was short, a long and complicated IVR makes some people feel like they're waiting for an agent.

So the lessons we learned are:

  • Queue time still matters in customer service. It feels a little old-school in this age of social media and natural language IVR, but make sure you're answering the phones promptly.
  • The actual queue time and what the customer thought the queue time was are two different things. You're probably measuring the former but not the latter, but it's the customer's perception which counts.
  • Making customers go through extra steps to reach a person makes them think it took longer to reach someone, and makes customers less satisfied with the service.

We're Watching You, Comcast!

Comcast and Time Warner have launched a PR offensive to try to convince people that they are going to improve their customer service in advance of their pending merger, as evidenced by a pair of puff pieces in USA Today and Marketwatch today.

Comcast, of course, is the company which was far behind its peers for customer service in the recent National Customer Service Survey results. Time Warner did better than Comcast, but is still below most of the others.

Speaking as a Comcast customer myself, I truly hope the company is mending its ways in customer service. But I'm also very skeptical. It takes more than good intentions and noise from the executive suite to make this kind of change: it requires changing the way thousands of individual employees interact with customers on a daily basis, it requires fixing broken processes which prevent resolution of customer issues, and most of all it requires time and hard work.

Many customer service initiatives fail because, while the leadership is willing to talk a good game, they aren't willing to devote the effort and resources.

Fortunately, though, we won't have to take Comcast's word on whether their customer service is improving. We will see soon enough, through the ongoing customer feedback in the National Customer Service Survey, whether they are actually making any improvements. I look forward to seeing the results over the coming months.

So Comcast, it's great that you're talking about improving service. But we're watching you.

Customers Don't Give Points, They Take Them Away

What does "Very Satisfied" mean? Does it mean "Outstanding job, above and beyond expectations?" or does it mean "I don't have any complaints?"

Many people who receive customer feedback think it means the former. But in most cases, the data suggests that it actually means the latter. In other words, if a customer gives you the top score on a survey, it often just means you did your job.

Case in point: for one of our clients, we are following up on one of the core satisfaction questions by asking the customer to explain the reason for his or her rating. Because this is an interview format, we are getting a response over 90% of the time.

When the customer gave the top rating, "Very Satisfied," 99% of the reasons given are positive (and the small number of negative comments were mostly about unrelated things). This isn't surprising.

But when the customer gave anything other than that top score, even the mostly-OK-sounding "Somewhat Satisfied," 96% of the reasons the customers gave for their rating are negative.

In other words: If the customer didn't give the best possible score, there was almost always a specific complaint.

We see a similar pattern in most questions where we ask the customer to rate the company or the interaction. For another client, which uses a 0-10 point "recommendation" question (aka "Net Promoter"), over half the people who gave an 8 out of 10 had some specific complaint (and nearly everyone who gave a 6 or below had one).

The notion that the middle point on the scale is somehow "neutral" (even if we call it "Neutral") is simply not consistent with how people really answer these kinds of questions.

Instead, most people start near the top of the scale and mark you down for specific reasons. If the customer has nothing to complain about, you get something at or near the best possible score.

So in most cases, customers don't give you a better rating for better service and a worse rating for worse service. Instead, they give you a good rating and take away points for things you did wrong.
