The Customer Service Survey

Technology for the Sake of Technology

by Peter Leppik on Fri, 2016-12-16 11:25

For $1,500 you can apparently buy a "smart" toaster oven that uses sensors, AI, deep learning, and other buzzwords to automatically recognize what you're cooking and figure out how to optimally cook whatever you put in it, and send you a message on your cellphone when it's done.

Except, perhaps, not so much. According to a review in Fast Company of the June toaster oven, not only does the product not work as advertised, even if it did work it would be much more complicated and difficult to use than an ordinary "dumb" appliance. Where an ordinary toaster oven is fairly simple to use (set temperature, add food, set timer, remove food), using the smart version requires navigating multiple menus, answering questions about how you want your food cooked, and hoping that you and the oven guessed the right settings. Worse, the oven's software will update itself from time to time (to make it smarter of course), so once you've figured out the settings to toast your toast exactly the way you want it, you could find that the next morning it cooks differently because of an overnight update.

This is a classic example of applying technology solely for the sake of technology, and as in most such cases the result is a much worse customer experience than the old, "dumb" product delivered, at a multiple of the price.

Improving the customer experience requires taking a customer-centric approach rather than a technology-centric one. Begin with the customer's journey. Identify the pain points and roadblocks. Then find ways--which might or might not involve technology--to remove those problem areas.

Some will argue that this incremental approach won't lead to revolutionary change. As Henry Ford once (supposedly) said, if he'd asked his customers what they wanted, they would have said a faster horse. But the fact is that the overwhelming majority of improvements come through slow and steady refinements to the products and services that are already out there. Radical innovation, like what led to the first iPhone, is rare and often fails.

When you take the technology-centric approach, and try to apply new technology to an old problem just for the sake of applying new technology, more often than not the result is going to be an expensive failure. Rather than solving actual customer problems, you're more likely to invent new problems that don't really exist just so you can apply the technology to solve them.

As evidence, all you need to do is read through the recipes on the website for the June toaster oven. Despite the product's hefty price tag and advanced features, five of the ten recipes instruct the user to do nothing more than set the oven temperature and cook for a certain number of minutes--functionality available in any ordinary $25 toaster oven, at one sixtieth the price. Not even the June's manufacturer could find more than a handful of recipes that used (not necessarily required) all the product's advanced features.

Stop And Think Before Collecting Useless Data

by Peter Leppik on Wed, 2016-12-07 14:52

People are naturally attracted to shiny new things, and that's just as true in the world of business intelligence as in a shopping mall. So when offered an interesting new piece of data, the natural inclination is to chew on it for a while and ask for more.

But not all this data turns out to be particularly useful, and the result is often an accumulation of unread reports. I've known companies where whole departments were dedicated to collecting, analyzing, and distributing data that nobody (outside the department) used for any identifiable purpose.

Before gathering data and creating reports, it's worth taking a moment to consider what the data will be used for. There are a few broad categories, ranging from the most useful to the most useless:

  1. The most useful data is essential for business processes. For example, sales and accounting data is essential for running any kind of business. An employee coaching program built around customer feedback requires customer feedback data to operate. If the data is a required input into some day-to-day business process, then it falls into this category and there's no question of its usefulness as long as the underlying process is operating.
  2. Less essential but still very useful is data to help make specific decisions. Without this information the company might make a different (and probably worse) decision about something, but a decision could still be made. For example, before deciding whether to invest in online chat for customer service it's helpful to have some data about whether customers will actually use it.
  3. Data to validate a decision that's already been made may seem useless, since the data won't change the decision. But in a well-run organization, it can be valuable to take the time and effort to review whether specific decisions turned out in hindsight to be the right thing to do. Ideally this will lead to self-reflection and better decision-making in the future, though in practice most organizations aren't very good at this kind of follow-through.
  4. Occasionally data to monitor the health of the business will have value, though most of the time--when things are going well--these reports won't make any practical difference. Most tracking and trending data falls into this category (assuming the underlying data isn't also being used for some other purpose). The value of this type of data is that it can warn of problems that might not be visible elsewhere; the risk is that red flags will be ignored. Lots of companies track customer satisfaction, yet take no action when satisfaction plummets as long as sales and profitability remain high.
  5. Data that might be useful someday is the most useless category, since in practice "someday" rarely arrives. Information that's "nice to have" but doesn't drive any business activity or decision-making is probably information you can do without.

It may seem that there's little harm in collecting useless data, but the reality is that it comes with a cost. Someone has to collect the data, compile the reports, and distribute the results. Worse, recipients who get too many useless reports are more likely to miss the important bits for all the noise.

So before collecting data, take a moment to think about how--and whether--it's going to be used.

Possibly the Best Management Podcast You Will Listen To This Year

by Peter Leppik on Wed, 2016-11-30 15:05

Dan Ariely recently spoke at the Commonwealth Club of California on "The Hidden Logic that Shapes Our Motivations." He covers a lot of ground in psychology and behavioral economics, but what really makes it worth listening to is the discussion of what motivates employees to work hard and do a good job.

There's probably as much practical management wisdom packed into this one hour podcast as in an entire shelf of management books. It's worth listening to the whole thing just to hear why you probably shouldn't pay bonuses to your best employees. Along the way you'll also learn that pizza is sometimes better than cash, and that incentives can cause productivity to decrease.

Humility After the Election

by Peter Leppik on Fri, 2016-11-18 15:34

After the results of the 2016 presidential election came in, the first reaction of many people was that the polls were wrong. A more detailed analysis seems to show that the polls in 2016 were about as accurate (or inaccurate) as they usually are, but many people treated them as more precise than they really are.

I think the surprise (to many) election of Donald Trump serves as an important reminder of the limitations of survey research. Surveys aren't a precision instrument. That's partly because of inaccuracies and biases in sampling, but it's also because surveys are trying to measure opinions, and opinions are inherently fuzzy, malleable, and context-dependent.

In fact, given the limitations of surveys, it's remarkable that political polls are as accurate as they are. Predicting the outcome of an election is easily the most challenging application for a survey, given that you are trying to predict the future behavior of the general population, races are often decided by margins smaller than the margin of error, and you don't get credit for being close if you predicted the wrong winner.
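To make "margin of error" concrete, here is a minimal Python sketch of the standard 95% margin of error for a proportion measured from a simple random sample. The numbers are illustrative, not taken from any specific 2016 poll:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of 1,000 respondents:
moe = margin_of_error(1000)
print(f"+/- {moe * 100:.1f} points")  # about +/- 3.1 points
```

Note that this formula covers only random sampling error. Nonresponse, coverage gaps, and other biases can easily be larger, which is exactly why close races are so hard to call.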

This year's campaign should serve as an important reminder to be humble when interpreting survey results. A solid voice-of-the-customer program isn't as challenging as election forecasting, but customer surveys can still have important biases and inaccuracies. Keep in mind that:

  • Low response rates mean that you are more likely to hear from customers with strong opinions.
  • The survey process may be excluding some customer segments that have different experiences than the customers who can take the survey.
  • If you're giving employees bonuses for hitting survey targets, they may be trying to manipulate the survey.
  • Customers may be interpreting the survey questions differently than you intended.
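The first point above (low response rates amplifying strong opinions) is easy to see with a little arithmetic. Here is a short Python sketch using made-up numbers, purely for illustration:

```python
# Hypothetical numbers: 60% of customers are satisfied, but dissatisfied
# customers respond to the survey three times as often.
true_satisfied = 0.60
resp_rate_satisfied = 0.05     # 5% of satisfied customers respond
resp_rate_dissatisfied = 0.15  # 15% of dissatisfied customers respond

responses_satisfied = true_satisfied * resp_rate_satisfied
responses_dissatisfied = (1 - true_satisfied) * resp_rate_dissatisfied
observed = responses_satisfied / (responses_satisfied + responses_dissatisfied)

print(f"True satisfaction:  {true_satisfied:.0%}")
print(f"Survey would show:  {observed:.0%}")  # about 33%
```

With these assumed response rates, a population that is 60% satisfied shows up in the survey as only about one third satisfied. The raw score is badly skewed even though every individual response is honest.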

Keep these limitations in mind, and you will be less likely to be overconfident when interpreting what your customers are trying to tell you.

Let Me Put my Customer Hat On

by Peter Leppik on Wed, 2016-11-16 13:51

To create a good customer experience you need to be able to put yourself in the customer's shoes.

But you also need to be grounded in what actual customers expect and experience.

When You Assume...

When you put your "customer hat" on, are you trying to come to a genuine understanding of specific customer issues and feedback, or are you imagining what a customer might think based on your own assumptions?

As the old saying goes, when you "assume" you make an "ass of U and me."

It's tempting to try to think about the customer's perspective, but customers have a very different experience than employees. Crossing the chasm between an insider's perspective and a customer's perspective is almost impossible without customer feedback. For example:

  • Employees understand how the company works, and customers don't.
  • Employees understand industry jargon, and customers don't.
  • Employees know why certain policies exist, and customers don't.
  • Employees have experience navigating their company's bureaucracy, and customers don't.

These differences in perspective can create blind spots when you try to understand the customer's viewpoint.

Understanding the Customer's Experience

To really put yourself in the customer's shoes, you should:

  • Begin with customer feedback, not your or your team's ideas of what customers are thinking.
  • Take the view that each customer's perspective is reasonable, and trust what they're telling you.
  • Expect that different customers have different experiences. When customers have conflicting opinions, both are equally valid.
  • Work to understand why some customers might feel differently about your customer experience than you do.
  • Understand what parts of your customer experience may be painful to customers even though they make sense to people inside your organization.

This puts the voice of the customer front and center where it belongs. Too often, companies will take the opposite approach: beginning with their own preconceived ideas, they imagine what they think customers want and then collect customer feedback to validate their opinions.

And while customers and employees may agree about many customer experience problems--things that are painful to customers are often also painful to employees--the insider perspective is usually incomplete.

So when you put that customer hat on, make sure you're not putting it on backwards.

The Hundredth Newsletter

by Peter Leppik on Fri, 2016-11-11 15:19

This week we published the 100th issue of Quality Times, Vocalabs' newsletter about customer feedback and customer experience.

A hundred newsletters is a lot, and it seemed appropriate to take a few steps back and consider what customer surveys are really for and how to make them more effective. We know what it takes to make customer feedback effective in improving the customer experience; the problem is that most surveys simply aren't designed to be effective.

As always I hope you find our newsletter interesting and informative, and if you enjoy it please subscribe to get the latest edition hot off the presses via email.

The Intersection of 65,000 and 115,000 Is Not "&"

by Peter Leppik on Tue, 2016-11-08 10:30

Good data visualization is a balancing act. Communicating facts and statistics in a way that's both pleasing to the eye and conveys meaning intuitively requires skills that are not always easy to find.

It's not surprising when charts and graphs sometimes misfire, especially when the designer tries too hard to be clever and just winds up being confusing.

Just as shipwrecks are sometimes useful ways to spot where the rocks are, really bad data visualizations can help us avoid the mistakes of others.

WTF Visualizations is like a roadmap of how not to communicate data. I highly recommend you spend some time browsing their examples of really terrible data visualization.

First you will laugh. Then you will think. And then, I hope, you will resolve never to venture into those same waters.

Proven Success

by Peter Leppik on Wed, 2016-09-21 15:21

I was going through some of our old client data and discovered something very interesting. In every single case where a client has adopted the broad outlines of the kind of survey process we advocate in Agile Customer Feedback, the client has seen substantial long-term gains. For example:

  • Client A: +10 to NPS
  • Client B: +12 to CSAT with the support incident
  • Client C: +14 to Call Sat in the contact center
  • Client D: +13 to Call Sat

These improvements are changes in full-year survey scores from the first year we started working with each client up to the most recent year we have comparable survey data.

I didn't include all our clients on this list, but we have seen statistically significant (and often remarkable) improvements in survey scores at every single client where we:

  1. Conduct real-time phone interviews of their customers after a customer experience,
  2. Deliver the feedback to the front lines of the organization in real time, and
  3. Have at least two years of survey data.

It's not often in the business world that you can honestly say you have a solution that works every time. But in our case it's true. Every single time we've implemented an Agile Customer Feedback process for a client, it's delivered significant and sustained improvements over the long term.

If Your Employees Will Defraud Customers, They'll Also Cheat on the Survey

by vocalabs on Fri, 2016-09-09 14:40

In the news today, we learned that Wells Fargo had a small issue with employees defrauding customers by creating phony accounts in order to run up fees and hit sales targets. And by "small" I mean "mind-bogglingly massive."

Over the past several years, 5,300 Wells Fargo employees have been terminated because of these fake accounts. That's roughly 2% of the company's workforce, one in 50 employees.

When you consider that not all Wells Fargo employees have the ability or incentive to set up fake customer accounts in this way, and that probably not everyone who did it got caught, it would appear that this behavior was extremely widespread within Wells Fargo.

It's not hard to speculate on why this probably happened. Like many organizations, Wells Fargo probably set aggressive performance targets for employees with serious consequences for missing a goal. Over time, employees learned that they could bend the rules a little to hit their targets, and then bend them a little more. And over time, the goals probably ratcheted up and got harder and harder to meet without cheating. After a while, committing a little light fraud to meet your numbers seems perfectly normal. After all, everyone else is doing it.

Wells Fargo leadership will probably insist that they did not condone any of this, but widespread employee misconduct does not happen in a vacuum. If the consequence for missing your sales targets is the same as the consequence for creating a fake customer account (you lose your job in both cases), the incentive is clearly there to commit fraud and hope you get away with it.

All of this is a little far afield from my usual topic of writing about customer surveys. But a lot of companies have the same sort of high-stakes incentive systems around their customer surveys as they do for hitting sales targets.

Most people have a fairly strong moral compass, but they can be gradually led astray if they're in an environment where misconduct seems normal and carries few consequences. Cheating on a survey is certainly not as bad as creating fake bank accounts to generate fees from customers, but the same dynamic of gradually normalized rule-bending applies.

Your goal in any customer feedback program should be to improve your performance, not just generate high survey scores. But if you treat your survey as a high-stakes performance metric for employees, you are almost guaranteed to get cheating. When that happens, the survey is no longer giving you meaningful information.

Rexthor, the Dog-Bearer

by vocalabs on Fri, 2016-08-26 14:42

Statistical analysis is a powerful tool, and like any other power tool, it can cause a lot of damage if not used properly.

So without further comment, here is today's XKCD comic:
