Vocalabs Newsletter: Quality Times

Issue 104

Circling the Drain with Happy Customers

In This Issue

  • Circling the Drain with Happy Customers
  • Teaching, Not Scoring

American Customer Satisfaction Index (ACSI) data for the retail industry was released this month, and the folks over at Consumerist noticed something, well, odd. Scores for Sears, JC Penney, and Macy's took huge leaps in 2016--despite the fact that those traditional department stores have been closing stores in the face of sustained sales declines and changing consumer tastes. In addition, Abercrombie & Fitch, a specialty clothing retailer, also posted a big ACSI gain despite struggling to actually sell stuff.

The idea that customer satisfaction scores would jump as the companies are losing customers seems counterintuitive to say the least. The ACSI analyst gamely suggested that shorter lines and less-crowded stores are leading to higher customer satisfaction and, yeah, I'm not buying it.

I'd like to suggest some alternate hypotheses to explain why these failing retailers are posting improved ACSI scores:

Theory 1: There's No There There

Before trying to explain why ACSI scores might be up, it's worth asking whether these companies' scores are actually improving, or whether it might just be a statistical blip.

Unfortunately, ACSI doesn't provide much help in trying to answer this question. In their report (at least the one you can download for free) there's no indication of margin of error or the statistical significance of any changes. They do disclose that a total of about 12,000 consumers completed their survey, but that's not helpful given that we don't know how many consumers answered the questions about Sears, JC Penney, etc.

With this kind of research there's always a temptation to exaggerate the statistical significance of any findings--after all, you don't want to go through all the effort just to publish a report that says, "nothing changed since last year." So I'm always skeptical when the report doesn't even make a passing reference to whether a change is meaningful or not.

It could be that these four companies saw big fluctuations in their scores simply because they don't have many customers anymore and the sample size for those companies is small. There's nothing in the report to rule this possibility out.
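As a rough back-of-the-envelope check (a sketch using assumed numbers, since ACSI doesn't publish per-company sample sizes or score variability), here's how quickly the margin of error on a 0-100 mean score grows as the sample shrinks:

    import math

    def margin_of_error(std_dev, n, z=1.96):
        """Approximate 95% margin of error for a mean score."""
        return z * std_dev / math.sqrt(n)

    # Assumed numbers for illustration only: individual scores with a
    # standard deviation of about 15 points on the 0-100 ACSI scale.
    for n in (1000, 250, 100):
        print(f"n={n:4d}: +/- {margin_of_error(15, n):.1f} points")

    # n=1000: +/- 0.9 points
    # n= 250: +/- 1.9 points
    # n= 100: +/- 2.9 points

If only a couple hundred respondents rated a given retailer, a year-over-year swing of a few points could easily be within the noise.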

Theory 2: Die-Hards Exit Last

We know that even though surveys like ACSI purport to collect opinions about specific aspects of customer satisfaction, consumer responses are strongly colored by their overall brand loyalty and affinity.

So as these shrinking brands lose customers, we expect the least-loyal customers to leave first. That means the remaining customers will, on the whole, be more loyal and more likely to give higher customer satisfaction scores than the ones who left.

In other words, these companies' survey scores are going up not because the customer experience is any better, but because only the true die-hard fans are still shopping there.

If this is the case, then the improved ACSI scores are real but not very helpful to the companies. They are circling the drain with an ever-smaller core group of more and more loyal customers.
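To see how strong this selection effect can be, here's a toy simulation (every number in it is invented for illustration): each customer gets a loyalty level, satisfaction is loosely tied to loyalty, and then the least-loyal 40% stop shopping. The average score of the survivors rises even though nobody's experience changed.

    import random

    random.seed(1)

    # Hypothetical customer base: loyalty is uniform between 0 and 1, and each
    # customer's satisfaction score is loosely tied to loyalty (all invented).
    customers = []
    for _ in range(10_000):
        loyalty = random.random()                       # 0 = least loyal, 1 = most loyal
        score = 60 + 30 * loyalty + random.gauss(0, 8)  # satisfaction on a ~0-100 scale
        customers.append((loyalty, score))

    def avg_score(group):
        return sum(score for _, score in group) / len(group)

    print(f"All customers:       {avg_score(customers):.1f}")

    # The brand shrinks: the least-loyal 40% of customers leave.
    survivors = sorted(customers)[int(0.4 * len(customers)):]
    print(f"Remaining die-hards: {avg_score(survivors):.1f}")

    # No individual's experience improved, yet the average score of the
    # remaining customers comes out several points higher.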

This is a hard theory to test. If ACSI has longitudinal data (i.e. they survey some of the same customers each year) then it might be possible to tease out changes in customer populations from changes in customer experience. But as with the first theory, the published report doesn't provide enough information to test this theory.

Theory 3: ACSI Has Outlived its Usefulness

Finally, it's worth asking whether the ACSI is simply no longer relevant. The theory behind ACSI is that more-satisfied customers will lead to more customer loyalty and higher sales, all else being equal. But the details are important, and the specific methodology of ACSI was developed over 20 years ago, based on research originally done in the 1980s.

I know that ACSI has made some changes over the years (for example, they now collect data through email, which wasn't practical in the '90s), but I don't know if they've evolved the survey questions and scoring to keep up with changes in customer expectations and technology. Back in 1994 when ACSI launched, not only did we not have Facebook and Twitter, but Amazon.com had only just been founded, and most people didn't even have access to the Internet.

So if the index hasn't kept up enough, it's possible that ACSI is putting too much weight on things that don't matter to a 21st century consumer, and missing new things that are important.

Interpreting Survey Data Is Hard

I'm only picking on ACSI because their report is fresh. The fact is that interpreting survey data is hard, and it's important to explore alternate explanations for the results. Even when the data perfectly fits your prior assumptions, you may be missing something important if you don't look at competing theories.

It's entirely possible that ACSI did exactly that: tested all three of my alternate theories (and others), and has additional data supporting their explanation that "Fewer customers can lead to shorter lines, faster checkout, and more attention from the sales staff." But if they went through that analysis, there's no evidence of it in their published report.

When the survey results are unexpected, you really need to explore what's going on and not just reach for the first explanation that's remotely plausible.


Teaching, Not Scoring

Imagine taking a college class, and at the beginning of the semester the professor announces, "For this class, we're not going to be handing back any of your papers or exams, and we won't tell you any of your grades on individual assignments and tests. The only grade you'll get is your final grade at the end of the semester, which will be an average of all your work."

You wouldn't expect to learn much in this class. In order to improve, you would want to know, throughout the semester, what you were doing well and where you needed to improve. You would want specific feedback about specific things you had done.

And yet many customer feedback programs are structured just like this insane professor's class. Somehow we expect employees to know how to improve despite getting little more than an average survey score every month or every quarter.

In order to make a survey program helpful, we need to give employees the chance to connect specific customer feedback to the specific things they did to garner that feedback. We also need to help employees think of the feedback as constructive criticism, so they have the tools to apply it to their daily customer interactions.

Here are some tips to help make this happen:

  • Deliver individual survey responses directly to front-line managers and supervisors as soon as they come in. Managers and supervisors should discuss the feedback with employees as soon as is practical, whether for encouragement or for ways to improve.
  • Don't make the survey process so high-stakes that employees feel they must get good scores or else. This inhibits learning and can encourage survey manipulation.
  • Treat negative surveys as opportunities to improve, not mistakes to be punished. Always remember that each survey is only one customer's opinion, and while you want customers to have good opinions it's also not possible to please everyone.
  • Don't just ask customers for a rating; ask them to explain what happened and why they feel the way they feel. We learn more from stories than from statistics.

There are, of course, real concerns about managing the delivery of customer feedback to employees. But the solution is better coaching and supervision, not giving people so little feedback that it becomes useless.
