Vocalabs Newsletter: Quality Times

Issue 89

In This Issue

  • Shots Fired in the Metrics War
  • Incentive Payments


Shots Fired in the Metrics War

A recent academic paper, The Predictive Ability of Different Customer Feedback Metrics for Retention, is likely to stir things up in the debate about which is the right metric to use for customer feedback.

The paper concludes that old-fashioned customer satisfaction and Net Promoter are statistically almost identical in their ability to predict customer retention, and Customer Effort performs somewhat worse.

Already, I've seen one NPS promoter claim that this "vindicates" NPS, which is not true. If anything, this research vindicates Customer Satisfaction, which NPS proponents often claim is less predictive than NPS.

But that aside, there are some important limitations to this research:

  • The study was conducted using only Dutch participants. Because survey questions in different languages are literally different questions, the research isn't applicable to English-language surveys.
  • The overall sample size was respectable (over 8,000 ratings), but the follow-up survey used to determine retention got only about a 15% response rate. That left a little over 1,000 responses for determining the statistical relationships between the metrics and retention. That's sufficient (but not great), except that only about 20% of those respondents answered the Customer Effort question. So conclusions about the predictive value of Customer Effort rest on fewer than 300 responses in total, a minuscule sample for this kind of study.
  • The study authors also tried to see whether the metrics performed differently across industries, so they segmented the results into 18 (!) different industries. At the industry level the sample is incredibly thin and the differences between the metrics are generally slight; I don't see how the authors can justify drawing conclusions at this level.
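The back-of-envelope arithmetic behind the sample-size critique can be sketched quickly. The figures below are round approximations taken from the text above, not exact counts from the paper:

```python
# Approximate figures from the text, not exact counts from the paper.
total_ratings = 8000   # overall ratings collected
followup_rate = 0.15   # ~15% answered the follow-up retention survey
effort_rate = 0.20     # ~20% of those answered the Customer Effort question

# Responses usable for the retention analysis
retention_n = round(total_ratings * followup_rate)
# Responses behind the Customer Effort conclusions
effort_n = round(retention_n * effort_rate)

print(retention_n)  # 1200 -- "a little over 1,000 responses"
print(effort_n)     # 240  -- "fewer than 300 responses"
```

Splitting the sample further into 18 industries leaves only a handful of Customer Effort responses per industry, which is why the industry-level conclusions are hard to justify.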

Those critiques aside, this is an interesting paper, and the authors did more right than wrong. I fear, though, that people who want to promote a particular survey metric will misuse and misunderstand this research.

My own view is that way too much time and effort is spent arguing about the "right" metric, when it's far more important to have a robust process. If you get the process right, even a mediocre metric will give better results than a great metric paired with a terrible process.

So by all means read the research and understand the value of different survey metrics. But when you go to build your own program, spend your time making sure you follow the principles of Agile Customer Feedback rather than trying to find the perfect survey question.


Incentive Payments

Incentive payments are a standard technique for increasing survey response rate. Whether a straight-up payment ("Get $5 off your next order for taking our survey") or entry into a drawing ("Take our survey for a chance to win $5,000!"), this pitch will be familiar to almost everyone.

The problem is that in many cases, incentives are deployed as a lazy and expensive way to "fix" a broken process. If a survey has a low response rate, there's usually some underlying cause. For example:

  • The survey takes too long.
  • The survey isn't being offered to customers in a way that's convenient.
  • The process relies on customers remembering to take the survey (especially surveys printed on cash-register tapes or at the end of a call).
  • The company doesn't communicate that it takes the feedback seriously.
  • The survey is broken (for example, a web survey which returns errors).
  • The survey invitation looks too much like spam or a scam.
  • The survey gets in the way of something else the customer wants to do (especially pop-up surveys on web pages).
  • The survey doesn't respect whatever genuine desire to give feedback the customer may have.

Rather than trying to identify the underlying issue and fix it, some companies just throw money at customers to try to boost response. What's wrong with that? Here are a few things:

  • Incentives can be expensive. I know of companies that spend more on survey incentives than on the survey itself.
  • Incentives motivate the customer in the wrong way. Feedback given out of a genuine desire to help is more likely to be sincere and detailed than feedback given to earn a few bucks.
  • Incentives are almost never necessary in a transactional feedback program. A well-designed process will normally give a high enough response rate without the use of incentives.

But the biggest sin of survey incentives is that they're often used to hide deeper flaws in the survey, problems that make the entire process much less effective. Designing an effective transactional feedback program requires making tradeoffs, and those tradeoffs are what keep the survey design carefully focused.

For example, transactional surveys need to be reasonably short to get a good response rate. That means some (possibly difficult) decisions need to be made about which questions to ask and which questions not to ask. But that process also forces the company to carefully consider what the purpose of the survey really is, and what's important to ask. The result is almost always a better survey, precisely because it doesn't include all the things that aren't as useful.

All that said, there are some situations where incentives may be appropriate, especially when you get out of transactional surveys and into the realm of market research. If you're asking the participant to spend a lot of time, take part in a focus group, or otherwise do something more than a quick favor, then you should offer some compensation.

But for ordinary transactional surveys, incentives are usually a sign of a broken process.
