Survey Design and Interpretation
Writing a good survey isn't hard, but there are some gotchas if you've never done it before. Here are a few rules of thumb to help avoid the biggest mistakes:
- Keep the survey short:
  - Phone interviews should be under 5 minutes
  - Online surveys should fit on a single screen without scrolling
  - IVR surveys should be 5 questions or fewer
- Keep the questions short and use simple language. Avoid jargon or brand names, since there's a good chance customers won't recognize them.
- Always begin by asking the customer to rate the company as a whole, even if that's not what the survey is about. This gives customers who have a problem with the company a chance to get it off their chest so they won't penalize the representative.
- Put the most important questions (usually your tracking metrics) near the beginning. That way they are less likely to be biased by other questions and more likely to be answered.
- Be as consistent as possible with your rating scale. For example, don't switch from a 0-10 scale to a 1-5 scale.
- In the U.S., it's conventional for higher numbers to be better. Don't make "1" best and "10" worst, as it's likely to confuse people. (This convention may differ in other cultures.)
- Always have at least one free response question.
- Plan on making regular changes to the survey. You won't get it perfect the first try.
Following these rules won't necessarily give you a great survey, but breaking these rules will almost always make it worse.
What does "Very Satisfied" mean? Does it mean "Outstanding job, above and beyond expectations?" or does it mean "I don't have any complaints?"
Many people who receive customer feedback think it means the former. But in most cases, the data suggests it actually means the latter. In other words, if a customer gives you the top score in a survey, it often just means you did your job.
Case in point: for one of our clients, we are following up on one of the core satisfaction questions by asking the customer to explain the reason for his or her rating. Because this is an interview format, we are getting a response over 90% of the time.
When the customer gave the top rating, "Very Satisfied," 99% of the reasons given were positive (and the few negative comments were mostly about unrelated things). This isn't surprising.
But when the customer gave anything other than that top score, even the mostly-OK-sounding "Somewhat Satisfied," 96% of the reasons given were negative.
In other words: If the customer didn't give the best possible score, there was almost always a specific complaint.
We see a similar pattern in most questions where we ask the customer to rate the company or the interaction. For another client, which uses a 0-10 point "recommendation" question (aka "Net Promoter"), over half the people who gave an 8 out of 10 had some specific complaint (and nearly everyone who gave a 6 or below had one).
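One way to check this pattern in your own data is to tabulate the complaint rate at each score alongside the standard Net Promoter calculation (promoters are 9-10, detractors are 0-6, NPS is the difference in their percentages). The sketch below uses made-up interview data; the data, variable names, and helper functions are illustrative, not from any real survey.

```python
from collections import Counter

# Hypothetical follow-up interview results: (0-10 score, did the
# customer voice a specific complaint when asked to explain it?)
responses = [
    (10, False), (10, False), (10, False), (9, False), (9, True),
    (8, True), (8, False), (8, True), (7, True),
    (6, True), (5, True), (3, True),
]

def nps(scores):
    """Standard Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def complaint_rate_by_score(responses):
    """Fraction of respondents at each score who voiced a specific complaint."""
    totals, complaints = Counter(), Counter()
    for score, had_complaint in responses:
        totals[score] += 1
        if had_complaint:
            complaints[score] += 1
    return {s: complaints[s] / totals[s] for s in sorted(totals)}

scores = [s for s, _ in responses]
print(f"NPS: {nps(scores):+.0f}")
for score, rate in complaint_rate_by_score(responses).items():
    print(f"score {score:2d}: {rate:.0%} complained")
```

Laying the complaint rate next to each score, rather than looking at the average rating alone, is what surfaces the "anything below the top score usually hides a complaint" pattern described above.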
The notion that the middle point on the scale is somehow "neutral" (even if we call it "Neutral") is simply not consistent with how people really answer these kinds of questions.
Instead, most people start near the top of the scale and mark you down for specific reasons. If the customer has nothing to complain about, you get something at or near the best possible score.
So in most cases, customers don't give you a better rating for better service and a worse rating for worse service. Instead, they give you a good rating and take away points for things you did wrong.