Vocalabs Newsletter: Quality Times

Issue 79

Survey Overload

In This Issue

Survey Overload
Correlation is Not Causation

I took a three-day trip this month to the CXPA conference in Atlanta, and was asked to take a survey about almost every single part of the trip.

If you're wondering why it's getting hard to get customers to respond to e-mail surveys, this is why. I was asked to take surveys by the airline, the hotel, the conference, and even the hamburger joint where I grabbed lunch after arriving.

Most of these surveys were quite lengthy (only the survey about my visit to the airline club had a reasonable number of questions), and more than one of them had over 75 questions. Do I even need to add the fact that the overwhelming majority of the questions were completely irrelevant to my personal experience? Or the fact that some of the questions were so badly written that I couldn't even figure out what they were asking?

All told, I was asked to answer somewhere between 200 and 300 questions about this short business trip. Most of the questions were irrelevant, and some were incomprehensible. I felt like my time had been wasted and my patience abused, all for an exercise which the companies clearly didn't care enough about to do properly.

And that's why it's getting harder and harder to get customers to respond to e-mail surveys.

So how do you do it right? Here's my advice:

  1. Keep it short and focused. For an online survey my mantra is "one page, no scrolling." If you can't fit the questions on a single screen, then your survey is too long. And while you may think there are 75 different things you need to ask about, the truth is that if you're trying to focus on 75 things then you are focused on nothing.
  2. Proofread, test, and test again. There is no excuse for bad questions, but an e-mail survey can go to thousands of customers before anyone spots the mistake. Have both a customer and an outside survey professional provide feedback (not someone who helped write the survey).
  3. Communicate to customers that you take the survey seriously and that you respect and value their feedback. Don't just say it, do it (following #1 and #2 above is a good start). Other things you should be doing include personally responding to customers who had complaints, and regularly updating survey questions to ask about current issues identified through the survey.

While these are all important things to do to build an effective online survey, the unfortunate truth is that the sheer number of bad surveys is poisoning the well for the few good ones. Customers have been trained to expect that any e-mail invitation to take a survey is likely to be more trouble than it's worth, and so I expect response rates to continue to go down.


Correlation is Not Causation

Anyone who works with statistics has heard the phrase "correlation is not causation."

What it means is that just because A is correlated with B you can't conclude that A causes B. It's also possible that B causes A, or that A and B are both caused by C, or that A and B are mutually dependent, or that it's all just a big coincidence.
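To make the "common cause" case concrete, here is a minimal simulation sketch in Python. The variable names, effect sizes, and noise levels are invented purely for illustration: a hidden factor C drives both A and B, so A and B come out strongly correlated even though neither one causes the other.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    c = rng.normal(size=n)               # hidden common cause
    a = 2.0 * c + rng.normal(size=n)     # A depends only on C, not on B
    b = -1.5 * c + rng.normal(size=n)    # B depends only on C, not on A

    # Strong correlation between A and B, despite no causal link between them
    print("corr(A, B):", np.corrcoef(a, b)[0, 1])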

Similarly, you can't assume that lack of correlation means lack of causation. Just because A isn't correlated with B doesn't mean that A does not cause B. It's possible that A causes B but with a time delay, or through some more complex relationship than the simple linear formula most correlation analysis assumes. It's also possible that B is caused by many different factors, including A, C, D, E, F, G, and the rest of the alphabet.
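The reverse failure (causation without linear correlation) is just as easy to demonstrate. In this small sketch, with made-up numbers, B is driven entirely by A, but through a symmetric, quadratic relationship, so the linear correlation coefficient comes out near zero.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    a = rng.normal(size=n)                        # A is centered at zero
    b = a ** 2 + 0.1 * rng.normal(size=n)         # B is caused by A, but nonlinearly

    # Near-zero linear correlation even though A fully determines B
    print("corr(A, B):", np.corrcoef(a, b)[0, 1])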

In reality, a linear correlation analysis mostly tells you the degree to which A and B are measuring the same thing. That's useful information but it doesn't necessarily tell you how to drive improvement in B.

I'm always a little disappointed when, in a business setting, someone does a linear correlation of a bunch of different variables against some key metric and then assumes that the things with the highest correlation coefficient are the ones to focus on. Correlation analysis can't actually tell you what causes the metric to go up or down: it's the wrong tool for the job. At best, it's a simple way to get a few hints about what might be worth a deeper look.

Actually understanding the drivers for a business metric requires a more sophisticated set of tools. A/B testing (where you actually perform an experiment) is the gold standard, but you can also learn a lot from natural experiments (taking advantage of events which normally happen in the course of business), and from the basic exercise of formulating theories about what causes the metric to change and testing those theories against existing data.
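For readers who want to see what the experimental approach looks like in practice, here is a minimal A/B test sketch in Python. The 4% and 5% conversion rates, the sample size, and the variable names are invented for illustration; the point is that random assignment plus a straightforward two-proportion comparison speaks to causation in a way a correlation coefficient cannot.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 20_000                                    # customers randomly assigned to each variant

    control = rng.random(n) < 0.04                # variant A: 4% true conversion rate
    treatment = rng.random(n) < 0.05              # variant B: 5% true conversion rate

    p_a, p_b = control.mean(), treatment.mean()
    pooled = (control.sum() + treatment.sum()) / (2 * n)
    se = np.sqrt(pooled * (1 - pooled) * (2 / n)) # standard error under "no difference"
    z = (p_b - p_a) / se

    print(f"conversion A={p_a:.3%}  B={p_b:.3%}  z={z:.2f}")
    # |z| > 1.96 is the usual threshold for significance at the 5% level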
