One of the most basic questions about any survey process is, "How many people are taking the survey?" Like a doctor measuring a patient's pulse and blood pressure, the response rate can provide a lot of clues about how healthy (or unhealthy) the overall survey process is.
Following up on my earlier article about measuring survey response rates, here is a quick guide to different response rate metrics, and what they mean:
End-to-End Response Rate
How it's calculated: (surveys completed) / (total number of customers selected for the survey)
What it's useful for: This is my preferred way to measure a survey's response rate, because it is the most general and therefore the most sensitive to problems anywhere in the survey process. This calculation will also give the lowest number of any of the response rate formulae, so vendors often like to use other metrics to make their process look better.
What's a reasonable response: For the immediate live interview surveys we perform, end-to-end response rates of 20% to 40% are common, though it's not unusual to see numbers a few points better or worse. For live interviews where we're not calling the customer back immediately after a service experience, the response rate can drop by half or more. Automated surveys often see end-to-end response rates below 1%.
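As a quick illustration, the end-to-end formula above is just completed surveys divided by everyone selected. The counts below are made-up numbers (chosen to land in the 20% to 40% range typical of immediate live interviews), not real survey data:

```python
# End-to-end response rate: surveys completed / total customers selected.
# All counts here are hypothetical, for illustration only.
selected_for_survey = 1000  # total customers selected for the survey
surveys_completed = 280     # surveys actually completed

end_to_end_rate = surveys_completed / selected_for_survey
print(f"End-to-end response rate: {end_to_end_rate:.0%}")  # 28%
```

Because the denominator counts everyone selected, including wrong numbers and customers who never answered, this will always be the smallest of the three rates.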
Acceptance Rate
How it's calculated: (surveys completed) / (surveys completed + customer refusals)
What it's useful for: This is a measurement of how willing customers are to actually take a survey once they are contacted. Acceptance rate ignores wrong numbers, technical problems, and customers who don't answer the phone when called.
What's a reasonable acceptance rate: A low acceptance rate means that only the customers who are most eager to provide feedback are actually completing the survey, so the survey will be skewed towards the most extreme (positive or negative) opinions. To ensure that this response bias is minimized, I like to see an acceptance rate over 50%; at Vocalabs we typically see 70% to 80% for live interviews. For IVR surveys, acceptance rates of 10% to 20% are more typical.
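In code form, the acceptance rate only counts customers who were actually reached: completions divided by completions plus refusals. Again, the tallies here are hypothetical:

```python
# Acceptance rate: of the customers contacted, how many agreed to take
# the survey? Wrong numbers, technical problems, and unanswered calls
# are excluded from the denominator. Counts are hypothetical.
surveys_completed = 280
customer_refusals = 90

acceptance_rate = surveys_completed / (surveys_completed + customer_refusals)
print(f"Acceptance rate: {acceptance_rate:.0%}")  # 76%
```

Note that this example sits in the 70% to 80% range described above for live interviews.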
Completion Rate
How it's calculated: (surveys where the customer participates to the end of the survey) / (surveys where the customer answered at least one question)
What it's useful for: Completion rate tells you if customers are abandoning the survey after they agreed to participate. There are lots of reasons why people abandon surveys, but the most common are that the survey is too long or annoying, or that some technical problem prevented them from continuing. A low completion rate is cause for serious concern, since it means that there are probably a lot of people getting very annoyed with the survey.
What's a reasonable completion rate: Anything below 80% completion rate is alarming, since it means that the survey is either badly broken or very irritating to a large fraction of customers. In live interviews we typically see 98% or better completion; automated surveys will always have a lower completion rate because customers are less worried about being rude to a computer.
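The completion rate follows the same pattern: customers who reached the end of the survey divided by those who answered at least one question. The counts below are hypothetical, picked to match the 98%-or-better figure cited above for live interviews:

```python
# Completion rate: of the customers who answered at least one question,
# how many stayed to the end of the survey? Counts are hypothetical.
started_survey = 285   # answered at least one question
finished_survey = 280  # participated to the end of the survey

completion_rate = finished_survey / started_survey
print(f"Completion rate: {completion_rate:.1%}")  # 98.2%
```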