Errors About Margin of Error
In This Issue
Pop quiz time!
Suppose a company measures its customer satisfaction using a survey. In May, 80% of the customers in the survey said they were "Very Satisfied." In June, 90% of the customers in the survey said they were "Very Satisfied." The margin of error for each month's survey is 5 percentage points. Which of the following statements is true?
- If the current trend continues, in August 110% of customers will be Very Satisfied.
- Something changed from May to June to improve customer satisfaction.
- More customers were Very Satisfied in June than in May.
Answer: We can't say with certainty that any of the statements is true.
The first statement can't be true, of course, since outside of sports metaphors you never get more than 100% of anything. And the second statement seems like it might be true, but we don't have enough information to rule out other explanations, such as the survey being manipulated.
Since the survey score changed by more than the margin of error, it would seem that the third statement should be true. But that's not what the margin of error is telling you.
As it's conventionally defined for survey research, the margin of error means that if you repeated the exact same survey many times, each with a different random sample, there's an approximately 95% chance that the original survey's result would differ from the average of all those repeated surveys (which approximates the true value) by less than the margin of error.
That's a fairly wordy description, but what it boils down to is that the margin of error is an estimate of how wrong the survey might be solely because you used a random sample.
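For a simple random sample, the margin of error of a proportion at 95% confidence is conventionally estimated as 1.96·√(p(1−p)/n). A quick sketch (the sample size of 385 is a hypothetical, not from the survey above; it's roughly what a 5-point margin of error implies at the worst-case proportion of 50%):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# A 5-point margin of error corresponds to roughly 385 respondents at the
# worst-case proportion p = 0.5 (the sample size here is a hypothetical):
print(f"{margin_of_error(0.50, 385):.3f}")  # 0.050, i.e. 5 percentage points
print(f"{margin_of_error(0.80, 385):.3f}")  # 0.040 -- the MOE shrinks away from 50%
```

Note that the margin of error depends only on the sample size and the proportion, not on how the sample was collected, which is exactly why it says nothing about bias or manipulation.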
But you need to keep in mind two important things about the margin of error: First, it's only an estimate. There is a probability (about 5%) that the survey is wrong by more than the margin of error.
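You can see that "about 5%" directly in a quick simulation (the true satisfaction rate and sample size here are hypothetical, chosen to resemble the example above):

```python
import math
import random

# Hypothetical setup: true satisfaction is 80%, each survey samples n = 385 people.
random.seed(0)
true_p, n, trials = 0.80, 385, 2000
moe = 1.96 * math.sqrt(true_p * (1 - true_p) / n)  # about 4 points at p = 0.8

# Run many independent surveys and count how often the result lands
# within the margin of error of the true value.
within = 0
for _ in range(trials):
    result = sum(random.random() < true_p for _ in range(n)) / n
    if abs(result - true_p) <= moe:
        within += 1

print(f"{within / trials:.1%} of simulated surveys landed within the margin of error")
```

Roughly 95% of the simulated surveys land inside the margin of error, and the other ~5% miss by more than that, purely from random sampling.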
Second, the margin of error only accounts for error caused by random sampling. The survey can be wrong for other reasons, such as a bias in the sample, poorly designed questions, active survey manipulation, and many, many others.
The true statement we can make about our survey results is that there's only a very small probability that the change from May to June was because of our random sample.
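A standard two-proportion z-test makes that statement concrete. Assuming (hypothetically) about 385 respondents per month, consistent with the 5-point margin of error, the 80% → 90% change is far too large to blame on sampling alone:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two sample proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p_value

# Hypothetical: 385 respondents each month, matching a 5-point margin of error.
z, p = two_proportion_z(0.80, 385, 0.90, 385)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value this small says only that sampling noise is an unlikely explanation; it says nothing about whether the cause was better service, a biased sample, or manipulation.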
Margin of Error Mistakes
I see two very common mistakes when people try to interpret the Margin of Error in a survey:
- Forgetting that the Margin of Error is only an estimate and does not represent a bright line beyond which the survey is accurate and precise.
- Forgetting that lots of things can change survey scores other than what the survey was intended to measure, and Margin of Error doesn't provide any insight into what else might be going on. Intentional survey manipulation is the one we always worry about (for good reason, it's common and sometimes hard to detect), but there are many things that can push survey scores one way or another.
I've had clients ask me to calculate the Margin of Error to two decimal places, as though it really mattered whether it was 4.97 points or 5.02 points. To help our clients better understand what the data means, I often use intentionally vague terminology like "probably noise," "suggestive," or "probably real" rather than talking about whether a change in the data is outside the Margin of Error. This is a lot more faithful to what the data is saying than the usual binary statements about statistical significance.
It's important to keep in mind what the Margin of Error does and does not tell you. Do not assume that just because you have a small margin of error the survey is giving accurate results.
Surveys don't just collect data from participants. Surveys also give the participants insights into what your priorities are, and this can impact your brand image.
Computer game company Ubisoft learned this the hard way recently when they sent a survey to their customers. The first question asked for the customer's gender. Customers who selected "Female" were immediately told that their feedback was not wanted for this survey.
While I'm sure this was not the intended message, it definitely came across to some Ubisoft customers as insensitive to women who enjoy playing Ubisoft games like Assassin's Creed. The company quickly took the survey down and claimed it was a mistake.
Whether this was a genuine mistake or an amazingly bad decision by a market researcher who got a little too enthusiastic about demographic screening, it definitely reinforces the image of the game industry as sexist and uninterested in the half of the market with two X chromosomes.
This is a particularly egregious example of the fact that customer feedback really is a two-way street. While your customers are telling you how they feel about you, you are also telling your customers a lot about your attitudes towards them. For example:
- Do you respect the customer's time by keeping the survey short and relevant?
- Do you genuinely want to improve by following up and following through on feedback?
- Do you care about things that are relevant to the customer?
- Do you listen to the customer's individual story?
The lesson is that you should always think about a survey from the customer's perspective, since the survey is leaving a brand impression on your customers. While your mistakes might not be as embarrassing as Ubisoft's, you do want to make sure the impression you leave is a positive one.