Vocalabs Newsletter: Quality Times

Issue 33

In This Issue:

  • Random Sampling
  • Common Survey Problems

Random Sampling

By Peter Leppik
pleppik@vocalabs.com

One question we sometimes get is "how many customers should we survey?" or "is surveying only half a percent of callers enough?" It's not at all obvious that surveying 500 out of 100,000 customers tells you anything at all about the 99,500 customers you didn't survey.

On the other hand, political pollsters regularly predict the outcome of national elections with tens of millions of voters, typically by surveying between 500 and 2,000 citizens. And while election forecasts are by no means perfect, they have a pretty good track record.

The key is random sampling, which is just a fancy way of saying "picking people to survey at random." If you do it well, then the people you don't survey will be, on average, pretty similar to the ones you do survey.

There's a simple experiment you can do to demonstrate this. Get a bucket, 750 red marbles, and 250 blue marbles. Make sure the marbles are the same size, weight, and material. Dump all the marbles into the bucket, and mix them around really well. Your bucket will contain 1,000 marbles (the "population" in statistics-speak), which you know are 25% blue and 75% red.

Now we're going to do a survey: close your eyes and count out 100 marbles from the bucket (a statistician would call this the "sample"). Open your eyes and count the number of blue marbles you plucked out.

Approximately 95% of the time when you do this experiment, you should have between 15 and 35 blue marbles: the margin of error is 10 percentage points. But the really remarkable thing is that it doesn't matter how many marbles were in the bucket: you could have had 25,000 blue marbles and 75,000 red marbles in a dump truck, yet if you count out 100 marbles with your eyes closed, you would still grab between 15 and 35 blue marbles 95% of the time.
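
You don't need an actual bucket of marbles to see this work. Here's a minimal simulation sketch (we'll use Python for the illustrations in this article); the random.sample call plays the part of grabbing 100 marbles with your eyes closed:

    import random

    population = ["blue"] * 250 + ["red"] * 750   # the 1,000-marble bucket

    trials = 10_000
    hits = 0
    for _ in range(trials):
        sample = random.sample(population, 100)   # grab 100, eyes closed
        if 15 <= sample.count("blue") <= 35:
            hits += 1

    print(f"Trials with 15-35 blue marbles: {hits / trials:.1%}")
    # Typically prints a figure above 95%, consistent with the
    # conservative 10-point margin of error quoted above.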

The margin of error drops as you increase the sample size: surveying four times as many customers will cut the margin of error in half. So if you grab 400 marbles from the back of the dump truck, the margin of error will be 5 percentage points, and 95% of the time you will measure between 20% and 30% blue marbles (or between 80 and 120 marbles). If you want to measure the percentage of blue marbles to within one percentage point, you will need to grab 10,000 marbles to get between 24% and 26% 95% of the time.
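
These numbers follow a standard rule of thumb: at the 95% confidence level, the margin of error for a percentage is at most roughly one divided by the square root of the sample size. A quick sketch:

    import math

    # Conservative 95% margin of error: about 1/sqrt(sample size)
    for n in (100, 400, 10_000):
        print(f"sample size {n:>6,}: margin of error ~ {1 / math.sqrt(n):.0%}")

    # sample size    100: margin of error ~ 10%
    # sample size    400: margin of error ~ 5%
    # sample size 10,000: margin of error ~ 1%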

This remarkable ability to estimate the fraction of blue marbles in a dump truck by just grabbing a few hundred at random has a downside, though. This statistical margin of error is actually the best you can do with a survey of a given size, and there's no way to reliably do any better (though there are lots of ways to do worse). So if it costs you $1,000 to survey each customer, you're probably never going to find the budget to get a sample of 400 customers and a 5-point margin of error--and there's no way to get to that margin of error without the large sample size.

There's another important caveat: the survey only works if you really do choose the people you survey at random. Imagine filling the bucket with 750 red plastic marbles and 250 blue steel marbles. When you mix them around and grab 100 marbles at random, odds are you'll only get a couple of the blue ones, because the heavier steel marbles sink to the bottom of the bucket.
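
A variation on the earlier simulation shows how badly a biased grab can miss. The one-in-ten weighting below is an assumption invented purely for illustration, standing in for the steel marbles sinking out of reach:

    import random

    marbles = ["blue"] * 250 + ["red"] * 750
    # Illustrative assumption: a heavy steel (blue) marble is only
    # one-tenth as likely to end up in your hand as a plastic (red) one.
    weights = [0.1 if m == "blue" else 1.0 for m in marbles]

    trials = 10_000
    total_blue = 0
    for _ in range(trials):
        # random.choices draws with replacement -- a simplification
        grab = random.choices(marbles, weights=weights, k=100)
        total_blue += grab.count("blue")

    print(f"Average blue marbles per grab of 100: {total_blue / trials:.1f}")
    # Prints roughly 3, far below the true 25 out of every 100.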

This kind of problem is called sample bias, and it happens any time the people you don't survey are systematically different from the people you do survey. There's no simple way to detect or quantify sample bias, but there are red flags. For example, when call center agents have an incentive to perform well on the survey, they often discover ways to manipulate who takes the survey, creating a severe sample bias. In these circumstances, it's important to make sure the survey technique keeps the agent completely out of the loop.

It is possible, though, to design a survey that samples just a few hundred or a few thousand people yet gives you confidence that the results hold true for all your customers, even the ones you didn't survey.


Common Survey Problems

By Peter Leppik
pleppik@vocalabs.com

It turns out that the same problems tend to crop up over and over in customer service surveys, so we've compiled a list of the most common ones.

Here's our checklist, in the form of questions you should ask yourself about your survey process:

Survey Design

  • Are the survey questions biased?
  • Are the survey questions ambiguous? For example, in an interview, do participants often ask the interviewer to clarify a question?
  • Is the survey too long or redundant?
  • Are the management goals of the survey ambiguous or poorly understood?

Methodology

  • Can the survey be manipulated? If employees are given bonuses based on survey results, you should actively look for signs of survey manipulation.
  • Are certain customers systematically excluded from the survey? For example, are there conditions under which the customer cannot take the survey?
  • Is there too much delay between the customer interaction and when the survey is administered?
  • Is it difficult or impossible to analyze survey data based on other data about the customer (e.g., customer type, what the customer called about, who the customer spoke to, a recording of the phone call)?

Analysis

  • Is there a lack of context for the survey data, such as historical survey data or industry data?
  • Is the error analysis missing, incomplete, presented out of context, or otherwise inadequate? Error analysis should not only calculate the margin of error, but also examine sources of bias and potential problems with the survey questions and method.
  • Is the analyst interpreting the language of the survey the same way that participants do? For example, "satisfied" can mean different things to different people.

Follow-Through

  • Is there a failure to regularly generate action items for improving service based on survey results?
  • Are there key decision makers who don't believe in the survey process?
  • Are there employees (or managers) who don't think the survey matters?
  • Is upper-level management failing to support the survey?
     
