One of the most important steps in designing a good survey happens before you make the first decision about the survey itself: deciding the objectives of the survey. It may seem obvious, but we often see companies that have decided to conduct a customer service survey without having a firm handle on what the survey is supposed to accomplish.
For example, we ask a client what they want to accomplish with the survey and get the generic non-answer, "Measure customer satisfaction."
Well, sure, that's what a survey does, but to what purpose? Do you want to:
- Benchmark your customer service against peers and/or competitors?
- Identify and compensate highly skilled customer service reps?
- Track changes in service quality over time?
- Identify dissatisfied customers for additional attention?
- Find and fix service problems?
- Validate changes to an IVR or speech application?
- Measure overall product satisfaction?
- Something else?
Each of these goals requires a different approach to the survey in order to get the most out of the data. For example, effective benchmarking needs a meaningful benchmark database (which often is not available at any price--I've been generally underwhelmed by the quality of commercially available benchmarks), and an apples-to-apples data collection method. This requirement will often force you to use a particular survey technique even if it might not be otherwise optimal.
Identifying highly skilled agents, on the other hand, will usually require a statistically meaningful sample for each agent, forcing the survey to collect tens or even hundreds of thousands of responses. Simple economics may force you to use an automated survey even though such surveys tend to be less flexible, less insightful, and can send a negative message to customers. Worse, though, is that the agents will have a financial incentive to "game" the survey, and some automated surveys are very easy to manipulate. If you're not extremely careful, you'll wind up identifying not the most skilled agents, but the ones who have figured out how to manipulate the survey.
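To see why per-agent measurement drives response counts so high, here is a back-of-envelope sample-size calculation. The agent count, score scale, standard deviation, and margin of error below are illustrative assumptions, not figures from the text; the point is only that a modest per-agent precision target, multiplied across a whole call center, quickly reaches the tens of thousands.

```python
import math

def responses_per_agent(z: float, sigma: float, margin: float) -> int:
    """Responses needed to estimate one agent's mean score
    within +/- margin, at the confidence level implied by z
    (z = 1.96 corresponds to roughly 95% confidence)."""
    return math.ceil((z * sigma / margin) ** 2)

# Hypothetical call center: 500 agents, satisfaction scored 1-5,
# assumed standard deviation of 1.0 point.
agents = 500
n = responses_per_agent(1.96, 1.0, 0.2)  # +/- 0.2 points at ~95%
print(n, n * agents)                     # per-agent and total responses
```

Tightening the margin to +/- 0.1 points roughly quadruples the per-agent requirement, which is how a survey can end up needing hundreds of thousands of responses.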
Making this all the more complicated is that survey goals often change over time. The survey may have been created to help the call center track service levels from month to month, but a year later the boss decides to start giving bonuses to the best-performing agents. A survey that may have been entirely appropriate for tracking overall performance might be wildly inappropriate for determining who gets the big end-of-year bonus.
When we sit down for the first time with a new client (or prospective client), the first two questions are usually, "Who is going to be using the survey data?" and "What other surveys are you currently doing?"
The first question helps to make sure we have everyone at the table who needs to have input into the survey design. I've found that everyone who uses survey data has a slightly different view as to the survey objectives, and getting everyone's view at the beginning helps avoid surprises later.
Looking at other active surveys also helps fit the customer service survey into the "mosaic" of feedback that a company is collecting from its customers. We may, for example, want to repeat questions from other surveys, use other surveys for internal comparisons and benchmarking, and take a closer look at how the company is actually using the survey data.
The survey objectives are going to influence our recommendations for the survey size, method, sampling technique, and the survey questions--in other words, nearly everything about the survey. So if a client comes to us and says that they want to "measure satisfaction," my response is, "What for?"