The Customer Service Survey

Survey Design and Technique

Customer Feedback 101

by Peter Leppik on Wed, 2018-10-31 14:36

I've noticed that newcomers to Customer Experience sometimes want a quick reference for the best practices we've developed for customer feedback programs, and an easy way to get smart about some of the jargon we use.

We've added a new section to our website, Customer Feedback 101. Here we're collecting short articles (most of them are just a few paragraphs) to explain key concepts in customer feedback programs.

People who are new to this space can use this as an easy way to come up to speed. Whether you're completely unfamiliar with customer feedback or coming from a market research background (we do some things just a little differently here), I hope you'll find this a helpful resource.

The first set of a little over a dozen articles covers topics including what type of survey to use, how to pick the best metric, and how to calculate the margin of error. We plan to expand this to include many other articles.

I'd love to hear your feedback, and any suggestions you have for new topics!

Net Promoter Score (NPS) in B2B Surveys

by Peter Leppik on Fri, 2017-02-03 13:42

If you're thinking about using the Net Promoter Score (NPS) on a business-to-business survey, there are some extra factors you should consider before committing to this metric.

Net Promoter Score is a common, and somewhat controversial, measurement of a customer's relationship to a company. It's based on responses to the question, "How likely are you to recommend us to a friend or colleague?", and is calculated by subtracting the percentage of low responses from the percentage of high responses. It has the advantage of being a simple and standardized metric, but also gets heavily criticized for being too simplistic and oversold. NPS also tends to get shoehorned into situations where it doesn't make any sense, an unfortunate side-effect of being heavily (and wrongly, in my opinion) promoted as "the only survey score that matters."
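
For readers who haven't worked with it before, the arithmetic is simple enough to show in a few lines. This is a minimal sketch of the standard 0-10 scoring convention (9-10 = Promoter, 0-6 = Detractor); the response data is made up for illustration.

    # Minimal sketch of the standard NPS arithmetic (illustrative data only).
    # Responses are 0-10 answers to "How likely are you to recommend us?"
    responses = [10, 9, 9, 8, 7, 10, 6, 3, 9, 10, 2, 8]

    promoters  = sum(1 for r in responses if r >= 9)       # scores of 9-10
    detractors = sum(1 for r in responses if r <= 6)       # scores of 0-6

    nps = 100 * (promoters - detractors) / len(responses)  # passives (7-8) only dilute
    print(f"NPS = {nps:.0f}")                              # prints "NPS = 25"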

What is NPS Measuring?

The Net Promoter question asks the customer to imagine a scenario where they might be asked to make a recommendation about some product or service. The idea is that telling a friend to buy from a particular company is something most people will only do if they feel strongly about that company. Customers willing to make that commitment are valuable and potentially a source of word-of-mouth marketing (in the jargon of NPS, they are "Promoters"), whereas customers who feel the opposite can actively damage a brand (these are called "Detractors"). Subtracting the percentage of Detractors from the percentage of Promoters gives you a single simple number, while emphasizing the fact that you want to have a lot more Promoters than Detractors.

In the Business-to-Consumer world this all seems sensible. But in the Business-to-Business world there's additional complexity.

B2B NPS Challenges

A number of common situations challenge the validity of NPS in the B2B world:

  • Some companies prohibit their employees from making vendor recommendations to third parties. We regularly see B2B customers give "0" on the Net Promoter question, and give the reason as "I'm not allowed to make recommendations."
  • Buying decisions are much more complex in the B2B environment than B2C, and the recommendations of any one person might matter a lot or not at all. It doesn't matter if nine out of ten people are Promoters if the one Detractor is the CEO.
  • Relationships are much more complex in the B2B environment. Where many different people at the customer company interact with the vendor, most of them probably have only a limited view into the entire relationship.

In short, NPS is already simplistic in the B2C world, and trying to apply it to more complex B2B relationships is a challenge.

Fortunately, Most Customers Know What You Meant

This doesn't mean that NPS is useless in the B2B world. Fortunately, we've found that in the real world most people don't answer the Net Promoter question literally (though some do)--as demonstrated in the story from a few years ago about what happens when you try to get Promoters to actually recommend the product to a friend.

Instead, most people answer the Net Promoter question in a more generic way, providing a high level view of how they feel about the company overall. The question is being interpreted as something closer to, "How would you rate the company?" This is confirmed by the fact that on surveys where we ask both a Net Promoter question and a more generic Customer Satisfaction question, the answers tend to correlate almost perfectly.
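
That "correlate almost perfectly" claim is easy to check on your own data if you run both questions on the same survey. Here is a rough sketch using made-up paired responses and a plain Pearson correlation; the question scales shown are just assumptions for the example.

    # Sketch: how closely does the NPS question track a generic satisfaction
    # question on the same survey? Data is hypothetical.
    from statistics import correlation   # available in Python 3.10+

    recommend_0_to_10   = [9, 10, 7, 3, 8, 10, 6, 9, 2, 8]   # NPS question
    satisfaction_1_to_5 = [5,  5, 4, 2, 4,  5, 3, 5, 1, 4]   # generic CSAT question

    r = correlation(recommend_0_to_10, satisfaction_1_to_5)
    print(f"Pearson r = {r:.2f}")   # values near 1.0 mean the questions move together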

This means we can get away with using Net Promoter even in situations where it might not strictly apply, because most customers will know what we're trying to get at and answer the question behind the question. We still get some people who try to respond to the literal question (the "I'm not allowed to make recommendations" crowd), but they are in the minority.

Practical B2B NPS

I would not generally recommend using NPS in a B2B survey--instead, I would ask a more tailored question that gets at what you're really trying to understand in the customer relationship. But if you do choose to use NPS for B2B (or if you're required to use it), keep these things in mind:

  1. Understand that in B2B, there are reasons not to make recommendations that have nothing to do with your business relationship. Look beneath the metric at the reasons customers give for their scores.
  2. Be aware that NPS will not capture the full complexity of the business relationship. It may be useful to look at separate scores by title or role (see the sketch after this list); a simple average NPS score could be very misleading.
  3. Account for the fact that individuals might not have any visibility at all into important aspects of the buying decision (price, for example, or backend integration). Make an effort to understand what factors do and do not influence each respondent's survey rating.
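
On the second point, "separate scores by title or role" just means computing the same NPS arithmetic within each group of respondents instead of across the whole company. A small sketch, with hypothetical roles and scores:

    # Sketch: NPS broken out by respondent role (hypothetical data).
    from collections import defaultdict

    responses = [
        ("end user", 9), ("end user", 10), ("end user", 8),
        ("manager", 7), ("manager", 9),
        ("executive", 3), ("executive", 6),
    ]

    by_role = defaultdict(list)
    for role, score in responses:
        by_role[role].append(score)

    for role, scores in by_role.items():
        promoters  = sum(s >= 9 for s in scores)
        detractors = sum(s <= 6 for s in scores)
        nps = 100 * (promoters - detractors) / len(scores)
        print(f"{role:10s} NPS = {nps:+4.0f}  (n = {len(scores)})")

A company-wide average would hide the fact that, in this made-up example, the executives are all Detractors.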

Paying attention to these will help you get the most out of NPS in your B2B survey.

Humility After the Election

by Peter Leppik on Fri, 2016-11-18 15:34

After the results of the 2016 presidential election came in, the first reaction of many people was that the polls were wrong. A more detailed analysis seems to show that the polls in 2016 were about as accurate (or inaccurate) as they usually are, but many people treated them as more precise than they really are.

I think the surprise (to many) election of Donald Trump serves as an important reminder of the limitations of survey research. Surveys aren't a precision instrument. That's partly because of inaccuracies and biases in sampling, but it's also because surveys are trying to measure opinions, and opinions are inherently fuzzy, malleable, and context-dependent.

In fact, given the limitations of surveys, it's remarkable that political polls are as accurate as they are. Predicting the outcome of an election is easily the most challenging application for a survey, given that you are trying to predict the future behavior of the general population, races are often decided by margins smaller than the margin of error, and you don't get credit for being close if you predicted the wrong winner.

This year's campaign should serve as an important reminder to be humble when interpreting survey results. A solid voice-of-the-customer program isn't as challenging as election forecasting, but customer surveys can still have important biases and inaccuracies. Keep in mind that:

  • Low response rates mean that you are more likely to hear from customers with strong opinions.
  • The survey process may be excluding some customer segments that have different experiences than the customers who can take the survey.
  • If you're giving employees bonuses for hitting survey targets, they may be trying to manipulate the survey.
  • Customers may be interpreting the survey questions differently than you intended.

Keep these in mind, and you will be less likely to make the mistake of being overconfident when interpreting what your customers are telling you.

Surveys Leave Brand Impressions

by Peter Leppik on Wed, 2016-07-06 16:30

Surveys don't just collect data from participants. Surveys also give the participants insights into what your priorities are, and this can impact your brand image.

Computer game company Ubisoft learned this the hard way recently, when they sent a survey to their customers. The first question asked the customer's gender. Customers who selected "Female" were immediately told that their feedback was not wanted for this survey.

While I'm sure this was not the intended message, it definitely came across to some Ubisoft customers as insensitive to women who enjoy playing games like Assassin's Creed (such people do exist). The company quickly took the survey down and claimed it was a mistake in the setup of the survey.

Whether this was a genuine mistake or an amazingly bad decision by a market researcher who got a little too enthusiastic about demographic screening, it definitely reinforces the image of the game industry as sexist and uninterested in the half of the market with two X chromosomes.

While this might be a particularly egregious example, it's important to remember that customer feedback really is a two-way street. While your customers are telling you how they feel about you, you are also telling your customers a lot about your attitudes towards them. For example:

  • Do you respect the customer's time by keeping the survey short and relevant?
  • Do you genuinely want to improve by following up and following through on feedback?
  • Do you care about things that are relevant to the customer?
  • Do you listen to the customer's individual story?

The lesson is that you should always think about a survey from the customer's perspective, since the survey is leaving a brand impression on your customers. While your mistakes might not be as embarrassing as Ubisoft's, you do want to make sure the impression you leave is a positive one.

Mistakes about Margin of Error

by Peter Leppik on Wed, 2016-06-29 16:01

Pop quiz time!

Suppose a company measures its customer satisfaction using a survey. In May, 80% of the customers in the survey said they were "Very Satisfied." In June 90% of the customers in the survey said they were "Very Satisfied." The margin of error for each month's survey is 5 percentage points. Which of the following statements is true:

  1. If the current trend continues, in August 110% of customers will be Very Satisfied.
  2. Something changed from May to June to improve customer satisfaction.
  3. More customers were Very Satisfied in June than in May.

Answer: We can't say with certainty that any of the statements is true.

The first statement can't be true, of course, since outside of sports metaphors you don't ever get more than 100% of anything. And the second statement seems like it might be true, but we don't have enough information to know whether satisfaction really improved or whether, for example, the survey is being manipulated.

But what about the third statement?

Since the survey score changed by more than the margin of error, it would seem that the third statement should be true. But that's not what the margin of error is telling you.

As it's conventionally defined for survey research, the margin of error means that if you repeated the exact same survey a whole bunch of times but with a different random sample each time, there's an approximately 95% chance that the difference between the results of the original survey and the average of all the other surveys would be less than the margin of error.

That's a fairly wordy description, but what it boils down to is that the margin of error is an estimate of how wrong the survey might be solely because you used a random sample.
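
For a simple random sample, the conventional calculation behind that number is short. A sketch, using the usual 95% confidence level and the worst-case assumption that the underlying percentage is 50%:

    # Sketch: conventional 95% margin of error for a percentage measured
    # from a simple random sample of size n (worst case assumes p = 0.5).
    from math import sqrt

    def margin_of_error(n, p=0.5, z=1.96):
        return z * sqrt(p * (1 - p) / n)

    print(f"n = 400: +/- {100 * margin_of_error(400):.1f} points")   # about 4.9
    print(f"n = 100: +/- {100 * margin_of_error(100):.1f} points")   # about 9.8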

But you need to keep in mind two important things about the margin of error: First, it's only an estimate. There is a probability (about 5%) that the survey is wrong by more than the margin of error.

Second, the margin of error only looks at the error caused by random sampling. The survey can be wrong for other reasons, such as a bias in the sample, poorly designed questions, active survey manipulation, and many, many others.

Margin of Error Mistakes

I see two very common mistakes when people try to interpret the Margin of Error in a survey.

First, many people forget that the Margin of Error is only an estimate and doesn't represent some magical threshold beyond which the survey is accurate and precise. I've had clients ask me to calculate the Margin of Error to two decimal places, as though it really mattered whether it was 4.97 points or 5.02 points. I've actually stopped talking in terms of whether something is more or less than the margin of error, instead using phrases like "probably noise" if it's much less than the margin of error, "suggestive" for things that are close to the margin of error, and "probably real" for things that are bigger than the margin of error and I don't have any reason to disbelieve them. This intentionally vague terminology is actually a lot more faithful to what the data is saying than the usual binary statements about whether something is statistically significant or not.
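
If it helps, that vague terminology maps onto a rough rule of thumb rather than a formal test. The cutoffs in this sketch are my own illustrative choices, not a standard:

    # Sketch of the "probably noise / suggestive / probably real" language above.
    # The cutoffs are illustrative, not a formal statistical test.
    def describe_change(change, margin_of_error):
        ratio = abs(change) / margin_of_error
        if ratio < 0.5:
            return "probably noise"
        if ratio < 1.0:
            return "suggestive"
        return "probably real (absent other reasons to doubt it)"

    print(describe_change(change=2, margin_of_error=5))    # probably noise
    print(describe_change(change=10, margin_of_error=5))   # probably real ...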

Second, many people forget that there are lots of things that can change survey scores other than what the survey was intended to measure, and the Margin of Error doesn't provide any insight into what else might be going on. Intentional survey manipulation is the one we always worry about (for good reason: it's common and sometimes hard to detect), but there are many things that can push survey scores one way or another.

It's important to keep in mind what the Margin of Error does and does not tell you. Do not assume that just because you have a small margin of error the survey is automatically giving accurate results.

Sweating the Big Stuff

by vocalabs on Thu, 2016-05-12 15:54

"Don't sweat the small stuff" as the old saying goes. Meaning: don't waste your time on things that are unimportant.

Unfortunately there's a lot of sweat-soaked small stuff in the world of customer feedback. Here are some things that often suck up a lot of time and effort relative to how important they are:

  • The exact wording of survey questions.
  • The graphic design and layout of reports and dashboards.
  • The precise details of how metrics are calculated (spending too much time on this is a warning sign that you might have fallen into the Metric-Centric Trap).
  • Deciding whether individual surveys should be thrown out (this is another warning sign of the Metric-Centric Trap).
  • Presenting survey trend data to senior leadership.

Instead, you should be focusing on things like these, which don't get nearly as much attention relative to the benefit they bring:

  • Closing the loop with customers.
  • Providing immediate coaching of employees based on customer feedback.
  • Regularly updating the survey to keep it relevant to the business.
  • Doing root cause analysis of common customer problems.
  • Keeping senior leadership actively engaged in advancing a customer-centric culture.

The difference between the small stuff and the big stuff is that the small stuff tends to be (relatively) easy and feels important. The big stuff often requires sustained effort and commitment but pays off in the end. It's the difference between an exercise bicycle and a real bike: both take work, but only one will get you anywhere.

What Are Your Goals?

by Peter Leppik on Wed, 2015-12-09 14:57

Before you get into the nuts and bolts of designing a survey program, spend some time sharpening up what you hope to accomplish. A good understanding of the business goals of the survey will really help you figure out the right sampling, questions, channel, and reporting. A lot of the time when I hear companies say they want to do a survey for the purpose of collecting customer feedback, it really means that they haven't thought much about what they plan to do with the feedback once it's collected. It's like saying you want to do a survey for the purpose of conducting a survey.

The basic ingredients are straightforward. Most surveys have as their goals some combination of:

  • Tracking metrics: Requires using a very consistent set of survey questions with a random sample selected to give an acceptable margin of error for calculating metrics. 
  • Improving the performance of individual employees: Requires targeting the survey sample to collect adequate feedback on each individual employee, asking open-ended questions about the experience, and delivering the feedback to front-line supervisors in real time. Recorded customer interviews are particularly valuable.
  • Identifying customer pain points: Requires a lot of open-ended questions and potentially additional follow-ups. Customers should be invited to tell their stories.
  • Testing or validating changes to the customer experience: Requires careful attention to test and control group samples, and a consistent set of metrics for the different test cases (see A/B Testing for Customer Experience).
  • Persuading the organization/leadership to make a change to the customer experience: Requires collecting a valid statistical sample that supports the proposed change, as well as persuasive customer stories which will carry emotional weight with others in the organization. Recorded customer interviews are particularly valuable.
  • Providing individual customers a chance to be heard: Requires offering the survey very broadly, even if that means a low response rate or far more completed surveys than would otherwise be needed. A robust closed-loop process is not optional.

So for example, if you've never done any transactional feedback before, your goal is probably going to be mostly about identifying customer pain points (i.e. trying to find out what you don't know) with a dash of tracking metrics thrown in. That probably means asking a couple of tracking questions and a lot of open-ended questions, and a random sample in the range of 400 completed surveys per reporting segment (enough to get a 5-point margin of error).
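
The "400 completed surveys for a 5-point margin of error" figure is just the margin-of-error formula solved for the sample size, assuming the worst case of a 50% proportion and a 95% confidence level. A quick check:

    # Quick check on the "400 completes ~ 5-point margin of error" rule of thumb.
    z, p, target_moe = 1.96, 0.5, 0.05          # 95% confidence, worst-case p
    n = (z ** 2) * p * (1 - p) / target_moe ** 2
    print(round(n))                             # about 384, usually rounded up to 400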

But if your goal is more directed to improving employee performance, things will be different. You will want to bias the survey sample to ensure each employee gets enough feedback to be useful (which also means un-biasing the sample to calculate metrics). You will probably also want to use customer interviews rather than automated surveys, since a recorded interview with the customer is much more effective at changing behavior than written comments and statistics.
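
"Un-biasing the sample" here usually means weighting: each employee's (or segment's) survey results count toward the overall metric in proportion to their share of real interaction volume, not their share of completed surveys. A minimal sketch with hypothetical numbers:

    # Sketch: re-weighting an oversampled survey when calculating an overall metric.
    # Each employee got a similar number of surveys but handled very different
    # volumes of customers. All numbers are hypothetical.
    segments = [
        # (percent "very satisfied" in surveys, share of real interaction volume)
        (90, 0.60),   # employee A
        (80, 0.30),   # employee B
        (60, 0.10),   # employee C
    ]

    unweighted = sum(score for score, _ in segments) / len(segments)
    weighted   = sum(score * share for score, share in segments)
    print(f"unweighted: {unweighted:.1f}%   weighted: {weighted:.1f}%")   # 76.7% vs 84.0%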

Whatever your goals are, the most important thing is to have them. Surveys done for the sake of doing surveys tend to not be very useful.

Customer Survey Mistakes Almost All Companies Make

by Peter Leppik on Wed, 2015-12-09 14:37

It's easy to do a survey, but it's hard to run an effective customer feedback program that leads to changes in a company's actions and improved customer experience. There are a number of common mistakes: so common that nearly all companies make at least one of these mistakes, and a lot of companies manage to hit the entire list:

Not Understanding the Purpose of the Customer Survey

If you don't know what you expect to accomplish through a customer feedback program, it's hard to structure it in a way that will meet your goals. For example, a survey designed to help improve the performance of customer-facing employees will be very different than one merely intended to track metrics. When I ask companies why they are running a survey, often I hear answers like, "To collect customer feedback," or "Because it's a best practice." Answers like that tell me that they don't have a clear sense of why they need a survey, other than for the sake of having a survey.

Asking Too Many Questions

Long surveys generally have a poorer response rate than shorter surveys, can leave the customer with a bad feeling about the survey, and often don't produce any more useful feedback than shorter surveys. In many cases, there is no good reason to ask a lot of questions, other than a need to appease a large group of internal stakeholders each of whom is overly attached to his or her favorite question or metric. It's easy to find the questions you don't need on your survey: go through all the questions and ask yourself, "Have we ever actually taken any action based on this question?" If the answer is no, the question should go.

Focusing on Metrics, Not Customers

Metrics are easy to fit into a numbers-driven business culture, but metrics are not customers. At best, metrics are grossly oversimplified measurements of your aggregate performance across thousands (or millions) of customer interactions. But behind those numbers are thousands (or millions) of actual human beings, each of whom had their own experience. Many companies focus solely on the metrics and forget the customers behind them. Metrics make sense as a progress marker, but the goal is not to improve metrics but to improve customer experiences.

Not Pushing Useful Data to the Front Lines Fast Enough

In many cases, creating a great customer experience isn't about installing the right platform or systems, it's making sure that thousands of little decisions all across the company are made the right way. The people making those decisions need to know how their individual performance contributes to the overall customer experience, and the best way to do that is to give them access to immediate, impactful feedback from customers. Too often, though, customer feedback gets filtered through a centralized reporting team, or boiled down to dry statistics, or delivered in a way that masks the individual employee's contribution to the whole.

Not Closing the Loop

Closed-loop feedback is one of the most powerful tools for making sure a customer survey inspires action in the company, yet even today most companies do not have a formal system in place to close the loop with customers. There are actually three loops that need to be closed: you need to close the loop with the customer, with the business, and with the survey. If you're not closing all three loops, then your survey is not providing the value you should be expecting.

Always Using the Same Survey

Companies change and evolve. Markets shift. Customers' expectations are not static. Entire industries transform themselves in just a few years. So why do so many customer surveys remain unchanged for years (or decades)? Surveys should be structured to respond to changing business needs and evolve over time, otherwise you're not collecting feedback that's relevant to current business problems. Surveys that never change quickly become irrelevant.

Not Appreciating Customers For Their Feedback

Finally, a lot of companies forget that when they do a survey they are asking a customer--a human being--to take time out of their day to help out. And they're asking for hundreds or thousands of these favors on an ongoing basis. But when the reports come out and the statistics are compiled, all those individual bits of human helpfulness are lost in the data machine. I know it's not practical to individually and personally thank thousands of customers for doing a survey, but it's not that hard to let customers know that you're listening to them and taking their feedback seriously. All too often the customer experience of completing a survey involves taking several minutes to answer a lot of questions and provide thoughtful feedback, only to have that feedback disappear into a black hole. You don't need to pay customers for taking a survey (in fact, that's often a bad idea), but you should at least stop and think about how helpful your customers are being and appreciate their efforts.

Amateurs Talk Strategy, Professionals Talk Execution

by Peter Leppik on Wed, 2015-08-26 15:16

Amateurs Talk Strategy, Professionals Talk Logistics

That's an old military quote that sometimes gets pulled out at business leadership conferences. Strategy is the easy part. The hard part, the stuff the pros worry about, is the nuts and bolts of getting everything lined up and in the right place at the right time so the strategy can work.

It's an important message for customer feedback programs, too.

Developing a survey strategy is easy, and a lot of people have a lot of opinions on how to do it (some better than others).

But actually building an effective feedback program requires a lot of attention to detail. You need to:

  • Determine who to ask to participate in the survey
  • Decide what questions to ask
  • Determine the right time and channel to invite the customer to take the survey
  • Offer the survey to the customer in a way that makes the customer want to help
  • Route the survey responses to service recovery teams in real-time when appropriate
  • Coach front-line employees based on their individual survey responses
  • Deliver data to business users throughout the organization in a way that's timely and tailored to their individual needs
  • Monitor the survey process for signs of manipulation or gaps in the process
  • Adjust all aspects of the process on an ongoing basis as business needs change
  • Focus the entire organization on using customer feedback as an important tool to support both operational and strategic decision making

(As an aside: one thing not on this list is "Track your metrics and set goals," because tracking metrics is both easy and low-value. Everyone does it, but many organizations stop at that point in the mistaken belief that improved customer experience will magically follow.)

So just as military pros understand that wars are won and lost in the unglamorous details of moving people and supplies to the right place at the right time, survey pros understand that the effectiveness of a feedback program is built on the nitty-gritty of collecting and delivering the right data to the right people at the right time to help them do a better job.

What the amateurs don't recognize is that you can't just move an army on a whim, or improve customer experience by throwing some survey metrics at it.

A/B Testing for Customer Experience

by Peter Leppik on Wed, 2015-07-29 16:26

A/B testing is one of the most powerful tools for determining which of two (or more) ways to design a customer experience is better. It can be used for almost any customer experience, and provides definitive data on which design is better based on almost any set of criteria.

Stripping off the jargon, an A/B test is really just a controlled experiment like what we all learned about in 8th grade science class. "A" and "B" are two different versions of whatever you're trying to evaluate: it might be two different website designs, two different speech applications, or two different training programs for contact center agents. The test happens when you randomly assign customers to either "A" or "B" for some period of time and collect data about how each version performs.

Conducting a proper A/B test isn't difficult but it does require some attention to detail. A good test must have:

  • Proper Controls: You want the "A" and "B" test cases to be as similar as possible except for the thing you are actually testing, and you want to make sure customers are being assigned as randomly as possible to one case or the other.
  • Good Measurements: You should have a good way to measure whatever you're using for the decision criteria. For example, if the goal of the test is to see which option yields the highest customer satisfaction, make sure you're actually measuring customer satisfaction properly (through a survey, as opposed to trying to infer satisfaction levels from some other metric).
  • Enough Data: As with any statistical sampling technique, the accuracy goes up as you get data from more customers. I recommend at least 400 customers in each test case (400 customers experience version A and 400 experience version B, for 800 total if you are testing two options). Smaller samples can be used, but the test will be less accurate and that needs to be taken into consideration when analyzing the results.
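
As a sketch of what the analysis step can look like once the data is in: with a few hundred customers per cell, one standard approach is a two-proportion z-test comparing the "A" and "B" results. The numbers below are made up, and the test shown is just one reasonable choice, not the only way to analyze an A/B test.

    # Sketch: comparing "very satisfied" rates from an A/B test with a
    # two-proportion z-test (illustrative numbers, ~400 customers per cell).
    from math import sqrt

    n_a, satisfied_a = 400, 300   # version A: 75% very satisfied
    n_b, satisfied_b = 400, 332   # version B: 83% very satisfied

    p_a, p_b = satisfied_a / n_a, satisfied_b / n_b
    p_pool = (satisfied_a + satisfied_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se

    print(f"A: {p_a:.0%}   B: {p_b:.0%}   z = {z:.2f}")   # |z| above ~1.96 suggests a real difference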

In the real world it's not always possible to do everything exactly right. Technical limitations, project timetables, and limited budgets can all force compromises in the study design. Sometimes these compromises are OK and don't significantly affect the outcome, but sometimes they can cause problems.

For example, if you're testing two different website designs and your content management system doesn't make it easy to randomly assign visitors to one version or the other, you may be forced to do something like switch to one design for a week, then switch to the other for a week. This is probably going to work, but if one of the test weeks also happens to be during a major promotion, then the two weeks aren't really comparable and the test data isn't very helpful.

But as long as you pay attention to the details, A/B testing will give you the best possible data to decide which customer experience ideas are worth adopting and which should be discarded. This is a tool which belongs in every CX professional's kit.