The Customer Service Survey

Customer Feedback 101

by Peter Leppik on Wed, 2018-10-31 14:36

I've noticed that newcomers to Customer Experience sometimes want a quick reference for the best practices we've developed for customer feedback programs, and an easy way to get smart about some of the jargon we use.

We've added a new section to our website, Customer Feedback 101. Here we're collecting short articles (most of them are just a few paragraphs) to explain key concepts in customer feedback programs.

People who are new to this space can use this as an easy way to come up to speed; whether you're completely unfamiliar with customer feedback, or coming from a market research background (we do some things just a little differently here), I hope you'll find this a helpful resource.

The first set of a little over a dozen articles covers topics including what type of survey to use, how to pick the best metric, and how to calculate the margin of error. We plan to expand this to include many other articles.

I'd love to hear your feedback, and any suggestions you have for new topics!

Recreating That Airport Experience

by Peter Leppik on Thu, 2018-08-23 11:32

This past weekend I went to the movies on a date night with my wife. We went to the AMC megaplex to see Crazy Rich Asians. Normally we go to a smaller (and cheaper) theater that's a lot closer to home, but we had been given some AMC passes so we made the drive.

It's been several years since I last went to an AMC theater, and the first thing I noticed when we went in the door was two different lines to buy tickets: one for ordinary people, and a second for premium members of AMC's loyalty program. A similar two-line setup was visible at the concession stand.

"Just like the airport," was my gut reaction.

Fortunately there were no lines at the theater, so it didn't really matter that we weren't part of the exclusive club.

When we got to the ticket booth, we found that the showtime we wanted was in a premium "Dolby" theater, which required a surcharge in addition to our passes. It wasn't clear what the difference was between the regular theater and the super-fancy one, but faced with the choice of paying extra or waiting an hour for a cheaper theater, we decided to pay the random, unexpected surcharge.

"Also just like the airport," I thought.

But before we could buy the tickets, we had to go through an overly-complicated process of checking in because the theater only offered reserved seating despite the fact that it was two-thirds empty. (It didn't help that the screen for choosing seats was in tiny type which was difficult for my middle-aged eyes to read.)

"Someone at AMC really has a thing for air travel," I concluded.

The premium Dolby theater was definitely nicer than your standard movie theater, though I suspect a fluffy rom-com was probably not the best vehicle for showing off whatever fancy sound and projection gear the theater was equipped with. And we definitely enjoyed the movie.

But my overall impression was that next time we should stick with our local theater. Because while the AMC theater was definitely bigger and fancier, it just wasn't as pleasant. I suspect I'm not the only AMC customer to think "airport" when faced with AMC's premium lines, unexpected upcharges, and unnecessary hoops.

I suspect that I'm also not the only AMC customer for whom "airport" is not a positive association.

You May Be P-Hacking and Don't Even Know It

by Peter Leppik on Fri, 2018-07-27 10:42

P-Hacking is a big problem. It can lead to bad decisions, wasted effort, and misplaced confidence in how your business works.

P-Hacking sounds like something you do to pass a drug test. Actually, it's something you do to pass a statistical test. "P" refers to the P value, the probability that an observed result arose from random chance rather than a real effect. "Hacking" in this case means manipulating, so P-Hacking is manipulating an experiment in order to make the P value look more significant than it really is, so that it looks like you discovered a real effect when in fact there may be nothing there.

It's the equivalent of smoke and mirrors for statistics nerds. And it's really, really common. So common that some of the foundational research in the social sciences has turned out not to be true. It's led to a "Replication Crisis" in some fields, forcing a fresh look at many important experiments.

And as scientific techniques like A/B testing have become more common in the business world, P-Hacking has followed. A recent analysis of thousands of A/B tests run through a commercial platform found convincing evidence of P-Hacking in over half of the tests where a little P-Hacking could make the difference between a result that's declared "significant" and one that's just noise.

The problem is that P-Hacking is subtle: it's easy to do without realizing it, hard to detect, and extremely tempting when there's an incentive to produce results.

One common form of P-Hacking, and the one observed in the recent analysis, is stopping an A/B test early when it shows a positive result. This may seem innocuous, but in reality it distorts the P value and gives you a better chance of hitting your threshold for statistical significance.

Think of it this way: If you consider a P value of less than 0.05 to be "significant" (a common threshold), that means that there's supposed to be a 5% chance that you would have gotten the same result by random chance if there was actually no difference between your A and B test cases. It's the equivalent of rolling one of those 20-sided Dungeons and Dragons dice and declaring that "20" means you found something real.

But if you peek at the results of your A/B test early, that's a little like giving yourself extra rolls of the dice. So Monday you roll 8 and keep the experiment running. Tuesday you roll 12 and keep running. Wednesday you roll 20 and declare that you found something significant and stop. Maybe if you had continued the experiment you would have kept rolling 20 on Thursday and Friday, but maybe not. You don't know because you stopped the experiment early.

The point is that by taking an early look at the results and deciding to end the test as soon as the results crossed your significance threshold, you're getting to roll the dice a few more times and increase the odds of showing a "significant" result when in fact there was no effect.
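
To make the effect concrete, here's a minimal simulation sketch (not from the original post; the daily peeking schedule, sample sizes, and number of trials are purely illustrative assumptions). Both variants draw from the same distribution, so any "significant" result is pure noise; the only difference is whether we check the P value every day and stop at the first "win," or wait for one analysis at the end.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def run_ab_test(days=14, visitors_per_day=100, peek=True, alpha=0.05):
    """Return True if a no-effect A/B test ever gets declared 'significant'."""
    a = np.array([])
    b = np.array([])
    for day in range(days):
        # Both variants draw from the SAME distribution: any "win" is noise.
        a = np.append(a, rng.normal(0.0, 1.0, visitors_per_day))
        b = np.append(b, rng.normal(0.0, 1.0, visitors_per_day))
        if peek or day == days - 1:
            _, p = stats.ttest_ind(a, b)
            if p < alpha:
                return True   # stop early and declare victory
    return False

n_trials = 2000
peeking = sum(run_ab_test(peek=True) for _ in range(n_trials)) / n_trials
no_peek = sum(run_ab_test(peek=False) for _ in range(n_trials)) / n_trials
print(f"Bogus 'wins' when peeking daily:      {peeking:.1%}")   # well above 5%
print(f"Bogus 'wins' with one final analysis: {no_peek:.1%}")   # close to 5%

In runs like this, the daily-peeking version typically declares a bogus "win" several times as often as the single end-of-test analysis, even though the nominal threshold is 5% in both cases.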

If there is a real effect, we expect the P value to keep dropping (showing more and more significance) as we collect more data. But the P value can bounce around, and even when the experiment is run perfectly with no P-Hacking there's still a one-in-20 chance that you'll see a "significant" result that's completely bogus. If you're P-Hacking, the odds of a bogus result can increase a lot.

What makes this so insidious is that we are all wired to want to find something. Null results--finding the things that don't have any effect--are boring. Positive results are much more interesting. We all want to go to our boss or client and talk about what we discovered, not what we didn't discover.

How can you avoid P-Hacking? It's hard. You need to be very aware of what your statistical tests mean and how they relate to the way you designed your study. Here are some tips:

  • Be aware that every decision you make while an A/B test is underway could be another roll of the dice. Don't change anything about your study design once data collection has started.
  • Every relationship you analyze is also another roll of the dice. If you look at 20 different metrics that are just random noise, you should expect that, on average, one of them will show a statistically significant trend with p < 0.05 (see the short sketch after this list).
  • When in doubt, collect more data. When there's a real effect or trend, the statistical significance should improve as you collect more data. Bogus effects tend to go away.
  • Don't think of statistical significance as a hard threshold. In reality, it's just a tool for estimating whether the results of an analysis are real or bogus, and there's nothing magical about crossing p < 0.05, p < 0.01, or any other threshold.
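
To put a rough number on the second bullet's "roll of the dice," here's a quick back-of-the-envelope sketch (an illustrative assumption of 20 independent, noise-only metrics, each tested against p < 0.05):

n_metrics = 20
alpha = 0.05

# On average one noise metric clears the threshold...
expected_false_positives = n_metrics * alpha            # 20 * 0.05 = 1.0

# ...and there's roughly a two-in-three chance that at least one does.
p_at_least_one = 1 - (1 - alpha) ** n_metrics           # about 0.64

print(f"Expected 'significant' noise metrics:  {expected_false_positives:.1f}")
print(f"Chance at least one looks significant: {p_at_least_one:.0%}")

In other words, with enough metrics in play, a "discovery" is almost guaranteed even when there's nothing there.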

Another useful tip is to change the way you think and speak about statistical significance. When I discuss data with clients, I prefer to avoid the phrase "statistically significant" entirely: I'll use descriptive phrases like, "there's probably something real" when the P value is close to the significance threshold, and "there's almost certainly a real effect" when the P value is well below the significance threshold.

I find this gives my clients a much better understanding of what the data really means. All statistics are inherently fuzzy, and anointing some results as "statistically significant" tends to give a false impression of Scientific Truth.

Designing Hybrid Surveys

by Peter Leppik on Fri, 2018-07-13 15:48

There are two elements to designing a hybrid survey program, which combines the depth of actionable feedback from a live-person phone interview with an online survey's ability to cost-effectively collect huge sample sizes. In this article I'll explore designing the survey questions and how the two feedback channels relate to each other. In a future article I'll write about designing the survey process itself, and some of the considerations which go into sampling and channel selection.

To get the most benefit from a hybrid survey we want to play to the strengths of each feedback channel. Online surveys are cost effective for collecting tens of thousands or millions of survey responses, while phone interviews let you collect a lot of details about individual customers. Online surveys are good for calculating metrics, and phone interviews give you insights into the individual customers' stories.

Keep The Online Survey Short and Sweet

The online survey is where you get to cast a very wide net, including large numbers of customers in the survey process. This is also where most of your tracking metrics will come from. But it's not the place to try to collect lots of detailed feedback from each customer: long survey forms often don't get a good response rate.

I recommend limiting the online survey to a handful of key metrics plus one box for customers to enter any other comments or suggestions they may have. The particular metrics you choose will depend on your survey goals, but I tend to think that one metric is too few, while more than five will just make the survey longer without yielding much (if any) new information.

It's also good practice to give customers a Service Recovery option, usually as a question at the end of the survey along the lines of, "Do you want a representative to contact you to resolve any outstanding issues?" Just make sure that those requests get routed to the right department and promptly handled.

And please please please please don't make any of your questions mandatory. Required questions serve no purpose other than frustrating customers and should be stricken from the survey toolbox.

Go Deep In Phone Interviews

You can ask a surprising number of questions in a typical five-minute phone interview. This is the place to ask follow-up questions, maybe include some metrics that had to be eliminated from the online survey due to length (you did keep it short, right?), and most importantly, give the customer a chance to really tell her story.

I usually start with the questions from the online survey and add to them. We may need to adjust the wording of some of the questions--not every question that looks good written will sound good when read aloud--but we want to cover the same ground. One of the purposes is to compare the results from the online survey to the interview, since we normally expect the interview to give us a truer reading of the survey metrics. If metrics for the interview and online survey diverge, that's an indication that something may be going wrong in the survey process.

It's a good idea to keep the interview questions flexible. Unlike the core metrics in the online survey, which need to stay consistent over time, the interview questions may need to be updated frequently depending on changing business needs or the particular reason a customer was selected for an interview rather than an automated survey.

I also bias heavily towards open-ended questions on the interview. This gives the customer a chance to use their own words and will often surface unexpected feedback. If needed, the interviewer can code the responses (along with providing a written summary) to allow for tracking of the types of feedback you're getting.

The end result is going to be a handful of metrics, with a healthy dollop of open-ended questions to explore the reasons behind the ratings. The metrics should be comparable to the online survey, so it can serve as a check on the validity of the high volume feedback process, but the true value will be in understanding individual customer stories.

Of Course You Don't Intend to Kill Puppies. But Hypothetically Speaking....

by Peter Leppik on Fri, 2018-05-11 13:55

Recently, Sun Country Airlines, formerly a Minnesota-Nice local favorite and now reconstituted as a Nickel-and-Dime Ultra Low Cost carrier, stranded a couple of planeloads of customers in Mexico because of a blizzard in Minneapolis.

Cancelled flights are not all that unusual in Minnesota during the winter, but Sun Country's strategy for dealing with them was. They simply refunded the passengers' remaining airfare and told them to find their own way home. The (now former) customers were not offered any meals or lodging, nor any assistance in finding another way home.

Nor did Sun Country offer to book those passengers on another Sun Country flight: as the airline patiently explained to passengers and the media, those were the last Sun Country flights from Los Cabos and Mazatlan for the season and the airplanes were needed elsewhere. And because there were no more flights, there were no Sun Country representatives on the ground to help passengers, either.

Naturally this didn't go over well. The couple hundred dollars for the refunded tickets (remember, Sun Country is now an Ultra Low Cost Carrier) wasn't even remotely enough to book a last-minute flight on another airline. It's likely that some passengers had to borrow from credit cards or friends just to get home.

The crazy thing is, Sun Country was probably within its rights. The company's Contract of Carriage (a 30-page document almost no passenger, including myself, has read in detail) says only that the airline will try to provide transportation if they cancel a flight "to the extent reasonably possible." In this situation, someone at the airline decided that it would be too difficult to figure something out. So they exercised their option to punt.

This highlights the huge gap between what companies are legally or contractually required to do, and what their customers expect them to do. Short of war or bankruptcy, no passenger expects that their airline will just leave them sitting in an airport in Mexico with no money, no ticket home, and no assistance. But Sun Country wrote its contract so it could do exactly this, and I'm sure that every other airline does exactly the same thing.

Nor are the airlines alone. These sorts of one-sided, unreasonable terms of service are everywhere, from Wells Fargo claiming its contract prohibits customers from suing it even when the basis of the lawsuit is that the customer never agreed to the contract, to Facebook giving itself permission to track everything you do and everywhere you go, online and off, and use that information in almost any way it wants.

It's fair to assume that these contracts go far, far beyond what customers think they're signing up for when they click "agree." Not even the companies think these contracts are reasonable, judging from how quickly they back down from the PR stink when they actually try to exercise these rights they've given themselves: after a few days of bad publicity, Sun Country announced it would reimburse customers for the cost of getting home. Wells Fargo eventually said it wouldn't force customers with fraudulent accounts into binding arbitration. And Facebook admitted that letting people's personal profiles fall into the hands of shady political operatives was a mistake.

The problem is that Terms of Service agreements are generally written by lawyers, and the job of a lawyer writing a contract is to reduce or eliminate the chances that the client will be sued. So when a company is allowed to write its own ToS agreement without any negotiation with an actual customer, the lawyer will cover every base possible and write it so the customer agrees to everything that might possibly happen whether or not the company plans to operate that way.

(I can just imagine a lawyer at a conference table advising his client, "Of course you don't intend to kill your customers' puppies. But hypothetically speaking, some day a customer's puppy might die because of some mistake the company made, and the customer could claim that you meant to do it, or even enjoyed it. Juries love puppies, so we should head this off and just put in the contract that the customer gives us permission to maliciously and gleefully murder his puppy.")

When a crisis comes up it's tempting for a company to think it's OK to actually do the things it wrote into the terms of service. If leaving passengers stuck in Mexico gets your airline out of an expensive pickle and you're legally allowed to do it, it's hard to say no. If you can interpret your customer contract in a way that avoids years of litigation around millions of fraudulent accounts, the temptation will be extreme. But of course this will only make things worse, because at the end of the day meeting your customers' expectations (to say nothing of regulators' and lawmakers' expectations) is far more important than merely staying within the bounds of what's legally required.

The solution is both obvious and very difficult in the real world: Write terms of service agreements according to how you intend to treat customers, not according to what will minimize legal risk in every possible situation. If you don't give yourself the contractual right to abandon customers in Mexico, you're much less likely to actually do it when the crunch happens.

This requires having someone to advocate on behalf of customers, someone to ask the lawyers, "If you don't intend to kill my puppy, why give yourself the right to do it?" With the recent Facebook data leaks and the new GDPR regulations in Europe, there's been more of a spotlight on how long, unreadable, and egregious many of these contracts really are. So maybe we're in a moment when consumer contracts can be rewritten to reflect the customer experience companies intend to provide, rather than giving them legal cover for almost any sort of misbehavior.

One can only dream...

When to Use Hybrid Surveys

by Peter Leppik on Fri, 2018-04-13 16:47

Most customer feedback surveys today are done either as automated online surveys or as live telephone interviews. There are still a few automated phone surveys (IVR) and paper surveys, but those seem to be disappearing quickly as they tend to be at a disadvantage for both cost and quality.

In general, online surveys provide low cost per response, while phone interviews are substantially more expensive but have a much higher response rate and provide higher quality data and richer qualitative feedback.

Combining these two approaches can give you the best of both worlds: a large number of survey responses at a reasonable cost, plus a more representative sample and more detailed customer comments.

Here are some scenarios where you should consider a hybrid online survey and phone interview:

  • If you need more detailed qualitative feedback from your automated survey, or want the ability to ask more follow-up questions.
  • If the response rate from your online survey is too low and you need to validate the results (I get concerned when the response rate drops below about 10%).
  • If you need more responses than your phone interview budget alone will support.
  • If limiting your survey to either phone or online will exclude a significant fraction of your customers (for example: if you don't have email addresses or phone numbers for many customers).

There are also some situations where it probably makes sense to stick with a single survey channel:

  • If your overall survey program isn't large enough to support both an online and phone component economically. The cost of an online survey tends to be relatively fixed no matter the number of responses, so a small online survey can actually be more expensive than the same number of phone interviews, and there's no savings to be realized by doing both.
  • If you have high value customers where you want to always provide the more personal attention of a phone interview.
  • If you don't have the organizational maturity to make effective use of the qualitative feedback offered by a phone interview. There's little sense paying for feedback you can't take advantage of.

Designing and running a hybrid survey program may sound like something that requires Ninja-level Customer Experience skills, but the reality is that it's not that much more complicated than any other ongoing customer feedback process. In the right circumstances and executed well, a hybrid survey can give you far more value for the dollar than any other survey strategy.

Why are we so obsessed with metrics?

by Peter Leppik on Fri, 2018-03-30 16:50

Next week our local chapter of CXPA will be hosting a session called "Battle of the Metrics." I'm looking forward to it: it should be an informative and (I hope) entertaining meeting. If there's one thing that can spark a lively discussion among Customer Experience professionals, it's someone who takes a strong stand for or against any particular metric.

But why do we spend so much time and effort worrying about metrics?

Most reasonable CX metrics provide directionally similar results: when Customer Satisfaction or Net Promoter improve, chances are very good that Customer Effort, Customer Loyalty, or any scorecard composed of customer survey responses will also improve. The numbers will be different, but they should all tell a similar story. Viewed in that way, arguing about which metric is best is a little like arguing about whether miles or kilometers are better.

Though come to think of it, when the United States tried unsuccessfully to go metric a half century ago, it turned out that a lot of people suddenly felt very strongly about whether to measure highways in miles or kilometers. So maybe it's not so surprising that we also have strong feelings about which CX metric to use.

When used properly, it shouldn't matter all that much which metric we choose. Most of the real CX action is below the level of the metrics: it's about finding ways to improve individual customer journeys, most often by helping people at all levels of the organization put themselves in those customers' shoes. Metrics, like signposts on the highway, give us some sense of how far we've gone and whether we're moving in the right direction. Miles or kilometers, either one will tell us that we're making progress.

And to the extent that different metrics give us different results, that's a sign that something unexpected is happening and we need to pay attention. Because while different CX metrics usually move together, they do measure somewhat different things. So if Net Promoter (which measures the strength of a customer's overall relationship) improves while Customer Effort (which measures how smoothly a particular transaction went) is getting worse, that could be a sign that something's afoot. It may be that there are some operational problems which your customers are willing to forgive (for now); or it may be that you are benefitting from a competitor's misstep. Whatever the situation, it's worth spending some effort to dig deeper.

In the end, I think metrics appeal to us because they give us a simple view into a complex reality. Boiling down our CX efforts to one number makes it easier to explain the impact of Customer Experience, and it makes it easier to show leadership what exactly it is that we're trying to achieve.

This is fine, but it comes with a steep price. Because in the end, it's not the metric that matters. It's everything that goes into the metric, all those thousands or millions of individual customers and their individual stories that matter. The metric, while it makes it possible to think about the bigger picture, conceals far more than it reveals.

2017 Results for the National Customer Service Survey are out

by Peter Leppik on Tue, 2018-02-27 16:30

We recently released our 2017 report for the National Customer Service Survey on banking customer service, tracking customer service performance at Bank of America, Chase, Citi, and Wells Fargo. An executive summary is available for download.

In our 2017 data we find that Citi has rebounded significantly from its decline over the past two years, and is now generally scoring in-line with industry peers in our survey. Bank of America has declined meaningfully since 2015, and in 2017 had the lowest scores in seven of our nine key metrics. Wells Fargo has the highest scores in our three Business Outcome metrics.

This unique independent research is underwritten and conducted by Vocalabs on an ongoing basis to benchmark industry trends in phone-based customer service. We perform in-depth interviews with customers of each company immediately after a customer service call. High-level business metrics are correlated to specific events during the customer service call, allowing us to directly measure the drivers of customer satisfaction and loyalty. 

This report is based on over 5,000 telephone interviews conducted between 2011 and December 2017. The National Customer Service Survey is an ongoing research study, and we are collecting new customer feedback about each company continually throughout the year. We publish an annual Executive Summary highlighting key trends, and Vocalabs clients have access to more in-depth analysis and can develop proprietary benchmarks against our syndicated research. Please contact us for more information.

Reputation Is an Effect, Not a Cause

by Peter Leppik on Fri, 2018-01-12 15:05

One of the reasons to invest in improved Customer Experience is the positive effects it will have on your company's reputation and word-of-mouth.

That's great and well-deserved for companies that truly internalize CX. But I've seen a few companies treat their Customer Experience as a marketing campaign, and it never ends well.

This has been on my mind lately because Comcast, everyone's favorite CX bad boy, has been making noise lately about how they're mending their ways. They even had their EVP of Customer Service in the cable division, Tom Karinshak, do a puff-piece interview for a customer experience podcast.

But I wonder if this is a true conversion, because while they're saying all the right things it isn't clear to me that any of the root causes of Comcast's reputation have changed. For example, Comcast is still an effective monopoly in most of its markets and doesn't seem to have much of an incentive to care.

I'm not the only one to have this reaction. Jim Tincher noticed some recent fine-print changes on Comcast's website, and his take is that Comcast still cares more about maximally monetizing its subscribers than building relationships with them.

I saw a similar dynamic play out at Sprint almost a decade ago. Sprint, like Comcast today, was known for bottom-of-the-barrel customer service. Sprint invested heavily in improving its customer service, and heavily promoted research (including Vocalabs' research) showing a positive effect. And then Sprint's attention turned elsewhere and the service went right back to where it had been.

This nicely encapsulates the difference between internalizing Customer Experience and treating it like a marketing campaign. When you do CX right, it becomes part of the core fiber of the company. It's hard, and it requires ongoing effort, but the positive benefits are long-lasting and build over time. But if it's just a PR initiative, once the campaign is over things will go right back to the way they were. There might not even be time for the company's reputation to improve in any meaningful way before the old bad habits settle in again. Worse, management may conclude that Customer Experience doesn't pay off because they didn't see any sustained benefit. That will make it a harder sell the next time around.

Companies which are CX leaders understand that Customer Experience isn't something you do, it's something you are. Companies which invest in CX looking for good PR and short-term financial gain are likely to fail on all counts.

Customer Experience Non-Trends for 2018

by Peter Leppik on Fri, 2018-01-05 15:46

It's the beginning of a new year, which means it's time for pundits and prognosticators to pull out their crystal balls and make predictions about the twelve months to come.

Bruce Temkin, for example, has declared that the Customer Experience Theme for 2018 is "Humanity".

Who am I to disagree?

But in my view, such trend articles miss the bigger picture, which is that the important facts of the Customer Experience profession will be pretty much the same in 2018 as they were in 2017, 2016, and earlier years. These are the non-trends, the things that don't change, and most of them are more important than the trends.

So here I present my Customer Experience Non-Trends for 2018. Not only are most of these non-trends more important to the average CX professional than the Trends, you can read these safe in the knowledge that in January 2019 I can just republish the same article with a different date, just as this year's article is the same as my 2017 Non-Trends article, which in turn is the same as my 2016 Non-Trends article.

Non-Trend 1: Engaged Leadership Is The Single Most Important Element in CX

The companies delivering a great customer experience almost always have leadership actively engaged in continuously trying to deliver a better experience. Conversely, companies where leadership views CX as a one-time project, or something to delegate, generally don't succeed in delivering a superior experience.

The lesson here is simple: if you want to improve the customer experience in your organization, the most important thing you can do is get the senior leadership to care and make it a personal priority.

Non-Trend 2: Great CX Is About Getting a Thousand Things Right

Sweat the details. A grand strategy or a new piece of technology will not, by itself, move the needle on your customer experience (though the right strategy and tools definitely make the job easier).

Unfortunately, "sweat the details" is not a sexy message and it doesn't help sell software and services. Many vendors make the empty promise that their solution will transform your CX effort. Don't believe it. There is no magic bullet.

Non-Trend 3: Customer Experience Professionals Often Have a Tough Job

The field of Customer Experience has made great strides over the last decade or so, but it's still not easy. We've finally gotten to the point where most companies will at least say that the Customer Experience is a priority, but many of them have yet to internalize it. The leadership doesn't yet care enough to dedicate the needed resources, or they think that because they have a CX team the problem is solved and they can mostly ignore it.

So in a lot of places, the role of the CX professional will continue to revolve around getting leadership attention, finding the easy wins, and internal evangelism. This, unfortunately, is not likely to change any time soon.

Non-Trend 4: Great CX Drives Customer and Employee Passion, Which Creates Better CX

The sweet spot of customer experience is when your whole organization is focused on creating a better experience for customers, which makes customers want to do more business with you, and that makes employees want to help customers even more. Customer Experience becomes a positive feedback loop.

The unacknowledged truth is that most employees genuinely want to do a good job and have a positive impact on their customers. It's one of the most satisfying things we can do in our careers. A strong focus on CX creates not just more satisfied customers but also more satisfied employees.

Here's hoping for a terrific 2016 2017 2018!
