Vocalabs Newsletter: Quality Times

Issue 92

A/B Testing for Customer Experience

In This Issue

  • A/B Testing for Customer Experience
  • No Customer Problem is Unimportant or Unfixable


A/B Testing for Customer Experience

A/B testing is one of the most powerful tools for determining which of two (or more) ways of designing a customer experience is better. It can be applied to almost any customer experience, and it provides definitive data on which design wins according to almost any set of measurable criteria.

Stripping off the jargon, an A/B test is really just a controlled experiment like the ones we all learned about in 8th grade science class. "A" and "B" are two different versions of whatever you're trying to evaluate: it might be two different website designs, two different speech applications, or two different training programs for contact center agents. The test happens when you randomly assign customers to either "A" or "B" for some period of time and collect data about their experience.
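
To make the mechanics concrete, here is a minimal sketch (in Python) of one common way to handle the random assignment. Everything in it is illustrative: the assign_variant function and the customer IDs are invented, and hashing the customer ID rather than flipping a coin on every visit is just one reasonable design choice, used so a returning customer always lands in the same version.

```python
import hashlib

def assign_variant(customer_id: str, variants=("A", "B")) -> str:
    """Deterministically assign a customer to one test variant.

    Hashing the customer ID (instead of flipping a coin on every
    visit) keeps each customer in the same variant for the whole
    test, even across repeat contacts.
    """
    digest = hashlib.sha256(customer_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Hypothetical customer IDs, just to show the split
for cid in ["cust-1001", "cust-1002", "cust-1003", "cust-1004"]:
    print(cid, "->", assign_variant(cid))
```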

Conducting a proper A/B test isn't difficult but does require attention to detail. A good test must have:

  • Proper Controls: You want the "A" and "B" test cases to be as similar as possible except for the thing you are actually testing, and you want to make sure customers are being assigned as randomly as possible to one case or the other.
  • Good Measurements: You should have a good way to measure whatever you're using for the decision criteria. For example, if the goal of the test is to see which option yields the highest customer satisfaction, make sure you're actually measuring customer satisfaction properly through a survey (as opposed to trying to infer satisfaction levels from some other metric).
  • Enough Data: As with any statistical sampling technique, accuracy improves as you collect data from more customers. I recommend at least 400 customers in each test case (400 customers experience version A and 400 experience version B, for 800 total if you are testing two options). Smaller samples can be used, but the test will be less accurate, and that needs to be taken into consideration when analyzing the results (one way to check significance is sketched after this list).
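
To illustrate how you might judge whether a difference between "A" and "B" is statistically meaningful at these sample sizes, here is a rough sketch of a standard two-proportion z-test. The function and all the numbers are hypothetical, and this particular test only applies when the metric is a simple yes/no outcome (satisfied or not); it is one common approach, not the only valid analysis.

```python
import math

def two_proportion_z(hits_a: int, n_a: int, hits_b: int, n_b: int) -> float:
    """Two-proportion z-test: is the A-vs-B difference real?

    "Hits" here means customers who reported being satisfied.
    |z| > 1.96 corresponds to p < 0.05 (two-tailed), a common
    threshold for declaring a winner.
    """
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical results: 400 customers per cell
z = two_proportion_z(hits_a=300, n_a=400, hits_b=270, n_b=400)
print(f"z = {z:.2f}")  # about 2.34, significant at the 95% level
```

In this made-up example a 7.5-point satisfaction gap clears the threshold; with substantially smaller samples the same gap often would not, which is exactly the accuracy trade-off described above.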

In the real world it's not always possible to do everything exactly right. Technical limitations, project timetables, and limited budgets can all force compromises in the study design. Sometimes these compromises are OK and don't significantly affect the outcome, but sometimes they can cause problems.

For example, if you're testing two different website designs and your content management system doesn't make it easy to randomly assign visitors to one version or the other, you may be forced to do something like switch to one design for a week, then switch to the other for a week. This is probably going to work, but if one of the test weeks also happens to be during a major promotion, then the two weeks aren't really comparable and the test data isn't very helpful.

But as long as you pay attention to the details, A/B testing will give you the best possible data to decide which customer experience ideas are worth adopting and which should be discarded. This is a tool that belongs in every CX professional's kit.


No Customer Problem is Unimportant or Unfixable

At a couple of Vocalabs' clients, we've noticed an uncommon attitude towards the customer experience.

Where most companies push back on trying to solve customer problems, these unusual companies take the opposite approach. They assume that "no customer problem is unimportant or unfixable."

Compare that to the litany of reasons most companies give for not fixing their customer experience problems:

  • "Only a few customers are complaining about that."
  • "It would be very expensive to provide that level of service."
  • "That would require major investment of IT resources."
  • "That customer is just trying to get something for free."
  • "If we did that our customers would scam us."
  • "The way we're doing it now is better."
  • "You can't please every customer all the time."

What makes these excuses so insidious is that they are very occasionally true. Some problems really do arise from freak circumstances (but usually if one customer complains, there are many others who have the same problem and aren't complaining). Sometimes systems are so big and outdated that it would be uneconomical to fix them (but at some point they will have to be replaced, and next time around you shouldn't let your technology get so far behind the curve). Some customers really are trying to scam you (but the overwhelming majority of customers are honest). And it is true that some customers will never be satisfied no matter what you do, but those customers are very rare.

Often one (or more) of those reasons is trotted out as a way to avoid taking a serious look at fixing some issue with the customer experience:

"What are we going to do about the complaints about how we verify customers' identities over the phone?"

"Only a few customers are complaining about that. Plus, if we changed the authentication then people would scam us."

"Oh, then I guess we shouldn't change that."

But if you take the attitude that "no customer problem is unimportant or unfixable," then the conversation becomes completely different:

"What are we going to do about the complaints about how we verify customers' identities over the phone?"

"Only a few customers are complaining about that. Plus, if we changed the authentication then people would scam us."

"You might be right. But No customer problem is unimportant or unfixable, and this is definitely important enough to some of our customers that they took the time to complain. So we should at least explore some options and see if there's a better way to do things."

This attitude, that "no customer problem is unimportant or unfixable," can dramatically shift a culture towards being customer-centric, especially when it comes straight from the top.

It's not an easy shift, because it directly attacks the deep resistance to change in many organizations. But try making this your catchphrase and see how it changes the discussion.
