Vocalabs Newsletter: Quality Times

Issue 22

In This Issue:

  • Does Forcing Callers to Use Self-Service Work?

Does Forcing Callers to Use Self-Service Work?

By Peter Leppik
pleppik@vocalabs.com

The most common business goal we hear for self-service projects is to save money. The logic is simple: customer service reps are expensive, and automation is relatively inexpensive.

As a result, automation rate (or "containment" rate, a term which makes customers sound like toxic waste) is often used as a key measurement of the success of an IVR or speech recognition system. This metric is easy to calculate - just divide the number of calls that don't go to an agent by the total number of calls - but at VocaLabs we've long thought that this simple measurement gives an incomplete and misleading picture.
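
For illustration, here is a minimal Python sketch of that traditional calculation. The function and variable names are ours, chosen for this example; they are not part of any standard reporting tool.

    # Traditional "containment" calculation: calls that never reach an
    # agent, divided by total calls. Note that it counts calls, not
    # callers, so repeat calls from one frustrated person inflate it.
    def containment_rate(total_calls, agent_calls):
        return (total_calls - agent_calls) / total_calls

    # Example: 10,000 calls, of which 6,500 were handled by an agent
    print(f"{containment_rate(10_000, 6_500):.0%}")   # prints 35%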

Part of the problem is that this metric can lead people to try a nasty trick: increasing the automation rate by intentionally making it difficult to reach an agent. Hiding or disabling the option of dialing "0" to reach a human has become a common practice despite warnings from industry experts that this leads to angry, frustrated callers.

What's Really Going On?

It is true that making it hard to reach an agent usually has an immediate effect on the number of calls contained within the IVR. But this simple view leaves many important questions unanswered:

  • Are callers really using self-service more when you make it hard to reach an agent, or are they hanging up out of frustration (and possibly calling back until they figure out how to reach a human)?
  • Does this technique impact caller satisfaction?
  • Is it possible to increase automation rates without making it hard to reach an agent?
  • What effect does self-service have on single-call resolution?
  • What effect does making it hard to reach an agent have on single-call resolution?

To answer these questions, we analyzed data we've gathered through about 60 separate studies in our SectorPulse program, covering companies in the mobile phone, airline, and financial services industries. This is a rich dataset, since our process gathers customer opinions before the call, tracks the customer service interaction (through multiple calls if the caller has to call back), and gathers feedback again after the interaction is over.

Defining Terms

The first step is to make sure we're measuring the right things. We use four measurements, defined below; a rough code sketch of these calculations follows the list.
  • Automation Rate: We depart from the traditional IVR-centric definition of automation by defining an "automated" caller as one who (a) reported that he or she did not need to talk to an agent, and (b) reported successfully resolving the issue he or she called about. One person making repeat calls about the same transaction is counted as a single caller.

    If someone gets stuck in a self-service system, hangs up, and calls back to reach an agent, we consider that a single non-automated caller. A traditional measurement of "containment" would count one successfully automated call plus one agent call. Similarly, if a caller stays inside the self-service system but fails to resolve his or her issue, we count that as not automated, whereas traditional measurements count it as automated.

  • Frustration Rate: "Frustration" is our name for a measurement of how hard callers find it to reach an agent. The Frustration Rate is simply the percentage of callers who report that it was difficult or impossible to reach an agent during the call, excluding those who reported that they did not need to talk to a person.
  • Completion Rate: The Completion Rate is the percentage of callers who both made a single call and reported that they accomplished what they set out to do. A lower completion rate generally translates into higher call volume and operating costs for the call center, since it means people are making multiple calls to handle their problems.
  • Satisfaction: We ask a standard question about how satisfied callers were overall, and the Satisfaction score is the net percentage of "Very Satisfied" callers vs. "Dissatisfied" or "Very Dissatisfied" callers.
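
To make these definitions concrete, the following Python sketch shows how each measurement could be computed from per-caller survey responses. The record fields are illustrative assumptions for this example, not the actual SectorPulse data format.

    from dataclasses import dataclass

    @dataclass
    class Caller:
        """One caller's self-reported survey responses (illustrative fields)."""
        needed_agent: bool          # did the caller need to talk to a person?
        resolved: bool              # did the caller resolve the issue?
        single_call: bool           # was one call enough (no call-backs)?
        hard_to_reach_agent: bool   # was it difficult or impossible to reach an agent?
        satisfaction: str           # e.g. "Very Satisfied", "Satisfied",
                                    # "Dissatisfied", "Very Dissatisfied"

    def automation_rate(callers):
        # Automated = did not need an agent AND resolved the issue.
        return sum(not c.needed_agent and c.resolved for c in callers) / len(callers)

    def frustration_rate(callers):
        # Of the callers who needed an agent, the share who found it hard to reach one.
        needed = [c for c in callers if c.needed_agent]
        return sum(c.hard_to_reach_agent for c in needed) / len(needed)

    def completion_rate(callers):
        # Single call AND accomplished what they set out to do.
        return sum(c.single_call and c.resolved for c in callers) / len(callers)

    def satisfaction_score(callers):
        # Net score: percent "Very Satisfied" minus percent "Dissatisfied"
        # or "Very Dissatisfied".
        very_sat = sum(c.satisfaction == "Very Satisfied" for c in callers)
        dissat = sum(c.satisfaction in ("Dissatisfied", "Very Dissatisfied") for c in callers)
        return (very_sat - dissat) / len(callers)

The important differences from the traditional calculation are that we count callers rather than calls, and that a caller only counts as automated if the issue was actually resolved.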

There's a Price to be Paid

[Chart: Frustration vs. Automation]

Plotting data from these 60 studies shows some revealing trends. First we examine the Automation Rate in relation to Frustration. It turns out that there is a correlation between Frustration and Automation, but not a strong one. The R2 value of 0.17 is statistically significant, but not very large.

[Chart: Frustration vs. Completion]

[Statistical aside: R2 is a measure of how strong the relationship is between two variables. You can think of it as roughly the percentage of the change in one measurement which is explained by a change in the other, so R2 of 0.17 means that 17% of the difference in Automation Rate between two different systems is explained by the difference in Frustration Rate. In these studies, an R2 value above 0.10 is statistically significant.]
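
For readers who want to reproduce this kind of comparison, here is a small Python sketch of the R2 calculation for a one-variable linear fit. The per-study rates below are made-up placeholders, not our study data.

    import numpy as np

    def r_squared(x, y):
        # For a one-variable linear fit, R^2 is the square of the
        # Pearson correlation coefficient between the two variables.
        r = np.corrcoef(x, y)[0, 1]
        return r ** 2

    # Placeholder per-study rates (not real data)
    frustration = [0.10, 0.25, 0.40, 0.15, 0.30]
    automation  = [0.35, 0.30, 0.22, 0.33, 0.28]
    print(f"R^2 = {r_squared(frustration, automation):.2f}")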

In English, this means that making it harder to reach an agent may bump up the number of successful self-service transactions, but the effect will probably be small.

On the other hand, two other effects of making it hard to reach an agent are big. An increased Frustration Rate correlates strongly to a drop in Completion (R2 of 0.49). And even more striking, increased Frustration correlates to a dramatic drop in Satisfaction, with an R2 of 0.61. In fact, we have not found anything with a stronger correlation to Satisfaction than how hard callers said it was to reach an agent.

Or, in English, this means that making it hard to reach an agent will do more to upset your callers than any other factor we've looked at, and will lead to a lot more callers hanging up and calling back or just giving up.

Self-Service Is Not the Problem

It's reasonable to ask if automation itself is the problem: after all, curmudgeons never tire of complaining about how terrible self-service systems are, and how much happier we'd all be if companies simply got rid of the things.

To answer this, we compared both Satisfaction and Completion to Automation. In neither case was there a statistically significant relationship.

The data show that it is possible to create automated self-service that satisfies callers just as well as a live agent and successfully completes just as many transactions on the first call.

Other research we've done has shown that with a well-designed system, callers prefer to use self-service over an agent. The key is making the system good enough and understanding the limitations of automation.

We believe that most callers know before they pick up the phone whether they would prefer to talk to an agent or use self-service. For routine tasks, such as checking an account balance, people are generally willing to use self-service, since it is seen as faster and more convenient. But complicated problems, such as resolving a billing dispute, will always require a human agent. Putting barriers between the caller and the agent only makes the situation more frustrating.

The data are very clear: making it hard to reach an agent has only a slight effect on automation rate when you take multiple calls into account, but the price in terms of lower satisfaction and reduced single-call completion is very high.
