Vocalabs Newsletter: Quality Times


In This Issue:

Using VocaLabs Grades

By Peter Leppik

When you look at the results of a VocaLabs survey, the first things you'll see are three letter grades: one for Caller Satisfaction, one for Call Completion, and one for Call Consistency.

If you got one or more A's, congratulations! Your operation is among the best we've surveyed. In fact, as of this writing, only one system has qualified for A's in all three benchmarks.

If your scores were lower, you have an opportunity to improve, which can have a dramatic impact on operational costs and revenue.

In this newsletter, we'll quantify the value of a VocaLabs "A," and provide some tips for improving your operation if your grades were less than stellar.

What's An "A" Worth?

By Rick Rappe

You've got your VocaLabs survey done, and you are the proud owner of a shiny new "A" for customer satisfaction.

An "A" for Caller Satisfaction is worth $16 more revenue per call, assuming the average sale is $100.

Everyone likes to have satisfied customers, but what's that grade really worth?

Consider this: in our research across multiple industries, we've found that an average of 47% of callers who report being "Very Satisfied" with a customer service experience are also likely to buy from that company in the next 12 months. Only 11% of dissatisfied callers are likely to buy from that company in the next 12 months.

In the average "A" rated system for caller satisfaction, 55% of callers were "Very Satisfied" and only 8% were dissatisfied. In the average "D" rated system, only 9% of callers were very satisfied, and 52% were dissatisfied. Doing some math (55% × 47% + 8% × 11% ≈ 27% of callers likely to buy for the "A" system, versus 9% × 47% + 52% × 11% ≈ 10% for the "D" system), the difference between an "A" and a "D" means capturing revenue from an additional 16% of callers. If your average sale is $100, then the difference between an "A" and a "D" for customer service is a whopping $16 per call in additional revenue. This holds true even if your call center does not directly generate revenue.
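For readers who want to check the arithmetic, here is a quick sketch in Python. The buy rates and satisfaction percentages are the figures quoted above; the $100 average sale is the same illustrative assumption, not a VocaLabs benchmark.

```python
# Revenue arithmetic from the article, worked through in code.

def expected_buyers(pct_very_satisfied, pct_dissatisfied,
                    buy_rate_satisfied=0.47, buy_rate_dissatisfied=0.11):
    """Fraction of callers likely to buy in the next 12 months."""
    return (pct_very_satisfied * buy_rate_satisfied
            + pct_dissatisfied * buy_rate_dissatisfied)

grade_a = expected_buyers(0.55, 0.08)   # average "A" system
grade_d = expected_buyers(0.09, 0.52)   # average "D" system

extra_revenue_per_call = (grade_a - grade_d) * 100  # $100 average sale
print(f"Additional buyers: {grade_a - grade_d:.1%}")            # ~16.8%
print(f"Extra revenue per call: ${extra_revenue_per_call:.2f}")  # ~$16.78
```

The precise difference comes out to 16.8% of callers, which the article rounds down to 16%.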

What about Call Completion?

An "A" for Call Completion saves $3.20 in expenses per caller, assuming the average repeat call costs $10.

Every caller who completes his or her goal in a single call is one caller who won't need to repeat that call. To earn an "A" in Call Completion, the average system had a single call completion rate of 91%. In contrast, the average "D" operation managed to complete the caller's task only 59% of the time in a single call.

If you assume that each repeat call costs about $10 (if handled by a live agent), then the 32-percentage-point difference between an "A" and a "D" works out to an average of $3.20 per caller, just in the reduced call volume from repeat calls. This doesn't even consider the fact that a caller whose needs are handled promptly and efficiently is also more likely to be satisfied.
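The savings calculation is even simpler. The completion rates are from the article; the $10 repeat-call cost is the same illustrative assumption:

```python
# Repeat-call savings from the article, worked through in code.

single_call_rate_a = 0.91    # average "A" system: 91% finish in one call
single_call_rate_d = 0.59    # average "D" system: 59% finish in one call
cost_per_repeat_call = 10.0  # dollars, if handled by a live agent

# Each caller who fails to finish in one call generates at least one repeat,
# so the savings per caller is the completion-rate gap times the repeat cost.
savings_per_caller = (single_call_rate_a - single_call_rate_d) * cost_per_repeat_call
print(f"Savings per caller: ${savings_per_caller:.2f}")  # $3.20
```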

Improving your Grades

By Peter Leppik

Many factors can lower your grades: technical glitches, poor design, undertrained agents, or too little staff to handle call volumes.

Each situation is unique, so we suggest you work with a professional familiar with your own operation. Here are some things to keep in mind when trying to improve your grades:

  1. Technical problems, such as a poorly tuned speech recognition system or bugs in an application, will drag down all three grades.
  2. Long messages at the beginning of a call tend to lower Caller Satisfaction and Call Completion scores, but often raise Call Consistency. This just means that the caller's experience is consistently bad.
  3. A poor grade for Call Consistency often means that callers are taking different paths through an IVR or speech application to complete the same task, or that call center agents vary widely in how effectively they handle individual calls.
  4. Caller Satisfaction and Call Completion grades tend to be correlated: improving the Call Completion grade will often yield improved Caller Satisfaction as a bonus.
  5. Free response answers and call recordings often contain valuable insights into problems. Focus on panelists who were very dissatisfied with their experience, couldn't complete the task, or who made multiple calls.
  6. Keep in mind that prototypes tend to score lower than finished systems.
  7. Surprisingly, self-service and agent-assisted calls tend to score about the same for satisfaction and completion. Automated systems tend to be more consistent.
  8. Depending on the application, not all of the three letter grades are equally important. You need to decide which grade is most important to your particular situation.

How We Grade

By Dan Taylor

Our letter grades are based on an operation's performance as compared to others we've tested in the past. Our historical database includes a wide variety of operations, including completely automated applications, agent-based operations, and systems which include both self-service and live agents.

In addition, we regularly survey operations run by companies that are not our clients, to make sure our benchmarks reflect what's really out there and not just our client base.

To calculate the grades, we first generate a numerical score for each parameter: For Caller Satisfaction, this score is the percentage of callers who were "Very Satisfied" with their experience, minus the percentage of callers who were "Dissatisfied" or "Very Dissatisfied."

For Call Completion, the score is the percentage of callers who reported completing the assigned task, and who made only one call.

For Call Consistency, the score is calculated using the standard deviation of call duration, expressed as a percentage of average call duration, and calculated separately for each scenario.
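The three scores can be sketched as a few lines of Python. The survey responses below are hypothetical, and only a single scenario is shown for the consistency score (in practice it is calculated separately for each scenario):

```python
# Sketch of the three numerical scores, under the definitions above.
from statistics import mean, pstdev

responses = [  # one dict per surveyed caller (hypothetical data)
    {"satisfaction": "Very Satisfied", "completed": True,  "calls": 1, "duration": 120},
    {"satisfaction": "Satisfied",      "completed": True,  "calls": 2, "duration": 300},
    {"satisfaction": "Dissatisfied",   "completed": False, "calls": 3, "duration": 150},
]
n = len(responses)

# Percent "Very Satisfied" minus percent "Dissatisfied" or "Very Dissatisfied"
satisfaction_score = (
    100 * sum(r["satisfaction"] == "Very Satisfied" for r in responses) / n
    - 100 * sum(r["satisfaction"] in ("Dissatisfied", "Very Dissatisfied")
                for r in responses) / n
)

# Percent who completed the assigned task in a single call
completion_score = 100 * sum(r["completed"] and r["calls"] == 1 for r in responses) / n

# Standard deviation of call duration as a percentage of average duration
durations = [r["duration"] for r in responses]
consistency_score = 100 * pstdev(durations) / mean(durations)
```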

Then, we assign letter grades by comparing the system's scores against our historical database. Operations at or above the 75th percentile in any of the three categories get an A for that benchmark; the 50th to 75th percentile gets a B, the 25th to 50th a C, and below the 25th percentile a D.
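The percentile-to-grade mapping can be illustrated with a short sketch. The historical scores here are made-up numbers, and the percentile calculation is one simple convention (percent of historical scores strictly below the new score), not necessarily the exact method we use internally:

```python
# Illustrative grading sketch: percentile rank against a historical database.
from bisect import bisect_left

def percentile_rank(historical_scores, score):
    """Percent of historical scores strictly below the given score."""
    ranked = sorted(historical_scores)
    return 100.0 * bisect_left(ranked, score) / len(ranked)

def letter_grade(historical_scores, score):
    p = percentile_rank(historical_scores, score)
    if p >= 75:
        return "A"
    if p >= 50:
        return "B"
    if p >= 25:
        return "C"
    return "D"

history = [10, 20, 30, 40, 50, 60, 70, 80]  # hypothetical benchmark scores
print(letter_grade(history, 75))  # "A": beats 7 of the 8 historical scores
```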
