In This Issue:
- Testing Customer Service Applications: The Good, The Bad, and the Ugly
- Case Study 1: The Good
- Case Study 2: The Bad
- Aftermath: The Ugly
Testing Customer Service Applications: The Good, The Bad, and the Ugly
By Peter Leppik
It is hard to overemphasize the importance of good user interface design in a speech application. Testing the design early and often is essential to building a good application.
In this issue, we're presenting two case studies of speech applications VocaLabs has evaluated in the past year, one good and one bad. We've changed the names of the companies and some identifying details.
In the first case study, we present a company which brought in professional design expertise early in the project cycle, tested an early version, and re-tested to make further improvements before finalizing the design.
The second case study, in contrast, is a situation where no testing was performed until we were brought in after the application was already being used by live customers. By that point, there was no budget to fix the numerous design flaws, and the project ultimately failed.
Committing to a strong development process can dramatically improve the odds of project success. These two case studies illustrate that point.
Case Study 1: The Good
By Peter Leppik
Company Profile: Mid-sized software company.
Application: Customer data capture.
Approach: This mid-sized software company had an existing speech-recognition interface for capturing customer data including names, addresses, and phone numbers. The goal of the project was to optimize the performance of this speech interface, to reduce the number of calls which had to be handled manually.
Capturing customer data is a challenging problem for speech recognition, since there is a very large number of names to recognize. The natural fallback of having the caller spell a name is also problematic, since so many letters sound alike (e.g. "B", "D", and "P").
At an early stage, the company brought in both an outside consultant with considerable expertise in designing speech-based user interfaces and VocaLabs to evaluate the design and performance of the application. After VocaLabs studied an early version of the application, the consultant suggested a set of design improvements. A follow-up study of the improved design is being used to validate the initial changes and make additional refinements.
Outcome: The initial VocaLabs study found a number of areas where the speech interface could be improved, including tuning up the speech recognition, and improving the callflow.
On the first pass, this application scored about average on the three VocaLabs benchmarks (Caller Satisfaction, Call Completion, and Call Consistency).
Based on the data collected during this first study, the consultant recommended a number of changes, including:
* Adjusting the timing of messages and prompts to give callers more time to begin speaking their answers.
* Adjusting recognizer parameters to improve overall recognition accuracy.
* Changing the callflow to reduce the number of failed attempts before falling back to a manual process for capturing customer data.
After these changes were made, the application improved dramatically in all three VocaLabs benchmarks, earning A's in two of the three--an accomplishment achieved by only about 15% of the operations we've studied.
Case Study 2: The Bad
By Peter Leppik
Company Profile: Large financial institution.
Application: Account information.
Approach: This financial institution had considerable experience building tone-based IVR applications in-house, and planned to replace an existing account information system with speech recognition to improve customer satisfaction and automation rates.
That experience, however, did not translate into the skills needed to build a more sophisticated speech-recognition interface, and the company did not consider bringing in outside expertise until late in the design cycle.
VocaLabs was brought in to perform a study on this application after it had been in limited deployment for about a week. Performance was well below expectations, and analysis of a sample of call recordings showed that callers were having difficulty, but there was disagreement on how to fix the problems.
Outcome: The VocaLabs study revealed a number of problems, and the application scored well below average for both Caller Satisfaction and Call Completion. Interestingly, it scored an A for Call Consistency, though an application that provides consistently poor service is of dubious value.
The accuracy of the speech recognition was reasonable, but there were many usability issues. This was a classic case of a bad design implemented well. Specific problems included:
* Use of industry jargon which callers didn't understand.
* The application frequently interrupted callers as they were speaking.
* Confusing menus which made it hard for callers to find the information they were looking for.
The results of this study showed that the user interface needed to be completely rewritten. By the time the study was performed, however, the project's budget had been spent, and there were no resources available to fix the problems.
The application was, to our knowledge, never completely deployed.
Aftermath: The Ugly
By Peter Leppik
Shortly after we completed our research on the financial application in the second case study, nearly the entire project team was laid off.
We certainly don't want to imply that skipping early design evaluation will always lead to getting fired. Consider, however, that being part of a successful project is the best career insurance there is. And the best way to ensure a successful project is to commit to a strong development process that includes frequent design evaluation, beginning at the start of the project.
The worst case scenario is to complete a new application, have it perform below expectations, and not understand why. By that point, it is usually too late to do anything without significant additional time and budget.