Vocalabs Newsletter: Quality Times

Issue 41

In This Issue

VUI Research at SpeechTEK 2009
Vocalabs Wins Speech Technology Award

VUI Research at SpeechTEK 2009

A number of companies have recently released research findings on speech applications using Vocalabs’ Usability Survey process. Here is a brief summary of results, including research presented at SpeechTEK 2009.

Prompting Strategies
Jared Strawderman of Nuance created an experiment to evaluate different prompting strategies in speech and DTMF applications. The same application was built with six different types of prompts, ranging from open-ended speech prompts (“what would you like to do?”) to pure DTMF (“to add money, press 1”). Approximately 2,000 consumers each tried one version of the application and provided feedback along with demographic information and the type of telephone they were calling from.

Analysis is ongoing. Strawderman intends to determine which prompting strategies are most effective depending on the type of question being asked, the type of telephone the caller is using, and the caller’s demographics. Metrics include task completion rates, recognizer error states, caller satisfaction, ease of use, and call duration.

Alphanumeric String Recognition
Jenni McKienzie of Travelocity set out to discover strategies for improving recognition of arbitrary strings of alphanumeric characters. This is a common situation in the travel industry, where confirmation codes can have nearly any combination of letters and numbers. Approximately 500 participants were each asked to read eight strings of six characters each.

McKienzie analyzed the data to determine which pairs of characters are more or less likely to be recognized accurately, and what strategies can improve accuracy on this difficult speech recognition problem. She found that recognizer confidence scores are not particularly helpful in picking the correct string out of the n-best list of possible matches. It may be more helpful to query the database with the top ten matches rather than just the top one or two, since the correct code is often not in the top two.
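
As a concrete illustration of the n-best lookup strategy, here is a minimal Python sketch. It is not from McKienzie’s study; the function name, the sample hypotheses, and the code_exists lookup are illustrative assumptions standing in for the recognizer output and the reservation database.

# Hypothetical sketch of the n-best lookup strategy described above.
# "nbest" stands in for the recognizer's ranked hypotheses; code_exists()
# stands in for a query against the reservation database.

def find_confirmation_code(nbest, code_exists, max_candidates=10):
    """Return the first n-best hypothesis that matches a real record.

    nbest          -- list of (code, confidence) pairs, best first
    code_exists    -- callable that checks whether a code is in the database
    max_candidates -- how deep to search the n-best list (top ten rather
                      than top one or two, per the finding above)
    """
    for code, confidence in nbest[:max_candidates]:
        if code_exists(code):
            return code
    return None  # fall back to reprompting or an agent transfer


# Example: the correct code "K7Q2X9" is ranked fourth, so a top-two
# cutoff would have missed it.
hypotheses = [("K7Q2XS", 0.61), ("KTQ2X9", 0.58),
              ("K7O2X9", 0.55), ("K7Q2X9", 0.54)]
valid_codes = {"K7Q2X9", "A3B8C1"}
print(find_confirmation_code(hypotheses, valid_codes.__contains__))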

Effect of Pause Duration on Turn-Taking
Silke Witt-Ehsani of Tuvox examined the effect of pause duration on turn-taking in speech recognition applications. Using seven different pause durations in a mock-up of a banking application, Witt-Ehsani measured turn-taking problems, call duration, impatient barge-in behaviors, and callers’ average response time. Approximately 700 consumers participated in the study and provided feedback.

While the analysis is still tentative, she found that pauses of 1.5 to 2.0 seconds between list items led to a very high rate of turn-taking problems. Both longer and shorter pauses created fewer issues. She also found that optimal pause duration depends on the type of question and on how well the caller knows the requested information. For example, asking for a birthday or Social Security number calls for a shorter pause than asking the caller to state the reason for their call.
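
To make the parameter under study concrete, here is a minimal Python sketch, assuming an SSML-capable prompt player; the build_menu_prompt function and the menu text are illustrative assumptions, not taken from the study. It simply exposes the pause between list items as a tunable value.

def build_menu_prompt(options, pause_seconds):
    """Build an SSML menu prompt with a configurable pause between list items.

    pause_seconds is the parameter varied in the study above; per the
    tentative findings, values around 1.5-2.0 s between items invited
    the most turn-taking problems.
    """
    separator = ' <break time="%.1fs"/> ' % pause_seconds
    return "<speak>You can say: " + separator.join(options) + ".</speak>"


# Example: a three-item banking menu with a one-second pause between items.
print(build_menu_prompt(
    ["account balance", "recent transactions", "transfer funds"], 1.0))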

Highly Personal Questions in Speech Applications
Leah Eyler of Dimension Data performed an experiment on customers’ willingness to engage with a speech application that leveraged highly personal information, with the goal of determining how far personalization can go in a VUI. A Wizard of Oz prototype of a pharmacy application was built to simulate filling or refilling a prescription for an anti-depressant medication. During the call, the automated system asked a series of questions based on a standard diagnostic physicians use to gauge a caller’s depressive symptoms. These included questions about quality of sleep and appetite, as well as questions of a more sensitive nature such as agitation and thoughts of suicide.

The experience was designed to simulate a real-life pharmacy transaction. Thirty consumers, selected because they were currently or had previously been diagnosed with depression and had received prescription medication and talk therapy as part of their treatment, used the application, answered the questions about their symptoms, and provided feedback. In addition to establishing some guidelines for engaging customers in sensitive dialog using personal information, Eyler found that customers were surprisingly willing to supply this information to an automated application and, indeed, some preferred it over the embarrassment of discussing these symptoms with a real person.

For more information, contact:

Jared Strawderman, Nuance - jared.strawderman@nuance.com

Jenni McKienzie, Travelocity - jenni.mckienzie@travelocity.com

Silke Witt-Ehsani, Tuvox - switt@tuvox.com

Leah Eyler, Dimension Data - leah.eyler@us.didata.com


Vocalabs Wins Speech Technology Award

We’re pleased to announce that Vocalabs’ Usability Survey service received the Speech Technology Excellence Award for 2009. The award is presented by TMC’s Customer Interaction Solutions magazine, a leading publication covering CRM, call centers, and teleservices since 1982.
