, Volume 10, Issue 2-3, pp 75-87
Date: 06 May 2009

Comparing machine and human performance for caller’s directory assistance requests



To understand how to systematically construct machine models for automating Directory Assistance (DA) that can reach the performance level of human DA operators, we conducted a number of studies over the years. This paper describes the methods used in these studies and the results of laboratory experiments. These include a series of benchmark tests configured specifically for DA-related tasks to evaluate the performance of state-of-the-art, commercially available Hidden Markov Model (HMM) based Automatic Speech Recognition (ASR) technologies. The results show that the best system achieves a 57.9% task completion rate on the city-state recognition benchmark test; on the most-frequently-requested-listing benchmark test, the best system achieves a 40% task completion rate.