International Journal of Speech Technology, Volume 10, Issue 2, pp 75–87

Comparing machine and human performance for caller’s directory assistance requests


DOI: 10.1007/s10772-009-9020-1

Cite this article as:
Chang, H.M. Int J Speech Technol (2007) 10: 75. doi:10.1007/s10772-009-9020-1


To understand how to systematically construct machine models for automating Directory Assistance (DA) that can reach the performance level of human DA operators, we conducted a number of studies over the years. This paper describes the methods used in these studies and the results of laboratory experiments. These include a series of benchmark tests configured specifically for DA-related tasks to evaluate the performance of state-of-the-art, commercially available Hidden Markov Model (HMM) based Automatic Speech Recognition (ASR) technologies. The results show that the best system achieves a 57.9% task completion rate on the city-state-recognition benchmark test. For the most-frequently-requested-listing benchmark test, the best system achieves a 40% task completion rate.
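The task completion rates reported above are aggregate measures over benchmark calls. Below is a minimal sketch of how such a rate could be computed from per-call outcomes; the trial records, field names, and example requests are hypothetical illustrations and not data or code from the paper.

```python
# Minimal sketch: computing a task completion rate from benchmark trial
# outcomes. The Trial structure and the example requests are hypothetical
# illustrations, not an API or data from the paper.

from dataclasses import dataclass


@dataclass
class Trial:
    """One benchmark call: the caller's request and whether the automated
    system resolved it correctly."""
    requested: str
    completed: bool  # True if the system returned the correct result


def task_completion_rate(trials: list[Trial]) -> float:
    """Fraction of benchmark calls completed successfully by the system."""
    if not trials:
        return 0.0
    return sum(t.completed for t in trials) / len(trials)


if __name__ == "__main__":
    # Toy example: 2 of 3 city-state requests resolved correctly (66.7%).
    trials = [
        Trial("Florham Park, New Jersey", True),
        Trial("Springfield, Illinois", False),
        Trial("Austin, Texas", True),
    ]
    print(f"Task completion rate: {task_completion_rate(trials):.1%}")
```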


Keywords: Speech recognition; Directory assistance; Spoken dialog systems; Automated telephone operator services; System performance

Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  1. AT&T Labs—Research, Florham Park, USA