Task interference with a discrete word recognizer

  • Caryn Hubbard
  • James H. Bradford
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 753)


Speaker-dependent, discrete word recognition is the simplest and most successful form of automatic speech recognition. In the near future, this technique is likely to be the basis for a variety of commercial speech interfaces. However, discrete word recognition requires users to insert relatively long pauses between the words of an utterance. This paper describes an experiment performed to determine whether this unusual way of speaking interferes with the performance of complex tasks.


Keywords: Video Game · Speech Recognition · Automatic Speech Recognition · Perceptual Load · Task Interference
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • Caryn Hubbard¹
  • James H. Bradford¹

  1. Department of Computer Science, Brock University, St. Catharines, Canada
