
International Journal of Speech Technology, Volume 5, Issue 4, pp. 371–388

A Listening Keyboard for Users with Motor Impairments—A Usability Study

  • Bill Manaris
  • Valanne Macgyvers
  • Michail Lagoudakis

Abstract

Computer users with motor impairments find it difficult and, in many cases, impossible to access PC functionality through the physical keyboard-and-mouse interface. Studies show that even able-bodied users experience similar difficulties when interacting with mobile devices, due to the reduced size and limited usability of their input interfaces. Advances in speech recognition have made it possible to design speech interfaces for alphanumeric data entry and indirect manipulation (cursor control). Although several related commercial applications exist, such systems do not provide a complete solution for arbitrary keyboard and mouse access, such as the access needed for, say, typing, compiling, and executing a C++ program.
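
To make this interaction style concrete, the sketch below illustrates how a listening keyboard might translate a recognizer's stream of spoken tokens (military-alphabet letters, digit names, modifier words) into keystrokes. It is a minimal illustration only: the token vocabularies, function, and example phrases are hypothetical, not the SUITEKeys implementation, and keystrokes are collected in a list rather than injected into the operating system's event queue.

    # Hypothetical sketch of a listening-keyboard dispatch loop (Python).
    # Assumes a speech recognizer that yields one spoken token at a time.

    NATO = {"alpha": "a", "bravo": "b", "charlie": "c", "delta": "d"}  # abridged
    DIGITS = {"zero": "0", "one": "1", "two": "2", "three": "3"}       # abridged
    MODIFIERS = {"shift", "control", "alt"}

    def tokens_to_keystrokes(tokens):
        """Translate spoken tokens into a sequence of keystroke strings."""
        out = []
        pending = set()  # modifiers held down for the next key
        for tok in tokens:
            tok = tok.lower()
            if tok in MODIFIERS:
                pending.add(tok)
            elif tok in NATO or tok in DIGITS or len(tok) == 1:
                key = NATO.get(tok) or DIGITS.get(tok) or tok
                prefix = "+".join(sorted(pending))
                out.append(f"{prefix}+{key}" if prefix else key)
                pending.clear()
            elif tok == "space":
                out.append(" ")
        return out

    # Saying "control alpha" sends Ctrl+A; "bravo one" types "b" then "1".
    print(tokens_to_keystrokes(["control", "alpha", "bravo", "one"]))
    # -> ['control+a', 'b', '1']

A real system would also need tokens for mouse movement and clicks, punctuation, and dictation mode, and would deliver the resulting events to the OS so that any application receives them as ordinary keyboard and mouse input.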

We carried out a usability study to support the development of a speech user interface for arbitrary keyboard access and mouse control. The study showed that, for users with motor impairments, speech interaction with an ideal listening keyboard outperforms handstick input in terms of task completion time (37% faster), typing rate (74% higher), and error rate (63% lower). We believe that these results apply to both permanent and task-induced motor impairments. In particular, a follow-up experiment showed that handstick input approximates conventional modes of alphanumeric input available on mobile devices (e.g., PDAs, cellular phones, and personal organizers). These modes of input include miniaturized keyboards, stylus "soft" keyboards, cellular phone number pads, and handwriting recognition software. This result suggests that a listening keyboard would be an effective mode for alphanumeric input on future mobile devices.
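
As a worked example of how such relative improvements are computed (the means below are hypothetical, for illustration only, and are not the study's measurements):

    # Relative improvement of the speech condition over handstick,
    # using made-up means: improvement = (speech - handstick) / handstick.
    handstick_wpm = 10.0  # hypothetical mean typing rate with handstick
    speech_wpm = 17.4     # hypothetical mean typing rate with speech input
    improvement = (speech_wpm - handstick_wpm) / handstick_wpm
    print(f"{improvement:.0%} better")  # -> 74% better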

This study contributed to the development of SUITEKeys, a speech user interface for arbitrary keyboard and mouse access, available as freeware for Microsoft Windows platforms.

Keywords: assistive technology, motor impairments, speech user interfaces, listening keyboard, usability evaluation

References

  1. Calverley, B. (1999). Machine demonstrates superhuman speech recognition abilities. USC News Service, news release 0999025. Also available at http://uscnews.usc.edu/newsreleases.
  2. Christian, K., Kules, B., Shneiderman, B., and Youssef, A. (2000). A comparison of voice-controlled and mouse-controlled web browsing. Proceedings of the Fourth International ACM Conference on Assistive Technologies (ASSETS 2000). New York: ACM Press, pp. 72–79.
  3. Comerford, R. (1998). Pocket computers ignite OS battle. IEEE Spectrum, 35(5):43–48.
  4. Danis, C., Comerford, L., Janke, E., Davies, K., DeVries, J., and Bertrand, A. (1994). StoryWriter: A speech-oriented editor. Proceedings of Human Factors in Computing Systems (CHI '94), Conference Companion. New York: ACM Press, pp. 277–278.
  5. Fell, H.J., MacAuslan, J., Ferrier, L.J., and Chenausky, K. (1999). Automatic babble recognition for early detection of speech related disorders. Behaviour & Information Technology, 18(1):56–63.
  6. Goldstein, M., Book, R., Alsio, G., and Tessa, S. (1998). Ubiquitous input for wearable computing: QWERTY keyboard without a board. Proceedings of the First Workshop on Human Computer Interaction with Mobile Devices, Glasgow, Scotland. Available at www.dcs.gla.ac.uk/~johnson/papers/mobile/HCIMD1.html.
  7. Gould, J.D., Conti, J., and Hovanyecz, T. (1983). Composing letters with a simulated listening typewriter. Communications of the ACM, 26(4):295–308.
  8. Hofstadter, D.R. (1989). Gödel, Escher, Bach: An Eternal Golden Braid. New York: Vintage Books.
  9. Karl, L., Pettey, M., and Shneiderman, B. (1993). Speech-activated versus mouse-activated commands for word processing applications. International Journal of Man-Machine Studies, 39(4):667–687.
  10. Kelley, J.F. (1984). An iterative design methodology for user-friendly natural language office information applications. ACM Transactions on Office Information Systems, 2(1):26–41.
  11. Lazzaro, J.J. (2001). Speech-enabling applications. Byte Magazine, April 4, 2001. Also available at www.byte.com/column/BYT20010404S0005.
  12. Leggett, J. and Williams, G. (1984). An empirical investigation of voice as an input modality for computer programming. International Journal of Man-Machine Studies, 21(1):493–520.
  13. Levitt, H. (1994). Speech processing for physical and sensory disabilities. In D.B. Roe and J.G. Wilpon (Eds.), Voice Communication Between Humans and Machines. Washington, DC: National Academy of Sciences, pp. 311–343.
  14. MacKenzie, I.S., Zhang, S.X., and Soukoreff, R.W. (1999). Text entry using soft keyboards. Behaviour & Information Technology, 18(4):235–244.
  15. Malkewitz, R. (1998). Head pointing and speech control as a hands-free interface to desktop computing. Proceedings of the Third International ACM Conference on Assistive Technologies (ASSETS '98). New York: ACM Press, pp. 182–188.
  16. Manaris, B. and Dominick, W.D. (1993). NALIGE: A user interface management system for the development of natural language interfaces. International Journal of Man-Machine Studies, 38(6):891–921.
  17. Manaris, B. and Harkreader, A. (1997). SUITE: Speech understanding interface tools and environments. Proceedings of the 10th International Florida AI Research Symposium (FLAIRS-97). Menlo Park, CA: AAAI Press, pp. 247–252.
  18. Manaris, B. and Harkreader, A. (1998). SUITEKeys: A speech understanding interface for the motor-control challenged. Proceedings of the Third International ACM Conference on Assistive Technologies (ASSETS '98). New York: ACM Press, pp. 108–115.
  19. Manaris, B., MacGyvers, V., and Lagoudakis, M. (1999). Universal access to mobile computing devices through speech input. Proceedings of the 12th International Florida AI Research Symposium (FLAIRS-99). Menlo Park, CA: AAAI Press, pp. 286–292.
  20. Manaris, B., McCauley, R., and MacGyvers, V. (2001). An intelligent interface for keyboard and mouse control: Providing full access to PC functionality via speech. Proceedings of the 14th International Florida AI Research Symposium (FLAIRS-01). Menlo Park, CA: AAAI Press, pp. 182–188.
  21. Markowitz, J.A. (1996). Using Speech Recognition. Upper Saddle River, NJ: Prentice Hall.
  22. McAlindon, P.J. and Stanney, K.M. (1996). The Keybowl: An ergonomically designed document processing device. Proceedings of the Second International ACM Conference on Assistive Technologies (ASSETS '96). New York: ACM Press, pp. 86–93.
  23. Morrison, D.L., Green, T.R.G., Shaw, A.C., and Payne, S.J. (1984). Speech-controlled text-editing: Effects of input modality and of command structure. International Journal of Man-Machine Studies, 21(1):49–63.
  24. Mostow, J. and Aist, G. (1999). Reading and pronunciation tutor. U.S. Patent 5,920,838. Also see Mostow, J., Roth, S.F., Hauptmann, A.G., and Kane, M. (1994). A prototype reading coach that listens. Proceedings of the 12th National Conference on Artificial Intelligence (AAAI-94). Menlo Park, CA: AAAI Press, pp. 785–792.
  25. Murray, J.T., Van Praag, J., and Gilfoil, D. (1983). Voice versus keyboard control of cursor motion. Proceedings of the 27th Annual Meeting of the Human Factors Society. Santa Monica, CA: Human Factors Society, p. 103. (Cited in Shneiderman, 1997.)
  26. Napier, H.A., Lane, D.M., Batsell, R.R., and Guadagno, N.S. (1989). Impact of a restricted natural language interface on ease of learning and productivity. Communications of the ACM, 32(10):1190–1198.
  27. Pausch, R. and Leatherby, J.H. (1991). An empirical study: Adding voice input to a graphical editor. Journal of the American Voice Input/Output Society, 9(2):55–66.
  28. Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., and Carey, T. (1994). Human-Computer Interaction. Reading, MA: Addison-Wesley.
  29. Prentke Romich Co. (1997). WiVik2 on-screen keyboard programs for Windows. Available at www.prentrom.com/access/wivik.html.
  30. Raman, T.V. (1996). Emacspeak: Direct speech access. Proceedings of the Second International ACM Conference on Assistive Technologies (ASSETS '96). New York: ACM Press, pp. 72–79.
  31. Roy, D. and Pentland, A. (1998). A phoneme probability display for individuals with hearing disabilities. Proceedings of the Third International ACM Conference on Assistive Technologies (ASSETS '98). New York: ACM Press, pp. 165–168.
  32. Schwartz, E. (2000). PDAs learn to listen up. InfoWorld, 22(6):1, 10.
  33. Shneiderman, B. (1997). Designing the User Interface, 3rd ed. Reading, MA: Addison-Wesley.
  34. Smith, A., Dunaway, J., Demasco, P., and Peischl, D. (1996). Multimodal input for computer access and augmentative communication. Proceedings of the Second International ACM Conference on Assistive Technologies (ASSETS '96). New York: ACM Press, pp. 80–85.
  35. Trewin, S. (1996). A study of input device manipulation difficulties. Proceedings of the Second International ACM Conference on Assistive Technologies (ASSETS '96). New York: ACM Press, pp. 15–22.
  36. Weiser, M. (1994). The world is not a desktop. Interactions, 1(1):7–8.

Copyright information

© Kluwer Academic Publishers 2002

Authors and Affiliations

  • Bill Manaris, Department of Computer Science, College of Charleston, Charleston, USA
  • Valanne Macgyvers, Department of Psychology, University of Louisiana at Lafayette, Lafayette, USA
  • Michail Lagoudakis, Department of Computer Science, Duke University, Durham, USA
