Voice Games: Investigation Into the Use of Non-speech Voice Input for Making Computer Games More Accessible

  • Susumu Harada
  • Jacob O. Wobbrock
  • James A. Landay
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6946)

Abstract

We conducted a quantitative experiment to determine the performance characteristics of non-speech vocalization for generating discrete input, in comparison with existing speech and keyboard input methods. The results supported our hypothesis that non-speech voice input can generate discrete input significantly faster than a speech-based input method, by as much as 50%. Based on this and other promising results from the study, we built a prototype system called the Voice Game Controller that augments traditional speech-based input with non-speech voice input to make computer games originally designed for the keyboard and mouse playable by voice alone. Our preliminary evaluation of the prototype indicates that the Voice Game Controller greatly expands the scope of computer games that can be played hands-free using just voice, to include games that were difficult or impractical to play using previous speech-based methods.
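
The core idea described above is to route recognized voice events, both spoken commands and quick non-speech vocalizations, to the discrete keyboard and mouse actions that an unmodified game expects. The minimal Python sketch below illustrates one way such a mapping layer could be structured; it is not the authors' implementation, and the event labels ("ck", "ch", "pause", "inventory"), the VoiceEvent and GameInputMapper names, and the simulated recognizer output are assumptions made purely for illustration.

  # Illustrative sketch (not the authors' implementation): routing recognized
  # voice events (spoken commands and quick non-speech vocalizations) to the
  # discrete keyboard actions an unmodified game expects.
  from dataclasses import dataclass
  from typing import Callable, Dict

  @dataclass
  class VoiceEvent:
      kind: str   # "speech" or "non_speech" (assumed labels)
      label: str  # a recognized word or a short non-speech sound

  class GameInputMapper:
      """Hypothetical helper that maps voice events to game key presses."""

      def __init__(self) -> None:
          # Short non-speech sounds serve fast, time-critical actions;
          # full spoken commands handle less time-critical controls.
          self._non_speech: Dict[str, Callable[[], None]] = {
              "ck": lambda: self._press("space"),  # e.g. jump or fire
              "ch": lambda: self._press("ctrl"),
          }
          self._speech: Dict[str, Callable[[], None]] = {
              "pause": lambda: self._press("p"),
              "inventory": lambda: self._press("i"),
          }

      def _press(self, key: str) -> None:
          # A real system would synthesize an OS-level key event here;
          # printing keeps the sketch self-contained.
          print(f"key press: {key}")

      def handle(self, event: VoiceEvent) -> None:
          table = self._non_speech if event.kind == "non_speech" else self._speech
          action = table.get(event.label)
          if action is not None:
              action()

  if __name__ == "__main__":
      mapper = GameInputMapper()
      # Simulated recognizer output (assumed for illustration).
      for ev in [VoiceEvent("non_speech", "ck"),
                 VoiceEvent("speech", "pause"),
                 VoiceEvent("non_speech", "ch")]:
          mapper.handle(ev)

In a real system the _press helper would synthesize an operating-system key event rather than print, and the non-speech table would be the low-latency path motivating the speed advantage reported above.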

Keywords

Computer games, accessible games, speech recognition, non-speech vocalization

Supplementary material

Electronic supplementary material (10,077 KB)

Copyright information

© IFIP International Federation for Information Processing 2011

Authors and Affiliations

  • Susumu Harada, IBM Research – Tokyo, Yamato-shi, Japan
  • Jacob O. Wobbrock, The Information School, DUB Group, University of Washington, Seattle, USA
  • James A. Landay, Computer Science and Engineering, DUB Group, University of Washington, Seattle, USA
