Universal Access in the Information Society, Volume 4, Issue 3, pp 237–245

Acoustic control of mouse pointer

  • Adam J. Sporka
  • Sri H. Kurniawan
  • Pavel Slavík
SHORT PAPER

Abstract

This paper describes the design and implementation of a system for controlling the mouse pointer using non-verbal sounds such as whistling and humming. Two control modes have been implemented: an orthogonal mode, in which the pointer moves at a variable speed either horizontally or vertically at any one time, and a melodic mode, in which the pointer moves at a fixed speed in any direction. A preliminary user study with four users indicates that the orthogonal mode was easier to operate and that humming was less tiring for the users than whistling. The developed system may serve as an inexpensive alternative pointing device for people with motor disabilities.
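The two control modes in the abstract can be illustrated with a minimal sketch. The paper does not specify its pitch detection or mapping parameters, so everything below (the pitch range, the log-scale mapping, and the speed constants) is an assumption chosen only to show the contrast between the modes: orthogonal control varies the speed along one axis, while melodic control keeps the speed fixed and lets pitch select the direction.

```python
import math

# Assumed parameters; the paper does not report these values.
PITCH_MIN, PITCH_MAX = 200.0, 2000.0  # usable pitch range in Hz
MAX_SPEED = 100.0                     # px/s, orthogonal mode ceiling
FIXED_SPEED = 50.0                    # px/s, melodic mode constant speed

def normalize(pitch_hz):
    """Map a detected pitch in Hz to [0, 1] on a log scale (assumed mapping)."""
    p = max(PITCH_MIN, min(PITCH_MAX, pitch_hz))
    return math.log(p / PITCH_MIN) / math.log(PITCH_MAX / PITCH_MIN)

def orthogonal_velocity(pitch_hz, axis):
    """Orthogonal mode: move along one axis at a time, at a speed
    that grows with pitch (variable speed, fixed direction)."""
    speed = normalize(pitch_hz) * MAX_SPEED
    return (speed, 0.0) if axis == "x" else (0.0, speed)

def melodic_velocity(pitch_hz):
    """Melodic mode: pitch selects the direction of motion
    (fixed speed, variable direction)."""
    angle = normalize(pitch_hz) * 2.0 * math.pi
    return (FIXED_SPEED * math.cos(angle), FIXED_SPEED * math.sin(angle))
```

In a real system these functions would be driven by a pitch tracker running on microphone input; here they only make the two mappings concrete.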

Keywords

Pointing devices · Motor disabilities · Acoustic input · Assistive technologies · Melodic interaction

Copyright information

© Springer-Verlag 2005

Authors and Affiliations

  • Adam J. Sporka (1)
  • Sri H. Kurniawan (2)
  • Pavel Slavík (1)
  1. Department of Computer Science and Engineering, Faculty of Electrical Engineering, Czech Technical University in Prague, Praha 2, Czech Republic
  2. School of Informatics, University of Manchester, Manchester, UK