Explicative Document Reading Controlled by Non-speech Audio Gestures

  • Adam J. Sporka
  • Pavel Žikovský
  • Pavel Slavík
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4188)


There are many situations in which listening to text produced by a text-to-speech system is easier or safer than reading, for example when driving a car. Technical documents, such as conference articles, manuals, etc., usually consist of relatively plain and unequivocal sentences. However, these documents often contain words and terms unknown to the listener because they are full of domain-specific terminology. In this paper, we propose a system that allows users to interrupt the reading upon hearing an unknown or confusing term by producing a non-speech acoustic gesture (e.g. “uhm?”). Upon this interruption, the system provides a definition of the term, retrieved from Wikipedia, the Free Encyclopedia. The selection of the non-speech gestures has been made with respect to cross-cultural applicability and language independence. In this paper we present a set of novel tools enabling this kind of interaction.
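The paper itself does not publish its detector on this page, but the autocorrelation-based pitch detection it cites (Rabiner) suggests how a voiced gesture such as “uhm?” might be distinguished from silence or noise: a sustained run of voiced frames with a stable fundamental frequency. The following Python sketch is an illustration only; the function names, frame sizes, and thresholds (a 0.5 voicing threshold, five consecutive voiced frames) are assumptions, not the authors' implementation.

```python
import numpy as np

def autocorr_pitch(frame, sr, fmin=80.0, fmax=400.0):
    """Estimate the pitch of one frame via autocorrelation.
    Returns a frequency in Hz, or None if the frame looks unvoiced."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:               # silent frame: no energy
        return None
    ac = ac / ac[0]              # normalize so ac[0] == 1
    lo, hi = int(sr / fmax), int(sr / fmin)   # plausible pitch-period lags
    lag = lo + int(np.argmax(ac[lo:hi]))
    # 0.5 is an assumed voicing threshold: strong periodicity only
    return sr / lag if ac[lag] > 0.5 else None

def detect_gesture(signal, sr, frame_len=1024, hop=512, min_voiced_frames=5):
    """Flag a sustained voiced segment (e.g. an 'uhm?') in the signal."""
    voiced = 0
    for start in range(0, len(signal) - frame_len, hop):
        f0 = autocorr_pitch(signal[start:start + frame_len], sr)
        voiced = voiced + 1 if f0 else 0
        if voiced >= min_voiced_frames:
            return True
    return False
```

In a complete system of the kind the abstract describes, `detect_gesture` would run continuously on the microphone input while the text-to-speech engine reads; a positive detection would pause playback and trigger the Wikipedia lookup of the most recently spoken term.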






References

  1. Microsoft Speech Application Program Interface (SAPI) Version 5.0 (retrieved March 20, 2006). Online:
  2. Project Gutenberg (retrieved March 20, 2006). Online:
  3. Hämäläinen, P., Mäki-Patola, T., Pulkki, V., Airas, M.: Musical computer games played by singing. In: Evangelista, G.T.I. (ed.) Proc. 7th Intl. Conf. on Digital Audio Effects, Naples, Italy, pp. 367–371 (2004)
  4. Igarashi, T., Hughes, J.F.: Voice as sound: using non-verbal voice input for interactive control. In: UIST 2001: Proc. 14th Annual ACM Symp. on User Interface Software and Technology, pp. 155–156. ACM Press, New York (2001)
  5. Sporka, A.J., Kurniawan, S.H., Slavík, P.: Acoustic control of mouse pointer. Universal Access in the Information Society 4(3), 237–245 (2006)
  6. Rabiner, L.R.: On the use of autocorrelation analysis for pitch detection. IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-25(1), 24–33 (1977)
  7. Sporka, A.J., Kurniawan, S.H., Slavík, P.: Non-speech operated emulation of keyboard. In: Designing Accessible Technology. Springer, London (2006)
  8. Žikovský, P., Pěšina, T., Slavík, P.: Processing of logical expressions for visually impaired users. In: Sojka, P., Kopeček, I., Pala, K. (eds.) TSD 2004. LNCS (LNAI), vol. 3206, pp. 553–560. Springer, Heidelberg (2004)
  9. Wikimedia Foundation, Inc.: Wikipedia, the free encyclopedia.

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Adam J. Sporka ¹
  • Pavel Žikovský ²
  • Pavel Slavík ¹

  1. Department of Computer Science and Engineering, Faculty of Electrical Engineering, Czech Technical University in Prague, Praha 2, Czech Republic
  2. Musical Acoustics Research Centre, Music Faculty, Academy of Performing Arts in Prague, Praha 1, Czech Republic
