Client and Speech Detection System for Intelligent Infokiosk

  • Andrey Ronzhin
  • Alexey Karpov
  • Irina Kipyatkova
  • Miloš Železný
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6231)


Attracting a client in time and detecting his or her speech in real noisy conditions are the main difficulties in deploying speech and multimodal interfaces in information kiosks. Combining sound source localization, voice activity detection, and face detection technologies makes it possible to determine the coordinates of the client's mouth and to extract the boundaries of speech signals appearing in the kiosk's dialogue area. A talking-head model based on audio-visual speech synthesis greets the client as soon as his or her face is captured in the video-monitoring area, in order to draw him or her to the information service before the client leaves the interaction area. The client's face is also tracked so that the talking head can be turned toward the client, which significantly improves the naturalness of the interaction. The developed infokiosk, installed in the institute hall, provides information about the structure and staff of its laboratories. Statistics of human-kiosk interaction were accumulated over the last six months of 2009.
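As an illustration of the sound-source-localization component mentioned above, the sketch below estimates the time delay of arrival (TDOA) between two microphone signals using the generalized cross-correlation with phase transform (GCC-PHAT), a standard technique for this task; the function name, parameters, and sampling rate are illustrative assumptions, not details of the paper's actual implementation.

```python
import numpy as np

def gcc_phat(sig, ref, fs=16000, max_tau=None):
    """Estimate the delay (in seconds) of `sig` relative to `ref` via GCC-PHAT.

    Illustrative sketch: two-channel case, single dominant source assumed.
    """
    n = sig.shape[0] + ref.shape[0]           # zero-padded FFT length
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)                    # cross-power spectrum
    R /= np.abs(R) + 1e-15                    # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)                 # generalized cross-correlation
    max_shift = n // 2
    if max_tau is not None:                   # optionally limit search to the
        max_shift = min(int(fs * max_tau), max_shift)  # physically possible lags
    # Re-center so index `max_shift` corresponds to zero lag.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs
```

With an array of known microphone geometry, the estimated delay maps to a direction of arrival, which can then be fused with face-detection results to locate the speaking client.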


Multimodal interfaces · information kiosk · sound source localization · face tracking · talking head · voice activity detection





Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Andrey Ronzhin (1)
  • Alexey Karpov (1)
  • Irina Kipyatkova (1)
  • Miloš Železný (2)

  1. St. Petersburg Institute for Informatics and Automation of RAS (SPIIRAS), St. Petersburg, Russia
  2. University of West Bohemia, Pilsen, Czech Republic
