Adding Speech to a Robotics Simulator

Conference paper

Abstract

We present a demo showing different levels of emergent verbal behaviour that arise when speech is added to a robotics simulator. After showing examples of (silent) robot activities in the simulator, adding speech output enables the robot to give spoken explanations of its behaviour. Adding speech input allows the robot's movements to be guided by voice commands. In addition, the robot can modify its own verbal behaviour when asked to talk less or more. The robotics toolkit supports different behavioural paradigms, including finite state machines. The demo shows an example state-transition-based spoken dialogue system implemented within the robotics framework. Other, more experimental combinations of speech and robot behaviours will also be shown.
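A state-transition-based dialogue manager of the kind described above can be sketched as a small finite state machine that maps (state, spoken command) pairs to (next state, spoken reply) pairs. The following is a minimal illustrative sketch in Python; the state names, commands, and replies are hypothetical examples and not the authors' actual implementation, and real speech input/output would be handled by the simulator's recognition and synthesis modules.

```python
class DialogueFSM:
    """Toy state-transition dialogue manager (illustrative sketch only)."""

    def __init__(self):
        self.state = "idle"
        self.verbose = True
        # Transition table: (current state, command) -> (next state, reply).
        self.transitions = {
            ("idle", "move forward"): ("moving", "Moving forward."),
            ("moving", "stop"): ("idle", "Stopping now."),
            ("moving", "turn left"): ("moving", "Turning left."),
        }

    def handle(self, command):
        """Apply a spoken command; return the robot's spoken reply."""
        # Meta-commands adjust verbal behaviour, as in the demo's
        # "talk less or more" feature.
        if command == "talk less":
            self.verbose = False
            return ""
        if command == "talk more":
            self.verbose = True
            return "Okay, I will explain what I am doing."

        key = (self.state, command)
        if key in self.transitions:
            self.state, reply = self.transitions[key]
            return reply if self.verbose else ""
        return "Sorry, I did not understand." if self.verbose else ""
```

In use, the same command table drives both the robot's movements (via the state changes) and its spoken explanations (via the replies), which is what allows verbal behaviour to be layered onto existing silent behaviours.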



Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

University of Helsinki, Helsinki, Finland
