Adding Speech to a Robotics Simulator
We present a demo showing different levels of emergent verbal behaviour that arise when speech is added to a robotics simulator. After examples of (silent) robot activities in the simulator, we show how adding speech output enables the robot to give spoken explanations of its behaviour, and how adding speech input allows the robot's movements to be guided by voice commands. In addition, the robot can modify its own verbal behaviour when asked to talk less or more. The robotics toolkit supports different behavioural paradigms, including finite state machines. The demo includes an example state-transition-based spoken dialogue system implemented within the robotics framework, as well as other, more experimental combinations of speech and robot behaviours.
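A state-transition-based dialogue system of the kind mentioned above can be sketched as a simple transition table mapping a (state, utterance) pair to a next state and a spoken reply. The states, commands, and replies below are illustrative assumptions, not the demo's actual implementation, and speech input/output (e.g. via a module such as pyspeech) is stubbed out with plain text:

```python
# Minimal sketch of a state-transition spoken dialogue manager.
# All state names, commands, and replies are hypothetical examples.

# Transition table: (current state, recognised command) -> (next state, reply)
TRANSITIONS = {
    ("idle", "go forward"): ("moving", "Moving forward."),
    ("moving", "stop"): ("idle", "Stopping."),
    ("idle", "talk less"): ("idle", "OK, I will talk less."),
}

def step(state, command):
    """Advance the dialogue FSM by one user utterance.

    Unrecognised commands leave the state unchanged and
    produce a fallback reply.
    """
    return TRANSITIONS.get(
        (state, command), (state, "Sorry, I did not understand.")
    )

if __name__ == "__main__":
    # Simulated dialogue: in the real demo, utterances would come from
    # speech recognition and replies would go to text-to-speech.
    state = "idle"
    for utterance in ["go forward", "stop", "dance"]:
        state, reply = step(state, utterance)
        print(f"[{state}] {reply}")
```

In a full system, each reply would be passed to the speech synthesiser, and the transition table could also be driven by the robot's own behavioural events (e.g. announcing a state change), giving the spoken explanations of behaviour described above.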