Evaluation of WikiTalk – User Studies of Human-Robot Interaction
The paper presents an evaluation of Nao WikiTalk, an application that enables a Nao robot to serve as a spoken open-domain knowledge access system. With Nao WikiTalk the robot can talk about any topic the user is interested in, using Wikipedia as its knowledge source. The robot suggests some topics to start with, and the user shifts to related topics by speaking their names after the robot mentions them. The user can also switch to an entirely new topic by spelling its first few letters. In addition to speaking, the robot uses gestures, nods and other multimodal signals to make the interaction clear and rich. The paper describes the setup of the user studies and reports the evaluation of the application, based on various factors reported by the 12 users who participated. The study compared the users' expectations of the robot interaction with their actual experience of the interaction. We found that the users were impressed by the lively appearance and natural gesturing of the robot, although in many respects they had higher expectations of the robot's presentation capabilities than the interaction met. However, the results are positive enough to encourage further research along these lines.
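The topic-navigation behaviour described above (shifting to a related topic by speaking a link name, or jumping to a new topic by spelling its first letters) can be illustrated with a minimal sketch. This is not the authors' implementation: the article data, function name, and matching rules here are hypothetical stand-ins for live Wikipedia access and speech recognition.

```python
# Hypothetical sketch of WikiTalk-style topic navigation (NOT the actual
# system): the user can shift to any topic linked from the current article,
# or spell the first letters of a new topic to jump elsewhere.

# Mock article data standing in for live Wikipedia access.
ARTICLES = {
    "Helsinki": {"links": ["Finland", "Baltic Sea"]},
    "Finland": {"links": ["Helsinki", "Sweden"]},
}

def next_topic(current, utterance):
    """Return the next topic given the user's (recognized) utterance.

    - If the utterance names a link in the current article, shift there.
    - If the user spells single letters (e.g. "f i n"), start a new
      topic whose title begins with those letters.
    - Otherwise, stay on the current topic.
    """
    for link in ARTICLES[current]["links"]:
        if utterance.strip().lower() == link.lower():
            return link
    # Spelling mode: single letters separated by spaces.
    letters = utterance.split()
    if letters and all(len(ch) == 1 and ch.isalpha() for ch in letters):
        prefix = "".join(letters).lower()
        for title in ARTICLES:
            if title.lower().startswith(prefix):
                return title
    return current

print(next_topic("Helsinki", "Finland"))  # related-topic shift
print(next_topic("Helsinki", "f i n"))    # spelled new topic
print(next_topic("Helsinki", "weather"))  # unrecognized: stay on topic
```

In the real system the link names would come from the Wikipedia article being read aloud, and the utterance from the robot's speech recognizer; the sketch only shows the decision logic between the two navigation modes.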
Keywords: Evaluation, multimodal human-robot interaction, gesturing, Wikipedia