Smart Gesture Selection with Word Embeddings Applied to NAO Robot

  • Mario Almagro-Cádiz
  • Víctor Fresno
  • Félix de la Paz López
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10338)

Abstract

The field of Human-Robot Interaction (HRI) is growing by the day, as evidenced by the increasing number of projects and by the application of ever more advanced techniques drawn from different areas of knowledge and from multi-disciplinary approaches. In a future where technology automatically controls services such as health care, pedagogy or construction, social interfaces will be one of the necessary pillars of HRI. In this context, gesture plays an important role in the transmission of information and is one of the fundamental mechanisms of human-robot interaction. This work proposes a new methodology for gestural annotation of free text through a semantic similarity analysis using distributed representations based on word embeddings, with the aim of endowing the NAO robot with an intelligent mechanism for gesture allocation.
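
The core mechanism described above (scoring candidate gestures by the semantic similarity, in embedding space, between an utterance and the words that characterise each gesture) can be illustrated with a short sketch. The snippet below is not the authors' implementation: the embedding file name, the gesture lexicon, the naive whitespace tokenisation, and the vector-averaging strategy are assumptions made for illustration. Pre-trained word2vec or GloVe vectors are loaded through gensim's KeyedVectors.

```python
import numpy as np
from gensim.models import KeyedVectors

# Pre-trained distributed representations (word2vec/GloVe in word2vec
# format); the file name is a placeholder, not from the paper.
vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

# Hypothetical gesture lexicon: each NAO gesture is characterised by a
# handful of descriptive words. The real inventory is robot-specific.
GESTURES = {
    "wave": ["hello", "goodbye", "greeting"],
    "point": ["there", "that", "look"],
    "shrug": ["maybe", "unsure", "unknown"],
}

def embed(words):
    """Average the vectors of in-vocabulary words (zeros if none match)."""
    vecs = [vectors[w] for w in words if w in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(vectors.vector_size)

def cosine(a, b):
    """Cosine similarity, guarding against zero-norm vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def select_gesture(utterance):
    """Return the gesture whose lexicon is semantically closest to the text."""
    u = embed(utterance.lower().split())
    return max(GESTURES, key=lambda g: cosine(u, embed(GESTURES[g])))

print(select_gesture("hello everyone nice to see you"))  # e.g. -> wave
```

In a full pipeline on the robot, the selected label would then be mapped to one of NAO's animation behaviours, for instance via NAOqi's ALAnimatedSpeech annotations, so that the gesture plays in synchrony with the spoken text.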

Keywords

Word embeddings · Co-verbal gesture · HRI · NAO robot

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Mario Almagro-Cádiz (1)
  • Víctor Fresno (1)
  • Félix de la Paz López (2)

  1. Departamento de Lenguajes y Sistemas Informáticos, Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain
  2. Departamento de Inteligencia Artificial, Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain
