Volume 2, Issue 2, pp 82–93

Towards an articulation-based developmental robotics approach for word processing in face-to-face communication

  • Bernd J. Kröger
  • Peter Birkholz
  • Christiane Neuschaefer-Rube
Review Article


While the shape of humanoid robots, e.g. face and arms, can be modeled in a nearly natural, humanlike way, it is much more difficult to generate humanlike facial and body movements and humanlike behavior such as speaking and co-speech gesturing. This paper argues for a developmental robotics approach to learning to speak. On the basis of the current literature, a blueprint of a brain model for this kind of robot is outlined, and preliminary scenarios for knowledge acquisition are described. Furthermore, it is illustrated that natural speech acquisition results mainly from learning during face-to-face communication, and it is argued that learning to speak should therefore be based on human-robot face-to-face communication, in which the human acts as a caretaker or teacher and the robot as a speech-acquiring toddler. This is a fruitful basic scenario not only for learning to speak, but also for learning to communicate in general, including producing co-verbal manual gestures and co-verbal facial expressions.
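The caretaker/toddler scenario described above can be made concrete with a toy sketch: a robot that associates heard word forms with co-present objects and strengthens or weakens those associations through the caretaker's social feedback. This is a minimal illustration of the general idea only, not the brain model proposed in the paper; all class and method names are hypothetical.

```python
class ToddlerRobot:
    """Toy associative word learner: links heard word forms to visible
    objects and adjusts the links based on caretaker feedback.
    Illustrative sketch only; not the architecture proposed here."""

    def __init__(self):
        # association strengths: (word, object) -> weight
        self.weights = {}

    def perceive(self, word, obj):
        # co-occurrence of a heard word and a seen object
        # creates or strengthens an associative link (Hebbian-style)
        self.weights[(word, obj)] = self.weights.get((word, obj), 0.0) + 1.0

    def caretaker_feedback(self, word, obj, approved):
        # social feedback from the human tutor scales the association
        key = (word, obj)
        if key in self.weights:
            self.weights[key] *= 1.5 if approved else 0.5

    def name_object(self, obj):
        # produce the word most strongly associated with the object
        candidates = {w: s for (w, o), s in self.weights.items() if o == obj}
        return max(candidates, key=candidates.get) if candidates else None
```

In a face-to-face session, repeated joint attention to an object while the caretaker names it drives `perceive`, and the caretaker's approval or correction of the robot's own productions drives `caretaker_feedback`; the design choice of multiplicative feedback mirrors the idea that social reinforcement modulates, rather than creates, sensorimotor associations.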


Keywords: developmental robotics · humanoid robotics · conversational agents · face-to-face communication · speech · speech acquisition · speech production · speech perception





Copyright information

© Versita Warsaw and Springer-Verlag Wien 2011

Authors and Affiliations

  • Bernd J. Kröger (1), email author
  • Peter Birkholz (1)
  • Christiane Neuschaefer-Rube (1)
  1. Department of Phoniatrics, Pedaudiology, and Communication Disorders, RWTH Aachen University, Aachen, Germany
