Volume 2, Issue 2, pp 82-93
Date: 23 Jun 2011

Towards an articulation-based developmental robotics approach for word processing in face-to-face communication



While we are capable of modeling the shape of humanoid robots (face, arms, etc.) in a nearly natural, humanlike way, it is much more difficult to generate humanlike facial or body movements and humanlike behavior such as speaking and co-speech gesturing. In this paper, a developmental robotics approach to learning to speak is argued for. On the basis of the current literature, a blueprint of a brain model for this kind of robot is outlined, and preliminary scenarios for knowledge acquisition are described. Furthermore, it is illustrated that natural speech acquisition results mainly from learning during face-to-face communication, and it is argued that a robot's learning to speak should likewise be based on human-robot face-to-face communication, in which the human acts as a caretaker or teacher and the robot as a speech-acquiring toddler. This is a fruitful basic scenario not only for learning to speak, but also for learning to communicate in general, including producing co-verbal manual gestures and co-verbal facial expressions.