Goal Understanding and Self-generating Will for Autonomous Humanoid Robots

  • P. Nauth
Part of the Advances in Intelligent and Soft Computing book series (AINSC, volume 99)


Abstract

An intelligent robot has been developed that understands the goal a user wants met, recognizes its environment, develops strategies to achieve the goal, and operates autonomously. By means of a speech recognition sensor, the robot listens to a command spoken by a user and derives the goal, i.e. the task the user wants the robot to perform, such as bringing a specific object. Next, the robot uses its smart camera and other sensors to scan the environment and search for the demanded object. After it has found and identified the object, it grasps it and brings it to the user. Additionally, a method for generating a will is proposed, which enables the robot to operate optimally even under conflicting requirements.
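The paper itself gives no code; the following is a minimal illustrative sketch (all function names and the keyword-based parsing rule are hypothetical assumptions, not the authors' method) of the command-to-goal-to-action pipeline the abstract describes: derive a goal from a recognized utterance, scan the environment for the target, then fetch it.

```python
# Hypothetical sketch of the pipeline in the abstract: speech command -> goal,
# sensor scan -> object search, then grasp and deliver. Names are illustrative.

def derive_goal(command: str) -> dict:
    """Map a recognized utterance to a task and a target object.

    Simplifying assumption: a 'bring' command names its object last.
    """
    words = command.lower().split()
    if "bring" in words:
        return {"task": "fetch", "object": words[-1]}
    raise ValueError(f"unknown command: {command}")

def search_environment(target: str, detections: list[str]) -> bool:
    """Stand-in for the smart-camera scan: check current detections."""
    return target in detections

def execute(command: str, detections: list[str]) -> str:
    """Run the sense-plan-act cycle once and report the chosen action."""
    goal = derive_goal(command)
    if goal["task"] == "fetch" and search_environment(goal["object"], detections):
        return f"grasp {goal['object']} and deliver to user"
    return f"{goal['object']} not found; continue scanning"

print(execute("bring the bottle", ["cup", "bottle", "book"]))
# → grasp bottle and deliver to user
```

In the actual system the parsing and detection steps are performed by dedicated speech and vision sensors; the sketch only shows how the derived goal drives the robot's subsequent behavior.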


Keywords: Humanoid Robot · Sensor Fusion · Word Number · Proximity Sensor · Soccer Robot





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • P. Nauth, Department of Engineering and Computer Sciences, University of Applied Sciences, Frankfurt, Germany
