To Learn or To Be Taught? Design Issues Towards Cognitive Robotics

  • Minoru Asada
  • Koh Hosoda
Conference paper


This paper discusses teaching methods as one form of external learning structure for cognitive robotics, through three topics. The first deals with the trade-off between self-learning and teaching, coping with the cross perceptual aliasing problem caused by the difference between the learner's and the teacher's state spaces. The second argues for teaching-by-showing methods for exact motion from the viewpoint of the internal observer, with less a priori knowledge from the external observer's viewpoint, such as global positioning or the kinematic parameters of the robot's own body. The third discusses the internal structure needed to cope with fewer instructions, that is, teaching by showing only the visual target. Finally, we summarize these issues from the viewpoint of the internal observer toward cognitive robotics.
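The trade-off between self-learning and teaching mentioned above can be illustrated with a toy sketch (not from the paper): a tabular Q-learner in a one-dimensional corridor, where an optional teacher demonstrates the optimal action for the first few episodes before the learner continues on its own. The corridor world, reward scheme, parameters, and always-right teacher policy are all illustrative assumptions, not the authors' setup.

```python
import random

# Minimal 1-D corridor world: states 0..4, goal at state 4.
# Actions: 0 = move left, 1 = move right. Reward 1 only on reaching the goal.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Deterministic transition, clamped to the corridor ends."""
    next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def q_learning(episodes, teacher_episodes=0, seed=0):
    """Q-learning where the first `teacher_episodes` are demonstrated."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for ep in range(episodes):
        state, done = 0, False
        while not done:
            if ep < teacher_episodes:
                action = 1  # teaching: the demonstrator always moves right
            elif rng.random() < EPSILON:
                action = rng.randrange(2)  # self-learning: occasional exploration
            else:
                action = max((0, 1), key=lambda a: q[state][a])  # greedy choice
            next_state, reward, done = step(state, action)
            # Standard one-step Q-learning update.
            q[state][action] += ALPHA * (
                reward + GAMMA * max(q[next_state]) - q[state][action]
            )
            state = next_state
    return q

q = q_learning(episodes=50, teacher_episodes=10)
greedy = [max((0, 1), key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
print(greedy)  # greedy policy per non-goal state; right (1) leads to the goal
```

With `teacher_episodes=0` the learner must stumble onto the goal by exploration alone; seeding the table with a few demonstrated episodes propagates value along the demonstrated path much sooner, which is the essence of the trade-off the paper examines.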


Keywords: Reinforcement Learning, Humanoid Robot, Real Robot, Epipolar Line, Movement Primitive





Copyright information

© Springer-Verlag London 2000

Authors and Affiliations

  • Minoru Asada (1)
  • Koh Hosoda (1)
  1. Adaptive Machine Systems, Osaka University, Suita, Osaka, Japan
