Emergent Reasoning from Coordination of Perception and Action: An Example Taken from Robotics

  • Darío Maravall
  • Javier de Lope
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2809)

Abstract

The paper presents a manipulator arm that acquires primitive reasoning abilities purely from the coordination of perception and action. First, the problem of dynamic collision avoidance is considered as a test-bed for the autonomous coordination of perception and action. The paper introduces a biomimetic approach that departs from the conventional analytical approach, as it neither employs formal descriptions of the locations and shapes of the obstacles nor solves the kinematic equations of the robotic arm. Instead, the method follows the perception-reason-action cycle and is based on a reinforcement learning process guided by perceptual feedback. From this perspective, obstacle avoidance is modeled as a multi-objective optimization process. The paper also investigates the possibility for the robot to acquire a very simple reasoning ability by means of if-then-else rules that transcend its previous reactive behaviors based on the pure interaction between perception and action.
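
A minimal illustrative sketch (not the authors' implementation): the abstract describes the perception-reason-action cycle, the multi-objective formulation of obstacle avoidance, and the if-then-else rule layer only in prose, so the Python fragment below shows one way such a scheme could look. The 2-link planar arm, the link lengths, the cost weights, the proximity threshold, and the greedy one-step search (standing in for the paper's reinforcement learning process) are all assumptions introduced here for illustration.

    import numpy as np

    # Hypothetical 2-link planar arm; link lengths are illustrative assumptions.
    L1, L2 = 1.0, 0.8

    def forward(q):
        """End-effector position for joint angles q = (q1, q2), standard 2-link kinematics."""
        x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
        y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
        return np.array([x, y])

    def perceived_cost(q, target, obstacle, w_goal=1.0, w_obst=2.0):
        """Multi-objective index built only from perceived quantities:
        distance to the target plus a penalty that grows near the obstacle."""
        p = forward(q)
        goal_term = np.linalg.norm(p - target)
        obst_term = 1.0 / (np.linalg.norm(p - obstacle) + 1e-3)
        return w_goal * goal_term + w_obst * obst_term

    def step(q, target, obstacle, w_obst=2.0, dq=0.02):
        """One perception-reason-action cycle: try small joint increments and keep
        the one with the lowest perceived cost (a greedy stand-in for the paper's RL)."""
        moves = [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]
        candidates = [q + dq * np.array(m) for m in moves]
        return min(candidates,
                   key=lambda c: perceived_cost(c, target, obstacle, w_obst=w_obst))

    def act(q, target, obstacle):
        """A primitive if-then-else rule layered on top of the reactive behavior:
        when the arm gets too close to the obstacle, avoidance dominates the cost."""
        if np.linalg.norm(forward(q) - obstacle) < 0.3:   # threshold is an assumption
            return step(q, target, obstacle, w_obst=20.0)
        else:
            return step(q, target, obstacle, w_obst=2.0)

    # Usage: drive the arm toward a target while skirting a point obstacle.
    q = np.array([0.3, 0.5])
    target, obstacle = np.array([1.2, 0.8]), np.array([0.9, 0.4])
    for _ in range(300):
        q = act(q, target, obstacle)
    print("final end-effector position:", forward(q))

The if-then-else layer is what distinguishes the sketch from a purely reactive controller: the same multi-objective cost is reused, but the rule reweights it depending on the perceived situation.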


Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Darío Maravall ¹
  • Javier de Lope ¹

  1. Department of Artificial Intelligence, Faculty of Computer Science, Universidad Politécnica de Madrid, Madrid, Spain
