ART-Based Fusion of Multi-modal Information for Mobile Robots

  • Elmar Berghöfer
  • Denis Schulze
  • Marko Tscherepanow
  • Sven Wachsmuth
Conference paper

DOI: 10.1007/978-3-642-23957-1_1

Volume 363 of the book series IFIP Advances in Information and Communication Technology (IFIPAICT)
Cite this paper as:
Berghöfer E., Schulze D., Tscherepanow M., Wachsmuth S. (2011) ART-Based Fusion of Multi-modal Information for Mobile Robots. In: Iliadis L., Jayne C. (eds) Engineering Applications of Neural Networks. IFIP Advances in Information and Communication Technology, vol 363. Springer, Berlin, Heidelberg

Abstract

Robots operating in complex environments shared with humans are confronted with numerous problems. One important problem is the identification of obstacles and interaction partners. To achieve this, it can be beneficial to use data from multiple available sensor sources, which need to be processed and combined appropriately. Furthermore, such environments are not static; the robot therefore needs to be able to learn novel objects. In this paper, we propose a method for learning and identifying obstacles based on multi-modal information. As this approach is based on Adaptive Resonance Theory (ART) networks, it is inherently capable of incremental online learning.
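The paper itself contains no code, but the learning mechanism it builds on can be illustrated with a minimal Fuzzy ART sketch. The snippet below is a generic, simplified rendering of Fuzzy ART category choice, vigilance testing, and weight update; it is not the authors' fusion architecture, and the class name FuzzyART as well as the parameter values (rho, alpha, beta) are illustrative assumptions.

    import numpy as np

    # Minimal, generic Fuzzy ART sketch (illustrative; not the paper's exact model).
    # rho: vigilance, alpha: choice parameter, beta: learning rate (assumed values).
    class FuzzyART:
        def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
            self.rho, self.alpha, self.beta = rho, alpha, beta
            self.weights = []  # one weight vector per learned category

        @staticmethod
        def _complement_code(x):
            x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
            return np.concatenate([x, 1.0 - x])  # complement coding [x, 1 - x]

        def train(self, x):
            """Present one sample; return the index of the resonating category."""
            I = self._complement_code(x)
            # Choice function T_j = |I ^ w_j| / (alpha + |w_j|), evaluated best-first.
            scores = [np.minimum(I, w).sum() / (self.alpha + w.sum())
                      for w in self.weights]
            for j in np.argsort(scores)[::-1]:
                match = np.minimum(I, self.weights[j]).sum() / I.sum()
                if match >= self.rho:  # vigilance test passed: resonance
                    self.weights[j] = (self.beta * np.minimum(I, self.weights[j])
                                       + (1.0 - self.beta) * self.weights[j])
                    return j
            self.weights.append(I.copy())  # no resonance: create a new category
            return len(self.weights) - 1

In a multi-modal setting such as the one described in the abstract, feature vectors from different sensors could, for example, be normalized to [0, 1] and concatenated before being presented to train(), so that new obstacle categories are added online without retraining existing ones.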

Keywords

sensor data fusion, incremental learning, Adaptive Resonance Theory

Copyright information

© International Federation for Information Processing 2011

Authors and Affiliations

  • Elmar Berghöfer (1)
  • Denis Schulze (1, 2)
  • Marko Tscherepanow (1)
  • Sven Wachsmuth (1, 2)
  1. Applied Informatics, Faculty of Technology, Bielefeld University, Bielefeld, Germany
  2. CITEC, Cognitive Interaction Technology Center of Excellence, Bielefeld University, Bielefeld, Germany