Abstract
We present a visual robot whose neural controller develops a realistic perception of affordances. The controller builds on known insect-brain principles, in particular the time-stabilized sparse-code communication between the Antennal Lobe and the Mushroom Body. The robot perceives the world through a webcam and Canny edge-detection routines from OpenCV. Self-controlled neural agents process this massive raw data stream and produce a time-stabilized sparse version in which implicit space-time information is encoded. The preprocessed information is relayed to a population of neural agents specialized in cognitive activities and trained under self-critical, isolated conditions. Isolation induces an emergent behavior that makes invariant visual recognition of objects possible. This latter capacity is assembled into cognitive strings that incorporate time-elapsed activation of learning resources. By using this assembled capacity over an extended learning period, the robot finally achieves perception of affordances. The system has been tested in real time with real-world elements.
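The abstract's core mechanism, converting dense sensory input into a time-stabilized sparse code, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a k-winners-take-all rule for sparsification (loosely analogous to the Antennal Lobe to Mushroom Body projection) and leaky temporal integration for the stabilization step. The function names `sparsify` and `stabilize` and the parameter `alpha` are hypothetical.

```python
def sparsify(dense, k):
    """Keep only the k largest activations (sparse code); zero the rest.

    A simple k-winners-take-all rule: an assumed stand-in for the
    sparse coding the paper attributes to the Mushroom Body.
    """
    if k <= 0:
        return [0.0] * len(dense)
    threshold = sorted(dense, reverse=True)[k - 1]
    return [x if x >= threshold else 0.0 for x in dense]


def stabilize(prev, current, alpha=0.8):
    """Leaky temporal integration: smooth the sparse code across frames,
    so the representation carries implicit time information."""
    return [alpha * p + (1 - alpha) * c for p, c in zip(prev, current)]


# Example: an 8-unit dense activation vector sparsified to 2 winners,
# then blended with the (initially zero) previous sparse state.
dense = [0.1, 0.9, 0.2, 0.7, 0.05, 0.3, 0.0, 0.4]
code = sparsify(dense, k=2)            # only 0.9 and 0.7 survive
state = stabilize([0.0] * 8, code)     # smoothed sparse state
```

In a real pipeline the `dense` vector would come from the Canny edge map of each webcam frame, and `state` would be what the downstream cognitive agents consume.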
© 2015 Springer International Publishing Switzerland
Cite this paper
Chang, O. (2015). A Bio-Inspired Robot with Visual Perception of Affordances. In: Agapito, L., Bronstein, M., Rother, C. (eds) Computer Vision - ECCV 2014 Workshops. ECCV 2014. Lecture Notes in Computer Science(), vol 8926. Springer, Cham. https://doi.org/10.1007/978-3-319-16181-5_31
DOI: https://doi.org/10.1007/978-3-319-16181-5_31
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-16180-8
Online ISBN: 978-3-319-16181-5