Definition
Affordances are opportunities for action that are directly perceivable in an organism’s environment, without recourse to higher-level cognitive functions. Action recognition is the mapping of an observed action onto an internal motor or semantic representation.
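The definition of action recognition as a mapping from observation to internal representation can be sketched as simple template matching. The following is an illustrative example only, not an implementation from this entry; the feature vectors, action names, and distance measure are all hypothetical.

```python
# Illustrative sketch: action recognition as mapping an observed feature
# trajectory onto the closest stored internal representation (a "motor
# template"). All features and templates here are hypothetical.
from math import dist

# Hypothetical motor/semantic templates: each action is a short
# trajectory of 2-D feature vectors (e.g., hand aperture, wrist speed).
TEMPLATES = {
    "grasp": [(0.9, 0.1), (0.5, 0.4), (0.1, 0.0)],
    "wave":  [(0.2, 0.8), (0.8, 0.8), (0.2, 0.8)],
}

def trajectory_distance(obs, template):
    """Sum of pointwise Euclidean distances (assumes equal lengths)."""
    return sum(dist(a, b) for a, b in zip(obs, template))

def recognize(obs):
    """Map an observed trajectory onto the best-matching template."""
    return min(TEMPLATES, key=lambda name: trajectory_distance(obs, TEMPLATES[name]))

observed = [(0.85, 0.15), (0.45, 0.35), (0.15, 0.05)]
print(recognize(observed))  # nearest to the "grasp" template
```

In practice the mapping is learned rather than hand-coded, e.g., with hidden Markov models over image sequences as in Yamato et al. [9], but the underlying idea of matching observations to internal representations is the same.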
Background
Affordances are defined by Gibson [1] as opportunities for action that are directly perceivable without the need for higher-level cognitive functions such as object recognition. The concept has generated significant interest in the computer vision and robotics communities. More recently, links between affordances and action recognition have been explored, suggesting that the two may share common mechanisms.
Affordances. In robotics, early use of the term affordances dealt with the extraction of features from the visual environment that signal the...
References
Gibson JJ (1966) The senses considered as perceptual systems. Houghton Mifflin, Boston
Fagg AH, Arbib MA (1998) Modeling parietal-premotor interactions in primate control of grasping. Neural Netw 11(7–8):1277–1303
Paletta L, Fritz G, Kintzler F, Irran J, Dorffner G (2007) Learning to perceive affordances in a framework of developmental embodied cognition. In: IEEE international conference on development and learning, London, pp 110–115
Arbib MA (1997) From visual affordances in monkey parietal cortex to hippocampo-parietal interactions underlying rat navigation. Phil Trans R Soc Lond B 352(1360):1429–1436
Sahin E, Cakmak M, Dogar MR, Ugur E, Ucoluk G (2007) To afford or not to afford: a new formalization of affordances toward affordance-based robot control. Adapt Behav 15(4):447–472
Oztop E, Arbib MA (2002) Schema design and implementation of the grasp-related mirror neuron system. Biol Cybern 87(2):116–140
Metta G, Sandini G, Natale L, Craighero L, Fadiga L (2006) Understanding mirror neurons: a bio-robotic approach. Interact Stud 7(2):197–232
Bonaiuto J, Rosta E, Arbib MA (2007) Extending the mirror neuron system model, I: audible actions and invisible grasps. Biol Cybern 96:9–38
Yamato J, Ohya J, Ishii K (1992) Recognizing human action in time-sequential images using hidden Markov model. In: Proceedings of computer vision and pattern recognition (CVPR), Champaign, IL, pp 379–385
Bobick AF, Ivanov YA (1998) Action recognition using probabilistic parsing. In: Proceedings of computer vision and pattern recognition (CVPR), Santa Barbara, pp 196–202
Gupta A, Davis LS (2007) Objects in action: an approach for combining action understanding and object perception. In: Proceedings of computer vision and pattern recognition (CVPR), Minneapolis, pp 1–8
Oztop E, Wolpert D, Kawato M (2005) Mental state inference using visual control parameters. Cogn Brain Res 22:129–151
Lopes M, Melo FS, Montesano L (2007) Affordance-based imitation learning in robots. In: IEEE/RSJ international conference on intelligent robots and systems, San Diego, CA, pp 1015–1021
Moore DJ, Essa IA, Hayes MH (1999) Exploiting human actions and object context for recognition tasks. In: Proceedings of international conference on computer vision (ICCV), vol 1, pp 80–86
Kjellstrom H, Romero J, Kragic D (2011) Visual object-action recognition: inferring object affordances from human demonstration. Comput Vis Image Underst 115(1):81–90
© 2014 Springer Science+Business Media New York
Cite this entry
Bonaiuto, J. (2014). Affordances and Action Recognition. In: Ikeuchi, K. (eds) Computer Vision. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-31439-6_772
Print ISBN: 978-0-387-30771-8
Online ISBN: 978-0-387-31439-6