Computer Vision

2014 Edition
Editors: Katsushi Ikeuchi

Affordances and Action Recognition

  • James Bonaiuto
Reference work entry


Affordances are opportunities for action that are directly perceivable in an organism’s environment without higher-level cognitive functions. Action recognition is the result of mapping an observed action onto an internal motor or semantic representation.
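The second definition can be illustrated concretely: recognizing an action amounts to finding the stored internal representation that best matches what is observed. A minimal sketch, assuming hypothetical stored "motor templates" (the grasp names and trajectory values below are illustrative, not from the entry):

```python
import numpy as np

# Hypothetical internal motor representations: mean hand-aperture
# trajectories for two grasp types (illustrative values).
templates = {
    "precision_grasp": np.linspace(0.0, 0.4, 10),
    "power_grasp": np.linspace(0.0, 1.2, 10),
}

def recognize_action(observed, templates):
    """Map an observed trajectory onto the closest internal representation."""
    return min(templates, key=lambda name: np.linalg.norm(observed - templates[name]))

observed = np.linspace(0.0, 1.1, 10)  # an observed aperture trajectory
print(recognize_action(observed, templates))  # power_grasp
```

Template matching is only the simplest instance of such a mapping; the models cited below use richer mechanisms (e.g., hidden Markov models [9] or probabilistic parsing [10]) over the same basic idea.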


Affordances are defined by Gibson [1] as opportunities for action that are directly perceivable without the need for higher-level cognitive functions such as object recognition. The concept of affordances for action has generated significant interest in the computer vision and robotics communities. More recently, links between this concept and that of action recognition have been explored, suggesting that the two may share common mechanisms.
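Gibson's point is that an affordance can be read directly off perceived features, without first recognizing the object. A minimal sketch of this idea (the feature, gripper span, and threshold are assumptions for illustration, not part of the entry):

```python
# Illustrative: a grasp affordance signaled directly by a perceived
# width feature, with no object recognition step involved.

GRIPPER_SPAN = 0.08  # maximum hand opening in meters (assumed value)

def graspable(object_width):
    """Signal a grasp affordance directly from a perceived width feature."""
    return 0 < object_width <= GRIPPER_SPAN

print(graspable(0.05))  # True: affords grasping
print(graspable(0.20))  # False: too wide for the gripper
```

The robotic systems cited below replace this hand-set threshold with mappings learned from experience, but the structure — perceived features in, action opportunity out — is the same.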

Affordances. In robotics, early use of the term affordances dealt with the extraction of features from the visual environment that signal...



References

  1. Gibson JJ (1966) The senses considered as perceptual systems. Houghton Mifflin, Boston
  2. Fagg AH, Arbib MA (1998) Modeling parietal-premotor interactions in primate control of grasping. Neural Netw 11(7–8):1277–1303
  3. Paletta L, Fritz G, Kintzler F, Irran J, Dorffner G (2007) Learning to perceive affordances in a framework of developmental embodied cognition. In: IEEE international conference on development and learning, London, pp 110–115
  4. Arbib MA (1997) From visual affordances in monkey parietal cortex to hippocampo-parietal interactions underlying rat navigation. Phil Trans R Soc Lond B 352(1360):1429–1436
  5. Sahin E, Cakmak M, Dogar MR, Ugur E, Ucoluk G (2007) To afford or not to afford: a new formalization of affordances toward affordance-based robot control. Adapt Behav 15(4):447–472
  6. Oztop E, Arbib MA (2002) Schema design and implementation of the grasp-related mirror neuron system. Biol Cybern 87(2):116–140
  7. Metta G, Sandini G, Natale L, Craighero L, Fadiga L (2006) Understanding mirror neurons: a bio-robotic approach. Interact Stud 7(2):197–232
  8. Bonaiuto J, Rosta E, Arbib MA (2007) Extending the mirror neuron system model, I: audible actions and invisible grasps. Biol Cybern 96:9–38
  9. Yamato J, Ohya J, Ishii K (1992) Recognizing human action in time-sequential images using hidden Markov model. In: Proceedings of computer vision and pattern recognition (CVPR), Champaign, pp 379–385
  10. Bobick AF, Ivanov YA (1998) Action recognition using probabilistic parsing. In: Proceedings of computer vision and pattern recognition (CVPR), Santa Barbara, pp 196–202
  11. Gupta A, Davis LS (2007) Objects in action: an approach for combining action understanding and object perception. In: Proceedings of computer vision and pattern recognition (CVPR), Minneapolis, pp 1–8
  12. Oztop E, Wolpert D, Kawato M (2005) Mental state inference using visual control parameters. Cogn Brain Res 22:129–151
  13. Lopes M, Melo FS, Montesano L (2007) Affordance-based imitation learning in robots. In: IEEE/RSJ international conference on intelligent robots and systems, San Diego, pp 1015–1021
  14. Moore DJ, Essa IA, Hayes MH (1999) Exploiting human actions and object context for recognition tasks. In: Proceedings of the international conference on computer vision (ICCV) 1:80–86
  15. Kjellström H, Romero J, Kragic D (2011) Visual object-action recognition: inferring object affordances from human demonstration. Comput Vis Image Underst 115(1):81–90

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • James Bonaiuto
  1. California Institute of Technology, Pasadena, USA