Optimised Computational Visual Attention Model for Robotic Cognition

  • J. Amudha
  • Ravi Kiran Chadalawada
  • V. Subashini
  • B. Barath Kumar
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 182)


The goal of research in computer vision is to impart and improve visual intelligence in a machine, i.e., to enable a machine to see, perceive, and respond in a human-like fashion (though with reduced complexity) using multitudinal sensors and actuators. The major challenge with such machines lies in making them perceive and learn from the huge amount of visual information received through their sensors. Mimicking human-like visual perception is an area of research that attracts many researchers. To achieve this complex task of visual perception and learning, a visual attention model is developed. A visual attention model enables a robot to selectively (and autonomously) choose a "behaviourally relevant" segment of visual information for further processing while relatively excluding the others (Visual Attention for Robotic Cognition: A Survey, March 2011) [9]. The aim of this paper is to suggest an improved visual attention model with reduced complexity for determining the potential region of interest in a scene.
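The selection mechanism the abstract describes can be illustrated with a minimal sketch, not taken from the paper itself: a toy center-surround contrast map in the spirit of Itti–Koch saliency, where the most "behaviourally relevant" location is simply the pixel that differs most from its neighbourhood. All function names and parameters here are hypothetical.

```python
# Minimal illustrative sketch of saliency-based region selection.
# Names (saliency_map, most_salient) are hypothetical, not from the paper.

def saliency_map(img, radius=1):
    """Center-surround contrast: |pixel - mean of its neighbourhood|."""
    h, w = len(img), len(img[0])
    sal = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [img[j][i]
                     for j in range(max(0, y - radius), min(h, y + radius + 1))
                     for i in range(max(0, x - radius), min(w, x + radius + 1))
                     if (j, i) != (y, x)]
            sal[y][x] = abs(img[y][x] - sum(neigh) / len(neigh))
    return sal

def most_salient(img):
    """Return (row, col) with peak saliency -- the attended location."""
    sal = saliency_map(img)
    return max(((y, x) for y in range(len(sal)) for x in range(len(sal[0]))),
               key=lambda p: sal[p[0]][p[1]])

# A uniform scene with one bright "target" pops out:
scene = [[10] * 5 for _ in range(5)]
scene[2][3] = 200
print(most_salient(scene))  # prints (2, 3): the bright target wins
```

A full attention model would compute such contrast maps over several feature channels (intensity, colour, orientation) and at multiple scales before combining them; this sketch keeps only the core idea of relative exclusion of non-salient regions.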


Keywords: Visual Search · Visual Attention · Target Object · Salient Region · Decision Tree Classifier
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.




References

  1. Amudha, J., Soman, K.P., Kiran, Y.: Feature Selection in Top-Down Visual Attention Model using WEKA. International Journal of Computer Applications 24(4), 38–43 (2011)
  2. Treisman, A.M., Gelade, G.: A feature-integration theory of attention. Cognitive Psychology 12, 97–136 (1980)
  3. Koch, C., Ullman, S.: Shifts in selective visual attention: Toward the underlying neural circuitry. Human Neurobiology 4, 219–227 (1985)
  4. Logan, G.D.: The CODE theory of visual attention: An integration of space-based and object-based attention. Psychological Review 103, 603–649 (1996)
  5. Wolfe, J.M., Cave, K., Franzel, S.: Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance 15, 419–433 (1989)
  6. Itti, L., Koch, C.: A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research 40, 1489–1506 (2000)
  7. Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(11), 1254–1259 (1998)
  8. Posner, M.I., Cohen, Y.: Components of visual orienting, pp. 531–556. Erlbaum, Hillsdale (1984)
  9. Begum, M., Karray, F.: Visual Attention for Robotic Cognition: A Survey. IEEE Transactions on Autonomous Mental Development 3(1) (March 2011)
  10. Frintrop, S.: VOCUS: A Visual Attention System for Object Detection and Goal-Directed Search. LNCS (LNAI), vol. 3899. Springer, Heidelberg (2006)
  11. Frintrop, S., Rome, E., Christensen, H.I.: Computational visual attention systems and their cognitive foundations: A survey. ACM Transactions on Applied Perception 7(1) (2010)
  12. Navalpakkam, V., Itti, L.: Top-down attention selection is fine grained. Journal of Vision 6, 1180–1193 (2006)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • J. Amudha (1)
  • Ravi Kiran Chadalawada (1)
  • V. Subashini (1)
  • B. Barath Kumar (1)
  1. Amrita School of Engineering, Bangalore, India
