Optimizing a Conspicuous Point Detector for Camera Trajectory Estimation with Brain Programming

  • Daniel E. Hernández
  • Gustavo Olague
  • Eddie Clemente
  • León Dozal
Part of the Studies in Computational Intelligence book series (SCI, volume 500)


The interaction between a visual system and its environment is an important research topic in purposive vision, which seeks to establish a link between perception and action. When a robotic system implements vision as its main source of information about the environment, it must be selective with the perceived data. To fulfill the task at hand, we must contrive a way of extracting data from the images that helps achieve the system's goal; this selective process is what we call a visual behavior. In this paper, we present an automatic process for synthesizing visual behaviors through genetic programming, resulting in specialized conspicuous point detection algorithms used to estimate the trajectory of a camera within a simultaneous localization and map building (SLAM) system. We present a real working system; the experiments were carried out with a robotic manipulator in a hand-eye configuration. The main idea of our work is to evolve a conspicuous point detector based on the concept of an artificial dorsal stream. We show experimentally that it is indeed possible to find conspicuous points in an image through a visual attention process, and that such points can be generated purposefully through an evolutionary algorithm aimed at solving a specific task.
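The pipeline described above can be sketched in miniature. The following is an illustrative toy, not the authors' actual operator set: a genetic-programming individual is represented as an expression tree over simple per-pixel features (intensity and forward-difference gradients, both hypothetical terminal choices here), the tree is evaluated at every pixel to produce a conspicuity map, and strict local maxima of that map become the candidate points a tracker could follow.

```python
import random

# Hypothetical GP primitives: binary functions and per-pixel terminals.
FUNCS = {"add": lambda a, b: a + b, "sub": lambda a, b: a - b,
         "mul": lambda a, b: a * b, "absd": lambda a, b: abs(a - b)}
TERMS = ["intensity", "dx", "dy"]

def random_tree(depth=2):
    """Grow a random expression tree (one GP individual)."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    f = random.choice(list(FUNCS))
    return (f, random_tree(depth - 1), random_tree(depth - 1))

def features(img, x, y):
    """Terminals at a pixel: intensity and forward-difference gradients."""
    h, w = len(img), len(img[0])
    i = img[y][x]
    return {"intensity": i,
            "dx": img[y][min(x + 1, w - 1)] - i,
            "dy": img[min(y + 1, h - 1)][x] - i}

def evaluate(tree, feats):
    """Recursively evaluate an expression tree on one pixel's features."""
    if isinstance(tree, str):
        return feats[tree]
    f, left, right = tree
    return FUNCS[f](evaluate(left, feats), evaluate(right, feats))

def conspicuity_map(tree, img):
    return [[evaluate(tree, features(img, x, y)) for x in range(len(img[0]))]
            for y in range(len(img))]

def local_maxima(cmap):
    """Keep interior pixels strictly greater than their 4-neighbourhood."""
    pts = []
    for y in range(1, len(cmap) - 1):
        for x in range(1, len(cmap[0]) - 1):
            v = cmap[y][x]
            if all(v > cmap[y + dy][x + dx]
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                pts.append((x, y))
    return pts

# Usage: a hand-crafted tree (intensity squared) on a toy image with
# one bright blob; an evolutionary loop would instead generate trees
# with random_tree() and rank them by a task-driven fitness.
img = [[0, 0, 0, 0, 0],
       [0, 1, 2, 1, 0],
       [0, 2, 9, 2, 0],
       [0, 1, 2, 1, 0],
       [0, 0, 0, 0, 0]]
tree = ("mul", "intensity", "intensity")
pts = local_maxima(conspicuity_map(tree, img))  # → [(2, 2)]
```

In the chapter's setting, the fitness of each candidate tree would be tied to the SLAM task itself (how well the detected points support camera trajectory estimation), which is what makes the resulting behavior purposive rather than a generic interest-point detector.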


Keywords: Evolutionary Visual Behavior · Multiobjective Evolution · Purposive Vision · SLAM · Conspicuous Point Detection




  1. Aloimonos, J., Weiss, I., Bandyopadhyay, A.: Active vision. In: Proceedings of the First International Conference on Computer Vision, pp. 35–54 (1987)
  2. Aloimonos, Y.: Active Perception. Lawrence Erlbaum Associates (1993)
  3. Ballard, D.: Animate Vision. Artificial Intelligence 48, 57–86 (1991)
  4. Clemente, E., Olague, G., Dozal, L., Mancilla, M.: Object Recognition with an Optimized Ventral Stream Model using Genetic Programming. In: Di Chio, C., et al. (eds.) EvoApplications 2012. LNCS, vol. 7248, pp. 315–325. Springer, Heidelberg (2012)
  5. Davison, A.J.: Real-Time Simultaneous Localisation and Mapping with a Single Camera. In: Proceedings of the Ninth IEEE International Conference on Computer Vision, vol. 2, pp. 1403–1410. IEEE Computer Society, Washington, DC (2003)
  6. Dozal, L., Olague, G., Clemente, E., Sánchez, M.: Evolving Visual Attention Programs through EVO Features. In: Di Chio, C., et al. (eds.) EvoApplications 2012. LNCS, vol. 7248, pp. 326–335. Springer, Heidelberg (2012)
  7. Dunn, E., Olague, G.: Multi-objective Sensor Planning for Efficient and Accurate Object Reconstruction. In: Raidl, G.R., et al. (eds.) EvoWorkshops 2004. LNCS, vol. 3005, pp. 312–321. Springer, Heidelberg (2004)
  8. Dunn, E., Olague, G.: Pareto Optimal Camera Placement for Automated Visual Inspection. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3821–3826 (2005)
  9. Fermüller, C., Aloimonos, Y.: The Synthesis of Vision and Action. In: Landy, M., et al. (eds.) Exploratory Vision: The Active Eye, ch. 9, pp. 205–240. Springer (1995)
  10. Hernández, D., Olague, G., Clemente, E., Dozal, L.: Evolutionary Purposive or Behavioral Vision for Camera Trajectory Estimation. In: Di Chio, C., et al. (eds.) EvoApplications 2012. LNCS, vol. 7248, pp. 336–345. Springer, Heidelberg (2012)
  11. Itti, L., Koch, C.: Computational Modelling of Visual Attention. Nature Reviews Neuroscience 2(3), 194–203 (2001)
  12. Koch, C., Ullman, S.: Shifts in Selective Visual Attention: Towards the Underlying Neural Circuitry. Human Neurobiology 4(4), 219–227 (1985)
  13. Lepetit, V., Fua, P.: Monocular Model-Based 3D Tracking of Rigid Objects: A Survey. Foundations and Trends in Computer Graphics and Vision 1, 1–89 (2005)
  14. Olague, G., Trujillo, L.: Evolutionary-Computer-Assisted Design of Image Operators that Detect Interest Points Using Genetic Programming. Image and Vision Computing 29(7), 484–498 (2011)
  15. Olague, G., Trujillo, L.: Interest Point Detection through Multiobjective Genetic Programming. Applied Soft Computing 12(8), 2566–2582 (2012)
  16. Olague, G.: Evolutionary Computer Vision – The First Footprints. Springer (to appear)
  17. Shi, J., Tomasi, C.: Good Features to Track. In: Proceedings of Computer Vision and Pattern Recognition, pp. 593–600 (1994)
  18. Treisman, A.M., Gelade, G.: A Feature-Integration Theory of Attention. Cognitive Psychology 12(1), 97–136 (1980)
  19. Trujillo, L., Olague, G.: Automated Design of Image Operators that Detect Interest Points. Evolutionary Computation 16(4), 483–507 (2008)
  20. Zitzler, E., Laumanns, M., Thiele, L.: SPEA2: Improving the Strength Pareto Evolutionary Algorithm. Technical report, Evolutionary Methods for Design (2001)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Daniel E. Hernández (1)
  • Gustavo Olague (1)
  • Eddie Clemente (1, 2)
  • León Dozal (1)

  1. Proyecto EvoVisión, Departamento de Ciencias de la Computación, División de Física Aplicada, Centro de Investigación Científica y de Educación Superior de Ensenada, Ensenada, México
  2. Tecnológico de Estudios Superiores de Ecatepec, Ecatepec de Morelos, México
