Evolving Conspicuous Point Detectors for Camera Trajectory Estimation

  • Daniel Hernández
  • Gustavo Olague
  • Eddie Clemente
  • León Dozal
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 175)

Abstract

The interaction of a visual system with its environment is studied in terms of a purposive vision system, with the aim of establishing a link between perception and action. A system that performs visuomotor tasks requires a selective perception process in order to execute specific motion actions; this combination is understood as a visual behavior. This paper presents a solution to the problem of synthesizing visual behaviors through genetic programming, resulting in specialized visual routines that are used to estimate the trajectory of a camera within a vision-based simultaneous localization and map building system. The experiments were carried out with a real working system consisting of a robotic manipulator in a hand-eye configuration. The main idea is to evolve a conspicuous point detector based on the concept of an artificial dorsal stream. The results in this paper show that it is in fact possible to find key points in an image through a visual attention process, in combination with an evolutionary algorithm, to design specialized visual behaviors.
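The core idea described above, evolving an image operator and scoring it on the quality of the points it highlights, can be illustrated with a small sketch. This is a toy, not the authors' system: the primitive set, the mutation-only hill-climbing search, and the dispersion-based fitness are all simplifying assumptions standing in for the genetic programming machinery and the stability criteria used in this line of work.

```python
import random
import numpy as np

def box_blur(img):
    """3x3 box blur built from shifted, edge-padded copies."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def grad_mag(img):
    """Gradient magnitude, a classic interest-point ingredient."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

# Primitive operators an evolved detector is composed from (illustrative set).
PRIMITIVES = [box_blur, grad_mag, np.abs, np.square]

def random_program(rng, depth=4):
    """A candidate detector: a fixed-length chain of primitives."""
    return [rng.choice(PRIMITIVES) for _ in range(depth)]

def mutate(program, rng):
    """Point mutation: swap one primitive for another."""
    child = list(program)
    child[rng.randrange(len(child))] = rng.choice(PRIMITIVES)
    return child

def run(program, img):
    """Apply the operator chain to an image, yielding a response map."""
    out = img.astype(float)
    for op in program:
        out = op(out)
    return out

def fitness(program, img):
    """Favour sharp, well-spread responses: std. dev. of the normalised
    output -- a crude stand-in for real stability/dispersion criteria."""
    r = run(program, img)
    span = r.max() - r.min()
    if span == 0:
        return 0.0
    r = (r - r.min()) / span
    return float(r.std())

def evolve(img, generations=30, seed=0):
    """(1+1)-style hill climbing over detector programs."""
    rng = random.Random(seed)
    best = random_program(rng)
    best_fit = fitness(best, img)
    for _ in range(generations):
        child = mutate(best, rng)
        f = fitness(child, img)
        if f >= best_fit:
            best, best_fit = child, f
    return best, best_fit
```

A usage example on a synthetic image with strong corners: `evolve(img)` returns the best operator chain found and its score, and `run(best, img)` produces the response map whose local maxima would serve as the conspicuous points.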

Keywords

Visual Attention · Pareto Front · Interest Point · Dorsal Stream · Visual Behavior



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Daniel Hernández (1)
  • Gustavo Olague (2)
  • Eddie Clemente (3)
  • León Dozal (1)
  1. EvoVision Project, Computer Science Department, CICESE, Ensenada, México
  2. CICESE, Carretera Ensenada-Tijuana, Ensenada, México
  3. Tecnológico de Estudios Superiores de Ecatepec, Ecatepec de Morelos, México
