Vergence Using GPU Cepstral Filtering

  • Luis Almeida
  • Paulo Menezes
  • Jorge Dias
Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 349)

Abstract

Vergence is an important visual behavior observed in living creatures when they use vision to interact with the environment. The notion of an active observer is equally useful for robotic vision systems in tasks such as object tracking, fixation, and recovery of 3D environment structure. Humanoid robotics is a potential playground for such behaviors. This paper describes the implementation of a real-time binocular vergence behavior that uses cepstral filtering to estimate stereo disparities. By implementing the cepstral filter on a graphics processing unit (GPU) with the Compute Unified Device Architecture (CUDA), we demonstrate that robust parallel algorithms that once required dedicated hardware are now available on common computers. The overall system is implemented on the IMPEP binocular vision platform (Integrated Multimodal Perception Experimental Platform), and its performance is illustrated experimentally.
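
The disparity estimator behind this behavior is the cepstral filter of Yeshurun and Schwartz: the left and right fixation windows are spliced side by side, so the right window acts as an "echo" of the left one, and the position of the echo peak in the power cepstrum (the inverse FFT of the log power spectrum) gives the stereo disparity. The following is a minimal CPU sketch of that idea in NumPy, not the paper's CUDA implementation; the function name and the zero-padding used to preserve the sign of the disparity are choices of this sketch.

    import numpy as np

    def cepstral_disparity(left: np.ndarray, right: np.ndarray, max_disp: int) -> int:
        """Estimate the signed horizontal disparity between two equally
        sized grayscale windows via the power cepstrum (a CPU sketch)."""
        h, w = left.shape
        assert right.shape == (h, w) and 0 < max_disp < w // 2
        # Splice the windows side by side and zero-pad on the right: the
        # right window becomes an "echo" of the left one displaced by
        # w + d pixels, and the padding keeps the cepstrum's mirror peak
        # (at column 2w - d) out of the search band, preserving the sign.
        spliced = np.hstack([left, right, np.zeros((h, w))]).astype(np.float64)
        # Power cepstrum = inverse FFT of the log power spectrum.
        spectrum = np.fft.fft2(spliced)
        cepstrum = np.abs(np.fft.ifft2(np.log(np.abs(spectrum) ** 2 + 1e-12)))
        # With rectified cameras the echo peak lies on quefrency row 0;
        # search the admissible band of columns w - max_disp .. w + max_disp.
        band = cepstrum[0, w - max_disp : w + max_disp + 1]
        return int(np.argmax(band)) - max_disp

In use, fixation windows are cropped from the left and right cameras and the recovered disparity drives the vergence controller. Because the pipeline is dominated by 2D FFTs, it maps naturally onto the GPU, which is how the paper achieves real-time rates with CUDA.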

Keywords

Cepstrum · GPU · CUDA · Vergence

Copyright information

© IFIP International Federation for Information Processing 2011

Authors and Affiliations

  • Luis Almeida ¹ ²
  • Paulo Menezes ¹
  • Jorge Dias ¹
  1. Institute of Systems and Robotics, Department of Electrical and Computer Engineering, University of Coimbra, Coimbra, Portugal
  2. Department of Informatics Engineering, Polytechnic Institute of Tomar, Tomar, Portugal