
Machine Vision and Applications, Volume 24, Issue 4, pp 721–738

Hybrid homographies and fundamental matrices mixing uncalibrated omnidirectional and conventional cameras

  • Luis Puig
  • Peter Sturm
  • J. J. Guerrero
Original Paper

Abstract

In this paper, we present a deep analysis of the hybrid two-view relations that combine images acquired with uncalibrated central catadioptric systems and conventional cameras. We consider both hybrid fundamental matrices and hybrid planar homographies. These matrices contain useful geometric information. We study three different types of matrices, whose complexity varies with their capacity to deal with a single or multiple types of central catadioptric systems. The first and simplest one is designed for para-catadioptric systems; the second, more complex one considers the combination of a perspective camera and any central catadioptric system; the last one is the complete and generic model, able to deal with any combination of central catadioptric systems. We show that the generic and most complex model is sometimes not the best option when dealing with real images. Simpler models are not as accurate as the complete model in the ideal case, but they behave better in the presence of noise, are simpler, and require fewer correspondences to be computed. Experiments with simulated data and real images are performed. To show the potential of these approaches, we develop two applications. The first is the successful matching between perspective images and hyper-catadioptric images using SIFT descriptors. In the second one, using only the hybrid fundamental matrix and the hybrid planar homography, we compute the metric localization of the perspective camera inside the catadioptric view in an indoor environment.
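
As an illustration of the kind of two-view relation analyzed in the paper, the sketch below estimates a 3 × 6 hybrid fundamental matrix from correspondences between a perspective image point and a catadioptric image point lifted to second-order (Veronese) coordinates, using a linear DLT-style solve. This is a minimal sketch under stated assumptions, not the authors' implementation: the 3 × 6 model size, the particular lifting, and the names lift and estimate_hybrid_F are illustrative choices, and data normalization and rank constraints are omitted.

    # Hedged sketch: linear estimation of a 3x6 hybrid fundamental matrix F_h
    # satisfying q^T F_h lift(p) = 0 for a perspective point q and a
    # catadioptric point p. Model size and lifting are illustrative assumptions.
    import numpy as np

    def lift(p):
        """Second-order Veronese lifting of a homogeneous 2D point (x, y, w)."""
        x, y, w = p
        return np.array([x * x, x * y, y * y, x * w, y * w, w * w])

    def estimate_hybrid_F(persp_pts, cata_pts):
        """persp_pts, cata_pts: (N, 3) homogeneous points, N >= 17 matches."""
        # Each correspondence gives one equation, linear in the 18 entries of F_h.
        A = np.array([np.kron(q, lift(p)) for q, p in zip(persp_pts, cata_pts)])
        _, _, Vt = np.linalg.svd(A)
        F_h = Vt[-1].reshape(3, 6)        # right null vector, reshaped row-major
        return F_h / np.linalg.norm(F_h)  # defined up to scale

    # For a perspective point q, the 6-vector q^T F_h represents a conic
    # (the epipolar curve) in the catadioptric image; for a catadioptric
    # point p, F_h lift(p) is the corresponding epipolar line in the
    # perspective image.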

Keywords

Hybrid two-view geometry · Central catadioptric systems · Hybrid matching



Copyright information

© Springer-Verlag 2012

Authors and Affiliations

  1. Departamento de Informática e Ingeniería de Sistemas (DIIS) and Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
  2. INRIA Rhône-Alpes and Laboratoire Jean Kuntzmann, Grenoble, France
