Machine Vision and Applications

Volume 28, Issue 1–2, pp 141–155

Extrinsic calibration of multi-modal sensor arrangements with non-overlapping field-of-view

  • Carolina Raposo
  • João Pedro Barreto
  • Urbano Nunes
Original Paper


Several applications in robotics require complex sensor arrangements that must be carefully calibrated, both intrinsically and extrinsically, to allow information fusion and enable the system to function as a whole. These arrangements can combine different sensing modalities, such as color cameras, laser-rangefinders (LRFs), and depth cameras, to obtain richer descriptions of the environment. Finding the location of multi-modal sensors in a common world reference frame is a difficult problem that remains largely unsolved whenever the sensors observe distinct, disjoint parts of the scene. This article builds on recent results in object pose estimation using mirror reflections to provide an accurate and practical solution for the extrinsic calibration of mixtures of color cameras, LRFs, and depth cameras with non-overlapping fields-of-view. The method can calibrate any possible sensor combination as long as the setup includes at least one color camera. The technique is tested in challenging situations not covered by the current state of the art, proving to be practical and effective. The calibration software is made freely available to the research community.


Keywords: Extrinsic calibration · Laser-rangefinder · Depth camera · Non-overlapping FOV · Mirror
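The abstract refers to pose estimation from planar mirror reflections, a family of techniques (e.g., Sturm and Bonfort; Rodrigues et al.) in which a camera observes a calibration target only through a mirror. The geometric building block of such methods is the rigid reflection about a mirror plane. As a minimal illustration, not the authors' actual method, the sketch below constructs the 4×4 homogeneous reflection transform for a mirror plane given by a unit normal n and offset d (points X on the mirror satisfy n·X = d):

```python
import numpy as np

def mirror_reflection(n, d):
    """Homogeneous 4x4 transform reflecting 3D points about the
    plane {X : n . X = d}, with n the (unit) mirror normal."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)              # normalize for safety
    S = np.eye(3) - 2.0 * np.outer(n, n)   # Householder-type reflection matrix
    t = 2.0 * d * n                        # translation induced by plane offset
    T = np.eye(4)
    T[:3, :3] = S
    T[:3, 3] = t
    return T
```

Two sanity checks follow directly from the geometry: reflecting twice is the identity (the transform is an involution), and any point lying on the mirror plane is a fixed point. For example, with n = (0, 0, 1) and d = 2, the point (1, 1, 2) sits on the mirror and is mapped to itself.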



Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  1. Institute of Systems and Robotics, University of Coimbra, Coimbra, Portugal
