Infrastructure-Based Multi-camera Calibration Using Radial Projections

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12361)

Abstract

Multi-camera systems are an important sensor platform for intelligent systems such as self-driving cars. Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually. However, extrinsic calibration of systems with little to no visual overlap between the cameras is a challenge. Given the camera intrinsics, infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion. In this paper, we propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach. Assuming that the distortion is mainly radial, we introduce a two-stage approach. We first estimate the camera-rig extrinsics up to a single unknown translation component per camera. Next, we solve for both the intrinsic parameters and the missing translation components. Extensive experiments on multiple indoor and outdoor scenes with multiple multi-camera systems show that our calibration method achieves high accuracy and robustness. In particular, our approach is more robust than the naive approach of first estimating intrinsic parameters and pose per camera before refining the extrinsic parameters of the system. The implementation is available at https://github.com/youkely/InfrasCal.
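The first stage described above rests on a standard geometric fact about radially symmetric cameras: the image point, measured relative to the principal point, is parallel to the (X, Y) components of the 3D point in the camera frame, independently of focal length, radial distortion, and translation along the optical axis. This is why the rig extrinsics can be estimated before the intrinsics are known, up to one translation component per camera. The sketch below is an illustration of this constraint only, not the paper's implementation; the one-parameter distortion model and the helper `project_radial` are assumptions chosen for the example.

```python
import numpy as np

def project_radial(X_cam, f, k1):
    """Project a camera-frame 3D point with a one-parameter radial model.

    Returns the image point relative to the principal point.
    """
    x, y = X_cam[0] / X_cam[2], X_cam[1] / X_cam[2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                 # radial distortion factor
    return f * d * np.array([x, y])   # scaling preserves the radial direction

rng = np.random.default_rng(0)
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random rotation (orthonormal)
t = np.array([0.3, -0.2, 4.0])                    # depth kept safely positive

for X in rng.uniform(-1.0, 1.0, size=(5, 3)):
    X_cam = R @ X + t
    # Two cameras with different focal lengths, distortion coefficients, and
    # an extra shift along the optical axis observe the point along the SAME
    # radial ray from the principal point.
    u1 = project_radial(X_cam, f=500.0, k1=-0.1)
    u2 = project_radial(X_cam + np.array([0.0, 0.0, 2.0]), f=800.0, k1=-0.3)
    # Parallel 2D vectors => their 2D cross product vanishes.
    assert abs(u1[0] * u2[1] - u1[1] * u2[0]) < 1e-9
```

Because only the radial direction is observed, the translation along each camera's optical axis is unconstrained at this stage, which matches the "single unknown translation component per camera" left to the second stage.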

Notes

Acknowledgement

This work was supported by the Swedish Foundation for Strategic Research (Semantic Mapping and Visual Navigation for Smart Robots), the Chalmers AI Research Centre (CHAIR) (VisLocLearn), OP VVV project Research Center for Informatics No. CZ.02.1.01/0.0/0.0/16_019/0000765, and the EU Horizon 2020 research and innovation programme under grant No. 688007 (TrimBot2020). Viktor Larsson was supported by an ETH Zurich Postdoctoral Fellowship.

Supplementary material

Supplementary material 1: 504471_1_En_20_MOESM1_ESM.pdf (PDF, 3.7 MB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Computer Science, ETH Zürich, Zürich, Switzerland
  2. VRG, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czech Republic
  3. Microsoft Mixed Reality & AI Zurich Lab, Zürich, Switzerland
  4. Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden