Camera calibration for the surround-view system: a benchmark and dataset

  • Original article
  • Published in The Visual Computer (2024)

Abstract

The surround-view system (SVS) is widely used in advanced driver assistance systems (ADAS). An SVS uses four fish-eye cameras to monitor the scene around the vehicle in real time. However, accurate estimation of the intrinsic and extrinsic parameters is required for the system to function properly. At present, intrinsic calibration can be carried out with a standard checkerboard-based pipeline, whereas extrinsic calibration remains immature. We therefore propose a dedicated calibration pipeline that estimates the extrinsic parameters robustly. The scheme takes a driving sequence from the four cameras as input. It first uses lane lines to roughly estimate each camera pose. Because environmental conditions differ across the cameras, we then select one of two refinement strategies for each camera to estimate its extrinsic parameters accurately. For the front and rear cameras, we propose a method that alternates between lane-line detection and pose estimation. For the two lateral cameras, we iteratively adjust the camera orientation and position by minimizing the texture and edge error between the ground projections of adjacent cameras. Once the extrinsic parameters are estimated, the surround-view image is synthesized by a homography-based transformation. The proposed pipeline robustly estimates the extrinsic parameters of the four SVS cameras in real driving environments. In addition, to evaluate the proposed scheme, we build a surround-view fish-eye dataset containing 40 videos with 32,000 frames acquired in different real traffic scenarios. Every frame is manually labeled with lane annotations, and each video is provided with ground-truth extrinsic parameters. Other researchers can also use this dataset to evaluate their own methods. The dataset will be available soon.
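
To make the homography-based synthesis step concrete, the sketch below projects one undistorted fisheye frame onto a common bird's-eye-view canvas using the plane-induced homography of the ground plane. This is a minimal illustration under assumed conventions, not the authors' implementation: the function names, the use of OpenCV's generic fisheye model, and the bird's-eye-view scale and origin are all illustrative choices.

```python
# Hypothetical sketch of homography-based ground projection for one SVS camera.
# K: 3x3 intrinsics, D: fisheye distortion, (R, t): extrinsics w.r.t. the ground
# plane (Z = 0). All names and parameter values are illustrative assumptions.
import cv2
import numpy as np


def ground_homography(K, R, t):
    """Homography mapping ground-plane points (X, Y, 1) with Z = 0 to image pixels."""
    # For a plane Z = 0, the projection K [R | t] reduces to K [r1 r2 t].
    return K @ np.column_stack((R[:, 0], R[:, 1], t))


def project_to_birdseye(img, K, D, R, t, bev_size=(800, 800), metres_per_px=0.02):
    """Undistort one fisheye frame and warp it onto a bird's-eye-view canvas."""
    # Remove fisheye distortion so a single homography models the ground plane.
    undistorted = cv2.fisheye.undistortImage(img, K, D, Knew=K)

    # Scale/offset mapping ground coordinates (metres) to bird's-eye pixels,
    # with the origin placed at the canvas centre (an illustrative choice).
    S = np.array([[1.0 / metres_per_px, 0.0, bev_size[0] / 2.0],
                  [0.0, 1.0 / metres_per_px, bev_size[1] / 2.0],
                  [0.0, 0.0, 1.0]])

    # Full mapping: image pixel -> ground plane -> bird's-eye pixel.
    H_img_from_ground = ground_homography(K, R, t)
    M = S @ np.linalg.inv(H_img_from_ground)
    return cv2.warpPerspective(undistorted, M, bev_size)
```

Compositing the four projections on one canvas would yield the surround-view image; the overlap regions between adjacent projections are where the texture and edge error used to refine the lateral cameras would be evaluated.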

Data Availability

The datasets associated with the current study are available upon reasonable request from the corresponding author.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Nos. 62172032 and 62372036).

Author information

Corresponding author

Correspondence to Chunyu Lin.

Ethics declarations

Conflict of interest

The authors certify that there are no actual or potential conflicts of interest in relation to this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Qin, L., Lin, C., Huang, S. et al. Camera calibration for the surround-view system: a benchmark and dataset. Vis Comput (2024). https://doi.org/10.1007/s00371-024-03275-9
