Towards a Unified Benchmark for Monocular Radial Distortion Correction and the Importance of Testing on Real-World Data

Part of the Lecture Notes in Computer Science book series (LNCS, volume 13363)

Abstract

Radial distortion correction for a single image is often overlooked in computer vision. Images can be rectified accurately when the camera and lens are known, or when they are physically available to capture additional images with a calibration pattern. However, it is sometimes impossible to identify the camera or lens of an image, e.g., in crowd-sourced datasets, and it is still important to correct such images for radial distortion. Especially in the last few years, deep neural network approaches to correcting radial distortion from a single image have gained popularity. This paper shows that these approaches tend to overfit completely to the synthetic data generation process used to train such networks. Additionally, we investigate which parts of this process are responsible for the overfitting. We apply an explainability tool to analyze the trained models’ behavior. Furthermore, we introduce a new dataset, based on the popular ImageNet dataset, as a new benchmark for comparison. Lastly, we propose an efficient remedy for the overfitting problem: feeding edge images to the neural networks instead of the original images. Source code, data, and models are publicly available at https://github.com/cvjena/deeprect.
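As an illustration of the two steps highlighted in the abstract, the sketch below first synthesizes a radially distorted training image and then converts it into a Canny edge map of the kind proposed as network input. It is a minimal sketch only: the one-parameter division model, the sampling range for the distortion parameter lam, the Canny thresholds, and the file name example.jpg are assumptions made for this example, not details taken from the paper.

    # Minimal sketch (NumPy + OpenCV): synthesize a radially distorted image
    # with a one-parameter division model, then compute the Canny edge map
    # that replaces the RGB image as network input.
    import cv2
    import numpy as np

    def distort_division_model(image, lam):
        # Warp with r_u = r_d / (1 + lam * r_d^2): for every pixel of the
        # distorted target image, sample the matching undistorted source pixel.
        h, w = image.shape[:2]
        cx, cy = w / 2.0, h / 2.0
        ys, xs = np.indices((h, w), dtype=np.float32)
        xn = (xs - cx) / w                      # normalized distorted coords
        yn = (ys - cy) / w
        r2 = xn * xn + yn * yn                  # squared distorted radius
        scale = 1.0 / (1.0 + lam * r2)          # distorted -> undistorted radius
        map_x = xn * scale * w + cx             # source pixel positions (float32)
        map_y = yn * scale * w + cy
        return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

    def edge_input(image):
        # Edge map fed to the network instead of the image itself.
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        return cv2.Canny(gray, 100, 200)        # thresholds are assumptions

    img = cv2.imread("example.jpg")             # hypothetical input file
    lam = float(np.random.uniform(-0.4, 0.0))   # assumed range; lam < 0 = barrel
    sample = edge_input(distort_division_model(img, lam))

Because the distortion parameter used to warp the image is known, each edge map can be paired with its ground-truth parameter to form a supervised training sample.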

Keywords

  • Radial distortion
  • Monocular images
  • Synthetic data

Author information

Correspondence to Christoph Theiß or Joachim Denzler.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Theiß, C., Denzler, J. (2022). Towards a Unified Benchmark for Monocular Radial Distortion Correction and the Importance of Testing on Real-World Data. In: El Yacoubi, M., Granger, E., Yuen, P.C., Pal, U., Vincent, N. (eds) Pattern Recognition and Artificial Intelligence. ICPRAI 2022. Lecture Notes in Computer Science, vol 13363. Springer, Cham. https://doi.org/10.1007/978-3-031-09037-0_6

  • DOI: https://doi.org/10.1007/978-3-031-09037-0_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-09036-3

  • Online ISBN: 978-3-031-09037-0

  • eBook Packages: Computer Science, Computer Science (R0)