Neural Geometric Parser for Single Image Camera Calibration

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12357)

Abstract

We propose a neural geometric parser that learns single image camera calibration for man-made scenes. Unlike previous neural approaches that rely only on semantic cues obtained from neural networks, our approach considers both semantic and geometric cues, resulting in significantly improved accuracy. The proposed framework consists of two networks. Using the line segments of an image as geometric cues, the first network estimates the zenith vanishing point and generates several candidates, each consisting of a camera rotation and a focal length. The second network evaluates each candidate based on the given image and the geometric cues, where prior knowledge of man-made scenes is used for the evaluation. With the supervision of datasets providing the horizon line and focal length of each image, our networks can be trained to estimate these camera parameters. Based on the Manhattan world assumption, we can further estimate the camera rotation and focal length in a weakly supervised manner. The experimental results reveal that the performance of our neural approach is significantly higher than that of existing state-of-the-art camera calibration techniques for single images of indoor and outdoor scenes.
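
To make the hypothesize-and-score structure concrete, the following PyTorch sketch mirrors the two-network design at a high level. It is an illustration only, not the paper's architecture: the module names (CandidateProposer, CandidateScorer), the quaternion rotation parameterization, and all layer sizes are assumptions made here for readability, and the paper's networks additionally consume the image itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CandidateProposer(nn.Module):
    """From N line segments (endpoints packed as 4 values), propose K
    camera hypotheses: a rotation (unit quaternion) and a focal length."""
    def __init__(self, num_candidates=16):
        super().__init__()
        self.num_candidates = num_candidates
        self.encoder = nn.Sequential(            # per-segment features
            nn.Linear(4, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, num_candidates * 5)

    def forward(self, segments):                 # segments: (N, 4)
        feat = self.encoder(segments).max(dim=0).values  # order-invariant pooling
        out = self.head(feat).view(self.num_candidates, 5)
        quat = F.normalize(out[:, :4], dim=1)    # unit quaternion per candidate
        focal = F.softplus(out[:, 4])            # strictly positive focal length
        return quat, focal, feat

class CandidateScorer(nn.Module):
    """Score each (rotation, focal length) hypothesis against the cues."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(128 + 5, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, feat, quat, focal):        # feat: (128,)
        k = quat.shape[0]
        cand = torch.cat([quat, focal.unsqueeze(1)], dim=1)  # (K, 5)
        x = torch.cat([feat.expand(k, -1), cand], dim=1)     # (K, 133)
        return self.mlp(x).squeeze(1)            # one score per candidate

# Usage: propose candidates from 50 detected segments, keep the best-scored.
segments = torch.rand(50, 4)
proposer, scorer = CandidateProposer(), CandidateScorer()
quat, focal, feat = proposer(segments)
scores = scorer(feat, quat, focal)
best = scores.argmax()
print(quat[best], focal[best].item())
```

In the paper, candidate evaluation relies on the image and the man-made scene priors described above; the argmax selection here simply stands in for that scoring step.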

Keywords

Single image camera calibration · Neural geometric parser · Horizon line · Focal length · Vanishing points · Man-made scenes

Notes

Acknowledgements

This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Education (2017R1D1A1B03034907).

Supplementary material

504453_1_En_32_MOESM1_ESM.pdf (15.8 MB)
Supplementary material 1 (PDF 15.8 MB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Kookmin University, Seoul, South Korea
  2. Adobe Research, San Jose, USA
  3. Intel Korea, Seoul, South Korea
