
Peripheral Expansion of Depth Information via Layout Estimation with Fisheye Camera

  • Alejandro Perez-Yus (email author)
  • Gonzalo Lopez-Nicolas
  • Jose J. Guerrero
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9912)

Abstract

Consumer RGB-D cameras have become very useful in recent years, but their field of view is too narrow for certain applications. We propose a new hybrid camera system composed of a conventional RGB-D camera and a fisheye camera to extend the field of view beyond 180°. With this system we have a region of the hemispherical image with depth certainty, plus color data in the periphery that is used to extend the structural information of the scene. We have developed a new method to generate scaled layout hypotheses from relevant corners, combining the extraction of lines in the fisheye image with the depth information. Experiments with real images from different scenarios validate our layout recovery method and the advantages of this camera system, which is also able to overcome severe occlusions. As a result, we obtain a scaled 3D model that expands the original depth information with the wide scene reconstruction. Our proposal successfully expands the depth map by more than eleven times in a single shot.
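The scaling step the abstract alludes to can be made concrete: layout hypotheses derived from fisheye lines fix the orientations of walls but not their metric distances, and the overlapping depth region supplies the scale. The following is a minimal illustrative sketch in Python/NumPy, not the authors' implementation; the function name, the extrinsics (R, t), and the median-based inlier step are assumptions made for the example.

```python
import numpy as np

def scale_wall_hypothesis(normal, depth_points, R, t, inlier_band=0.05):
    """Fix the metric distance d of a wall plane n.x = d (illustrative).

    normal:       (3,) unit normal of a wall hypothesis, obtained from
                  line/vanishing-point directions in the fisheye image
                  (orientation known, metric distance unknown).
    depth_points: (N, 3) metric 3D points from the RGB-D sensor.
    R, t:         assumed extrinsic calibration mapping depth-camera
                  coordinates into the fisheye reference frame.
    """
    # Bring the metric depth points into the common (fisheye) frame.
    pts = depth_points @ R.T + t

    # Signed distance of each point along the hypothesised wall normal.
    dists = pts @ normal

    # Crude robust estimate: start from the median, keep the points in
    # a band around it, and average those inliers.
    d0 = np.median(dists)
    inliers = np.abs(dists - d0) < inlier_band
    return dists[inliers].mean() if inliers.any() else d0

# Example: points on a wall 2 m away along +x, identity extrinsics.
pts = np.column_stack([np.full(100, 2.0),
                       np.random.rand(100),
                       np.random.rand(100)])
d = scale_wall_hypothesis(np.array([1.0, 0.0, 0.0]),
                          pts, np.eye(3), np.zeros(3))
```

In this form, each wall hypothesis with normal n has its one free parameter d fixed by the metric points that fall on it, which is what makes the recovered layout, and hence the peripheral expansion of the depth map, metrically scaled.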

Keywords

3D layout estimation · RGB-D · Omnidirectional cameras · Multi-camera systems

Acknowledgments

This work was supported by Projects DPI2014-61792-EXP and DPI2015-65962-R (MINECO/FEDER, UE) and grant BES-2013-065834 (MINECO).

Supplementary material

Supplementary material 1 (pdf 5325 KB)

Supplementary material 2 (mp4 30391 KB)


Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Alejandro Perez-Yus (1) (email author)
  • Gonzalo Lopez-Nicolas (1)
  • Jose J. Guerrero (1)

  1. Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
