
Unsupervised Visual Hull Reconstruction of a Dense Dataset

  • Maxim Mikhnevich
  • Denis Laurendeau
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 550)

Abstract

In this paper, a method for the reconstruction of an object's Visual Hull (VH) is presented. An image sequence of a moving object under different lighting conditions is captured and analyzed. In this analysis, information from multiple domains (space, time and lighting) is merged within an MRF framework. The advantage of the proposed method is that it yields an approximation of an object's 3D model without any assumptions about the object's appearance or geometry. Real-data experiments show that the proposed approach allows robust VH reconstruction of a variety of challenging objects, such as a transparent wine glass or a light bulb.
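The silhouette-intersection principle underlying visual hull reconstruction can be illustrated with the minimal voxel-carving sketch below. This is only an illustration of the classical shape-from-silhouette idea, not the paper's MRF-based formulation merging space, time and lighting; the 3x4 projection matrices, precomputed binary silhouette masks, and axis-aligned voxel grid are all assumed inputs.

```python
# Minimal shape-from-silhouette (visual hull) sketch. A voxel is retained only
# if it projects inside the silhouette in every calibrated view. The inputs
# (masks, projection matrices, grid bounds) are illustrative assumptions.
import numpy as np

def carve_visual_hull(masks, projections, grid_min, grid_max, res=64):
    """masks: list of HxW boolean silhouettes; projections: list of 3x4 matrices."""
    # Build a regular voxel grid in homogeneous coordinates.
    axes = [np.linspace(grid_min[i], grid_max[i], res) for i in range(3)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)  # (N, 4)

    inside = np.ones(len(pts), dtype=bool)
    for mask, P in zip(masks, projections):
        proj = pts @ P.T                        # (N, 3) homogeneous image points
        uv = proj[:, :2] / proj[:, 2:3]         # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = mask.shape
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(pts), dtype=bool)
        hit[valid] = mask[v[valid], u[valid]]
        inside &= hit                           # intersect silhouette cones
    return inside.reshape(res, res, res)
```

In the paper's setting, the hard per-view silhouette test above is replaced by an MRF energy over space, time and lighting that is minimized with graph cuts, which is what makes the method robust for objects without reliable appearance cues.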

Keywords

Shape from silhouette (SFS) · Multi-view image segmentation · Multi-lighting · Visual hull (VH) · Graph cuts


Copyright information

© Springer International Publishing Switzerland 2015

Open Access This chapter is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

Authors and Affiliations

  1. Computer Vision and Systems Laboratory, Laval University, Quebec, Canada
