Computational Visual Media, Volume 5, Issue 1, pp 21–32

Removing fences from sweep motion videos using global 3D reconstruction and fence-aware light field rendering

  • Chanya Lueangwattana
  • Shohei Mori
  • Hideo Saito
Open Access
Research Article

Abstract

Diminishing the appearance of a fence in an image is a challenging research problem due to the characteristics of fences (thinness, lack of texture, etc.) and the need to restore the occluded background. In this paper, we describe a fence removal method for an image sequence captured by a user making a sweep motion, during which the occluded background is potentially observed. To exploit the geometric and appearance information in the consecutive images, we use two well-known approaches: structure from motion and light field rendering. Results on real image sequences show that our method stably segments fences and preserves background details for various fence and background combinations, successfully producing a new, frame-coherent video without the fence.
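The compositing step described above can be illustrated with a minimal sketch. Assuming the input frames have already been warped into a common reference view (the role of the structure-from-motion stage) and a binary fence mask is available for each frame, the background at each pixel can be recovered by averaging only the samples not covered by the fence. This is a simplified stand-in for the paper's fence-aware light field rendering, not the authors' actual implementation; the function name and array layout below are illustrative assumptions.

```python
import numpy as np

def fence_aware_composite(frames, fence_masks):
    """Average each output pixel over the frames in which it is not fence-covered.

    frames      : (N, H, W, C) frames already registered to a reference view
    fence_masks : (N, H, W) boolean masks, True where the fence occludes
    Returns the composite image and a boolean map of never-observed pixels.
    """
    frames = np.asarray(frames, dtype=np.float64)
    masks = np.asarray(fence_masks, dtype=bool)
    visible = (~masks)[..., None].astype(np.float64)   # 1 where background is seen
    acc = (frames * visible).sum(axis=0)               # sum of visible samples
    cnt = visible.sum(axis=0)                          # number of visible samples
    composite = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
    holes = cnt[..., 0] == 0                           # pixels occluded in every frame
    return composite, holes
```

In practice the sweep motion ensures that almost every background pixel is visible in at least one frame, so the `holes` map stays nearly empty; any remaining holes would need inpainting.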

Keywords

video fence; video repair; diminished reality (DR); structure from motion (SfM); light field rendering (LFR)

Acknowledgements

This work was supported in part by a Grant-in-Aid from the Japan Society for the Promotion of Science, under Grant No. 16J05114.

Supplementary material

Supplementary material, approximately 13.3 MB.


Copyright information

© The Author(s) 2018

Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.

Authors and Affiliations

  • Chanya Lueangwattana (1)
  • Shohei Mori (2, corresponding author)
  • Hideo Saito (1)

  1. Department of Science and Technology, Keio University, Tokyo, Japan
  2. Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
