Abstract
Diminishing the appearance of a fence in an image is a challenging research problem, due both to the characteristics of fences (thinness, lack of texture, etc.) and to the need to restore the occluded background. In this paper, we describe a fence removal method for an image sequence captured by a user making a sweep motion, during which the occluded background is potentially observed. To exploit the geometric and appearance information available in consecutive images, we use two well-known approaches: structure from motion and light field rendering. Results on real image sequences show that our method stably segments fences and preserves background details for various combinations of fence and background. Our method successfully produces a new, frame-coherent video without the fence.
Acknowledgements
This work was supported in part by a Grant-in-Aid from the Japan Society for the Promotion of Science, under Grant No. 16J05114.
Author information
Chanya Lueangwattana received her B.E. degree in electronics and communication engineering from Thammasat University, Thailand, in 2015. From 2016 to 2018, she was a master's student in the Graduate School of Science and Technology at Keio University, from which she received her M.S. degree in 2018. Her research interests include image processing, computer vision, and diminished reality.
Shohei Mori received his B.S., M.S., and Ph.D. degrees in engineering from Ritsumeikan University, Japan, in 2011, 2013, and 2016, respectively. He held Research Fellowships for Young Scientists (DC-1 and PD) from the Japan Society for the Promotion of Science until 2016 and 2018, respectively. He is currently a university project assistant at Graz University of Technology, Austria. His research interests include diminished reality and related computer vision technology.
Hideo Saito received his Ph.D. degree in electrical engineering from Keio University, Japan, in 1992. Since then, he has been with the Faculty of Science and Technology, Keio University. From 1997 to 1999, he joined the Virtualized Reality Project in the Robotics Institute, Carnegie Mellon University, as a visiting researcher. Since 2006, he has been a full professor in the Department of Information and Computer Science, Keio University. His recent service to academic conferences includes Program Chair of ACCV 2014, General Chair of ISMAR 2015, and Program Chair of ISMAR 2016. His research interests include computer vision and pattern recognition, and their applications to augmented reality, virtual reality, and human-robot interaction.
Electronic supplementary material
Supplementary material, approximately 13.3 MB.
Rights and permissions
Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Other papers from this open access journal are available free of charge from https://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
About this article
Cite this article
Lueangwattana, C., Mori, S. & Saito, H. Removing fences from sweep motion videos using global 3D reconstruction and fence-aware light field rendering. Comp. Visual Media 5, 21–32 (2019). https://doi.org/10.1007/s41095-018-0126-8