Practical automatic background substitution for live video
In this paper we present a novel automatic background substitution approach for live video. The objective of background substitution is to extract the foreground from the input video and then combine it with a new background. We use a color line model to improve the Gaussian mixture model in the background cut method, yielding a binary foreground segmentation that is less sensitive to brightness differences. From this high-quality binary segmentation, we automatically create a reliable trimap for alpha matting, which refines the segmentation boundary. To make the composition more realistic, an automatic foreground color adjustment step makes the foreground look consistent with the new background. Compared to previous approaches, our method produces higher-quality binary segmentation results, and to the best of our knowledge this is the first automatic, integrated background substitution system that runs in real time, making it practical for everyday applications.
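The trimap and composition steps of the pipeline can be illustrated with a short sketch. This is not the paper's implementation: the boundary band width, the simple 4-neighbor dilation/erosion, and the function names are illustrative assumptions, and the matting solver itself (which estimates fractional alpha inside the unknown band) is omitted. The sketch shows how a binary segmentation becomes a trimap, and how a solved alpha matte composites the foreground over a new background via the standard matting equation C = αF + (1 − α)B.

```python
import numpy as np

def make_trimap(binary_mask, band=2):
    """Mark a band of 'unknown' pixels around the foreground
    boundary; alpha matting then only solves inside this band.
    A simple 4-neighbor box dilation/erosion stands in for the
    morphology used in practice."""
    m = binary_mask.astype(bool)
    dil, ero = m.copy(), m.copy()
    for _ in range(band):
        d = dil.copy()                       # dilate: OR with shifted copies
        d[1:, :] |= dil[:-1, :]; d[:-1, :] |= dil[1:, :]
        d[:, 1:] |= dil[:, :-1]; d[:, :-1] |= dil[:, 1:]
        e = ero.copy()                       # erode: AND with shifted copies
        e[1:, :] &= ero[:-1, :]; e[:-1, :] &= ero[1:, :]
        e[:, 1:] &= ero[:, :-1]; e[:, :-1] &= ero[:, 1:]
        dil, ero = d, e
    trimap = np.full(m.shape, 128, dtype=np.uint8)  # unknown band
    trimap[~dil] = 0                                # definite background
    trimap[ero] = 255                               # definite foreground
    return trimap

def composite(fg, new_bg, alpha):
    """Standard matting equation: C = alpha * F + (1 - alpha) * B."""
    a = alpha[..., None].astype(np.float32)
    return (a * fg.astype(np.float32)
            + (1.0 - a) * new_bg.astype(np.float32))
```

Pixels well inside (or well outside) the segmented region keep their hard labels, so the matting solver only needs to resolve the narrow boundary band, which is what makes per-frame matting feasible at interactive rates.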
Keywords: background substitution, background replacement, background subtraction, alpha matting
We thank the reviewers for their valuable comments. This work was supported by the National High-Tech R&D Program of China (Project No. 2012AA011903), the National Natural Science Foundation of China (Project No. 61373069), the Research Grant of Beijing Higher Institution Engineering Research Center, and Tsinghua–Tencent Joint Laboratory for Internet Innovation Technology.
Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.