
Automatic and real-time green screen keying

Original article, published in The Visual Computer.

Abstract

Green screen keying has always been an essential and fundamental part of film and television special effects. In actual shooting, captured green screen images vary significantly due to the combined influence of lighting, shooting angle, green cloth material, characters, and other factors. To obtain visually pleasing results, traditional methods usually require professionals to adjust the corresponding parameters for each image, which entails a heavy workload and is inefficient. Removing green spill likewise involves tedious steps. In this paper, we propose a deep learning-based green screen keying method that is automatic, effective, and real-time. First, we create a green screen dataset that contains not only alpha and foreground maps but also samples exhibiting the green spill phenomenon and a large number of distinct green screen backgrounds. Second, we propose an end-to-end network that automatically tackles both green screen keying and green spill removal. Our method takes a single image as input, requires no user interaction, and estimates alpha and foreground simultaneously. Extensive experiments clearly demonstrate the superiority of our method. Moreover, it achieves approximately 75 fps on 720P videos (1280 × 720) and 25 fps on 1080P videos (1920 × 1080), which can be considered real-time for many applications.
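The alpha and foreground estimates the abstract describes are typically combined with a replacement background via the standard compositing equation C = αF + (1 − α)B. The sketch below is a minimal NumPy illustration of that final compositing step, not the paper's network; the array names (`fg`, `alpha`, `bg`) and the toy 2×2 inputs are assumptions for demonstration:

```python
import numpy as np

def composite(fg, alpha, bg):
    """Composite a keyed foreground over a new background.

    fg    : (H, W, 3) float array, estimated foreground colors in [0, 1]
    alpha : (H, W)    float array, estimated opacity in [0, 1]
    bg    : (H, W, 3) float array, replacement background in [0, 1]
    Returns the composite C = alpha * fg + (1 - alpha) * bg.
    """
    a = alpha[..., None]               # broadcast alpha over the color channels
    return a * fg + (1.0 - a) * bg

# Toy example: a 2x2 image, fully opaque on the left, transparent on the right.
fg = np.ones((2, 2, 3)) * 0.8          # foreground: light gray
bg = np.zeros((2, 2, 3))               # background: black
alpha = np.array([[1.0, 0.0],
                  [1.0, 0.0]])
out = composite(fg, alpha, bg)         # left column 0.8, right column 0.0
```

Estimating the foreground F jointly with alpha, as the proposed network does, matters precisely here: at semi-transparent pixels the raw image still mixes in green background, so compositing the raw colors instead of the estimated F would reintroduce green spill.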




Acknowledgements

This work was supported by the National Key Research and Development Program of China (2020YFB1710400), the National Natural Science Foundation of China (Grants Nos. 62172392 and 61702482), and the Scientific Research Instrument and Equipment Development Project of the Chinese Academy of Sciences (YJKYYQ20190055).

Author information

Correspondence to Zhaoxin Li.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Jin, Y., Li, Z., Zhu, D. et al. Automatic and real-time green screen keying. Vis Comput 38, 3135–3147 (2022). https://doi.org/10.1007/s00371-022-02542-x
