
Signal, Image and Video Processing, Volume 13, Issue 8, pp 1487–1494

Inter-frame video image generation based on spatial continuity generative adversarial networks

  • Tao Zhang
  • Peipei Jiang
  • Meng Zhang
Original Paper

Abstract

This paper proposes a method for generating inter-frame video images based on spatial continuity generative adversarial networks (SC-GANs), with the aim of smoothing the playback of low-frame-rate videos and avoiding the blurred edges that traditional frame-rate up-conversion methods introduce. First, an auto-encoder is used as the discriminator, and the Wasserstein distance is applied to measure the difference between the loss distributions of real and generated samples, rather than directly matching data distributions as in typical generative adversarial networks. Second, a hyperparameter balancing the generator and the discriminator is used to stabilize training, which effectively prevents model collapse. Finally, exploiting the spatial continuity of image features across consecutive video frames, an optimal point between two consecutive frames is found by Adam and then mapped to image space to generate the inter-frame image. To assess the authenticity of the generated inter-frame images, PSNR and SSIM are adopted as evaluation metrics; the results show that the generated inter-frame images have a high degree of authenticity, verifying the feasibility and validity of the proposed SC-GAN-based method.
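
The abstract compresses three mechanisms: a BEGAN-style auto-encoder discriminator whose real/fake reconstruction losses are compared (the absolute difference of their means lower-bounds the Wasserstein distance between the two loss distributions), an equilibrium hyperparameter that balances generator and discriminator, and an Adam search in latent space for a point between two consecutive frames. The PyTorch sketch below is a minimal illustration under stated assumptions: the generator G, the auto-encoder discriminator D, the hyperparameters gamma and lam, and the L1 interpolation objective are illustrative choices, not the authors' exact SC-GAN configuration.

import torch
import torch.nn.functional as F

def ae_loss(D, x):
    # Pixel-wise L1 reconstruction error of the auto-encoder discriminator.
    return (D(x) - x).abs().mean()

def began_step(G, D, opt_g, opt_d, real, z, k, gamma=0.5, lam=1e-3):
    # One BEGAN-style update: D learns to reconstruct real frames well and
    # generated frames badly; G makes its output easy for D to reconstruct.
    fake = G(z)
    loss_real = ae_loss(D, real)
    loss_fake = ae_loss(D, fake.detach())
    loss_d = loss_real - k * loss_fake
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    loss_g = ae_loss(D, fake)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    # Equilibrium term: k tracks gamma * L(real) - L(fake), keeping the two
    # players balanced; this is what stabilizes training.
    k = min(max(k + lam * (gamma * loss_real.item() - loss_fake.item()), 0.0), 1.0)
    return k

def interframe(G, frame_a, frame_b, z_dim=100, steps=500, lr=1e-2):
    # Search latent space with Adam for a code whose decoded image sits
    # "between" two consecutive frames, then map it back to image space.
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = F.l1_loss(G(z), frame_a) + F.l1_loss(G(z), frame_b)
        opt.zero_grad(); loss.backward(); opt.step()
    return G(z).detach()  # the generated inter-frame image

Because consecutive frames are spatially continuous, their latent codes lie close together, so the minimizer of the combined reconstruction loss decodes to a plausible intermediate frame rather than a blend of two misaligned images.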

Keywords

GAN · Adversarial training · Spatial continuity · Adam · Inter-frame image generation


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. School of Information Science and Engineering, Yanshan University, Qinhuangdao, China
  2. Neusoft Software Co., Ltd., Qinhuangdao, China
