
Feedback Recurrent Autoencoder for Video Compression

  • Conference paper in: Computer Vision – ACCV 2020 (ACCV 2020)

Abstract

Recent advances in deep generative modeling have enabled efficient modeling of high-dimensional data distributions and opened up a new horizon for solving data compression problems. Specifically, autoencoder-based learned image and video compression solutions are emerging as strong competitors to traditional approaches. In this work, we propose a new network architecture, based on common and well-studied components, for learned video compression operating in low-latency mode. Our method yields competitive MS-SSIM/rate performance on the high-resolution UVG dataset, among both learned video compression approaches and classical video compression methods (H.265 and H.264), in the rate range of interest for streaming applications. Additionally, we provide an analysis of existing approaches through the lens of their underlying probabilistic graphical models. Finally, we point out issues with temporal consistency and color shift observed in empirical evaluation, and suggest directions forward to alleviate them.
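Learned compressors of the kind described above are typically trained with a rate-distortion objective. The following toy sketch is illustrative only, not the authors' code: the discrete prior and the MSE distortion stand in for the paper's recurrent autoencoder and its MS-SSIM-based loss.

```python
import numpy as np

def rd_loss(original, reconstruction, latent_symbols, prior, lam=0.01):
    """Rate + lambda * distortion for a toy learned compressor.

    Rate: negative log-likelihood (in bits) of the quantized latent
    symbols under a discrete prior, i.e. the ideal entropy-coded length.
    Distortion: mean squared error (the paper optimizes MS-SSIM instead).
    """
    rate_bits = -np.sum(np.log2(prior[latent_symbols]))
    distortion = np.mean((original - reconstruction) ** 2)
    return rate_bits + lam * distortion

# A perfect reconstruction still pays the bitrate for its latents:
x = np.zeros((2, 2))
symbols = np.array([0, 0])
prior = np.array([0.5, 0.5])  # uniform over 2 symbols -> 1 bit each
print(rd_loss(x, x, symbols, prior))  # 2.0 bits, zero distortion
```

Trading off `lam` is what moves a trained model along the rate-distortion curve reported in evaluations such as MS-SSIM vs. bits per pixel.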

A. Goliński, R. Pourreza and Y. Yang—Equal Contribution.

Work completed during internship at Qualcomm Technologies Netherlands B.V. Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.


Notes

  1. For practical entropy coders, there is a constant overhead per block/stream, which is negligible when the number of bits per stream is large and can thus be ignored. For example, adaptive arithmetic coding (AAC) incurs up to 2 bits of inefficiency [18].

  2. We refer the reader to [21, 22] and Sect. 2 of [4] for a good overview of frame structures in classic codecs.
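The point of the first note can be sketched numerically: a constant per-stream overhead of a couple of bits (the 2-bit figure follows the note; the per-symbol entropy below is an arbitrary illustration) is amortized away as the stream grows.

```python
# Toy illustration of note 1: a constant per-stream entropy-coder
# overhead (up to ~2 bits for adaptive arithmetic coding, per [18])
# becomes negligible as the number of coded symbols grows.

def coded_length_bits(n_symbols, bits_per_symbol, overhead_bits=2.0):
    # Ideal entropy-coded length plus a constant per-stream overhead.
    return n_symbols * bits_per_symbol + overhead_bits

H = 1.5  # assumed source entropy in bits/symbol (illustrative)
for n in (10, 10_000):
    ideal = n * H
    actual = coded_length_bits(n, H)
    # Relative overhead shrinks roughly as 1/n.
    print(n, (actual - ideal) / ideal)
```

At 10 symbols the 2-bit overhead is over 13% of the ideal length; at 10,000 symbols it is about 0.01%, which is why it can be ignored at realistic stream sizes.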

References

  1. Sandvine: 2019 Global Internet Phenomena Report. https://www.ncta.com/whats-new/report-where-does-the-majority-of-internet-traffic-come (2019). Accessed 28 Feb 2020

  2. Lu, G., Ouyang, W., Xu, D., Zhang, X., Cai, C., Gao, Z.: DVC: an end-to-end deep video compression framework. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019)

  3. Lu, G., Zhang, X., Ouyang, W., Chen, L., Gao, Z., Xu, D.: An end-to-end learning framework for video compression. IEEE Transactions on Pattern Analysis and Machine Intelligence (2020)

  4. Wu, C.Y., Singhal, N., Krähenbühl, P.: Video compression through image interpolation. In: Proceedings of the European Conference on Computer Vision (2018)

  5. Rippel, O., Nair, S., Lew, C., Branson, S., Anderson, A.G., Bourdev, L.: Learned video compression. In: Proceedings of the International Conference on Computer Vision (2019)

  6. Habibian, A., van Rozendaal, T., Tomczak, J.M., Cohen, T.S.: Video compression with rate-distortion autoencoders. In: Proceedings of the International Conference on Computer Vision (2019)

  7. Liu, H., Shen, H., Huang, L., Lu, M., Chen, T., Ma, Z.: Learned video compression via joint spatial-temporal correlation exploration. In: Proceedings of the AAAI Conference on Artificial Intelligence (2020)

  8. Han, J., Lombardo, S., Schroers, C., Mandt, S.: Deep probabilistic video compression. In: Advances in Neural Information Processing Systems (2019)

  9. Lin, J., Liu, D., Li, H., Wu, F.: M-LVC: multiple frames prediction for learned video compression. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2020)

  10. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)

  11. Shi, X., Chen, Z., Wang, H., Yeung, D., Wong, W., Woo, W.: Convolutional LSTM network: a machine learning approach for precipitation nowcasting. In: Advances in Neural Information Processing Systems (2015)

  12. van den Oord, A., Kalchbrenner, N., Espeholt, L., Kavukcuoglu, K., Vinyals, O., Graves, A.: Conditional image generation with PixelCNN decoders. In: Advances in Neural Information Processing Systems (2016)

  13. Yang, Y., Sautière, G., Ryu, J.J., Cohen, T.S.: Feedback recurrent autoencoder. In: IEEE International Conference on Acoustics, Speech and Signal Processing (2019)

  14. Mercat, A., Viitanen, M., Vanne, J.: UVG dataset: 50/120fps 4K sequences for video codec analysis and development. In: Proceedings of the 11th ACM Multimedia Systems Conference (MMSys 2020), pp. 297–302. Association for Computing Machinery, New York (2020)

  15. Bossen, F.: Common test conditions and software reference configurations. JCTVC-F900 (2011)

  16. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612 (2004)

  17. Summerson, C.: How much data does Netflix use? https://www.howtogeek.com/338983/how-much-data-does-netflix-use/ (2018). Accessed 28 Feb 2020

  18. Pearlman, W.A., Said, A.: Digital Signal Compression: Principles and Practice. Cambridge University Press, Cambridge (2011)

  19. Higgins, I., et al.: beta-VAE: learning basic visual concepts with a constrained variational framework. In: International Conference on Learning Representations (2017)

  20. Theis, L., Shi, W., Cunningham, A., Huszár, F.: Lossy image compression with compressive autoencoders. In: International Conference on Learning Representations (2017)

  21. Moreira, L.: Digital video introduction. https://github.com/leandromoreira/digital_video_introduction/blob/master/README.md#frame-types (2017). Accessed 02 Mar 2020

  22. Sullivan, G.J., Ohm, J.R., Han, W.J., Wiegand, T.: Overview of the high efficiency video coding (HEVC) standard. IEEE Trans. Circuits Syst. Video Technol. 22, 1649–1668 (2012)

  23. Ballé, J., Laparra, V., Simoncelli, E.P.: End-to-end optimized image compression. In: International Conference on Learning Representations (2017)

  24. Ballé, J., Minnen, D., Singh, S., Hwang, S.J., Johnston, N.: Variational image compression with a scale hyperprior. In: International Conference on Learning Representations (2018)

  25. Mentzer, F., Agustsson, E., Tschannen, M., Timofte, R., Van Gool, L.: Conditional probability models for deep image compression. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018)

  26. Rippel, O., Bourdev, L.: Real-time adaptive image compression. In: Proceedings of the International Conference on Machine Learning (2017)

  27. Jaderberg, M., Simonyan, K., Zisserman, A., Kavukcuoglu, K.: Spatial transformer networks. In: Advances in Neural Information Processing Systems (2015)

  28. Gregor, K., Danihelka, I., Graves, A., Rezende, D.J., Wierstra, D.: DRAW: a recurrent neural network for image generation. In: Proceedings of the International Conference on Machine Learning (2015)

  29. Gregor, K., Besse, F., Rezende, D.J., Danihelka, I., Wierstra, D.: Towards conceptual compression. In: Advances in Neural Information Processing Systems (2016)

  30. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention (2015)

  31. Xue, T., Chen, B., Wu, J., Wei, D., Freeman, W.T.: Video enhancement with task-oriented flow. Int. J. Comput. Vis. 127, 1106–1125 (2019)

  32. Hu, Y., Yang, W., Ma, Z., Liu, J.: Learning end-to-end lossy image compression: a benchmark. arXiv:2002.03711 (2020)

  33. Ballé, J., Laparra, V., Simoncelli, E.P.: Density modeling of images using a generalized normalization transformation. In: International Conference on Learning Representations (2016)

  34. Liu, H., et al.: Non-local attention optimized deep image compression. arXiv:1904.09757 (2019)

  35. Koller, D., Friedman, N.: Probabilistic Graphical Models: Principles and Techniques. MIT Press, Cambridge (2009)

  36. Alemi, A.A., Poole, B., Fischer, I., Dillon, J.V., Saurous, R.A., Murphy, K.: Fixing a broken ELBO. In: Proceedings of the International Conference on Machine Learning (2018)

  37. Webb, S., et al.: Faithful inversion of generative models for effective amortized inference. In: Advances in Neural Information Processing Systems (2018)

  38. Kay, W., et al.: The Kinetics human action video dataset. arXiv:1705.06950 (2017)

  39. Xiph.org: Xiph.org video test media [derf's collection]. https://media.xiph.org/video/derf/ (2004). Accessed 21 Feb 2020

  40. Wiegand, T., Sullivan, G.J., Bjontegaard, G., Luthra, A.: Overview of the H.264/AVC video coding standard. IEEE Trans. Circuits Syst. Video Technol. 13, 560–576 (2003)

  41. Tomar, S.: Converting video formats with FFmpeg. Linux J. 2006, 10 (2006)

  42. HM developers: High Efficiency Video Coding (HEVC). https://hevc.hhi.fraunhofer.de/ (2012). Accessed 21 Feb 2020

  43. Ilg, E., Mayer, N., Saikia, T., Keuper, M., Dosovitskiy, A., Brox, T.: FlowNet 2.0: evolution of optical flow estimation with deep networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)

  44. Gao, C., Gu, D., Zhang, F., Yu, Y.: ReCoNet: real-time coherent video style transfer network. In: Proceedings of the Asian Conference on Computer Vision (2018)

  45. Lai, W.S., Huang, J.B., Wang, O., Shechtman, E., Yumer, E., Yang, M.H.: Learning blind video temporal consistency. In: Proceedings of the European Conference on Computer Vision (2018)

  46. Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Transactions on Computational Imaging (2017)


Author information


Corresponding author

Correspondence to Yang Yang.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 2478 KB)


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Goliński, A., Pourreza, R., Yang, Y., Sautière, G., Cohen, T.S. (2021). Feedback Recurrent Autoencoder for Video Compression. In: Ishikawa, H., Liu, C.L., Pajdla, T., Shi, J. (eds) Computer Vision – ACCV 2020. ACCV 2020. Lecture Notes in Computer Science, vol. 12625. Springer, Cham. https://doi.org/10.1007/978-3-030-69538-5_36


  • DOI: https://doi.org/10.1007/978-3-030-69538-5_36


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-69537-8

  • Online ISBN: 978-3-030-69538-5

  • eBook Packages: Computer Science (R0)
