
Unsupervised Temporal Attention Summarization Model for User Created Videos

  • Conference paper
  • In: MultiMedia Modeling (MMM 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12572)


Abstract

Unlike surveillance videos, videos created by ordinary users contain more frequent shot changes, more diverse backgrounds, and a wider variety of content. Existing methods for summarizing user-created videos suffer from two critical issues: 1) information distortion and 2) high redundancy among keyframes. We therefore propose a novel temporal attention model to estimate an importance score for each frame. Specifically, building on the classical attention model, we combine the predictions of both the encoder and the decoder so that complete sequence information is used when scoring frame-level importance. Further, to sift out redundant frames, we devise a feedforward reward function that quantifies the diversity, representativeness, and storyness of the candidate keyframes in the attention model. Finally, the Deep Deterministic Policy Gradient algorithm is adopted to efficiently solve the proposed formulation. Extensive experiments on the public SumMe and TVSum datasets show that our method outperforms the state of the art by a large margin in terms of F-score.
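
To make the reward structure concrete, here is a minimal sketch of a diversity-plus-representativeness reward over candidate keyframes, in the spirit of the abstract. Everything in it is illustrative rather than the authors' formulation: the function name `summary_reward`, the bandwidth `sigma`, and the particular similarity and distance forms are assumptions, frame features are assumed L2-normalized, and the paper's storyness term is not reproduced.

```python
# Illustrative sketch only (not the paper's exact reward): scores a set of
# selected keyframes for diversity and representativeness. Assumes frame
# features are L2-normalized; the paper's "storyness" term is omitted.
import numpy as np

def summary_reward(features, selected, sigma=0.5):
    """features: (T, D) frame features; selected: list of keyframe indices."""
    sel = features[selected]                              # (K, D)
    k = len(selected)

    # Diversity: mean pairwise dissimilarity (1 - cosine) among keyframes,
    # so near-duplicate keyframes are penalized.
    sim = sel @ sel.T                                     # cosine similarities
    r_div = float((1.0 - sim[~np.eye(k, dtype=bool)]).mean()) if k > 1 else 0.0

    # Representativeness: every frame should lie close to some keyframe;
    # measured via distance to the nearest keyframe under a Gaussian-style kernel.
    d = np.linalg.norm(features[:, None, :] - sel[None, :, :], axis=-1)  # (T, K)
    r_rep = float(np.exp(-d.min(axis=1) / sigma).mean())

    return r_div + r_rep

# Example: random normalized features, three candidate keyframes.
feats = np.random.default_rng(0).standard_normal((120, 64))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
print(summary_reward(feats, [10, 55, 100]))
```

In a reinforcement learning setup like the one described, such a reward would be computed at the end of an episode and used to update the frame-selection policy (here, via DDPG).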

Supported by the Basic Research Project of the Science and Technology Plan of Shenzhen (JCYJ20170818143246278).


Notes

  1. Open video project: https://open-video.org/.


Author information


Corresponding author

Correspondence to Ruimin Hu.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Hu, M., Hu, R., Wang, X., Sheng, R. (2021). Unsupervised Temporal Attention Summarization Model for User Created Videos. In: Lokoč, J., et al. (eds.) MultiMedia Modeling. MMM 2021. Lecture Notes in Computer Science, vol. 12572. Springer, Cham. https://doi.org/10.1007/978-3-030-67832-6_42


  • DOI: https://doi.org/10.1007/978-3-030-67832-6_42

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-67831-9

  • Online ISBN: 978-3-030-67832-6

  • eBook Packages: Computer Science, Computer Science (R0)
