
Video clip recommendation model by sentiment analysis of time-sync comments

Published in: Multimedia Tools and Applications

Abstract

With the advent of video time-sync comments, users can not only comment on Internet videos but also share their feelings with other viewers. However, the number of videos on the Internet is so large that users lack the time and energy to watch them all, so recommending videos suited to each user has become an important problem. Traditional video sentiment analysis methods perform poorly on this task, and their results are hard to interpret. In this paper, an emotion recognition algorithm based on time-sync comments is proposed as the basis for recommending video clips. First, we give a formal description of video clip recommendation based on sentiment analysis. Second, by classifying time-sync comments with a Latent Dirichlet Allocation (LDA) topic model, we estimate the emotion vector of each word in the comments. Finally, video clips are recommended according to the emotional relationships among them. The experimental results show that the proposed model is effective in analyzing the complex sentiment of different kinds of text.
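The recommendation step described in the abstract can be sketched as follows: each word carries an emotion vector, a clip's emotion is the average over the words in its time-sync comments, and candidate clips are ranked by emotional similarity to a clip the user watched. This is a minimal illustrative sketch only; the toy lexicon, the four-dimensional emotion space, and all names here are hypothetical, whereas the paper estimates word emotion vectors from an LDA topic model over the comment corpus.

```python
from math import sqrt

# Hypothetical word-level emotion lexicon: (joy, sadness, anger, surprise).
# In the paper these weights are estimated via LDA over time-sync comments.
LEXICON = {
    "funny": (0.9, 0.0, 0.0, 0.1),
    "lol":   (0.8, 0.0, 0.0, 0.2),
    "sad":   (0.0, 0.9, 0.1, 0.0),
    "cry":   (0.0, 0.8, 0.0, 0.2),
    "wow":   (0.3, 0.0, 0.0, 0.7),
}

def clip_emotion(comments):
    """Average the emotion vectors of known words in a clip's comments."""
    vecs = [LEXICON[w] for c in comments for w in c.lower().split() if w in LEXICON]
    if not vecs:
        return (0.0, 0.0, 0.0, 0.0)
    return tuple(sum(v[i] for v in vecs) / len(vecs) for i in range(4))

def cosine(a, b):
    """Cosine similarity between two emotion vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(watched, candidates):
    """Rank candidate clips by emotional similarity to a watched clip."""
    target = clip_emotion(watched)
    scored = {cid: cosine(target, clip_emotion(cs)) for cid, cs in candidates.items()}
    return sorted(scored, key=scored.get, reverse=True)

watched = ["lol so funny", "funny wow"]
candidates = {
    "clipA": ["funny lol lol"],          # joyful comments, like the watched clip
    "clipB": ["sad cry", "so sad"],      # mostly sad comments
}
print(recommend(watched, candidates))    # clipA ranks above clipB
```

The averaging step is the simplest possible aggregation; the paper's model additionally exploits the classification of comments by LDA topic before aggregating.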





Acknowledgements

This work is partly supported by the Daze Scholar Project of Suzhou University under Grant 2018SZXYDZXZ01, the National Natural Science Foundation of China under Grant 61702355, the Scientific and Technological Project of Suzhou City under Grant SZ2017GG39, and the Key Natural Science Project of the Anhui Educational Department under Grant KJ2018A0448.

Author information


Corresponding author

Correspondence to Zhenggao Pan.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Pan, Z., Li, X., Cui, L. et al. Video clip recommendation model by sentiment analysis of time-sync comments. Multimed Tools Appl 79, 33449–33466 (2020). https://doi.org/10.1007/s11042-019-7578-4

