
Frontiers of Computer Science, Volume 9, Issue 2, pp 210–223

Product-oriented review summarization and scoring

  • Rong Zhang
  • Wenzhe Yu
  • Chaofeng Sha
  • Xiaofeng He
  • Aoying Zhou
Research Article

Abstract

Currently, there are many online review web sites where consumers can freely write comments about different kinds of products and services. These comments are quite useful for other potential consumers. However, the number of online comments is often large and continues to grow as more consumers contribute. In addition, a single comment may mention more than one product and express different opinions about them, praising one and criticizing another, yet all of the mentioned products share only a single overall score. It is therefore not easy to judge the quality of an individual product from these comments.

This paper presents a novel approach to generate review summaries, including scores and description snippets, for each individual product. From the large number of comments, we first extract the contexts (snippets) that describe the products and choose those snippets that express consumer opinions on them. We then propose several methods to predict the rating (from 1 to 5 stars) of each snippet. Finally, we derive a generic framework for generating summaries from the snippets. We design a new snippet selection algorithm, built on a standard seat allocation algorithm, to ensure that the returned results preserve the opinion-aspect statistical properties and the attribute-aspect coverage of the full review set. Through experiments, we demonstrate empirically that our methods are effective, and we quantitatively evaluate each step of our approach.
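
To make the seat-allocation idea concrete, the sketch below shows one way proportional snippet selection could look in Python. It is an illustration only, not the paper's actual algorithm: the aspect grouping, the length-based snippet ranking, and the choice of the Sainte-Laguë (highest-averages) divisor rule are assumptions made for the example.

    # Minimal sketch of proportional snippet selection via seat allocation
    # (illustrative only; not the paper's actual algorithm). The aspect names,
    # the length-based ranking, and the Sainte-Lague divisor rule are assumptions.
    from collections import defaultdict

    def allocate_slots(counts, k):
        """Distribute k summary slots across aspects in proportion to their
        mention counts, using the Sainte-Lague (highest-averages) method."""
        seats = defaultdict(int)
        for _ in range(k):
            # The next slot goes to the aspect with the largest quotient.
            best = max(counts, key=lambda a: counts[a] / (2 * seats[a] + 1))
            seats[best] += 1
        return dict(seats)

    def select_snippets(snippets_by_aspect, k):
        """Pick k snippets so the summary mirrors the aspect distribution of
        the full snippet set (hypothetical ranking: prefer longer snippets)."""
        counts = {a: len(s) for a, s in snippets_by_aspect.items() if s}
        slots = allocate_slots(counts, k)
        summary = []
        for aspect, n in slots.items():
            ranked = sorted(snippets_by_aspect[aspect], key=len, reverse=True)
            summary.extend(ranked[:n])
        return summary

    if __name__ == "__main__":
        snippets = {
            "battery": ["battery lasts two days", "charges slowly", "drains fast in games"],
            "screen": ["bright, sharp screen"],
            "price": ["good value for the price", "cheaper than rivals"],
        }
        print(select_snippets(snippets, k=4))

Running the example distributes the four summary slots roughly in proportion to how often each aspect is mentioned, which is the kind of aspect-coverage property the selection step is designed to preserve.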

Keywords

online transaction, diversification, review summarization, review scoring



Copyright information

© Higher Education Press and Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Rong Zhang (1)
  • Wenzhe Yu (1)
  • Chaofeng Sha (2)
  • Xiaofeng He (1)
  • Aoying Zhou (1)
  1. Institute of Data Science and Engineering, Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai, China
  2. School of Computer Science, Fudan University, Shanghai, China
