Filtering Reviews by Random Individual Error

  • Michaela Geierhos (Email author)
  • Frederik S. Bäumer
  • Sabine Schulze
  • Valentina Stuß
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9101)

Abstract

Opinion mining from physician rating websites depends on the quality of the extracted information. Some reviews are prone to user error: the assigned stars or grades contradict the associated review text. We therefore aim at detecting random individual errors within reviews. Such errors consist of a disagreement between the polarity of the review text and the respective rating. The challenges that arise are (1) the content and sentiment analysis of the review texts and (2) the removal of the random individual errors contained therein. To solve these tasks, we assign polarities to automatically recognized opinion phrases in the reviews and then check for divergence between rating and text polarity. The novelty of our approach is that we improve the quality of user-generated data by excluding error-prone reviews on German physician rating websites from the average ratings.
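To illustrate the text–rating consistency check described in the abstract, the following minimal Python sketch compares the averaged polarity of recognized opinion phrases with the polarity implied by the numeric rating. It is only an illustration of the general idea: the function names, the lexicon, the grade scale (German school grades, 1 = best to 6 = worst), and the divergence threshold are assumptions, not taken from the paper.

```python
# Hypothetical sketch of a text-rating consistency check.
# All names, the grade scale, and the threshold are illustrative assumptions.

def text_polarity(opinion_phrases, lexicon):
    """Average polarity of the recognized opinion phrases, in [-1, 1].
    `lexicon` maps a phrase to a polarity score (e.g., from a sentiment resource)."""
    scores = [lexicon[p] for p in opinion_phrases if p in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

def grade_polarity(grade, best=1.0, worst=6.0):
    """Map a German school grade (1 = best, 6 = worst) linearly to [-1, 1]."""
    return 1.0 - 2.0 * (grade - best) / (worst - best)

def is_error_prone(opinion_phrases, grade, lexicon, threshold=1.0):
    """Flag a review whose text polarity and rating polarity diverge too much."""
    return abs(text_polarity(opinion_phrases, lexicon) - grade_polarity(grade)) > threshold

# Example: clearly positive phrases combined with the worst grade -> flagged as error-prone
lexicon = {"sehr kompetent": 0.8, "nimmt sich Zeit": 0.6}
print(is_error_prone(["sehr kompetent", "nimmt sich Zeit"], grade=6.0, lexicon=lexicon))  # True
```

Reviews flagged this way would then be excluded before computing a physician's average rating, which is the data-quality improvement the paper targets.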

Keywords

Data quality improvement · Error-prone review detection · Text-rating inconsistency

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Michaela Geierhos (1) – Email author
  • Frederik S. Bäumer (1)
  • Sabine Schulze (1)
  • Valentina Stuß (1)

  1. Heinz Nixdorf Institute, University of Paderborn, Paderborn, Germany
