Characterizing the dynamics and evolution of incentivized online reviews on Amazon

  • Soheil Jamshidi
  • Reza Rejaie
  • Jun Li
Original Article

Abstract

During the past few years, sellers have increasingly offered discounted or free products to selected reviewers on e-commerce platforms in exchange for their reviews. Such incentivized (and often very positive) reviews can improve the rating of a product, which in turn sways other users’ opinions about it. Despite their importance, the prevalence, characteristics, and influence of incentivized reviews on a major e-commerce platform have not been systematically and quantitatively studied. This paper examines the problem of detecting and characterizing incentivized reviews in two primary categories of Amazon products. We describe a new method to identify explicitly incentivized reviews (EIRs) and then collect a few datasets to capture an extensive collection of EIRs along with their associated products and reviewers. We show that key features of EIRs and normal reviews exhibit different characteristics. Furthermore, we illustrate how the prevalence of EIRs has evolved and how it was affected by Amazon’s ban. Our examination of the temporal pattern of submitted reviews for sample products reveals promotional campaigns by the corresponding sellers and their effectiveness in attracting other users. We also demonstrate that a classifier trained on EIRs (without explicit keywords) and normal reviews can accurately detect other EIRs as well as implicitly incentivized reviews. Finally, we explore the current state of explicitly incentivized reviews on Amazon. Overall, this analysis sheds light on the impact of EIRs on Amazon products and users.
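To make the detection pipeline described above concrete, the following is a minimal sketch (not the authors' implementation) of the two ideas the abstract mentions: flagging EIRs by matching explicit incentive-disclosure phrases, and then training a text classifier on reviews with those phrases stripped so the model does not rely on the explicit keywords. The phrase list, the pandas DataFrame with a 'text' column, and the TF-IDF plus logistic-regression model are assumptions made for illustration only.

```python
# Illustrative sketch, not the paper's method: keyword-based EIR labeling
# followed by a keyword-free text classifier.
import re

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical disclosure phrases of the kind sellers ask reviewers to include.
DISCLOSURE_PATTERNS = [
    r"in exchange for (my|an) honest review",
    r"received (this|the) product (for free|at a discount)",
    r"discounted (price )?for (my|an) (honest|unbiased) review",
]
DISCLOSURE_RE = re.compile("|".join(DISCLOSURE_PATTERNS), re.IGNORECASE)


def label_eir(text: str) -> bool:
    """Return True if the review contains an explicit incentive disclosure."""
    return bool(DISCLOSURE_RE.search(text))


def strip_disclosure(text: str) -> str:
    """Remove the disclosure phrase so the classifier cannot key on it."""
    return DISCLOSURE_RE.sub(" ", text)


def train_eir_classifier(reviews: pd.DataFrame):
    """Train a bag-of-words classifier on a DataFrame with a 'text' column."""
    reviews = reviews.copy()
    reviews["is_eir"] = reviews["text"].map(label_eir)
    reviews["clean_text"] = reviews["text"].map(strip_disclosure)

    X_train, X_test, y_train, y_test = train_test_split(
        reviews["clean_text"], reviews["is_eir"], test_size=0.2, random_state=0
    )
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=5),
        LogisticRegression(max_iter=1000),
    )
    model.fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
    return model
```

Stripping the disclosure phrases before training mirrors the abstract's point that a classifier trained without the explicit keywords can still separate incentivized from normal reviews, which is what allows it to generalize to implicitly incentivized reviews.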

Keywords

Incentivized online reviews · Machine learning · Modeling · Amazon · Online review

Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant Nos. CNS-1564348 and CHS-1551817. We gratefully acknowledge the support of Intel Corporation for providing access to the Intel AI DevCloud platform used in this work.

Copyright information

© Springer-Verlag GmbH Austria, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Computer and Information Science, University of Oregon, Eugene, USA
