Characterizing the dynamics and evolution of incentivized online reviews on Amazon

Abstract

Over the past few years, sellers have increasingly offered discounted or free products to selected reviewers on e-commerce platforms in exchange for reviews. Such incentivized (and often very positive) reviews can improve the rating of a product, which in turn sways other users’ opinions about it. Despite their importance, the prevalence, characteristics, and influence of incentivized reviews on a major e-commerce platform have not been systematically and quantitatively studied. This paper examines the problem of detecting and characterizing incentivized reviews in two primary categories of Amazon products. We describe a new method to identify explicitly incentivized reviews (EIRs) and then collect several datasets to capture an extensive collection of EIRs along with their associated products and reviewers. We show that the key features of EIRs and normal reviews exhibit different characteristics. Furthermore, we illustrate how the prevalence of EIRs has evolved and been affected by Amazon’s ban. Our examination of the temporal pattern of submitted reviews for sample products reveals promotional campaigns by the corresponding sellers and their effectiveness in attracting other users. We also demonstrate that a classifier trained on EIRs (with explicit keywords removed) and normal reviews can accurately detect other EIRs as well as implicitly incentivized reviews. Finally, we explore the current state of explicitly incentivized reviews on Amazon. Overall, this analysis sheds light on the impact of EIRs on Amazon products and users.

Notes

  1.

    https://www.amazon.com/gp/help/customer/display.html?nodeid=14279631

  2.

    https://reviewmeta.com/blog/analysis-of-7-million-amazon-reviews-customers-who-receive-free-or-discounted-item-much-more-likely-to-write-positive-review/

  3.

    https://blog.aboutamazon.com/innovation/update-on-customer-reviews

  4.

    https://www.amazon.com/gp/help/customer/display.html/?nodeId=201145140

  5.

    https://www.amazon.com/gp/bestsellers/

  6.

    https://www.amazon.com/gp/help/customer/display.html?nodeid=14279631

  7.

    Our manual inspection process was conducted in multiple rounds as follows. We first select all reviews that contain our target keywords (e.g., free, discount) to create a candidate pool. In each round, we then select a random sample of 100 reviews from this pool for manual inspection. Because EIRs tend to contain variants of the same disclaimer sentence, our manual inspection quickly identifies such signatures, which we then use to automatically flag other reviews in the pool that contain similar signatures. Examining these reviews also reveals false-alarm cases.
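As a sketch, the two-step filtering described in this note can be expressed as follows. The keyword list and disclaimer signatures below are illustrative stand-ins, not the actual patterns distilled during our inspection:

```python
import re

# Hypothetical keyword list and disclaimer signatures; illustrative
# stand-ins, not the actual patterns used in the study.
TARGET_KEYWORDS = ("free", "discount")
SIGNATURES = [
    r"in exchange for (my|an) honest review",
    r"received (this|the) product (for free|at a discount)",
]

def build_candidate_pool(reviews):
    """Step 1: keep only reviews containing a target keyword."""
    return [r for r in reviews if any(k in r.lower() for k in TARGET_KEYWORDS)]

def match_signatures(pool):
    """Step 2: flag pooled reviews that match a known disclaimer signature."""
    compiled = [re.compile(s, re.IGNORECASE) for s in SIGNATURES]
    return [r for r in pool if any(p.search(r) for p in compiled)]

reviews = [
    "Great blender, works as advertised.",
    "I received this product for free in exchange for my honest review.",
    "Bought it on discount during a sale; decent value.",
]
pool = build_candidate_pool(reviews)
flagged = match_signatures(pool)
print(flagged)  # only the review with an explicit disclaimer
```

Note that the third review enters the keyword pool but survives the signature pass, which is exactly the kind of false-alarm case the manual rounds are meant to surface.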

  8.

    https://blog.aboutamazon.com/innovation/update-on-customer-reviews

  9.

    https://pypi.org/project/textstat/

  10.

    For some product categories, Amazon provides the date when a product became available. However, we frequently observe cases where multiple versions of a product appear on the same product page, became available at different times, and share the same pool of reviews. To deal with this ambiguity in relating a specific review to a particular version of a product, we use the time between the first and last reviews across all versions of a product.
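With review dates pooled across versions, the span measure in this note reduces to a difference of extremes. The dates below are fabricated for illustration:

```python
from datetime import date

# Fabricated review dates, pooled across all versions listed on one
# product page (the study's actual data is not reproduced here).
review_dates = {
    "version-1": [date(2016, 3, 1), date(2016, 6, 15)],
    "version-2": [date(2017, 1, 10), date(2017, 9, 30)],
}

# Span between the first and last review across all versions.
all_dates = [d for dates in review_dates.values() for d in dates]
span_days = (max(all_dates) - min(all_dates)).days
print(span_days)
```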

  11.

    Amazon appears to rely on a weighted averaging method (Bishop 2015) to calculate the overall rating of a product based on factors such as the recency of a review, its helpfulness, and whether it is associated with a verified purchase. Since the details of Amazon’s rating method are unknown, we simply rely on a linear moving average of all ratings to determine the overall rating of each product or reviewer over time.
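A minimal sketch of this computation, interpreting the linear moving average as a plain cumulative mean over chronologically ordered ratings (an assumption on our part; Amazon's actual weighting is unknown):

```python
def running_average(ratings):
    """Cumulative mean of ratings in chronological order.

    A stand-in for the note's 'linear moving average'; Amazon's actual
    weighting (recency, helpfulness, verified purchase) is unknown.
    """
    total, averages = 0.0, []
    for count, rating in enumerate(ratings, start=1):
        total += rating
        averages.append(total / count)
    return averages

print(running_average([5, 5, 1, 4]))  # 5.0, 5.0, ~3.67, 3.75
```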

  12.

    https://www.nltk.org/

  13.

    We consider character-based n-grams because they are robust to spelling differences (Kanaris et al. 2007) and effective in authorship attribution (writer identification) (Koppel et al. 2011): by covering punctuation and white space, they capture some lexical content, some syntactic content, and even elements of style.
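For concreteness, overlapping character n-grams (including the punctuation and white space that carry stylistic signal) can be extracted as:

```python
def char_ngrams(text, n=3):
    """Overlapping character n-grams; punctuation and white space are
    kept, since they carry stylistic signal."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

print(char_ngrams("free!"))  # ['fre', 'ree', 'ee!']
```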

References

  1. aboutAmazon.com (2016) Update on customer reviews. https://blog.aboutamazon.com/innovation/update-on-customer-reviews

  2. Akoglu L, Chandy R, Faloutsos C (2013) Opinion fraud detection in online reviews by network effects. In: Proceedings of the ICWSM

  3. Amazon.com (2018) Community guidelines. https://www.amazon.com/gp/help/customer/display.html?nodeid=14279631

  4. Amazon.com (2018) About Amazon Verified Purchase reviews. https://www.amazon.com/gp/help/customer/display.html/?nodeId=201145140

  5. Bishop T (2015) Amazon changes its influential formula for calculating product ratings. GeekWire. https://www.geekwire.com/2015/amazon-changes-its-influential-formula-for-calculating-product-ratings/. Accessed 6 May 2019

  6. Burtch G, Hong Y, Bapna R, Griskevicius V (2017) Stimulating online reviews by combining financial incentives and social norms. Manag Sci 64:2065–2082

  7. d’Agostino RB (1971) An omnibus test of normality for moderate and large size samples. Biometrika 58(2):341–348

  8. Gunning R (1952) The technique of clear writing. McGraw-Hill, New York

  9. Jamshidi S, Rejaie R, Li J (2018) Trojan horses in Amazon’s castle: understanding the incentivized online reviews. In: IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM), pp 335–342

  10. Jamshidi S, Rejaie R, Li J (2018) Characterizing the incentivized online reviews. Technical report TR-2018-001, University of Oregon. https://www.cs.uoregon.edu/Reports/TR-2018-001.pdf. Accessed 6 May 2019

  11. Jindal N, Liu B (2008) Opinion spam and analysis. In: Proceedings of the international conference on web search and data mining

  12. Jindal N, Liu B, Lim EP (2010) Finding unusual review patterns using unexpected rules. In: Proceedings of the ACM international conference on information and knowledge management

  13. Kanaris I, Kanaris K, Houvardas I, Stamatatos E (2007) Words versus character n-grams for anti-spam filtering. Int J Artif Intell Tools 16(06):1047–1067

  14. Kim SM, Pantel P, Chklovski T, Pennacchiotti M (2006) Automatically assessing review helpfulness. In: Proceedings of the ACL conference on empirical methods in natural language processing

  15. Koppel M, Schler J, Argamon S (2011) Authorship attribution in the wild. Lang Resour Eval 45(1):83–94

  16. Kruskal WH, Wallis WA (1952) Use of ranks in one-criterion variance analysis. J Am Stat Assoc 47(260):583–621

  17. Li FH, Huang M, Yang Y, Zhu X (2011) Learning to identify review spam. In: Twenty-second international joint conference on artificial intelligence

  18. Lim P, Nguyen V, Jindal N, Liu B, Lauw H (2010) Detecting product review spammers using rating behaviors. In: Proceedings of ACM international conference on information and knowledge management

  19. Liu J, Cao Y, Lin CY, Huang, Y, Zhou M (2007) Low-quality product review detection in opinion summarization. In: Proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL)

  20. Mudambi S (2010) What makes a helpful online review? A study of customer reviews on Amazon.com. MIS Q 34:185–200

  21. Ott M, Choi Y, Cardie C, Hancock JT (2011) Finding deceptive opinion spam by any stretch of the imagination. In: Proceedings of the ACL human language technologies

  22. Petrescu M, O’Leary K, Goldring D, Mrad SB (2018) Incentivized reviews: promising the moon for a few stars. J Retail Consum Serv 41:288–295

  23. Qiao D, Lee SY, Whinston A, Wei Q (2017) Incentive provision and pro-social behaviors. In: Proceedings of the Hawaii international conference on system sciences

  24. ReviewMeta.com (2016) Analysis of 7 million Amazon reviews. https://reviewmeta.com/blog/analysis-of-7-million-amazon-reviews-customers-who-receive-free-or-discounted-item-much-more-likely-to-write-positive-review/. Accessed 6 May 2019

  25. Ribeiro MT, Singh S, Guestrin C (2016) Why should I trust you? Explaining the predictions of any classifier. In: Proceedings of the international conference on knowledge discovery and data mining, pp 1135–1144

  26. Shyong K, Frankowski D, Riedl J et al (2006) Do you trust your recommendations? An exploration of security and privacy issues in recommender systems. In: Emerging trends in information and communication security

  27. Wang J, Ghose A, Ipeirotis P (2012) Bonus, disclosure, and choice: what motivates the creation of high-quality paid reviews? In: Proceedings of the international conference on information systems

  28. Xie Z, Zhu S (2015) AppWatcher: unveiling the underground market of trading mobile app reviews. In: Proceedings of the ACM conference on security & privacy in wireless and mobile networks

Acknowledgements

This material is based upon work supported by the National Science Foundation under Grants Nos. CNS-1564348 and CHS-1551817. We gratefully acknowledge the support of Intel Corporation for giving access to the Intel AI DevCloud platform used for this work.

Author information

Corresponding author

Correspondence to Soheil Jamshidi.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Jamshidi, S., Rejaie, R. & Li, J. Characterizing the dynamics and evolution of incentivized online reviews on Amazon. Soc. Netw. Anal. Min. 9, 22 (2019). https://doi.org/10.1007/s13278-019-0563-0

Keywords

  • Incentivized online reviews
  • Machine learning
  • Modeling
  • Amazon
  • Online review