During the past few years, sellers have increasingly offered discounted or free products to selected reviewers on e-commerce platforms in exchange for reviews. Such incentivized (and often very positive) reviews can improve a product's rating, which in turn sways other users' opinions about the product. Despite their importance, the prevalence, characteristics, and influence of incentivized reviews on a major e-commerce platform have not been systematically and quantitatively studied. This paper examines the problem of detecting and characterizing incentivized reviews in two primary categories of Amazon products. We describe a new method to identify explicitly incentivized reviews (EIRs) and then collect several datasets to capture an extensive collection of EIRs along with their associated products and reviewers. We show that the key features of EIRs and normal reviews exhibit different characteristics. Furthermore, we illustrate how the prevalence of EIRs has evolved and been affected by Amazon's ban. Our examination of the temporal pattern of submitted reviews for sample products reveals promotional campaigns by the corresponding sellers and their effectiveness in attracting other users. We also demonstrate that a classifier trained on EIRs (with explicit keywords removed) and normal reviews can accurately detect other EIRs as well as implicitly incentivized reviews. Finally, we explore the current state of explicit reviews on Amazon. Overall, this analysis sheds light on the impact of EIRs on Amazon products and users.
Our manual inspection was conducted in multiple rounds as follows: we first select all reviews that contain our target keywords (e.g., free, discount) to create a pool, and then manually inspect 100 randomly sampled reviews from this pool in each round. Because EIRs tend to contain variants of the same disclaimer sentence, manual inspection quickly identifies such signatures, which we then use to automatically flag reviews in the pool that contain similar signatures. Examining these flagged reviews also reveals false positives.
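The keyword-pool and signature-matching procedure can be sketched as follows. This is a minimal illustration: the keyword list and the disclaimer signatures below are assumptions for demonstration, not the authors' exact patterns.

```python
import re

# Hypothetical target keywords and disclaimer signatures; the paper's
# full lists are not reproduced here.
TARGET_KEYWORDS = ("free", "discount")
SIGNATURES = (
    r"in exchange for (my|an) honest review",
    r"received (this|the) product (for free|at a discount)",
)

def build_pool(reviews):
    """Keep only reviews that mention at least one target keyword."""
    return [r for r in reviews
            if any(k in r.lower() for k in TARGET_KEYWORDS)]

def match_signatures(pool):
    """Flag pooled reviews containing a known disclaimer signature."""
    patterns = [re.compile(s, re.IGNORECASE) for s in SIGNATURES]
    return [r for r in pool if any(p.search(r) for p in patterns)]

reviews = [
    "Great phone case, works as described.",
    "I received this product for free in exchange for my honest review.",
    "Got a discount code but the charger broke after a week.",
]
pool = build_pool(reviews)        # 2 reviews mention a keyword
flagged = match_signatures(pool)  # 1 review carries a disclaimer signature
```

Reviews that enter the pool but match no signature (like the third review above) are the candidates whose manual inspection reveals false positives.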
Amazon provides the date when a product becomes available for some product categories. However, we frequently observe cases where a product page lists multiple versions that became available at different times but share the same pool of reviews. To deal with the resulting ambiguity in attributing a specific review to a particular version, we use the time between the first and last reviews across all versions of a product.
Amazon appears to rely on a weighted averaging method (Bishop 2015) to calculate the overall rating of a product based on factors such as the recency of a review, its helpfulness, and whether it is associated with a verified purchase. Since the details of Amazon's rating method are unknown, we simply rely on a linear moving average of all ratings to determine the overall rating of each product or reviewer over time.
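The linear moving average used in place of Amazon's undisclosed weighting can be sketched as follows, assuming ratings are supplied in chronological order (the ratings shown are hypothetical):

```python
def running_average(ratings):
    """Overall rating after each review: the plain (unweighted) mean of
    all ratings submitted up to that point, in chronological order."""
    averages, total = [], 0.0
    for i, rating in enumerate(ratings, start=1):
        total += rating
        averages.append(total / i)
    return averages

# Hypothetical product: two early 5-star EIRs, then lower organic ratings.
history = running_average([5, 5, 1, 3])
# history == [5.0, 5.0, 3.66..., 3.5]
```

Unlike Amazon's scheme, this average gives every rating equal weight regardless of recency, helpfulness, or verified-purchase status.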
We consider character-based n-grams because they have been shown to be more robust, capturing spelling differences (Kanaris et al. 2007), and more effective in authorship attribution (writer identification) (Koppel et al. 2011): they capture some lexical and syntactic content, and even elements of style, since punctuation and white space contribute to the features.
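A minimal sketch of character n-gram extraction over raw text, so that punctuation and white space contribute to the features (n = 3 here is illustrative; the n and feature weighting used in the paper are not reproduced):

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Count overlapping character n-grams over the raw text,
    including punctuation and white space."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

feats = char_ngrams("free!!")
# grams: 'fre', 'ree', 'ee!', 'e!!'
```

These counts would then feed a standard text classifier; note how the exclamation marks survive as features, letting the model pick up stylistic cues that word-level tokens discard.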
aboutAmazon.com (2016) Update on customer reviews. https://blog.aboutamazon.com/innovation/update-on-customer-reviews
Akoglu L, Chandy R, Faloutsos C (2013) Opinion fraud detection in online reviews by network effects. In: Proceedings of the ICWSM
Amazon.com. (2018) Community guidelines, https://www.amazon.com/gp/help/customer/display.html?nodeid=14279631
Amazon.com. (2018) About amazon verified purchase reviews. https://www.amazon.com/gp/help/customer/display.html/?nodeId=201145140
Bishop T (2015) Amazon changes its influential formula for calculating product ratings. GeekWire. https://www.geekwire.com/2015/amazon-changes-its-influential-formula-for-calculating-product-ratings/. Accessed 6 May 2019
Burtch G, Hong Y, Bapna R, Griskevicius V (2017) Stimulating online reviews by combining financial incentives and social norms. Manag Sci 64:2065–2082
d’Agostino RB (1971) An omnibus test of normality for moderate and large size samples. Biometrika 58(2):341–348
Gunning R (1952) The technique of clear writing. McGraw-Hill, New York
Jamshidi S, Rejaie R, Li J (2018) Trojan horses in amazon’s castle: Understanding the incentivized online reviews. In: IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM), pp 335–342
Jamshidi S, Rejaie R, Li J (2018) Characterizing the incentivized online reviews. Technical report, University of Oregon, 2016–2018. https://www.cs.uoregon.edu/Reports/TR-2018-001.pdf. Accessed 6 May 2019
Jindal N, Liu B (2008) Opinion spam and analysis. In: Proceedings of the international conference on web search and data mining
Jindal N, Liu B, Lim EP (2010) Finding unusual review patterns using unexpected rules. In: Proceedings of the ACM international conference on information and knowledge management
Kanaris I, Kanaris K, Houvardas I, Stamatatos E (2007) Words versus character n-grams for anti-spam filtering. Int J Artif Intell Tools 16(06):1047–1067
Kim SM, Pantel P, Chklovski T, Pennacchiotti M (2006) Automatically assessing review helpfulness. In: Proceedings of the ACL conference on empirical methods in natural language processing
Koppel M, Schler J, Argamon S (2011) Authorship attribution in the wild. Lang Resour Eval 45(1):83–94
Kruskal WH, Wallis WA (1952) Use of ranks in one-criterion variance analysis. J Am Stat Assoc 47(260):583–621
Li FH, Huang M, Yang Y, Zhu X (2011) Learning to identify review spam. In: Twenty-second international joint conference on artificial intelligence
Lim P, Nguyen V, Jindal N, Liu B, Lauw H (2010) Detecting product review spammers using rating behaviors. In: Proceedings of ACM international conference on information and knowledge management
Liu J, Cao Y, Lin CY, Huang Y, Zhou M (2007) Low-quality product review detection in opinion summarization. In: Proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL)
Mudambi S (2010) What makes a helpful online review? A study of customer reviews on amazon.com. MIS Q 34:185–200
Ott M, Choi Y, Cardie C, Hancock JT (2011) Finding deceptive opinion spam by any stretch of the imagination. In: Proceedings of the ACL human language technologies
Petrescu M, O’Leary K, Goldring D, Mrad SB (2018) Incentivized reviews: promising the moon for a few stars. J Retail Consum Serv 41:288–295
Qiao D, Lee SY, Whinston A, Wei Q (2017) Incentive provision and pro-social behaviors. In: Proceedings of the Hawaii international conference on system sciences
ReviewMeta.com. (2016) Analysis of 7 million amazon reviews. https://reviewmeta.com/blog/analysis-of-7-million-amazon-reviews-customers-who-receive-free-or-discounted-item-much-more-likely-to-write-positive-review/. Accessed 6 May 2019
Ribeiro MT, Singh S, Guestrin C (2016) Why should i trust you?: explaining the predictions of any classifier. In: Proceedings of the international conference on knowledge discovery and data mining, pp 1135–1144
Shyong K, Frankowski D, Riedl J et al (2006) Do you trust your recommendations? An exploration of security and privacy issues in recommender systems. In: Emerging trends in information and communication security
Wang J, Ghose A, Ipeirotis P (2012) Bonus, disclosure, and choice: what motivates the creation of high-quality paid reviews? In: Proceedings of the international conference on information systems
Xie Z, Zhu S (2015) Appwatcher: unveiling the underground market of trading mobile app reviews. In: Proceedings of the ACM conference on security & privacy in wireless and mobile networks
This material is based upon work supported by the National Science Foundation under Grants Nos. CNS-1564348 and CHS-1551817. We gratefully acknowledge the support of Intel Corporation for giving access to the Intel AI DevCloud platform used for this work.
Jamshidi, S., Rejaie, R. & Li, J. Characterizing the dynamics and evolution of incentivized online reviews on Amazon. Soc. Netw. Anal. Min. 9, 22 (2019). https://doi.org/10.1007/s13278-019-0563-0
- Incentivized online reviews
- Machine learning
- Online review