
Cheating in Ranking Systems

Abstract

Consider a software application that pays a commission fee to be sold on an on-line platform (e.g., Google Play). The sales depend on the applications’ customer rankings. Therefore, developers have an incentive to dishonestly promote their application’s ranking, e.g., by faking positive customer reviews. The platform detects dishonest behavior (cheating) with some probability, and then decides whether to ban the application. We provide an analysis and find the equilibrium behaviors of both the application (cheat or not) and the platform (setting the commission fee). We provide insights into how the platform’s detection accuracy affects the incentives of the application’s developers.


Figs. 1–4 (figures omitted).

Notes

  1. Statista: The Statistics Portal https://www.statista.com/statistics/276623/number-of-apps-available-in-leading-app-stores/.

  2. An application is an experience good: users assume that highly rated apps are ones with which other users have had a positive experience. If an app receives a high rank by cheating, the user may be disappointed by his experience with the app.

  3. The platform maximizes its expected utility. The technicalities are characterized in Proposition 3 in the “Appendix”.


Acknowledgements

We wish to thank Christopher Thomas Ryan, Yair Tauman, Richard Zeckhauser, the journal editor—Lawrence J. White—and the anonymous reviewers for their helpful suggestions.

Author information


Correspondence to Artyom Jelnov.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Proof of Proposition 1

Consider first an equilibrium where P chooses \(\bar{r}=1\). Then c is the superior action for A. If P chooses \({\hat{b}}\), its payoff is therefore \(\gamma f r-w\); if P chooses b, its payoff is 0. Hence the platform prefers \({\hat{b}}\) to b for every \(d \le r \le 1\) if \(w<\gamma f d\).

Next, consider \(\bar{r}=d\). Observe that for \(\alpha >0\), pure \({\hat{c}}\) is not an equilibrium in this case: if A chooses pure \({\hat{c}}\), then with positive probability it obtains a rating above d and a false signal s is sent to P, so b is not the platform's best reply.

If A chooses c and P (following s) chooses pure b, A’s expected utility is \(\frac{\gamma (1-f)\beta (1+d)}{2}\). If A does not cheat and if P (following s and \(r>d\)) chooses pure b, A’s expected utility is \(\frac{\gamma (1-f)}{2}[d+(1-\alpha )(1+d)]\). Thus, A prefers c to \({\hat{c}}\) for \(\beta >\frac{d+(1-\alpha )(1+d)}{1+d}\). If A cheats with certainty, P prefers to ban for each \(d \le r \le 1\) if \(\gamma f <w\).

Consider next \(d<\bar{r}<1\). A is indifferent between c and \({\hat{c}}\) for

$$\begin{aligned} \frac{\gamma (1-f)}{1-d}\left[\int _{d}^{\bar{r}} r \,dr+\beta \int _{\bar{r}}^1 r \,dr\right]=\gamma (1-f)\left[\int _0^{\bar{r}} r \,dr+(1-\alpha )\int _{\bar{r}}^1 r \,dr\right], \end{aligned}$$

which simplifies to

$$\begin{aligned} \bar{r}=\sqrt{\frac{(1-\alpha )(1-d)-\beta +d^2}{d-\beta +(1-\alpha )(1-d)}}. \end{aligned}$$
(1)

By (1), for \(\beta <\frac{(1-\alpha )(1-d^2)+d^2}{1+d}\), \(d<\bar{r}<1\).
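The algebra behind (1) can be sanity-checked numerically. The sketch below uses illustrative parameter values (assumed, not from the paper) that satisfy \(\beta <\frac{(1-\alpha )(1-d^2)+d^2}{1+d}\), and verifies that A is indeed indifferent between c and \({\hat{c}}\) at the threshold \(\bar{r}\):

```python
from math import sqrt, isclose

# Numerical sanity check of threshold (1).  Parameter values are illustrative
# (assumed, not from the paper) and satisfy beta < ((1-alpha)(1-d^2)+d^2)/(1+d).
alpha, beta, d = 0.3, 0.2, 0.2

r_bar = sqrt(((1 - alpha)*(1 - d) - beta + d**2)
             / (d - beta + (1 - alpha)*(1 - d)))
assert d < r_bar < 1                       # interior threshold, as claimed

# A's expected utilities at r_bar (common factor gamma*(1-f) dropped from both):
u_cheat  = ((r_bar**2 - d**2)/2 + beta*(1 - r_bar**2)/2) / (1 - d)
u_honest = r_bar**2/2 + (1 - alpha)*(1 - r_bar**2)/2
assert isclose(u_cheat, u_honest)          # A is indifferent between c and c-hat
```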

Let A choose c with probability \(P_c\). Given alert s and rating \(r>d\), let \(P(c|s)\) be P's belief that A cheated:

$$\begin{aligned} P(c|s)=\frac{P_c(1-\beta )\frac{r-d}{1-d}}{P_c(1-\beta )\frac{r-d}{1-d}+(1-P_c)\alpha r}. \end{aligned}$$
(2)

Following alert s, P is indifferent between b and \({\hat{b}}\) at the threshold rating \(\bar{r}\) iff

$$\begin{aligned} \gamma f\bar{r}-wP(c|s)+v(1-P(c|s))=0, \end{aligned}$$
(3)

and by (2), this is equivalent to

$$\begin{aligned} P_c=\frac{\alpha \bar{r}(\gamma f \bar{r}+v)(1-d)}{\alpha \bar{r}(\gamma f \bar{r}+v)(1-d)+(1-\beta )(w-\gamma f \bar{r})(\bar{r}-d)}. \end{aligned}$$
(4)

By (4), \(0<P_c<1\) for \(w>\gamma f \bar{r}\).
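As a numerical illustration of (4), the sketch below uses assumed parameter values satisfying \(w>\gamma f \bar{r}\), with \(\bar{r}\) set to the approximate threshold that (1) yields for these values:

```python
# Illustrative parameters (assumed, not from the paper); w > gamma*f*r_bar is required.
gamma, f, v, w = 1.0, 0.3, 0.1, 0.5
alpha, beta, d = 0.3, 0.2, 0.2
r_bar = 0.845                      # approximate threshold from (1) for these values

num = alpha * r_bar * (gamma*f*r_bar + v) * (1 - d)          # numerator of (4)
P_c = num / (num + (1 - beta) * (w - gamma*f*r_bar) * (r_bar - d))
assert 0 < P_c < 1                 # interior mixing, as claimed for w > gamma*f*r_bar
```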

By substituting (2) into (3), one can verify that for sufficiently high w the left-hand side of (3) decreases in \(\bar{r}\). For \(r<\bar{r}\) the platform does not ban, and for \(r>\bar{r}\) it bans, as a threshold strategy requires. \(\square \)

Proof of Corollary 1

Since the conditions of Part 3 of Proposition 1 hold, the probability of cheating and the banning threshold are given by (4) and (1), respectively. The results follow directly from (4) and (1). \(\square \)

Proposition 3

  1. If \(\frac{2w}{\gamma (1+d)}< f\), then in equilibrium the expected utility of P is \(\frac{\gamma f(1+d)}{2}-w\).

  2. If \(\frac{2w}{\gamma (1+d)}> f\) and \(\beta >\frac{d+(1-\alpha )(1+d)}{1+d}\), then in equilibrium the expected utility of P is \(\beta (\frac{\gamma f(1+d)}{2}-w)\).

  3. If w is sufficiently high and \(\beta <\frac{(1-\alpha )(1-d^2)+d^2}{1+d}\), then in equilibrium the expected utility of P is

    $$\begin{aligned}&(1-P_c)[\frac{\gamma f}{2}[\bar{r}^2+(1-\alpha )(1-\bar{r}^2)]+v[\bar{r}+(1-\alpha )(1-\bar{r})]]\\&\qquad +\frac{P_c}{1-d}[\frac{\gamma f}{2} [\bar{r}^2-d^2+\beta (1-\bar{r}^2)]-w[\bar{r}-d+\beta (1-\bar{r})]], \end{aligned}$$

    where \(P_c\) and \(\bar{r}\) are given by (4) and (1).

Proof

Directly from Proposition 1 and Fig. 1. \(\square \)

Proof of Proposition 2

Consider first an equilibrium where P chooses pure \({\hat{b}}\). Then c is the superior action for A. The expected utility of P in this case is \(\gamma f-w\); if P chooses b, its payoff is 0. Hence the platform prefers \({\hat{b}}\) to b for \(w< \gamma f\).

Next, consider that P, following s, chooses pure b. Observe that for \(\alpha (\rho )>0\), pure \({\hat{c}}\) is not an equilibrium in this case: if A chooses pure \({\hat{c}}\), then with positive probability it obtains the rating 1 and a false signal s is sent to P, so b is not the platform's best reply.

If A chooses c and if P (following s) chooses pure b, A’s expected utility is \(\gamma (1-f)\beta (\rho )\). If A does not cheat and if P (following s) chooses pure b, A’s expected utility is \(\gamma (1-f)[\rho (1-l(\rho ))+l(\rho )(1-\alpha (\rho ))]\). Thus, A prefers c to \({\hat{c}}\) for \(\beta (\rho )>\rho -\rho l(\rho )+l(\rho )-\alpha (\rho )l(\rho )\).

Let A choose c with probability \(P_c\). Similar to the proof of Proposition 1,

$$\begin{aligned} P_c=\frac{\alpha (\rho )l(\rho )(\gamma f+v)}{\alpha (\rho )l(\rho )(\gamma f+v)+(1-\beta (\rho ))(w-\gamma f)}. \end{aligned}$$
(5)

By (5), \(0<P_c<1\) for \(w>\gamma f\).
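A quick numerical check of (5), with assumed values of the payoff parameters (satisfying \(w>\gamma f\)) and of \(\alpha (\rho )\), \(l(\rho )\), and \(\beta (\rho )\) at a fixed \(\rho\) (none taken from the paper):

```python
# Assumed payoff parameters; w > gamma*f is required for interior mixing.
gamma, f, v, w = 1.0, 0.3, 0.1, 0.5
# Assumed values of alpha(rho), l(rho), beta(rho) at some fixed rho.
alpha_rho, l_rho, beta_rho = 0.3, 0.6, 0.1

num = alpha_rho * l_rho * (gamma*f + v)                  # numerator of (5)
P_c = num / (num + (1 - beta_rho) * (w - gamma*f))
assert 0 < P_c < 1                                       # as claimed for w > gamma*f
```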

Let \(P_b\) be a probability with which P bans the application, following alert s. A is indifferent between c and \({\hat{c}}\) for

$$\begin{aligned} \gamma (1-f)[1-(1-\beta (\rho ))P_b]=\gamma (1-f)[(1-l(\rho )) \rho +l(\rho )(1-\alpha (\rho )P_b)], \end{aligned}$$

which simplifies to

$$\begin{aligned} P_b=\frac{(1-\rho )(1-l(\rho ))}{1-\beta (\rho )-\alpha (\rho )l(\rho )}. \end{aligned}$$
(6)

By (6), \(0<P_b<1\) for \(l(\rho )-\alpha (\rho )l(\rho )+\rho - \rho l(\rho )>\beta (\rho )\). \(\square \)
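The mixed equilibrium of Proposition 2 can likewise be checked numerically. The sketch below fixes illustrative values of \(\rho\), \(l(\rho )\), \(\alpha (\rho )\), and \(\beta (\rho )\) (assumed, not from the paper) that satisfy \(l(\rho )-\alpha (\rho )l(\rho )+\rho -\rho l(\rho )>\beta (\rho )\), and verifies both the ban probability (6) and A's indifference condition:

```python
from math import isclose

# Assumed values of rho, l(rho), alpha(rho), beta(rho) at a fixed rho
# (illustrative; they satisfy l - alpha*l + rho - rho*l > beta).
rho, l, alpha, beta = 0.5, 0.6, 0.3, 0.1

P_b = (1 - rho)*(1 - l) / (1 - beta - alpha*l)           # ban probability (6)
assert 0 < P_b < 1

# A's indifference between c and c-hat (common factor gamma*(1-f) dropped):
u_cheat  = 1 - (1 - beta)*P_b
u_honest = (1 - l)*rho + l*(1 - alpha*P_b)
assert isclose(u_cheat, u_honest)
```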

Proof of Corollary 2

Since the conditions of Part 3 of Proposition 2 hold, the probabilities of cheating and of banning are given by (5) and (6), respectively. The results follow directly from (5) and (6). \(\square \)

Proof of Corollary 3

Since the conditions of Part 3 of Proposition 2 hold, the probability of cheating is given by (5). The result follows directly from (5), together with \(\frac{\partial l(\rho )}{\partial \rho }>0\) and \(\frac{\partial \beta (\rho )}{\partial \rho }\ge 0\). \(\square \)


Cite this article

Dery, L., Hermel, D. & Jelnov, A. Cheating in Ranking Systems. Rev Ind Organ 58, 303–320 (2021). https://doi.org/10.1007/s11151-020-09754-2


Keywords

  • Manipulation
  • Ranking fraud
  • Ranking systems
  • Ratings