Evaluative Patterns and Incentives in YouTube

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10540)

Abstract

Users of social media are not only producers and consumers of online content; they also evaluate each other’s content. Some social media platforms allow users to downvote or dislike the content posted by other users, posing the risk that users who receive dislikes might become more likely to turn inactive, especially if the disliked content is about a person. We analyzed data on more than 150,000 YouTube videos to understand how video impact and user incentives relate to the possibility of disliking user content. We processed the images associated with videos to identify faces and to quantify whether evaluating content related to people is connected to disliking patterns. We found that videos with faces on their images tend to receive fewer dislikes when posted by male users, but this effect is absent for female users. In contrast, videos with faces posted by female users attract more views and likes. Analyzing the probability that users become inactive, we find that receiving dislikes is associated with users turning inactive. This pattern is stronger when dislikes are given to videos with faces, showing that negative evaluations about people have a stronger association with user inactivity. Our results show that user evaluations in social media are a multi-faceted phenomenon that requires large-scale quantitative analyses, identifying the conditions under which users discourage other users from remaining active in social media.
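The inactivity analysis described above can be illustrated with a minimal sketch: compare the inactivity rate of users who received dislikes against those who did not. This is not the paper’s code (the paper used statistical models on real YouTube data); the records and field names below are hypothetical, and the numbers are toy data.

```python
# Illustrative sketch only; the per-user records and field names are
# hypothetical, not drawn from the paper's dataset.

def inactivity_rate(rows):
    """Fraction of users in `rows` who became inactive."""
    return sum(r["inactive"] for r in rows) / len(rows)

# Hypothetical per-user summaries: total dislikes received and whether
# the user eventually stopped posting videos.
users = [
    {"dislikes_received": 0, "inactive": False},
    {"dislikes_received": 0, "inactive": False},
    {"dislikes_received": 0, "inactive": True},
    {"dislikes_received": 3, "inactive": True},
    {"dislikes_received": 5, "inactive": True},
    {"dislikes_received": 1, "inactive": False},
]

disliked = [u for u in users if u["dislikes_received"] > 0]
not_disliked = [u for u in users if u["dislikes_received"] == 0]

print(inactivity_rate(disliked))      # 2/3 on this toy data
print(inactivity_rate(not_disliked))  # 1/3 on this toy data
```

A real analysis would, as the abstract indicates, condition on further covariates (e.g. whether disliked videos show faces) rather than compare raw rates.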

Keywords

Social psychology · Incentives · YouTube


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • David Garcia¹
  • Adiya Abisheva¹
  • Frank Schweitzer¹

  1. ETH Zurich, Zurich, Switzerland
