
CbI: Improving Credibility of User-Generated Content on Facebook

  • Sonu Gupta
  • Shelly Sachdeva
  • Prateek Dewan
  • Ponnurangam Kumaraguru
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11297)

Abstract

Online Social Networks (OSNs) have become a popular platform for sharing information. Fake news often spreads rapidly in OSNs, especially during news-making events, e.g., the earthquake in Chile (2010) and Hurricane Sandy in the USA (2012). A potential solution is to use machine learning techniques to assess the credibility of a post automatically, i.e., whether a person would consider the post believable or trustworthy. In this paper, we provide a fine-grained definition of credibility: we call a post credible if it is accurate, clear, and timely. Hence, we propose a system which computes the Accuracy, Clarity, and Timeliness (A-C-T) of a Facebook post, which in turn are used to rank the post for its credibility. We experiment with 1,056 posts created by 107 pages that claim to belong to the news category. We use a set of 152 features to train one classification model each for A, C, and T using supervised algorithms. We use the best-performing features and models to develop a RESTful API and a Chrome browser extension that rank posts for their credibility in real time. The random forest algorithm performed best, achieving ROC AUC of 0.916, 0.875, and 0.851 for A, C, and T respectively.
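To make the classification pipeline concrete, below is a minimal sketch, not the authors' implementation, of training one random forest model per A-C-T dimension and reporting ROC AUC with scikit-learn. The feature matrix and labels are random placeholders standing in for the 152 post features and the annotated dataset described above; only the dataset sizes (1,056 posts, 152 features) are taken from the abstract.

# Minimal sketch (assumptions noted in comments): one random-forest classifier
# per credibility dimension (Accuracy, Clarity, Timeliness), evaluated by ROC AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_posts, n_features = 1056, 152           # sizes quoted in the abstract
X = rng.random((n_posts, n_features))     # placeholder: real features are extracted per post
labels = {                                # placeholder binary A-C-T annotations
    "Accuracy":   rng.integers(0, 2, n_posts),
    "Clarity":    rng.integers(0, 2, n_posts),
    "Timeliness": rng.integers(0, 2, n_posts),
}

for dimension, y in labels.items():
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    scores = clf.predict_proba(X_test)[:, 1]   # probability of the positive class
    print(f"{dimension}: ROC AUC = {roc_auc_score(y_test, scores):.3f}")

In a deployment such as the RESTful API and Chrome extension mentioned above, the same per-dimension probabilities would be computed for an incoming post and combined into its credibility ranking.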

Keywords

Online social media · Facebook · Credibility


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Sonu Gupta (1) (Email author)
  • Shelly Sachdeva (2)
  • Prateek Dewan (3)
  • Ponnurangam Kumaraguru (3)

  1. Jaypee Institute of Information Technology, Noida, India
  2. National Institute of Technology Delhi, New Delhi, India
  3. Indraprastha Institute of Information Technology Delhi, New Delhi, India
