
Detecting Suspicious Members in an Online Emotional Support Service

  • Yu Li
  • Dae Wook Kim
  • Junjie Zhang
  • Derek Doran
Conference paper
Part of the Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering book series (LNICST, volume 255)

Abstract

Online emotional support systems provide free support to individuals who experience stress, anxiety, and depression by bridging individuals (i.e., users) with a crowd of voluntary paraprofessionals. While most users legitimately seek mental support, others engage maliciously, attacking volunteers through trolling, flaming, bullying, spamming, and phishing. Besides harming the mental health of trained paraprofessionals, these suspicious activities threaten the long-term viability of the platform by discouraging new volunteers from joining and encouraging current volunteers to leave. To curtail suspicious users, we propose a novel system, TeaFilter, which effectively detects suspicious behaviors by integrating a collection of lightweight behavioral features. We have performed extensive experiments on real user data from 7 Cups, a leading online emotional support system. Experimental results demonstrate that our system achieves a detection rate of 77.8% at a low false positive rate of 1%.
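The abstract describes TeaFilter's overall approach: score each member from a set of lightweight behavioral features and report the detection rate at a fixed false positive rate. The sketch below illustrates that evaluation pattern with a random forest classifier; the feature names, synthetic data, and hyperparameters are illustrative assumptions, not the paper's actual feature set or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve

# Hypothetical lightweight behavioral features per member (illustrative only):
# messages sent, mean inter-message gap, share of chats ended by the listener,
# and number of distinct listeners contacted.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.poisson(20, n),          # messages sent
    rng.exponential(120, n),     # mean gap between messages (seconds)
    rng.uniform(0, 1, n),        # share of chats ended by the listener
    rng.poisson(3, n),           # distinct listeners contacted
])
y = rng.binomial(1, 0.05, n)     # 1 = suspicious member (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# A random forest over the behavioral features; hyperparameters are placeholders.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

# Read the detection rate (TPR) off the ROC curve at a 1% false positive rate,
# mirroring how the abstract reports detection rate at 1% FPR.
fpr, tpr, _ = roc_curve(y_te, scores)
tpr_at_1pct = np.interp(0.01, fpr, tpr)
print(f"Detection rate at 1% FPR: {tpr_at_1pct:.3f}")
```

On real labeled data, the same ROC-based readout lets one compare feature sets at the operating point the paper uses (TPR at a fixed 1% FPR) rather than by overall accuracy.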


Copyright information

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018

Authors and Affiliations

  • Yu Li (1)
  • Dae Wook Kim (2)
  • Junjie Zhang (1)
  • Derek Doran (1)
  1. Wright State University, Fairborn, USA
  2. Eastern Kentucky University, Richmond, USA
