Seminar Users in the Arabic Twitter Sphere

  • Kareem Darwish
  • Dimitar Alexandrov
  • Preslav Nakov
  • Yelena Mejova
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10539)


We introduce the notion of “seminar users”, who are social media users engaged in propaganda in support of a political entity. We develop a framework that can identify such users with 84.4% precision and 76.1% recall. While our dataset is from the Arab region, omitting language-specific features has only a minor impact on classification performance, and thus our approach could work for detecting seminar users in other parts of the world and in other languages. We further explored a controversial political topic to observe the prevalence and potential potency of such users. In our case study, we found that 25% of the users engaged in the topic are in fact seminar users, and their tweets account for nearly a third of the on-topic tweets. Moreover, they are often successful in affecting mainstream discourse with coordinated hashtag campaigns.
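As a refresher on the evaluation metrics quoted above, precision and recall follow from the counts of true positives, false positives, and false negatives. The sketch below uses the standard definitions; the specific counts are illustrative only and are not the paper's confusion matrix (they are merely chosen so the resulting values land near the reported 84.4% precision and 76.1% recall).

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Standard definitions:
    precision = TP / (TP + FP)  -- share of flagged users that are truly seminar users
    recall    = TP / (TP + FN)  -- share of true seminar users that get flagged
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Illustrative counts only (NOT from the paper's evaluation):
p, r = precision_recall(tp=76, fp=14, fn=24)
print(f"precision={p:.3f}, recall={r:.3f}")
```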


Keywords: Seminar users · Astroturfing · Politics · Propaganda · Malicious users · Social media · Twitter · Social bots



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Kareem Darwish (1)
  • Dimitar Alexandrov (2)
  • Preslav Nakov (1)
  • Yelena Mejova (1)

  1. Qatar Computing Research Institute, HBKU, Doha, Qatar
  2. Sofia University, Sofia, Bulgaria
