An empirical study of socialbot infiltration strategies in the Twitter social network

  • Carlos Freitas
  • Fabrício Benevenuto
  • Adriano Veloso
  • Saptarshi Ghosh
Original Article


Online social networks (OSNs) such as Twitter and Facebook have become a significant testing ground for Artificial Intelligence developers who build programs, known as socialbots, that imitate human users by automating their social network activities, such as forming social links and posting content. In particular, Twitter users have shown difficulty in distinguishing these socialbots from the human users in their social graphs. Socialbots are frequently effective in acquiring human users as followers and in exercising influence over them. While the success of socialbots is certainly a remarkable achievement for AI practitioners, their proliferation in the Twittersphere opens many possibilities for cybercrime, and motivates us to assess the characteristics and strategies that make socialbots most likely to succeed. To this end, we created 120 socialbot accounts on Twitter, each of which has a profile, follows other users, and generates tweets either by reposting others’ tweets or by producing its own synthetic tweets. We then employ a \(2^k\) factorial design experiment to quantify the infiltration performance of different socialbot strategies, and to examine the effectiveness of individual profile- and activity-related attributes of the socialbots. Our analysis is the first of its kind, and reveals which strategies make socialbots successful in the Twittersphere.
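In a \(2^k\) factorial design of the sort the abstract describes, each of the k attributes under study is assigned two levels (coded −1/+1), every one of the 2^k combinations is run, and a factor's main effect is the mean response at its high level minus the mean response at its low level. The sketch below illustrates this for k = 2 with invented factor names and made-up response values; it is not the paper's data or code, only a minimal example of the analysis technique.

```python
from itertools import product

# Hypothetical factors for a socialbot (illustrative names, not the paper's):
# e.g. posting activity level and profile gender, each coded -1 (low) / +1 (high).
factors = ["activity", "gender"]
runs = list(product([-1, 1], repeat=len(factors)))  # all 2^k level combinations

# Made-up response values: infiltration score (e.g. followers acquired) per run.
response = {(-1, -1): 10, (-1, 1): 14, (1, -1): 22, (1, 1): 30}

def main_effect(i):
    """Main effect of factor i: mean response at +1 minus mean response at -1."""
    high = [response[r] for r in runs if r[i] == 1]
    low = [response[r] for r in runs if r[i] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

for i, name in enumerate(factors):
    print(f"{name}: main effect = {main_effect(i)}")
```

With these invented numbers, the "activity" factor has a main effect of 14.0 and "gender" of 6.0, i.e. raising posting activity shifts the (hypothetical) infiltration score more than changing the profile gender does.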


Keywords: Twitter social network · Socialbots · Infiltration strategies · Factorial design experiment



Copyright information

© Springer-Verlag Wien 2016

Authors and Affiliations

  • Carlos Freitas (1)
  • Fabrício Benevenuto (1)
  • Adriano Veloso (1)
  • Saptarshi Ghosh (2)
  1. Computer Science Department, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
  2. Department of Computer Science and Technology, Indian Institute of Engineering Science and Technology Shibpur, Howrah, India
