Automating Extremism: Mapping the Affective Roles of Artificial Agents in Online Radicalization

Chapter in: The Palgrave Handbook of Malicious Use of AI and Psychological Security

Abstract

This chapter argues that social media robots, better known as “bots,” are becoming a formative tool of online radicalization. Beyond pushing targeted, personalized content that users perceive as more persuasive, algorithmic advances in “affect recognition” have given artificially intelligent agents the ability to exploit the emotional states of social media users. On encrypted social media channels, such bots are especially useful to extremist groups that lack the human and financial resources to mount large-scale psychological warfare campaigns. As newer generations of social bots become better at reading and responding to human emotions and, in turn, grow more anthropomorphic, we argue that these automated headhunters will play a dominant role in online radicalization. The chapter undertakes a qualitative study of the affective role of conversational AI in establishing emotional relationships with potential recruits. We also assess current efforts by Western security agencies, social media companies, and academic researchers to counter the online radicalization strategies of extremist organizations. Our chapter points to the need for greater cross-platform, multidisciplinary research; the study of bots operating in foreign languages; and the detection of extremist content in multimodal forms such as memes, music, videos, and selfies.


Notes

  1. A prime example of distributed dissent (although not “physically” violent) is “Operation Payback” in 2010. In retaliation for Visa and Mastercard blocking payments to WikiLeaks, the hacker collective Anonymous launched a series of distributed denial-of-service (DDoS) attacks against the credit card companies’ websites (Sauter, 2014). Unaffiliated hackers from around the globe, using a botnet, flooded the Visa and Mastercard sites with so much traffic that their respective servers shut down.


Acknowledgments

This study is part of the project “Emotional AI in Cities: Cross Cultural Lessons from UK and Japan on Designing for an Ethical Life,” funded by the JST-UKRI Joint Call on Artificial Intelligence and Society (2019), Grant No. JPMJRX19H6. www.ethikal.ai

Corresponding author
Correspondence to Peter Mantello.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

Cite this chapter

Mantello, P., Ho, T.M., Podoletz, L. (2023). Automating Extremism: Mapping the Affective Roles of Artificial Agents in Online Radicalization. In: Pashentsev, E. (ed.) The Palgrave Handbook of Malicious Use of AI and Psychological Security. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-031-22552-9_4
