The role of digital media in radicalization has received increasing attention from researchers, policymakers, the media, and the public (e.g., Neumann, 2019; Thompson, 2012; Alfano et al., 2018). For instance, the right-wing terrorist attack in Christchurch on 15 March 2019, which killed 50 men, women, and children at a mosque, was livestreamed on Facebook, and subsequent investigations revealed a strong radical online presence of the perpetrator (Battersby & Ball, 2019; Hutchinson, 2019). To prevent such acts, it is necessary to understand the role of digital media in radicalization.

Radicalization describes the development of attitudes (i.e., cognitive radicalization) and actions (i.e., behavioral radicalization) characterized by a rejection of relevant societal, legal, and political norms (Neumann, 2013). Attitudes and behaviors that oppose basic human rights and equality and promote a desire to abolish or replace them, for instance through violence, are radical or even extremist (Beelmann, 2020). Therefore, attitudes that idealize inequality (e.g., promulgating the superiority of a group because of its skin color or socioeconomic status) are radical, with violent extremism being one possible endpoint of the radicalization process.

Several systematic reviews and meta-analyses have identified risk factors for radicalization (Vergani et al., 2018; McGilloway et al., 2015; Wolfowicz et al., 2019). Among them are push factors, such as relative deprivation and lower education; pull factors, such as propaganda, social dynamics (e.g., strong group bonds, identification with the group), and support for violence; and personal factors, such as mental stress, often connected to feeling lost and searching for meaning. However, these earlier studies focused on factors of offline radicalization, and their impact on online radicalization is less clear.
Social Media and Radicalization
The Internet, and social media in particular, offers an accessible, easy way to find, share, and discuss radical content and to develop and communicate radical ideologies, often under the guise of free speech (Struck et al., 2020; Alava et al., 2017); it has therefore been described as a catalyst for online radicalization. In social networks, like-minded individuals can easily connect and interact, which can accelerate social identification and group formation, for instance among right-wing groups, by establishing common values, goals, and forms of communication (Koehler, 2014; Meleagrou-Hitchens et al., 2017; Pauwels & Schils, 2016). Circulating messages and promoting images of a supposedly superior in-group with distinct features (e.g., strength, willpower, and an ideal of masculinity) can foster social identification in persons who perceive these features as shared or desired traits, and can devalue out-groups that do not share them (Roccas et al., 2008; Tajfel & Turner, 1986; Schmitt et al., 2020). Thus, many researchers conclude that processes of social identification (e.g., identification with, engagement in, and deference towards a social group) are important factors in radicalization. In their work on political mobilization, for instance, Moskalenko and McCauley (2009) underline the importance of intergroup conflict for radicalization. They differentiate between legal, non-violent (activism) and illegal, violent (radicalism) political action to reach politically motivated goals, and they point to associations between both activism and radicalism and support for intergroup conflict. Such conflicts are especially pronounced in social media because of the reach, accessibility, and flexibility of social networks.
To further examine the role of social media, radicalization research often uses big data analytics to examine communication patterns surrounding experiences of deprivation or discrimination, to identify themes of online support for violent or extremist groups, and to study determining factors of liking and sharing radicalized content (Batzdorfer et al., 2020; Meleagrou-Hitchens et al., 2017). However, most studies focus on text mining (of Twitter messages) and report difficulties in differentiating serious, ironic, or sarcastic messages based on tone, intention, and structure. Furthermore, text mining neglects image-based and meme-based communication. This makes it difficult to compare and triangulate findings from text-based analysis (e.g., Twitter) and image-based or video-based analysis (e.g., Instagram, Facebook) to advance our understanding of complex online radicalization processes (Parekh et al., 2018).
Internet memes combining images, videos, and text have emerged as a popular form of online communication (Knobel & Lankshear, 2007), including in radical groups. Radical groups often pair pictures or videos mirroring or evoking emotionally charged events with text explaining the causes or consequences of these events (Jukschat & Kudlacek, 2017; Huntington, 2018). They connect these explanations to their political attitudes, extremist beliefs, or values to create an ideology. The concept of the meme derives from Richard Dawkins's (1976) seminal work on the evolutionary transmission of culture, that is, the spread of ideas, concepts, catchphrases, or other culturally coded content from person to person, thus influencing human ideology, goal-setting, and behavior. Accordingly, successful memes are high in fidelity (i.e., a meme's ability to be copied and passed on), fecundity (i.e., reproduction rate, robustness), and longevity (i.e., survival time; Dawkins, 1976). While the original definition of memes referred to diverse ideas and concepts and their survival among different (social) groups, Internet memes or online memes refer to communicating these ideas online (Knobel & Lankshear, 2007). Analogously, Internet memes have a referential or ideational system (e.g., what information is being conveyed and how?), a contextual or interpersonal system (e.g., how does this meme relate to the people viewing it?), and an ideological or worldview system (e.g., what larger topics or themes are being conveyed?). Pictures, videos, text messages, and other media content shared online (e.g., via social media) can become a meme when they aim to convey a message (e.g., free Tibet), express political or other ideological assumptions (e.g., the meme "free Tibet" implies that Tibet is unfree and needs to be set free), and have implications for social groups (e.g., a shared belief in an unfree Tibet among proponents of the meme and a call to action).
Similar to traditional memes, Internet memes differ in their fidelity (e.g., diverse combinations of images and text messages), fecundity (e.g., political campaign slogans attracting a large audience before an election), and longevity (e.g., pictures or GIFs of ambiguous facial expressions that encourage reuse and reinterpretation).
Research on meme-based communication in right-wing groups points to their efforts to establish, disseminate, and rationalize political worldviews such as xenophobia via arousing imagery (e.g., evoking fear or anger) and ideological narratives that provide arguments for these worldviews and scapegoat others (Struck et al., 2020; Doerr, 2017; Flam & Doerr, 2015; Schmitt et al., 2020). Right-wing groups try to attract attention and mobilize sympathizers by creating content that either empowers the in-group or incites (discriminatory) action against out-groups, for instance, religious groups or refugees. Studies describe the use of memes in right-wing groups both to communicate within echo chambers and private in-group networks (e.g., on Discord servers, often directly inciting violence against out-groups) and to communicate publicly to mobilize interest (e.g., in Instagram posts, often with ambiguous or humorous messages) (Doerr, 2017; Flam & Doerr, 2015; Schmitt et al., 2020; Struck et al., 2020). Despite this body of literature, empirical and experimental studies linking exposure to radicalizing Internet memes with subsequent radicalization are scarce (Pauwels & Schils, 2016). As a theoretical framework, the social influence model of violent extremism proposes mechanisms linking memes to radicalization.
Social Influence Model of Violent Extremism
The social influence model of violent extremism (SIM-VE; Smith & Talbot, 2019) describes cognitive, emotional, and behavioral factors that have implications for processing online propaganda. The SIM-VE describes social influence towards violent extremism as transforming a person's identity towards alignment with a violent extremist group, shaping their beliefs to align with extremist ideology, and reconstructing their moral position so that violent action becomes acceptable (Smith & Talbot, 2019). The model captures individual beliefs and identity (micro level, e.g., identifying as a traditionalist), social identities and dynamics (meso level, e.g., devaluing out-groups, such as libertarian groups, and their worldviews), and societal and normative factors (macro level, e.g., believing in a judicial system that punishes out-groups). Because social influence is multidirectional, individuals can shape social processes by presenting their opinions and introducing different perspectives or values into social settings and groups, but groups can also attract individuals by aligning with their goals or motivations or by providing means to achieve attractive ends (e.g., financial success, attractiveness, social dominance). Specific cognitive (e.g., seeking information) and affective mechanisms (e.g., avoiding negative affect) further influence this process. For example, Huntington (2018) found that political Internet memes perceived as political arguments receive less favorable assessments and more scrutiny regarding their persuasiveness than non-political memes. Importantly, however, congruency between one's own political beliefs and the message attenuates this effect, pointing to increased susceptibility or vulnerability in persons with congruent beliefs.
In a similar manner, a recent systematic review (Hassan et al., 2018) examined the connection between exposure to radicalized online content and radicalization towards (political) violence. Overall, radicalizing content, such as Internet memes, seems to foster radical attitudes, such as xenophobia, in vulnerable groups (e.g., among visitors of a neo-Nazi online discussion forum or deprived Kyrgyz youth), but seems less impactful in the general population. However, the review also revealed that only two studies linked exposure and attitudes in a pre-post design (Lee & Leets, 2002; Rieger et al., 2013), challenging the causal interpretation of the association between exposure and radicalization. Alava et al. (2017) made a similar observation in their review of the literature on youth and violent extremism on social media. They state that most studies were retrospective and of low methodological quality, and that most studies "do not inform about which process of Internet and social media use might have led to actual violent radicalization" (ibid., p. 44). Therefore, we take a closer look at the two studies that prospectively investigated the impact of radicalized online content.
Lee and Leets (2002) observed a short-term increase in perceived persuasiveness in teenagers exposed to messages and images of White supremacy. Perceived persuasiveness dropped over time, but it remained higher in individuals with pre-exposure predisposition to White supremacy beliefs. Rieger et al. (2013) examined psychological effects of extremist Internet videos on physiological arousal and attitudes of university and vocational students. While extremist content did not affect physiological arousal, extremist videos (i.e., showing an Islamic suicide bomber) received more negative evaluations, including shame and aversion. However, these videos also ranked higher among participants’ interests, which was connected to an increased willingness to search for further information about the topic. This discrepancy between cognitive interest and affective rejection points to a gap in current research. Hence, it seems promising to combine psychophysiological and self-reported indicators of cognitive and affective appraisal to further our understanding of individual radicalization processes. In this sense, research on radicalization can benefit from psychophysiological research on attention and information processing.
Psychophysiological Perspectives on Radicalization
Psychophysiological research has connected eye-gaze patterns to visual information processing, for instance, selective attention towards emotional stimuli (Calvo & Lang, 2004) and towards facial and eye regions during emotion recognition (Lim et al., 2020), and has examined eye movements as an indicator of social preference (Jiang et al., 2016), characterized, for instance, by more frequent fixations and longer gaze durations towards relevant stimuli.
Regarding political attitudes, previous studies have established connections between a visual preference for aversive over appetitive stimuli and right-wing orientation (Dodd et al., 2012). The authors argue that participants with right-leaning beliefs show stronger physiological reactions to, and thus pay more attention to, potential threats to their identity. Similarly, in an eye-tracking study on race-based messages, White participants focused on Black rather than White persons, who represent a race-based out-group (Granot et al., 2016). The focus on the in-group is stronger, though, if visual stimuli are connected to text messages (e.g., in political advertisements): Participants with congruent right-wing beliefs exhibited a preference (i.e., longer dwell time) for right-wing party advertisements and avoided incongruent liberal advertisements (Marquart et al., 2016; Schmuck et al., 2020). These findings have implications for meme-based research and correspond to the model of selective exposure self- and affect management (Knobloch-Westerwick, 2014). The model assumes a reciprocal association between media and self-cognitions: the current self-concept drives selective media exposure, but media-activated self-concepts also determine further exposure and attention. Thus, participants with right-wing attitudes may attend selectively to political stimuli, and political stimuli may activate political self-concepts in control participants (i.e., those without a pre-defined political group affiliation) and subsequently guide their attention. However, while previous studies underline the usefulness of eye tracking in analyzing visual attention towards picture-based political stimuli and (political) Internet memes, they fall short of describing how these stimuli were processed.
In most studies, participants had to make visual choices between appetitive/congruent and aversive/incongruent messages, but it remained unclear which parts of these materials attracted and held their attention. Moreover, a recent study revealed that political messages perceived as visually attractive and less negative received longer dwell times and led to a higher willingness for political participation (Geise et al., 2020). Yet, this study examined German university students and did not account for party or political group affiliation.
Taken together, previous findings suggest that exposure to extremist online content might induce cognitive and emotional reactions, but their associations with radicalization are less clear. Therefore, we aim to add to the literature on the impact of radical Internet memes by conducting a quasi-experimental study that integrates eye tracking, self-reports of attitudes, and evaluative ratings of Internet memes in participants who do (right-wing group) or do not (control group) identify with a right-wing political group. The study explores visual attention patterns in individuals exposed to radicalized online material. Thus, it extends previous research by focusing on the prospective impact of online material (e.g., Hassan et al., 2018), and it provides information on differential information processing in social groups (based on their self-reported social identity). To assess the impact of radical online content, we connect our research to the SIM-VE (Smith & Talbot, 2019) by examining ideological (i.e., political group affiliation), social (i.e., modes of social identification with one's political group), and behavioral factors (i.e., political mobilization; support for violence), and their associations with visual processing patterns and explicit evaluations of political Internet memes. In sum, this study examines the following research questions:
How do participants from political right-wing groups and controls differ in their emotional and cognitive evaluations of Internet memes?
Do these groups differ in their visual attention towards Internet memes?
How are visual attention patterns connected to the evaluation of these memes?
Since this is a pilot study, our approach is exploratory. This approach does not allow for causal inference, but it can demonstrate the feasibility of our methodology and provide implications for further inquiry into the association between political attitudes and visual attention in radicalization into violence.