1 Introduction

There has been significant recent interest in developing AI methods to combat fake news, a phenomenon that gathered much attention during and beyond the 2016 US presidential elections [1]. Fake news has been observed to substantively harm democracy, given its potential to erode social cohesion [2]. In the recent past, fake news has led to serious issues in the offline world, especially within the Global South. These include mob lynchings in India [3], amplification of xenophobia within South Africa [4] and election-time propaganda in Kenya [5], among others. The impact of fake news on public understanding of phenomena such as climate change and global warming [6] has provided another facet of significant concern for global society. It is hard, if not impossible, to list the impacts of fake news across the broad spectrum of domains; yet, fake news has indeed emerged as a phenomenon with wide-ranging consequences across most sectors of human activity globally. The consequences of fake news, while globally present, come in different shades, sizes and shapes across varying societies and nations. As a particular example, we note a recent case study [7] where the entry of AI-powered social media, arguably designed for usage within the First Amendment-protected media ecosystem of the US, wreaked havoc within Myanmar, playing a part in making it a ‘rumour-filled society’.

This expository analysis is borne out of the conviction that technologies ought to be suited to the societies in which they would eventually operate. This, we believe, is especially true of AI technologies to combat fake news, a phenomenon increasingly acknowledged as among the major challenges of the 21st century [8]. Our focus is on extant biases within the current state-of-the-art of AI for fake news towards Global North contexts, which make such methods potentially ill-suited and harmful for usage within the Global South, which is distinctly different from the Global North while encompassing a diverse set of cultural, historical and political contexts; we refer to this as geo-political bias henceforth.

Against this backdrop, we initiate an investigation of geo-political bias through the facet of affect within AI methods that seek to detect or otherwise mitigate fake news, fake news AI in short. Affect refers to the underlying experience of humans relating primarily to the expression of sentiments and emotions. While sentiments could be positive, negative or neutral, emotions denote a plurality of human expressions such as joy, fear, anger, disgust and surprise. With the proliferation of digital data on human expression in the 1990s and thereafter, the foray of computing methods into recognizing, interpreting, processing and simulating human affect heralded the era of affective computing [9]. However, affect, as understood within the social sciences, is regarded as a complex phenomenon. As noted, ‘affects can be expressed spatially as existing across and among people and things, not within them’ [10]. Affect, by its nature, connects bodies, increasing their proximity through the flows between them, of which news is one. With the emergence of digital platforms as the venue of media consumption, affect can be consciously and deliberately engineered via technologies that interact with (e.g., facilitate or mitigate) fake news. This makes the overlap between AI and affect a very interesting frontier of investigation.

With this in view, we look to critically analyze the contours of affective geo-political bias in fake news AI. In doing so, we focus on aspects other than data bias, given that data bias is well understood as a vehicle of geo-political bias across data-driven AI in general. We adopt an analysis of extant literature as our methodology towards identifying the first insights in this direction.

2 Data bias

Given that modern AI is dominated by data-driven methods, arguably founded on the ethos of dataism [11], the first place to look for geo-political AI bias is the data itself. Indeed, geographical data bias pervades most modern data-driven systems [12], and is probably the best understood facet of geo-political bias. While our analysis keeps data bias out of remit to enable focus on more nuanced aspects of fake news AI, we briefly summarize it for context. Recent systematic reviews [13, 14] suggest that fake news detection AI methods report massive successes, claiming accuracies \(>95\%\). However, most of the popular methods have been evaluated over a handful of popular datasets, mostly comprising news from Global North contexts. A survey of evaluation datasets [15] (especially Tables 3 and 4 therein) throws some light on the predominance of Global North media sources and languages within currently popular fake news datasets. While the fact that most datasets focus on English (apart from a few covering Spanish and Chinese) is quite apparent, the dominance of Global North contexts becomes visible on closer analysis of the datasets. Apart from the Global South data deficit within training data, Global South contexts may suffer from data quality issues [16] that exacerbate the problem. Beyond the data used to train and test the models, bias towards the Global North pervades external knowledge sources, such as Wikipedia [17], used in fake news detection AI. There have been some arguments that fake news datasets must encompass cultural and linguistic diversity [18].

3 Affective geo-political bias in fake news AI

The backdrop that led us to explore geo-political affective bias within fake news AI involves two main factors, viz., empathic media and cultural divergences in affect.

Empathic Media: Over the past decade, there has been a rise of personally and emotionally targeted news, often tailored by algorithms, transforming the online media space into what has been called “empathic media” [19]. The backdrop of empathic media has been observed to underpin contemporary fake news and sophisticated variants of the phenomenon such as empathically optimized fake news [20]. While democracy benefits from the prevalence of empathy and altruism within societies, [20] suggests that empathic media enables forces to exploit empathy selectively to produce wrongly informed and emotionally outraged citizens, consequently damaging democracy. Survey-based studies have observed a positive correlation between reliance on emotion and belief in fake news [21], a connection that has also been illustrated indirectly through computational means [22]. The idea of leveraging emotions for fake news detection has recently been taken up within a small cross-section of the state-of-the-art scholarship in fake news detection [23, 24].

Geo-political Divergences in Affect and Allied Factors: Popular theories on the nature of emotions, such as the one due to Ekman [25], posit that humans have a set of discrete basic emotions, a shared and unique characteristic we have been endowed with through evolution. While Ekman’s theory of discrete emotions is computationally convenient and popular in the affective computing literature, the conception of discrete and universally expressed and recognized emotions has come under increasing critique. For example, emerging scholarship dismantles the assumption of emotions being hardwired, illustrating instead that emotions are continuously made [26]. It follows that emotions manifest and operate in significantly different ways across cultures. A computational study of emotions around the global phenomenon of the COVID-19 pandemic suggests that there are significant variations in emotion expression across the US and China [27]; one illustration of this difference is the consistently higher prevalence of the ‘disgust’ emotion within US-dominated Twitter as compared to China-dominated Weibo. The consequences of emotions also differ significantly across cultures. For example, [28] suggests that positive emotions are more correlated with depression among European Americans and Asian Americans, but not among immigrant Asians.

These differences could have some roots within the different ‘systems of thought’ prevalent across cultures [29]. It has been observed [28, 29] that ‘Easterners’ use more holistic approaches and dialectical reasoning than formal logic, whereas ‘Westerners’ use more analytic reasoning, focusing on category boundaries and logic. It may be noted here that Easterners and Westerners roughly correspond to the Global South and Global North respectively. In a related work [30], the authors trace such fundamental differences, especially the gradients in the usage of ‘Eastern’ principles of contradiction (dialectics), change and context, to the expression and experience of emotions across different cultures. The authors also suggest that cultural differences in thought and in experiencing emotions could explain dramatic differences in the prevalence of clinical depression and anxiety between cultures, such as the 4–10 times higher prevalence of such conditions in the ‘West’ as compared to Asian cultures. These, they observe, could stem from the different relationships to negative emotions across cultures: holistic approaches could view negative emotions with acceptance and curiosity rather than with avoidance and fear, fundamentally altering the nature of the consequences of negative emotions across cultures. The increased prevalence of negative emotions within the content of fake news [31] would thus inevitably create different forms of consequences across cultures. A recent US-based study on emotions and fake news [32] reported significant differences in the nature of responses to fake news across people with different political affiliations and different levels of intrinsic propensity to emotional responses. Given such observed complexity within a single region, one may envisage that the reception of fake news would be far more varied across cultures and across the Global North and Global South.

We now restate our key question of inquiry: What are the contours of affective aspects of geo-political bias in fake news AI?

3.1 The neglect of affect in fake news AI

Our main insight from exploring the contours of affective geo-political bias in fake news AI relates to the neglect of affect within fake news AI itself.

Over the last decade, which has seen much activity in developing fake news detection AI, there has been only scant recent interest in considering the role of emotions and sentiments. The general lack of enthusiasm towards leveraging emotions within the state-of-the-art of fake news identification in top AI avenues, despite survey-based studies [21] establishing the role of emotion in fake news, is quite notable. However, if one observes topical work critically, there are clues as to why this may be critically linked to geo-political bias. We present an assemblage of evidence herein.

As background towards appreciating the analyses that follow, we briefly outline the traditions that dominate AI research. A recent systematic study [33] of top AI publications showed that the top values driving AI research include performance, generalization and quantitative evidence. This indicates that the current state-of-the-art of AI advances in directions that improve performance, as evidenced quantitatively. Thus, the uptake of particular data facets - not just affect - within AI research is often predicated on their utility towards enabling progress along such directions. With most AI development happening within Global North contexts - with an oft-observed concentration around Silicon Valley - the advancement of AI may also be influenced by the cultural ethos of those contexts (e.g., the Californian ideology [34]), which would invariably influence the design choices that parameterize AI algorithms.

In one of the early and well-cited works exploiting affective information in AI for fake news [35], the authors devise a new emotion feature, emoratio, the ratio of the count of negative emotional words to that of positive emotional words. They first conduct a statistical test to examine whether the emoratios of fake and real news differ significantly. The test yields a p-value of \(\approx 0.02\), which is significant at the \(0.05\) level, but not at the \(0.01\) level. The paper also illustrates that the lift in accuracy obtained through usage of the emoratio feature within simple ML methods is around \(4\text {-}5\) percentage points, a moderate improvement. Another work [22] considers the case of health fake news, reporting that the improvements obtained by the inclusion of emotion features are quite modest, in the \(1\text {-}3\) percentage point range across a variety of methods for the tasks of clustering and classification. It is worth noting that all these observations were based on datasets sourced from Global North contexts. The quantum of improvements within the above two representative papers is consistent with those reported elsewhere [36], over Global North datasets too. In this context, a survey on the usage of sentiment analysis for fake news detection expresses skepticism about the quantum of real improvements in final classification performance brought about by sentiment analysis [37]. This poses an interesting dilemma: an apparent disconnect between the assertion of the consistent influence of affect in fake news as analyzed within the social science literature (as outlined earlier), and the relatively modest gains that the usage of affect brings about in computational algorithms for fake news detection.
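
To make the above concrete, the following is a minimal sketch of how an emoratio-style feature and the associated significance test could be computed. The word lists and corpus variables here are illustrative placeholders of our own, not the lexicon or data used in [35].

```python
from scipy import stats

# Placeholder word lists; [35] uses a proper emotion lexicon.
NEG_WORDS = {"fear", "hate", "outrage", "disaster", "fraud"}
POS_WORDS = {"hope", "joy", "success", "love", "trust"}

def emoratio(text, eps=1e-6):
    """Ratio of negative to positive emotional word counts in a text."""
    tokens = text.lower().split()
    neg = sum(t in NEG_WORDS for t in tokens)
    pos = sum(t in POS_WORDS for t in tokens)
    return neg / (pos + eps)  # eps guards against division by zero

def emoratio_significance(fake_texts, real_texts):
    """Welch's t-test comparing emoratio distributions of fake vs. real news."""
    fake_scores = [emoratio(t) for t in fake_texts]
    real_scores = [emoratio(t) for t in real_texts]
    return stats.ttest_ind(fake_scores, real_scores, equal_var=False)

# Usage (hypothetical corpora): a p-value of about 0.02, as in [35],
# would be significant at the 0.05 level but not at the 0.01 level.
# t_stat, p_value = emoratio_significance(fake_corpus, real_corpus)
```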

In a somewhat unlikely way, a recent paper [23] offers an interesting vantage point towards understanding and positioning the disconnect above, one that relates to geo-political bias. We first summarize the goals of the paper: it considers the usage of the emotional information contained within news stories, and its relationship with the emotional information within reactions to those stories, towards enhancing fake news detection. The usage of both kinds of emotions is indicated by the terminology dual emotions in the title. In particular, it identifies publisher emotions (emotions contained within the news story) and social emotions (an awkward term, yet well embedded in the literature, denoting the emotional content within comments and other forms of reactions) separately, and uses those, along with the gap between them, as features to enhance fake news detection. As an example of an emotion gap, observe that the emotion contained within an article (e.g., a narration of a terrorist incident could contain sadness) could be different from the emotion it invokes within readers (e.g., disgust, anger), as reflected in the comments.

The geo-political context within this paper comes by way of its usage of a US-dominated Twitter dataset alongside a China-dominated Weibo dataset. As a preliminary statistical analysis (Sec 4.2 in [23]), they test how well they can reject the null hypothesis that news veracity is independent of the dual emotion signals, using popular statistical tests. For the Twitter dataset, they are able to reject the null hypothesis just beyond the threshold for a significance level of \(< 0.05\); in contrast, for the Weibo dataset, the null hypothesis is rejected at four times the threshold required for a significance level of \(< 0.01\). While the result for the Twitter dataset is consistent with results from other Global North datasets (seen above), the result for Weibo is quite interesting. These contrasting trends are further re-affirmed in their quantitative analyses, where the exclusive usage of emotion features delivers Macro F1 scores of \(\approx 0.73\) for Weibo, whereas the corresponding number is \(\approx 0.33\) for Twitter. This suggests that dual emotions and the emotion gap are a tell-tale feature for fake news detection within Weibo.

Of the 60+ papers citing [23], fewer than 10 appeal to any form of emotion-oriented fake news detection. This relative inattention to emotion, even among the subset of papers citing a paper centered on emotion usage for fake news, re-confirms the observations made earlier on the predominance of Global North contexts within AI-based fake news detection research. Of the few that make use of emotions, [38] present results that reflect the relative trends across Twitter and Weibo: across Tables 4 and 5 therein, their emotion-oriented approach achieves a \(\approx 6\) percentage point improvement on Weibo, while the improvement on the Twitter dataset is \(\approx 2\) percentage points.
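
For illustration, here is a minimal sketch of dual-emotion feature construction in the spirit of [23]; the toy lexicon and the simple averaging of comment emotions are our own simplifying assumptions, not the emotion models used in the paper.

```python
import numpy as np

EMOTIONS = ["joy", "sadness", "anger", "fear", "disgust", "surprise"]
# Toy keyword-to-emotion map; [23] uses trained emotion classifiers instead.
LEXICON = {"happy": "joy", "tragic": "sadness", "furious": "anger",
           "terrified": "fear", "vile": "disgust", "shocking": "surprise"}

def emotion_vector(text):
    """Crude lexicon-based emotion distribution over EMOTIONS."""
    counts = np.zeros(len(EMOTIONS))
    for token in text.lower().split():
        if token in LEXICON:
            counts[EMOTIONS.index(LEXICON[token])] += 1
    total = counts.sum()
    return counts / total if total > 0 else counts

def dual_emotion_features(story, comments):
    """Publisher emotion, social emotion, and their gap, concatenated into
    one feature vector that a downstream fake news classifier could consume."""
    publisher = emotion_vector(story)  # emotion within the story itself
    if comments:  # mean emotion across reactions ('social emotion')
        social = np.mean([emotion_vector(c) for c in comments], axis=0)
    else:
        social = np.zeros(len(EMOTIONS))
    gap = publisher - social  # the 'emotion gap' between story and reactions
    return np.concatenate([publisher, social, gap])
```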

To summarize, the neglect of affect is largely driven by its limited utility in enhancing the performance of fake news detection AI within Global North contexts. This neglect, it may thus be asserted, is likely an important facet of geo-political affective bias in fake news AI.

3.2 Beyond neglect: what next?

Given the overarching neglect of affect in fake news AI and the paucity of affect-oriented fake news methods, there isn’t enough literature to perform a detailed analysis on how varied usages of affect within fake news AI could accentuate or mitigate geo-political bias. That said, the observed affective geo-political bias in fake news AI could be seen as a socio-political problem with multiple diverse aspects. We outline some pertinent aspects to consider within this context.

What’s behind affective news in the Global South? The enhanced utility of emotions for fake news detection in the Global South could be seen as sitting in some contrast to understandings that cultures within the Global South are robust to emotional nudges, as outlined earlier. Such contrasts may need to be read with a dose of skepticism on both sides, viz., technology and culture. For example, with the Global South representing a wide ensemble of geographic and other subcultures, the robustness to emotional nudges could be more pronounced in some than in others. Yet, could this mean that fake news authors are peppering fake news with a higher degree of emotional content within the Global South, to evoke emotional responses and consequently higher engagement? Such tactics, if present, could partly be responses to the incentive structures of the attention economy within which news is increasingly consumed [39]. Consequently, we may ask whether it is the higher emotional density within Global South fake news that gets reflected as the higher utility of emotional features in fake news detection within Global South scenarios. A contrastive analysis between traditional media and platformized media within the Global South, as sketched below, could throw light on such trends.
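
As a pointer to what such a contrastive analysis might look like computationally, here is a minimal sketch comparing emotional density across two corpora; the word list is an illustrative placeholder, and the corpora variables are hypothetical (in practice a validated, multilingual emotion lexicon would be required).

```python
from scipy import stats

# Illustrative placeholder; not a validated emotion lexicon.
EMOTION_WORDS = {"fear", "outrage", "joy", "anger", "shock", "disgust"}

def emotional_density(text):
    """Fraction of tokens that are emotion-bearing words."""
    tokens = text.lower().split()
    return sum(t in EMOTION_WORDS for t in tokens) / max(len(tokens), 1)

def compare_corpora(corpus_a, corpus_b):
    """Mann-Whitney U test on emotional densities of two corpora,
    e.g., traditional vs. platformized media from the same region."""
    densities_a = [emotional_density(t) for t in corpus_a]
    densities_b = [emotional_density(t) for t in corpus_b]
    return stats.mannwhitneyu(densities_a, densities_b, alternative="two-sided")
```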

Different Cultures, Different Emotions? Classical theories [40] characterize Global North cultures as much more individualistic (as opposed to collectivist) than those in the Global South. These differences could reasonably engender a difference in the affect profile of fake news across cultures, opening up a different kind of divergence than emotional densities. We noted earlier that [27] recorded a higher prevalence of disgust in a Global North dataset. While direct evidence is lacking, there may be grounds to speculate that the individualistic cultures of the Global North may record a different distribution of self-conscious emotions (such as shame and pride, along with disgust, as defined in [41]) than those in the Global South.

Affectively Diverse Fake News AI: Given the diversity across global cultures and their information ecosystems, it is eminently arguable that there should be a diversity of fake news detection AI methods, each customized to a cultural and geo-political context, among other facets. It is to be noted that this stands at odds with the contemporary political economy of AI, which favors centralization to capitalize on data network effects [42] and create large monolithic technologies. Our observations point specifically to the need for diversity in handling affect among the plurality of fake news AI methods that would need to be developed.

Countering Reductionism: If we step back and look at the usage of affect within the AI literature, we may observe it being dominated by a positivist orthodoxy that stresses the ability to make precise measurements of emotions from content analyses using computational means. This synergizes well with Ekman’s reductionist conception of emotions, but the observations on dual emotions [23] implicitly highlight the deficits of this approach. Dual emotions stress that the emotions embedded within an article could be different from the emotions it evokes, indicating that emotions may be better understood as interpersonal and social. Geo-political and cultural differences in such interpersonal and social expressions could present another frontier of challenge for fake news AI.

The above suggests that we need to enhance our understanding of affective divergences in fake news to ensure that fake news AI does not end up being discriminatory through innocuous choices in algorithm design, even after the neglect of emotions has been addressed. It is not enough merely to consider affect towards mitigating geo-political bias in fake news AI; we may need to consider it in bespoke ways to ensure applicability across varying global cultures, pointing to ways in which we may build a suite of fake news AI techniques that address heterogeneous global settings.

4 Conclusions and future work

We considered, for the first time, the role of affect in bringing about geo-political bias in AI technologies for fake news. While data is a well-acknowledged source of bias in AI technologies, others, such as affect, are more downstream, latent, and often entrenched within seemingly innocuous algorithm design choices; this makes them harder to uncover. We set the stage for our investigation by providing a background on cultural diversities in affect. We outlined the neglect of affect as a critical insight towards understanding affective geo-political bias, while acknowledging that the paucity of studies focused on Global South contexts limits how extensively this can be uncovered. We concluded by outlining some potential considerations in developing emotionally oriented AI methods that would be more applicable across global cultures.

The choice architectures within AI technologies could embed geo-political biases in latent and nuanced ways. While the values embedded within design choices are often not expressly discussed or debated within the high-tech ecosystems where AI techniques get developed, the ethos of the cultural contexts in which AI is developed would invariably get encoded within the algorithms. We hope our work will inspire interest in studying such phenomena across wider application domains of AI, especially those that are socially and politically important. Beyond issues such as the choice of data aspects and their usage, it would therefore be important to situate these choices within the broader context of the demographics of the sub-populations who make the most critical choices behind AI algorithms.