Introduction

False information is incubated across complex, interconnected communication and technological environments, imbricating individuals and society. Here, we introduce two key concepts. The first is the economics of emotion: namely, the optimisation of datafied emotional content for financial gain. Our second concept is the politics of emotion: namely, the optimisation of datafied emotional content for political gain. Optimising emotions, whether for financial or political gain, entails understanding people in terms of demography, interests and disposition; creation of content (by machines or by people) optimised to resonate with profiled individuals and groups; strategic ambition to elicit emotion to cause contagion; and recording of this datafied emotion expression, to feed into the next wave of info-contagion. We see the economics of emotion as the core incubator of false information online, as this stems from the business model of globally dominant digital platforms while also enabling the business model of digital influence mercenaries. However, the politics of emotion readily exploits the tools at its disposal. This chapter foregrounds these economic and political incubators of false information, leaving the messier discussion of impacts on audiences to later chapters.

The Economics of Emotion

In this section, we explore the link between emotions, attention and revenue, and how these have been optimised to monetise deception. We illustrate this by focusing on the business models of digital influence mercenaries and of globally dominant social media and search engine companies. We end by reflecting on how economic decisions made by these digital platforms have destroyed the business model for news production, thereby further propelling false information online.

The Attention Economy and Optimised Emotion

Social media and search engine platforms make most of their revenue from selling online advertising. In 2020, advertising revenue made up 98% of Meta’s total revenue and 86% of Twitter’s, and more than 80% of Alphabet’s revenue came from Google Ads (Alphabet Inc., 2020; Iqbal, 2022, January 11; Statista Research Department, 2022, February 18). To maximise how much advertising they can sell, these platforms’ algorithms, interfaces and default settings are designed to maximally attract new users and keep users on their platform by holding their attention.

Alphabet-owned Google was the first company to create a standard market for online attention when, in the early 2000s, it launched Google AdWords (rebranded as Google Ads in 2018). Fully automated, Google Ads thematically matches the supply of and demand for advertising (namely, keywords searched for by users and targeted by marketers). It establishes advertising prices via automated asynchronous auctions for the keywords, pairing the bid amount with a ‘Quality Score’ assessment of marketers’ ads, keywords and landing pages (higher-quality ads potentially lead to lower prices and better ad positions). This allows Google to handle microtransactions unprofitable to traditional advertising agencies and to scale up its network, thereby helping to establish behavioural advertising as the dominant business model for websites (McStay, 2016).
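To make the auction mechanics concrete, the sketch below assumes the simplified ‘ad rank = bid × Quality Score’ rule and generalised second-price pricing that Google has publicly described; the advertiser names, bids and scores are invented, and the real auction weighs many more factors.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    max_cpc: float       # maximum cost-per-click bid for the keyword
    quality_score: float # 1-10 assessment of ad, keyword and landing-page quality

def run_keyword_auction(bids):
    """Rank ads by bid x quality score and charge each winner just enough
    to beat the ad ranked below it (a simplified generalised second-price rule)."""
    ranked = sorted(bids, key=lambda b: b.max_cpc * b.quality_score, reverse=True)
    results = []
    for pos, bid in enumerate(ranked):
        if pos + 1 < len(ranked):
            next_rank = ranked[pos + 1].max_cpc * ranked[pos + 1].quality_score
            price = next_rank / bid.quality_score + 0.01
        else:
            price = 0.01  # nominal reserve price for the lowest-ranked ad
        results.append((pos + 1, bid.advertiser, round(min(price, bid.max_cpc), 2)))
    return results

print(run_keyword_auction([
    Bid("shoes-r-us", max_cpc=2.00, quality_score=4),
    Bid("runfast", max_cpc=1.20, quality_score=9),   # lower bid, higher quality
    Bid("genericshop", max_cpc=1.50, quality_score=5),
]))
```

In this toy auction, the advertiser with the highest-quality ad wins the top position despite bidding less, which is the pricing logic that lets Google reward higher-quality ads with lower prices and better positions.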

Catchily termed ‘surveillance capitalism’ (Zuboff, 2019), this informational capitalism pioneered by Google (followed by Facebook) extracts as much data as possible about users. These digital platforms have built technical infrastructures and business models that link individual sites into a suite of services (like Google’s many services) or ecosystem (as with Facebook’s ‘Like’ buttons scattered across the web), creating incentives for users to remain within the platform’s ecosystem. They then turn that data into increasingly comprehensive ‘profiles’ or behavioural predictions (Gillespie, 2014). These profiles are monetised through internal use or sale to third parties in order to know, predict and modify behaviour. While Google uses keywords from search queries as its bidding criteria, Facebook uses demographic and behavioural information about its users based on their activity on Facebook and the wider web (Levy, 2020).

Attention is grabbed by emotions. Therefore, it was perhaps inevitable that informational capitalism, which transforms human subjectivity into usable quantitative representations (McStay, 2014), would embrace ‘affective economies’ where marketers seek to manage consumers by data-mining sentiment, as well as demographic and behavioural information (Andrejevic, 2011, p. 606). Arguably, across the entire neoliberal platform architecture, where all aspects of life are marketised, affect is a prime currency, as contemporary digital platforms are designed to commodify and manipulate formats for emotional expression (McStay, 2018; Stark & Crawford, 2015).

Facebook, for instance, maintains its attention economy by continually experimenting with algorithms, new data types and platform design, measuring users’ actions to improve user interfaces (to increase users’ time on, and engagement with, the site) and encouraging virality of posts. This includes design features to collect and manipulate emotional data about users’ interests to fuel its advertising-based business model (Levy, 2020; McNamee, 2019; Stark, 2018). In 2009, Facebook introduced the ‘Like’ button (a ‘thumbs up’ emoji), forming one of its social plug-ins that can be placed on third-party websites, thereby allowing Facebook to track users across the web and providing it with a massive data source. Clicking ‘Like’ provides a crucial signal to help rank posts in a user’s News Feed (renamed simply ‘Feed’ in 2022), also making the content appear in the News Feeds of that user’s friends. ‘Getting “Likes” incentivises users to habitually return to Facebook to see how many “Likes” their posts received’ (McNamee, 2019, p. 63). In 2010, the ability to ‘Like’ comments was added. In 2016, after several years of testing, Facebook rolled out its new Reaction Icons globally (users long-press the ‘Like’ button for an option to use one of five predefined emojis, namely, ‘Love’, ‘Haha’, ‘Wow’, ‘Sad’ or ‘Angry’). Reactions were extended to comments in May 2017. In April 2020, responding to COVID-19, Facebook added a new Reaction for ‘Care’. Such data helps earn advertising revenue. It allows Facebook to understand users on a more emotional level, enabling personalisation of what content Facebook shows each user; and businesses can quickly tell which content resonated with target audiences.

While much has been written (and leaked) by, and about, Facebook, other social media platforms are similarly emotional by design in encouraging production of attention-grabbing content. All social media platforms use ‘vanity metrics’ that encourage users to return to, and engage with, the site (Rogers, 2018). Reaction buttons are used by platforms such as Meta-owned Instagram (which has eight quick Reactions), Twitter (its ‘Like’ button predates Facebook’s) and Reddit (with ‘upvotes’ and ‘downvotes’). Other platforms are structured so that only the most engaging material survives, such as threads on social media site 4Chan (Vaidhyanathan, 2018). Such affordances made 4Chan’s environment an incubator for outlandish conspiracy theories that confirm users’ preconceived biases through emotional appeals (Tuters & Hagen, 2020; Tuters et al., 2018). TikTok, which excels at engaging users (it was the most popular domain in 2021), injects a continuous fire hose of short videos into people’s screens by guessing what users like based on their passive viewing habits, and signals such as likes, comments and who a user follows or blocks (Benton, 2022). An experiment by NewsGuard (a business that provides trust ratings for online content) in March 2022 on how TikTok funnels information about the Ukraine war found that a new account doing nothing but scroll TikTok’s algorithmically curated ‘For You’ page, watching videos about the war in full, ended up with a feed almost exclusively populated with both accurate and false war-related content, with no distinction made between disinformation and reliable sources (Hern, 2022, March 21). Google-owned YouTube’s recommendation algorithm promotes video clips that draw strong traffic: with news-related subjects, such results tend to be those with more extreme views (Larson, 2020). YouTube also financially rewards content producers based on engagement, which may also encourage production of inaccurate information that is more engaging, despite YouTube’s efforts to reduce false information (Matamoros-Fernández et al., 2021). As the design of the algorithms and interfaces of globally dominant social media platforms maximises emotional engagement, we regard social media as a primary site of datafied emotion worldwide.

Monetising Emotion and Deception

The economics of emotion monetises deception online in two main ways. The first way is through a service contract from digital influence mercenaries to exploit social media’s affordances to achieve a paying client’s strategic influence objectives. The second way is by attracting users’ attention through deceptive content and then selling that user attention to advertisers. We discuss both contract-based and advertising-based means of generating revenue in this section.

The practices of electioneering, lobbying and information warfare are increasingly outsourced to ‘digital influence mercenaries’, namely, paid individuals or companies with skills relevant to digital influence campaigns (Forest, 2022). For a fee from paying customers seeking to exert digital influence, these manipulation service providers are increasing and prospering, according to a report from the North Atlantic Treaty Organisation (NATO) (Bay et al., 2020). They add a key service within systems characterised by Howard (2020) as ‘lie machines’. NATO draws attention to the immense scale of this increasingly global and interconnected industry, with hundreds of providers generating an infrastructure for social media manipulation software, generating fictitious accounts and providing mobile proxies. For instance, a European service provider will likely depend on Russian manipulation software and infrastructure providers who, in turn, use contractors from Asia for much of the manual labour (Bay et al., 2020).

The second way that the economics of emotion monetises deception online is by attracting users’ attention through deceptive content (such as via fake news websites) and then selling that visitor attention to advertisers. A key underpinning mechanism has been use of cookie-based behaviourally targeted ads (Bakir & McStay, 2018, 2020). Online behavioural advertising underpinned by advertising technology (‘adtech’) tracks people’s online behaviour (for instance, by planting cookies on users’ computers to collect identifying information about a device and software thereon) and serves ads based on what people do online. While advertising spaces are ultimately owned by the web publisher (such as a news website), they are effectively outsourced and rented to entities called ‘ad networks’ (such as Google’s DoubleClick), which offer advertisers a massive range of websites to exhibit their ads, allowing them to reach potentially large, profiled, audiences. Furthermore, programmatic techniques (termed ‘programmatic’ by the industry) have allowed advertisers to automatically target consumers drawing on even wider varieties of data sources, based on algorithmically obtained metrics. The process often involves real-time bidding, where a potential advertiser (through automated methods) sees information about a person (such as type of device used, websites visited and search queries) and bids for the opportunity to display the ad to that person (Information Commissioners Office, 2019, June 20). This also provides the opportunity to use automated means to create (and target) ads, personalising the ad for identified audiences. Such automation of the ad space buying process has resulted in ads for brands such as Honda, Thomson Reuters and Disney appearing on websites and YouTube videos promoting extremist ideologies such as neo-Nazi content. Similarly, if the user looks at a fake news site, the ads will appear there (Bakir & McStay, 2018). This programmatic arrangement produces a financial incentive for fake news provision: deceptive content can be highly attention-grabbing precisely because it is not beholden to truth. Revenue is, in turn, generated by impressions (namely, how many times an ad is served and judged to have been seen) and click-throughs (the act of clicking on an ad to reach other content owned by the advertiser) (McStay, 2016).
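As an illustration of the real-time bidding process described above (a sketch under simplifying assumptions, not any ad exchange’s actual protocol; the bidder names, audience segments and URL are invented), the snippet below shows how demand-side platforms price the profiled visitor rather than the page’s accuracy, so a fake news publisher is paid whenever its audience is valuable:

```python
import random

def build_bid_request(user):
    """What an ad exchange typically shares with bidders: device, page context
    and profile segments, not the person's name (simplified illustration)."""
    return {
        "device": user["device"],
        "page_url": user["current_page"],
        "segments": user["segments"],   # e.g. interests inferred from tracking cookies
    }

def run_rtb_auction(bid_request, bidders):
    """Each demand-side platform scores the request and returns a bid (USD CPM);
    the highest bid wins the impression and its ad is served on the page."""
    bids = [(name, strategy(bid_request)) for name, strategy in bidders.items()]
    return max(bids, key=lambda b: b[1])

# Hypothetical bidding strategies: they price the *audience*, not the page's accuracy.
bidders = {
    "brand_dsp": lambda req: 3.5 if "car_intender" in req["segments"] else 0.5,
    "retarget_dsp": lambda req: 2.0 + random.random(),
}

user = {
    "device": "mobile",
    "current_page": "https://example-fake-news.site/article",  # invented URL
    "segments": ["car_intender", "sports"],
}

winner, price = run_rtb_auction(build_bid_request(user), bidders)
print(f"{winner} wins the impression at ${price:.2f} CPM; the page owner is paid either way.")
```

Because the auction turns on the attributes of the tracked visitor, the truthfulness of the page hosting the ad never enters the calculation, which is precisely the financial incentive for fake news provision noted above.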

Indeed, a vital driver of false information online is the desire to make money from civic bodies undergoing strong conflicted emotions. For instance, journalists traced a significant amount of the fake news upsurge on Facebook during the 2016 US presidential election campaign to students in Veles, Macedonia, who mostly created fake news stories for money rather than propaganda: their experiments with left-leaning content simply underperformed compared to pro-Trump content. In December 2019, an investigative press report highlighted how a small group of Israeli administrators commercially harvest Islamophobic hate from fake news posts on a network of Facebook pages from 21 far-right outlets in Australia, Austria, Canada, Israel, the UK and the USA, with a combined one million followers. This network funnels audiences to ten ad-heavy websites masquerading as news sites, thereby enabling the administrators to profit from the traffic (Knaus et al., 2019, December 5).

The first study to systematise the auditing of fake news revenue flows, analysing (in 2021) 1044 unique, popular fake news websites (with millions of monthly visitors) and 1368 real news websites, shows that well-known legitimate ad networks, such as Google, Index Exchange and AppNexus, still have a direct advertising relation with over 40% of these fake news websites and a re-seller advertising relation with more than 60% of them. The entities that own fake news websites also operate other types of websites for entertainment, business and politics, indicating that owning a fake news website is part of a broader business operation (Papadogiannakis et al., 2022). Indeed, five years after commercially oriented fake news online was recognised as problematic, a report in 2021 estimates that American household brands still fund false information online by buying programmatic ads. It examined 7500 websites, finding that for every $2.16 in digital advertising revenue sent to legitimate newspapers, $1 goes to false information websites (Businesshala, 2021, August 5).

However, as of 2022, the behavioural advertising environment is undergoing significant change, with leading browsers such as Apple’s Safari now blocking third-party tracking by default. Arguably more significant, given Google’s centrality to the online advertising industry, Google is phasing out the use of third-party cookies and related identifiers to collect user data in its Chrome browser, in effect ending the sale of web ads targeted to individual users’ browsing habits. The idea instead is to assemble groups of similar generalised interests. Under Google’s ‘Topics’ programme, the Chrome browser determines a person’s top interests for that week based on their browsing history, which Google says are kept for three weeks and then deleted. These ‘Topics’ are selected on a person’s device without the use of external servers. When a person visits a site of one of Google’s client publishers, Topics are shared with the site and its advertising partners. Google also says that Topics will exclude sensitive categories, such as gender, religion or race (Goel, 2022). The key difference is that the specific sites visited by a person are no longer shared across the web with hard-to-identify third parties. Yet, general interest targeted advertising may still fund questionable publishers.
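The general pattern of on-device interest targeting described above can be sketched as follows; this is a minimal illustration, not Google’s actual Topics implementation, and the site-to-topic mapping is invented:

```python
from collections import Counter
import random

# Invented, tiny stand-in for a topics taxonomy (the real one has hundreds of
# categories and is said to exclude sensitive ones such as religion or ethnicity).
SITE_TO_TOPIC = {
    "cycling-forum.example": "Cycling",
    "recipe-blog.example": "Cooking",
    "football-news.example": "Sport",
}

def weekly_topics(browsing_history, k=5):
    """Run on the device: map visited sites to coarse topics and keep the top k
    for the week; raw browsing history never leaves the browser in this scheme."""
    counts = Counter(SITE_TO_TOPIC[site] for site in browsing_history if site in SITE_TO_TOPIC)
    return [topic for topic, _ in counts.most_common(k)]

def topic_shared_with_site(recent_weeks):
    """When a participating site asks, share one topic drawn from recent weekly
    lists, rather than the sites actually visited."""
    return random.choice([t for week in recent_weeks for t in week])

history = ["cycling-forum.example", "cycling-forum.example", "recipe-blog.example"]
recent_weeks = [weekly_topics(history)]       # older weekly lists are dropped after ~3 weeks
print(topic_shared_with_site(recent_weeks))   # e.g. "Cycling", not the URLs visited
```

The design choice is that only coarse, short-lived topic labels leave the device rather than the browsing history itself, though, as noted, interest-based advertising may still end up funding questionable publishers.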

Moreover, as explained in Chap. 1, people worldwide are most concerned about false information on Facebook. As such, the role of Facebook’s business model in incubating false information online warrants further scrutiny. Across the second decade of the twenty-first century, Facebook increasingly deployed machine-learning models to maximise user engagement. This created faster, more personalised, feedback loops that led to increasingly extreme, false content being shared. Central to this is Facebook’s News Feed. This is a constantly updated, personally customised scroll of friends’ photos, posts and links to news stories. It accounts for most of the time Facebook’s users spend on the platform. Based on insights derived from in-app behaviour, and from data collected from usage of other apps and the web, the company sells that user attention to advertisers on Facebook and Instagram, accounting for nearly all of its $86 billion in revenue in 2020. A proprietary algorithm controls what appears in each user’s News Feed, deciding a post’s position based on predictions about each user’s preferences and tendencies, ensuring that engaging material appears near the top. This is enabled by machine learning. Unlike traditional algorithms, which are hard-coded by engineers, Facebook’s machine-learning algorithms train on input data to learn correlations within that data. The trained algorithm (known as a machine-learning model) then automates future decisions. As these algorithms could be trained to predict which posts a person would be likely to ‘Like’ or share, Facebook could then give those posts more prominence in that person’s News Feed. By mid-2016, Facebook had trained over a million machine-learning models, including models for image recognition, ad targeting and content moderation.

Internal Facebook documents leaked in 2021 shed light on this opaque, evolving process. In 2009, the News Feed ranking algorithm was relatively straightforward, prioritising signals such as ‘Likes’, clicks and comments to decide what to amplify. However, seeking to grow user engagement, the ranking algorithm became ever more sophisticated so that by 2021, it could take in over 10,000 different signals to predict a user’s likelihood of engaging with a single post (Oremus et al., 2021, October 26). For instance, it considers users’ friends, what kind of groups they joined, what pages they ‘Liked’, which advertisers have paid to target them, what types of stories drive conversation, how many long comments posts generate, whether a video is live or recorded, whether comments were made in plain text or with cartoon avatars, the computing load that each post requires and the strength of the user’s Internet signal (Hagey & Horwitz, 2021, September 15; Merrill & Oremus, 2021, October 26; Oremus et al., 2021, October 26).
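A minimal sketch of this prediction-and-ranking pattern is given below, assuming a handful of invented stand-in signals and a simple logistic-regression model; Facebook’s actual systems combine thousands of signals and far more complex models:

```python
# Train a model on past interactions to predict the probability that a given
# user engages with a given post, then rank the feed by that prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [poster_is_close_friend, post_has_video, past_comment_rate_with_poster,
#            post_already_going_viral]; label: did the user engage with the post?
X = np.array([
    [1, 0, 0.30, 0],
    [1, 1, 0.50, 1],
    [0, 0, 0.01, 0],
    [0, 1, 0.02, 1],
    [0, 0, 0.00, 0],
    [1, 0, 0.40, 1],
])
y = np.array([1, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

candidate_posts = np.array([
    [1, 1, 0.45, 1],   # close friend, video, strong history, already viral
    [0, 0, 0.00, 0],   # stranger, plain text, no interaction history
])
p_engage = model.predict_proba(candidate_posts)[:, 1]
feed_order = np.argsort(-p_engage)   # the most engaging content appears first
print(p_engage, feed_order)
```

The point of the sketch is the feedback loop: whatever the model learns to associate with past engagement is pushed higher in the feed, which in turn generates more of the same engagement data to train on.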

This increasing complexity of the News Feed ranking algorithm arose because of Facebook’s desire for continued growth, given new competitors and shifting user behaviour, alongside the rise of machine learning that could predict what content would resonate with which user (Levy, 2020). In 2012, as Facebook was preparing for its initial public offering (the process of offering shares of a private corporation to the public in a new stock issuance), its goal was to increase revenue and take on Google, which then had most of the online advertising market (Hao, 2021, March 11). At the time, Facebook’s News Feed ranking algorithm prioritised ‘Likes’, clicks and comments, and had led to publishers, brands and users learning how to craft ‘clickbait’ content with misleading, teaser headlines. Realising that users were growing wary of clickbait, Facebook recalibrated its algorithm in 2014 and 2015 to downgrade clickbait and focus on new metrics, such as amount of time spent on the site. In 2016, it added a ranking signal to measure a post’s value based on the amount of time users spent with it. In 2017, Facebook added another ranking signal for video: completion rate (videos that keep people watching to the end are shown to more people) (Newberry, 2022). By 2017, under an internal point system used to measure its success, the algorithm assigned Reaction emoji (‘Love’, ‘Haha’, ‘Wow’, ‘Sad’ and ‘Angry’) five times the weight of a simple ‘Like’ (Oremus et al., 2021, October 26); and a significant comment, message, reshare or RSVP was assigned 30 times the weight of a ‘Like’. Additional multipliers were added depending on whether interactions were between members of a group, friends or strangers (Hagey & Horwitz, 2021, September 15). These fed into Facebook’s algorithmic change in 2018 that prioritised meaningful social interactions, namely, ‘posts that spark conversations and meaningful interactions’. Posts from friends, family and Facebook groups were prioritised over organic content from pages. Brands would now need to earn more engagement to signal value to the algorithm. In 2019, Facebook prioritised high-quality, original video that keeps viewers watching longer than one minute. Facebook also prioritised content from ‘close friends’ (those that people engage with the most). In 2020, the algorithm also started to evaluate the credibility and quality of news articles to promote substantiated news rather than false information (Newberry, 2022).

Throughout, these algorithms were creating faster, more personalised feedback loops for tailoring each user’s News Feed to increase engagement. The same algorithm produces different results for each user because it learns from their individual behaviours. Facebook found that, for the most politically oriented one million American users, nearly 90% of content that Facebook shows them is about politics and social issues. However, those groups also received the most misinformation, especially users associated with mostly right-leaning content, who were shown 1 misinformation post out of every 40 (Oremus et al., 2021, October 26). Indeed, Facebook’s data scientists confirmed in 2019 that posts that sparked ‘Angry’, ‘Wow’ and ‘Haha’ Reaction emoji were disproportionately likely to include misinformation, toxicity and low-quality news (Merrill & Oremus, 2021, October 26). By giving outsize weight to emotional Reactions and to posts that sparked interactions, the algorithm generated and consolidated communities sharing false, extremist information (Hao, 2021, March 11; Oremus et al., 2021, October 26). In the midst of the fake news furore following the 2016 US presidential election, and increasingly disturbing evidence of proliferation of extremist hate speech worldwide, the first downgrade to the Angry emoji weighting came in 2018, when Facebook cut it to four times the value of a ‘Like’, keeping the same weight for all other emojis. In April 2019, Facebook created another mechanism to demote content receiving disproportionately angry reactions. In 2020, in efforts to improve the friend ecosystem while reducing virality and its associated problems, Facebook cut the weight of all the Reactions to one and a half times that of a ‘Like’. In September 2020, Facebook finally stopped using the Angry Reaction as a signal of what users wanted, cutting its weight to zero: its weight remained zero in 2021. As a result, users began to get less false information, less ‘disturbing’ content and less ‘graphic violence’, company data scientists found. At the same time, Facebook boosted ‘Love’ and ‘Sad’ to be worth two ‘Likes’ (Merrill & Oremus, 2021, October 26).
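The effect of these weighting decisions can be illustrated with the reported point values cited above (a ‘Like’ worth one point, Reaction emoji worth five in 2017, significant comments and reshares worth thirty, the Angry Reaction later cut to zero and ‘Love’ and ‘Sad’ raised to two); the scoring function and the example posts are invented simplifications, not Facebook’s code:

```python
# Reported point values from the leaked documents cited above (Hagey & Horwitz, 2021;
# Merrill & Oremus, 2021); the scoring function itself is a simplified illustration.
WEIGHTS_2017 = {"like": 1, "love": 5, "haha": 5, "wow": 5, "sad": 5, "angry": 5,
                "significant_comment": 30, "reshare": 30}
WEIGHTS_LATE_2020 = {"like": 1, "love": 2, "haha": 1.5, "wow": 1.5, "sad": 2, "angry": 0,
                     "significant_comment": 30, "reshare": 30}

def engagement_score(interactions, weights):
    """Sum weighted interactions for a post; higher scores push the post
    further up a user's News Feed in this simplified model."""
    return sum(weights.get(kind, 0) * count for kind, count in interactions.items())

# A hypothetical outrage-bait post versus a milder one.
outrage_post = {"angry": 400, "haha": 150, "reshare": 60, "like": 200}
mild_post = {"like": 900, "love": 80, "significant_comment": 10}

for label, post in [("outrage", outrage_post), ("mild", mild_post)]:
    print(label, engagement_score(post, WEIGHTS_2017), engagement_score(post, WEIGHTS_LATE_2020))
```

Re-scoring the same hypothetical posts under the 2017 and late-2020 weights shows how zeroing the Angry Reaction strips much of the ranking advantage from outrage-driven content.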

Increasing engagement on a platform is not inherently bad. As a Facebook staffer pointed out in the leaked documents, anger-generating posts might be essential to protest movements against corrupt regimes (Merrill & Oremus, 2021, October 26). However, increasing engagement with otherwise rare, extremist and false information is highly problematic. Furthermore, Facebook’s own research also showed that content that is hateful, divisive and polarising was what kept people on its platform (Pelley, 2021, October 4). Ultimately, should such decisions about optimising emotions for financial gain that affect the civic body be left to platforms? Regardless of whether it is overall better or worse for the civic body, Facebook’s (and Google’s) optimisation of emotions and engaging content is a fundamental, but opaque, part of their business model.

Destroying the Business Model for Real News and Fuelling False Information

Inadvertently, as well as fuelling fake news and extremist content, the economic decisions made by digital platforms have devastated journalism’s business model in three ways, all of which harm the civic body.

Firstly, news sites are reliant on the black-boxed algorithms of globally dominant social media and search engine platforms to drive content to their site. This is because, increasingly, more people worldwide use search engines and social media as their main source of news. Globally, across 46 countries and five continents surveyed in 2021, only 25% of participants prefer to start their news journeys with a news website or app, with most starting elsewhere, such as via social media, search engines, aggregators and email. Facebook was the most used social network for accessing news everywhere except Africa. In Africa, 60% used Facebook to access news (this was also the highest Facebook figure across the five continents), but 61% used Meta-owned WhatsApp (Newman et al., 2021). Such ‘distributed discovery’ means that news organisations have less control over how people find their news (Cornia et al., 2016). As such, appealing to platforms’ algorithms became vital for the economic survival of news outlets. Accordingly, Facebook’s algorithmic change in 2018 towards prioritising content from friends and family also hurt online publishers. In the first half of 2018, US-based outlet ABC News lost 12% of its traffic compared with the prior six months, BuzzFeed lost 13% and Breitbart lost 46%. To combat such audience loss, mainstream news outlets deploy misleading, clickbait headlines to generate Facebook shares, Internet traffic and advertising income. For instance, following the algorithmic change, Jonah Peretti, BuzzFeed’s chief executive, wrote to Facebook that his staff felt ‘pressure to make bad content’ including material exploiting ‘racial divisions’, ‘fad/junky science’ and ‘extremely disturbing news’ (Hagey & Horwitz, 2021, September 15). Also, the only viable way for news to stay free to users is to attract massive audiences (to sell to advertisers). Hence, free sites will often be aggressively populist, such as the British outlet MailOnline, or those funded primarily for propaganda, such as Breitbart or RT (formerly Russia Today) (Rapacioli, 2018, p. 92).

The second way in which digital platforms’ economic decisions have devastated journalism’s business model is in depriving news sites of advertising funds. Targeted digital advertising revenue is largely controlled by a Google-Meta duopoly. Hence, advertisers go to Google and Meta (rather than to news sites) for cheap, targeted advertising. This greatly reduces the amount of advertising income that news websites receive, making it far harder to fund their journalism, despite experimentation with paywalls, donations and digital subscriptions (McChesney, 2016; Nielsen & Fletcher, 2020; Nielsen & Ganter, 2017). Since the 2016 fake news furore captured public and political attention, Google and Facebook have begun voluntarily paying publishers around the world to sponsor news-related projects. However, the amounts have been determined by the platforms and, being voluntary payments, are subject to change depending on the platforms’ strategic priorities (Benton, 2022).

Thirdly, it is hard to find consumers who will pay for news given that much information is available for free on social media. Across 40 countries surveyed by YouGov in February 2020, most people, especially young adults, do not pay for online news, a trend observable since social media’s onset two decades prior and continuing today (Newman et al., 2022). For instance, across 2019, the proportion of people who paid for online news averaged only 26% in Nordic countries, 20% in the USA, 8% in Japan and 7% in the UK (Newman et al., 2020, pp. 21–22).

At best, this contraction of income flowing into news organisations damages product quality. It increases newsrooms’ reliance on (free) press releases rather than (expensive) original reporting (Davies, 2008). Rather than fostering in-depth journalism, it leads to newsroom strategies that focus on the immediacy of ‘breaking’ news events such as sensational crime and disasters, seeking audience engagement and responding to real-time feedback from analytics companies (Usher, 2018). It also leads to contractions in news provision, generating local news deserts (Curran, 2022; Starr, 2020). In countries that do not subsidise their public service news, this damages overall news quality (McChesney, 2016). At worst, people bypass news sites altogether, getting all their information from social media, a situation that can greatly damage the civic body.

Such situations are particularly problematic in poor countries where Facebook has been incentivising use of its platform. Through Internet.org, Facebook partners with telecommunications companies who, through ‘zero-rating’ policies, make several stripped-down web services (including Facebook) freely available through a mobile app (without tapping into users’ mobile data plans). Most charitably, this fulfils the social mission of Meta’s Chief Executive Officer, Mark Zuckerberg, of ‘connecting the world’ where Internet penetration is low (Zuckerberg, 2013, August 21). Less charitably, this helps grow Facebook’s international user base while damaging competitors, potentially inspiring future paid use of Facebook when users’ financial situation improves. First launched in 2013, and renamed ‘Free Basics’ in 2015, by 2019, it was available in 65 countries, including 30 African nations. This makes many people in poorer countries entirely reliant on Facebook for information access, eschewing paid-for content (including reputable news outlets). Unfortunately, many of these countries are characterised by weak public sphere institutions; lack government regulation to protect and educate citizens about false, emotive information; and are places where Facebook has been slower to introduce content moderation tools (Hempel, 2018, May 17; Nothias, 2020).

To summarise, the economics of emotion finances fake news websites and extremist content online. It involves both contract-based and advertising-based means of generating revenue across digital platforms. It leads to more emotionalised presentation of online news; greatly damages the economic viability, and hence quality, of news; and leads to many users relying on free, but false, information online. As such, the economics of emotion is an important incubator of false information online. So too is the politics of emotion, the subject of the next section.

The Politics of Emotion

The politics of emotion is the phenomenon of optimising datafied emotional content for political gain (Bakir & McStay, 2020). Appealing to a civic body’s emotions is a long-standing practice among politicians seeking election and nation states conducting information warfare. As we show below, such practices are super-charged in the digital media ecology, exploiting digital platforms’ profiling and optimisation affordances. Depending on their own priorities, dominant digital platforms may (or may not) intervene to moderate harmful content.

While for decades opinion polling has allowed political parties to merge broad demographic data with psychographic insights on how to craft emotionally resonant messages, the targeting is now far more granular (see Chaps. 5 and 6). As noted earlier, because of the economics of emotion, social media platforms favour emotionality: mainstream platforms such as Facebook surface posts that are emotionally engaging rather than neutral, while niche platforms, such as 4Chan, encourage offensive content to get noticed. Indeed, the politics of emotion led to complaints to Facebook by major political parties in Poland and Spain in 2019 (Hagey & Horwitz, 2021, September 15). A leaked Facebook report states that the political parties feel strongly that Facebook’s algorithmic change (prioritising Meaningful Social Interactions) ‘forced them to skew negative in their communications on Facebook… leading them into more extreme policy positions’ (Pelley, 2021, October 4). Facebook researchers wrote in their internal report that Polish parties complained that the change made ‘political debate on the platform nastier’ because the parties were now incentivised to attract reshares, achieved by tapping into anger, with similar complaints from political parties in Taiwan and India (Hagey & Horwitz, 2021, September 15).

False political information on social media makes us angry and less analytical. Barfar’s (2019) analysis of user comments on nearly 2100 political posts from popular sources of political disinformation on US Facebook in 2018 finds that, compared to true news, political disinformation received significantly fewer analytic responses and was filled with more anger and incivility (whereas true news elicited more anxiety). This tallies with research by Facebook’s data scientists, discussed earlier, which confirmed that posts sparking the ‘Angry’ reaction emoji were disproportionately likely to include misinformation, toxicity and low-quality news (Merrill & Oremus, 2021, October 26). Unsurprisingly, then, hate speech features in political disinformation worldwide. For instance, the 2019 Indonesian national elections were characterised by misinformation, populism and rampant use of religious, racial and divisive issues by their followers (Neyazi & Muhtadi, 2021). Instructively, leaked Facebook documents from February 2019, not long before India’s General Election, show how a dummy account set up to understand the experience of a new, young, female adult user in Facebook’s largest market was flooded with pro-Modi propaganda and anti-Muslim hate speech. Although Hindi and Bengali are respectively the fourth- and seventh-most spoken languages worldwide, Facebook only introduced hate speech classifiers in Hindi in 2018 and Bengali in 2020; systems for detecting violence and incitement in Hindi and Bengali were not added until 2021 (Zakrzewski et al., 2021, October 24).

It is not just domestic actors that exploit the politics of emotion but international actors strategically applying power in the information domain. Acts of warfare themselves, such as invading another country, unleash raw emotions. When seeded with disinformation, the emotional charge resists debunking or fact-checking, a phenomenon long observed by military historians (Rid, 2021). There are also more subtle ways of applying power in the information domain. Depending on context, the intergovernmental military alliance NATO has numerous terms for this, including information warfare, psychological operations, influence operations, strategic communications, computer network operations and military deception. Russia takes a more integrated view of informational power, covering the full range of practices above, while also applying ‘information-psychological warfare’ to both wartime and peacetime conflicts (Giles, 2016, p. 9). According to a NATO report, Russia uses information warfare deploying technical, cognitive and emotional facets to covertly introject distorted facts and ‘emotional impressions’ on policymakers in attempts to influence decisions (Giles, 2016, p. 21). Tactics include targeting politicians on social media and the comments sections of major online news outlets and manipulating polls in Western media, for instance, to skew survey results on whether sanctions against Russia were supported following its invasion of Ukraine in 2022 (The Guardian, 2022, May 1). Countries engaging in information warfare also seek to generate ontological insecurity, or intense anxiety, using covert means to attack citizens’ sense of being. Examples include destabilising the national narrative, or sense of home, that individuals are embedded within, and fracturing their sense of self by turning factions upon each other (Bolton, 2021). Race-baiting disinformation is an old tactic used in information warfare, deployed, for instance, by the USSR’s main security agency, the KGB, in the 1960s to stir up trouble in Black and Jewish communities in American cities (Rid, 2021). As Bolton (2021, p. 134) puts it, such tactics subvert ‘existing frameworks for managing anxiety around existential questions: eroding certainty over where threats reside (existence), undermining the stability of established belief systems (meaninglessness), and curbing positive subgroup recognition (condemnation)’.

A study by the Australian Strategic Policy Institute (a government-funded defence and strategic policy think tank) into elections and referenda held between 2016 and 2019 in 97 free or partly free countries (as defined by Freedom House, a non-profit, majority US government-funded research and advocacy organisation) finds evidence for cyber-enabled foreign interference targeting 20 countries. It largely (allegedly) emanates from Russia and China, but occasionally also Iran, the UK and Venezuela. The foreign interference targeted voting infrastructure in five countries (Colombia, Finland, Indonesia, Ukraine and the USA) and voter turnout in North Macedonia and the USA. Across ten countries (France, Israel, Italy, Malta, the Netherlands, North Macedonia, Spain, Taiwan, Ukraine and the USA), the interference also targeted the wider information environment, for instance, creating and spreading disinformation to undermine a candidate and creating fake personae to provide inflammatory commentary on divisive issues. There were also longer-term efforts to erode public trust in governments, political leadership and public institutions identifiable in ten countries (Australia, Brazil, the Czech Republic, Germany, Montenegro, Norway, the Netherlands, Singapore, Ukraine and the USA) (Hanson et al., 2019). Facebook also regularly reports on networks of accounts, pages and groups engaged in ‘coordinated inauthentic behaviour’ targeted at domestic audiences (for instance, in the USA, Georgia, Myanmar and Mauritania) and international audiences (for instance, emanating from Russia and Iran) (Facebook, 2020, April, 2018, August 28).

Deliberate deceptions, whether originating from domestic or international political actors, or non-state actors and digital influence mercenaries, are often recirculated as misinformation, exacerbated by the technological affordances of dominant media systems. Even a global behemoth like Facebook has limits on what resources it will devote to tackling false information online. While long prioritising international growth, Facebook has not safeguarded this by employing sufficient people speaking local languages, thereby damaging its ability to moderate content worldwide (Levy, 2020). A newspaper investigation examining internal documentation leaked by the ex-Facebook data scientist Sophie Zhang shows how Facebook allows major abuses of its platform in poor, small, non-Western countries while prioritising addressing abuses that attract media attention and negative public relations or that affect the USA and other wealthy countries (where its average revenue per user is higher). For instance, Facebook acted quickly to address disinformation affecting the USA, Poland, South Korea and Taiwan while moving slowly or not at all on cases in Afghanistan, Albania, Iraq, Mexico, Mongolia, Tunisia and much of Latin America (Wong, 2021, April 12). Leaked Facebook documents show that in 2020, Facebook employees and contractors spent over 3.2 million hours searching out, labelling or taking down information that the company concluded was false or misleading, but only 13% of those hours were spent on content from outside the USA (Scheck et al., 2021, September 16), although 90% of Facebook’s monthly active users are outside North America (Facebook, 2021).

As on other social media platforms, Facebook’s content moderation of hate speech has also proven inadequate to protect the civic body. Facebook reports that its proactive detection methods for hate speech have improved following advances in AI (where automated systems are trained on hundreds of thousands of different examples of violating content and common attacks). However, hate speech online can be hard to identify as it evolves rapidly, with code words and in-jokes for racial and gendered slurs (Ribeiro et al., 2018). For instance, use of triple parentheses around the hate target’s name is an anti-Jewish slur on Twitter (Duarte et al., 2017). In the Philippines, gendered online disinformation about Senator Leila de Lima uses the term ‘saba queen’ rather than her name (referencing rumours about her having an affair); and hashtags are used to similar effect (Judson et al., 2020, October). Given such difficulties, Facebook also relies on human reviewers to assess nuance and context. In 2020, Zuckerberg stated that Facebook removes 94% of the hate speech it finds before a human reports it (Lima, 2021, October 26). However, a leaked internal study from Facebook in 2021 states: ‘we estimate that we may action as little as 3-5% of hate and about 6-tenths of 1% of V & I [violence and incitement] on Facebook despite being the best in the world at it’ (Pelley, 2021, October 4).
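To illustrate why such rapidly evolving code words defeat simple rule-based filtering, consider the deliberately naive sketch below; apart from the documented triple-parentheses ‘echo’ pattern, the entries are invented placeholders rather than real slurs:

```python
import re

# A naive pattern list of the kind a rule-based filter might start from; the
# triple-parentheses pattern is the antisemitic 'echo' noted above, while the
# other entry is a hypothetical placeholder.
PATTERNS = [
    re.compile(r"\(\(\([^)]+\)\)\)"),        # ((( name ))) -- the 'echo' code
    re.compile(r"\bcodeword1\b", re.I),      # hypothetical known slur
]

def flag_hate_speech(post: str) -> bool:
    """Return True if the post matches any known pattern. Evolving in-group
    code words, misspellings and coded hashtags simply fall through."""
    return any(p.search(post) for p in PATTERNS)

print(flag_hate_speech("(((journalist))) is at it again"))   # True: known pattern
print(flag_hate_speech("c0deword1 strikes again"))           # False: trivial obfuscation evades the rule
print(flag_hate_speech("#sabaqueen"))                        # False: coded hashtag unknown to the filter
```

Each time a filter of this kind catches up with one code word, communities coin another; this is part of why platforms combine automated classifiers with human reviewers, and why even then, as the leaked estimate above suggests, only a fraction of hate speech is actioned.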

In some countries, worst-case scenarios prevail. In 2018, the United Nations called out Facebook for allowing hateful posts amplifying ethnic tensions between Buddhist nationalists and Muslim minorities in Myanmar (formerly, Burma). This led to over 9000 Rohingya Muslims being killed across 2017 and 800,000 fleeing to Bangladesh in 2018 to escape genocide (Hempel, 2018, May 17). Myanmar was a fragile democracy, having emerged from five decades of military rule in 2011. In 2012, only 1.1% of the population used the Internet, and few had telephones, as the military junta had kept citizens isolated. In 2013, when a quasi-civilian government oversaw telecommunications deregulation, SIM cards became affordable. By 2016, nearly half the population had mobile phone subscriptions (mostly smartphones), and Facebook’s app went viral as Myanmar’s mobile phone operators adopted zero-rating policies under Free Basics. Yet, in 2015, Facebook employed only four Burmese speakers as content moderators, for a digitally illiterate population. Most people in Myanmar do not speak English, yet Facebook’s system for reporting problematic posts was then only in English (Stecklow, 2018, August 15). Furthermore, the Burmese language does not always use international standard Unicode online but a unique font difficult for Facebook’s system to read (Levy, 2020, p. 437). Facebook’s investigation into its role in the genocide found that seemingly independent news, entertainment, beauty and lifestyle pages were linked to the Myanmar military, and celebrity and entertainment accounts pushed military propaganda. Facebook’s response across 2018 was to take down the pages, groups and accounts of military officials, organisations and networks that sought to incite the violence (Facebook, 2018, August 28). Yet, in August 2018, Reuters found over 1000 posts, comments, images and videos attacking the Rohingya or other Myanmar Muslims on Facebook in the previous week, some urging extermination (Stecklow, 2018, August 15).

While Facebook claims to have learned lessons from Myanmar, a similar situation emerged in Ethiopia in 2020, where armed groups associated with Ethiopia’s government and state media posted viral comments on Facebook inciting hatred against the Tigrayan minority, some calling for Tigrayans to be exterminated. Violence escalated when the government launched an attack on the Tigray capital, Mekelle. In the context of minimal press freedoms, low Internet penetration and fewer Facebook users in Ethiopia (only 6.7 million in 2020 in a population of 115 million) (Internet World Stats, 2021), politicised ethnic issues dominated discourse on popular Facebook sites in preceding years (Skjerdal & Gebru, 2020). Once again, leaked Facebook internal communications show it did not have enough employees speaking relevant languages to monitor the situation, and the AI systems that form the backbone of Facebook’s enforcement do not cover most languages used on the site. Facebook claims to have since increased its review capacity in Ethiopian languages and improved its automated systems to stop harmful content (Scheck et al., 2021, September 16; Simonite, 2021, October 25).

Clearly, the politics of emotion, and the false information that it propels, is observable worldwide. In practice, democracies vary in their vulnerability to false information based on factors such as the extent of domestic and external manipulation, the digital literacy of their citizens and their access to trustworthy information, and the willingness of global digital platforms to engage in resource-intensive content moderation (as well as other factors, discussed in Chap. 3).

Conclusion

To explain what incubates contemporary false information in civic bodies, we introduced two concepts. The economics of emotion delineates the optimisation of datafied emotional content for financial gain. We explored how it finances digital influence mercenaries, fake news websites and extremist content online; how it leads to more emotionalised presentation of online news; how it greatly damages the economic viability and quality of news; and how it leads to many people relying on free, but false, information online. Our concept of the politics of emotion (the phenomenon of optimising datafied emotional content for political gain) demonstrates how the long-standing practice of crafting emotive messages to engage target audiences is super-charged in contemporary informational environments. This exposes citizens to emotive, false information via behavioural targeting on social media platforms, exploited by domestic and international political actors. This generates affective feedback loops, ranging from intense anxiety to hatred of the other, that are not adequately dealt with by digital platforms’ content moderation. Given the varied, and complicated, global picture on vulnerability to false information, in the next chapter we illustrate how the economics and politics of emotion fuel false information in different democracies and under different affective contexts.