Introduction

In Part I, we deconstructed core features of contemporary false information online, exploring its dynamics across the world and synthesising interdisciplinary scholarship on disinformation, misinformation, affect, emotion, profiling, targeting and the increasing datafication and optimisation of emotional life. Starting with the metaphor of the civic body, we highlighted the interconnectedness of bodies (individual and societal) and data about emotions. We identified core incubators of false information online to be the economics of emotion and the politics of emotion—namely, optimising content for economic or political gain. We discussed how different affective contexts worldwide fuel false information, thereby highlighting the need to understand local specificities of affective contexts, as well as their intersections with international information flows (for instance, regarding information warfare, ideological struggles and platform resources for content moderation). We clarified the nature of false information and its occurrence online, drawing out implications for citizen-political communications. We investigated the role of affect, emotion and moods as an energising force in opinion formation and decision-making, which drives false information online. Finally, we delved into profiling and targeting as the core means of delivering emotively charged, false information throughout the civic body, exploring this dynamic in political campaigning in democracies with different data protection regimes and digital literacies. Building on this knowledge, Part II explores how we can strengthen the civic body across dominant and emergent uses of emotional AI.

Opening this discussion, this chapter examines six core social and democratic harms arising from false information on digital platforms. (1) It produces wrongly informed citizens who (2), in certain circumstances and for certain communities, are likely to stay wrongly informed in digital echo chambers and (3), more widely, be emotionally provoked (given the affective nature of much false information), thereby fuelling polarisation, partisan misperceptions, incivility and hatred. Added to this is a fourth problem: (4) contagion, where false, emotive information incubated in digital echo chambers and highly partisan enclaves influences wider social media and mainstream news, thereby spreading its pollutants far and wide. Meanwhile, (5) profiling and microtargeting raise core democratic harms comprising fragmentation of important national conversations; targeted suppression of voters; and undue influence over susceptible citizens, although such influence is hard to prove directly. Also related (6) is the impact of false information in seeding distrust in important civic processes and institutions.

Harm 1: Wrongly Informed Citizens

Making decisions on the basis of false information cannot be good for the individual or society. Unfortunately, studies indicate that most people have poor information hygiene; most older people are not at all confident that they can recognise false information (see Chap. 1); and people are poor at recognising deepfakes (see Chap. 4). Furthermore, misperceptions, once formed, are difficult to correct (Flynn et al., 2017, p. 130). US experiments show that repeated exposure to fake news headlines increases their perceived accuracy: this ‘illusory truth effect’ for fake news headlines occurs despite low levels of overall believability and even when stories are labelled as contested by fact-checkers or are inconsistent with the reader’s political ideology (Pennycook et al., 2018). US experiments also show that exposure to elite discourse about fake news leads to lower levels of trust in media and less accurate identification of real news (Guess et al., 2017; Van Duyn & Collier, 2019).

Such negative effects may be unequally distributed across the civic body, for instance, if citizens are differentially targeted with, or exposed to, poor-quality, false information. This may be particularly problematic in elections and more broadly among historically marginalised communities (Gandy, 2009). For instance, according to reports from American non-profit organisations, research into disinformation targeted at Spanish-speaking communities in the US 2018 mid-term elections and 2020 presidential elections identifies the problem of ‘data voids’ in search engines (Golebiewski & Boyd, 2018, May). With little high-quality Spanish-language content online on political candidates or on the voting rights of those of Latin American cultural or ethnic identity, disinformation actors fill this gap (Thakur & Hankerson, 2021).

Of course, questions of facts, reason and evidence are not the only pertinent factors in ensuring a well-functioning democracy. Farkas and Schou (2020) argue that democracy should aspire to popular sovereignty and rule by the people if it is to be true to its Greek roots: demos (people) and kratos (rule). This requires interlocking exchanges between the individual and the people, with resulting competing political ideas about how society should be structured. However, Farkas and Schou (2020, p. 8) also acknowledge that facts, reason and evidence should not be decoupled from the exchange of competing political ideas. Unfortunately, this is precisely what has happened for some communities, as shown in the next harm that we discuss.

Harm 2: Remaining Wrongly Informed in Digital Echo Chambers

A second harm from contemporary false information online is that if it goes uncorrected, it could lead citizens to remain wrongly informed in echo chambers. Echo chambers exist where information, ideas or beliefs are amplified and reinforced by communication and repetition inside a defined system where competing views are under-represented. Sunstein (2002, p. 176) describes this group polarisation phenomenon, where ‘members of a deliberating group predictably move toward a more extreme point in the direction indicated by the members’ predeliberation tendencies’. He ascribes this group polarisation to two forces. Firstly, ‘social comparison’: namely, people’s desire to maintain their reputation within the group and their self-conception. The second force is ‘persuasive arguments’: a group whose members are already inclined in a certain direction has a limited ‘argument pool’ containing a disproportionate number of arguments supporting that direction, so the result of discussion is to move individuals further in the direction of their initial inclinations. Echo chambers would be problematic for democracy because, to make informed decisions, citizens need access to, and engagement with, a sufficiently diverse body of information about public life (Sunstein, 2002, 2017). Sunstein (2002, p. 195) concludes that for deliberation to be valuable as a social phenomenon, we should ‘create spaces for enclave deliberation without insulating enclave members from those with opposing views, and without insulating those outside of the enclave from the views of those within it’. Certainly, across the world, people state that they do not desire echo chambers in their news diet. In 2021, the Reuters Institute surveyed the digital news consumption of people in 46 countries, finding that most (74%) prefer news that reflects a range of views (Newman et al., 2021).

It is much debated whether echo chambers are natural psycho-social phenomena, the product of the digital media ecology, or even exist at all. On the side of nature, a long line of research highlights the role of people’s natural biases and cognition processes. Selective exposure, where people prefer and tune into information that supports their existing beliefs, is an old and consistent finding in communication research, but operates mainly among a small minority of highly partisan individuals (Arguedas et al., 2022; Lazarsfeld et al., 1944). A closely related psychological phenomenon is confirmation bias, or people’s tendency to search for, interpret, notice, recall and believe information that confirms their pre-existing beliefs (Wason, 1960). Another related phenomenon is motivated reasoning—an information processing theory that holds that citizens are more accepting of false information that matches their pre-existing worldview (Kunda, 1990; Walter et al., 2020).

Fears have been expressed that when selective exposure, confirmation bias and motivated reasoning are combined with false information fed into self-reinforcing algorithmic systems (namely, filter bubbles), there is little chance of citizens correcting the false information and hence they will remain within their digital echo chamber. Pariser (2011) posits that ‘filter bubbles’ arise when algorithms applied to online content selectively gauge what information users want to see based on information about the users, their connections, browsing history, purchases and what they post and search. This results in users becoming separated from exposure to wider information that disagrees with their views.
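To make Pariser’s posited mechanism concrete, the sketch below (in Python, with invented item and history structures; it is a conceptual illustration only, not any platform’s actual ranking system) shows how a feed ranked purely by predicted engagement, estimated from a user’s past interactions, can progressively narrow what that user sees: each click feeds back into the history, so the next ranking skews further towards already preferred topics.

```python
from collections import Counter

def rank_feed(candidate_items, user_history, top_k=3):
    """Rank candidate items by how often the user previously engaged with their topic.

    candidate_items: list of dicts such as {"id": 1, "topic": "conspiracy"}.
    user_history: list of topics the user has previously clicked or liked.
    Returns the top_k items with the highest predicted engagement.
    """
    topic_counts = Counter(user_history)  # how often each topic was engaged with

    def predicted_engagement(item):
        # Crude proxy: items on frequently engaged topics score higher.
        return topic_counts.get(item["topic"], 0)

    return sorted(candidate_items, key=predicted_engagement, reverse=True)[:top_k]

# Toy loop: simulated clicks are fed back into the history, so each round of
# ranking is more skewed towards the user's initial preferences (a filter bubble).
history = ["conspiracy", "conspiracy", "sport"]
items = [{"id": i, "topic": t} for i, t in
         enumerate(["conspiracy", "science", "sport", "politics", "conspiracy"])]
for _ in range(3):
    feed = rank_feed(items, history)
    history.extend(item["topic"] for item in feed)  # user 'engages' with the feed
print(history)  # dominated by 'conspiracy' and 'sport'; 'science' never surfaces
```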

Whether digital echo chambers and filter bubbles exist, and whether they pose a democratic problem, has been vigorously debated. Synthesising these studies, the following sections present empirical evidence on both sides of the debate: first, that digital echo chambers exist on some social media platforms for some communities and are damaging; and, conversely, that digital echo chambers are minimal and do not pose a threat.

Digital Echo Chambers Exist for Some and Are Damaging

A number of studies suggest that digital echo chambers exist on some social media platforms for some countries and communities. For instance, a field experiment conducted in 2018 with over 17,000 American participants randomly offered them subscriptions to conservative or liberal news outlets on Facebook. It then examined the causal chain of media effects (subscriptions to outlets, exposure to news on Facebook, visits to online news sites, sharing posts, and changes in political opinions and attitudes). It finds that news sites visited through Facebook are associated with more segregated, pro-attitudinal and extreme news, compared to other news sites visited, and that Facebook’s content-ranking algorithm may limit users’ exposure to news outlets offering viewpoints contrary to their own (Levy, 2021).

Big data studies also find evidence of digital echo chambers. Analysis of the USA’s press and social media landscape across 18 months leading up to the 2016 presidential election shows that the right-wing media ecosystem (dominated by Breitbart and Fox News) was more insulated than the left-wing media ecosystem and so was more susceptible to disinformation (Faris et al., 2017, August 16). Computational approaches from other countries also empirically demonstrate that digital echo chambers exist on specific platforms for some communities and result in limited exposure to, and lack of engagement with, different ideas and other people’s viewpoints (Bessi et al., 2016; Cinelli et al., 2021; Cossard et al., 2020; del Vicario et al., 2016; Milani et al., 2020). For instance, Milani et al.’s (2020) social network analysis of how vaccination-related images are shared on Twitter (over 9000 English-language tweets from 2016) finds that pro- and anti-vaccination users formed two polarised networks that hardly interacted with each other and disseminated images among their members differently. Bessi et al.’s (2016) examination of the information consumption patterns of 1.2 million Italian Facebook users shows that their engagement with verified content (science news) or unverified content (conspiracy news) correlates with the number of friends having similar consumption patterns (homophily). While there is a scarcity of comparative studies across platforms on digital echo chambers, one such analysis of over 100 million pieces of content on controversial topics (including gun control, vaccination and abortion) from Facebook, Twitter, Reddit and Gab (sampling different time periods across 2010–2017) finds differences between the platforms. Digital echo chambers dominate online interactions on Facebook and Twitter (platforms that did not have a feed algorithm tweakable by users), but not on Reddit and Gab (platforms whose feed algorithm was tweakable by users). The study’s comparison of news consumption on Facebook and Reddit also finds higher segregation on Facebook (Cinelli et al., 2021).
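To illustrate how such network segregation is typically quantified, the hedged sketch below (Python, using an invented toy retweet network; it does not reproduce any of the cited studies’ actual pipelines) computes the share of interactions that cross community lines. Values near zero correspond to the pattern reported by studies such as Milani et al. (2020), in which two polarised networks hardly interact with each other.

```python
def cross_group_share(edges, group_of):
    """Fraction of interactions (e.g. retweets) linking members of different groups.

    edges: list of (user_a, user_b) interaction pairs.
    group_of: dict mapping each user to a community label (e.g. 'pro', 'anti').
    A value close to 0 indicates two largely separate networks (echo chambers);
    higher values indicate more mixing between the groups.
    """
    if not edges:
        return 0.0
    crossing = sum(1 for a, b in edges if group_of[a] != group_of[b])
    return crossing / len(edges)

# Toy example: two communities that mostly interact internally.
groups = {"u1": "pro", "u2": "pro", "u3": "anti", "u4": "anti"}
retweets = [("u1", "u2"), ("u2", "u1"), ("u3", "u4"), ("u4", "u3"), ("u1", "u3")]
print(cross_group_share(retweets, groups))  # 0.2 -> limited cross-group interaction
```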

Evidence from computational approaches shows that users accept confirmatory information on Facebook even if containing deliberately false claims (Bessi et al., 2014, 2016). For instance, Bessi et al.’s (2016) Italian Facebook study finds that users who are polarised towards conspiracy are most inclined to spread unverified rumours. Other studies show that dissenting information is mainly ignored or might even increase group polarisation. For instance, Zollo et al. (2017) examine the effectiveness of debunking through a quantitative analysis of 54 million US Facebook users across five years (2010–2014), comparing how users interact with proven (scientific) and unsubstantiated (conspiracy-like) information. They find that attempts at debunking are largely ineffective because only a small fraction of consumers of unsubstantiated information interact with the posts; those few are often the most committed conspiracy users; and rather than internalising debunking information, they often react to it negatively by retaining, or even increasing, engagement with the unsubstantiated information.

Digital Echo Chambers Are Minimal and Not a Threat

Yet, a sizeable body of research suggests that the extent and threat of digital echo chambers and filter bubbles have been overblown. While Eli Pariser (2011), the originator of the filter bubble concept, drew his anecdotal evidence from search engines, studies of personalisation in Google News in the USA and Germany find only small differences between news stories suggested to different ‘profiles’ (Haim et al., 2018; Nechushtai & Lewis, 2019). Research by Facebook into how 10.1 million American Facebook users interacted with socially shared news across 2014–2015 finds that users’ clicking behaviour on its platform plays a larger role than algorithmic ranking in limiting exposure to contrary content (Bakshy et al., 2015). Studies on Twitter that look beyond inherently ideological or polarised communities also find less homophily and polarisation in non-political contexts, observing considerable cross-connections between political groups (Bruns, 2019). For instance, a network analysis mapping Australian Twitter’s follower connections from 2015 to 2016 finds many interconnections around topics from politics to sports, although also finding that for some topics (hard-right politics, education and porn) followers have very few interconnections with others (Bruns et al., 2017). A network analysis mapping Norwegian Twitter’s follower connections in 2016 also suggests that digital echo chambers did not exist there at that time (Bruns & Enli, 2018). A big data study conducted in 2020 to evaluate the effectiveness of rumour rebuttal about COVID-19 on China’s Weibo concludes that there might not be a significant digital echo chamber effect on community interactions (Wang & Qian, 2021).

Some studies further show that social networks lead to greater exposure to diverse ideas (Flaxman et al., 2016; Messing & Westwood, 2012). For instance, a study of the web browsing histories of 50,000 American users who regularly read online news, covering a three-month period in 2013, finds that accessing news through social media and search both increases ideological segregation (namely, echo chambers) and (counterintuitively) increases exposure to diverse perspectives (Flaxman et al., 2016). An online survey of incidental exposure to news on social media in Australia, Italy, the UK and USA in 2015 finds that incidentally exposed users use significantly more online news sources than people who never use social media (Fletcher & Nielsen, 2018).

Surveys that ask users about their overall media diet (rather than their activity on a single social media platform) also find that echo chambers are very small. Surveys on samples representative of Internet users in Denmark, France, Germany, Greece, Italy, Poland, Spain, UK and USA between 2015 and 2018 find that social media mostly do not constitute digital echo chambers or filter bubbles, as most users see a mixture of political content with which they agree and disagree (Vaccari & Valeriani, 2021). Fletcher et al.’s (2021) study of online survey data in 2020 from seven countries (Austria, Denmark, Germany, Norway, Spain, UK, USA) finds that while politically partisan online news echo chambers exist, in most countries, only a minority (about 5% of Internet users) inhabit them. The figure for the USA is slightly higher: on average, 10% are in a left-wing online news echo chamber, and 3% in a right-wing online news echo chamber.

On balance, then, the research indicates that digital echo chambers and filter bubbles exist on some social media platforms for some communities, but do not exist for search engine results, other social media communities, or for most people.

Harm 3: Affective Content, Polarisation, Partisan Misperceptions, Incivility and Hate

A third harm from false information online is that it is often deliberately affective, as explained in Chaps. 2, 3 and 5. This promotion of content with high emotional appeal can generate various harms including encouraging affective polarisation and extreme views, fuelling partisan misperceptions, promoting incivility and increasing hate crimes.

In March 2021, Facebook executives circulated a memo to employees to discredit the idea that its social media platforms contribute to political polarisation. In testimony before a US House of Representatives subcommittee that month, Mark Zuckerberg instead blamed the USA’s media and political environment. Indeed, Chap. 3 highlights the long-standing affectively polarised media and politics of the USA. Yet, this does not absolve social media platforms, as studies demonstrate their role in shaping both affective and ideological polarisation. A recent review of empirical studies on social media and polarisation (most of them US-based) concludes that social media shapes affective and ideological polarisation through partisan selection, message content, platform design and algorithms (Van Bavel et al., 2021). Bail’s (2021) study of thousands of US-based social media users concludes that although the source of political tribalism on social media lies deep inside Americans, tapping their fears and resentments, social media distorts and amplifies these already strong emotions, fuelling status-seeking extremists and muting moderates who see little point in discussing politics on social media.

Indeed, leaked Facebook documents confirm that extreme positions on social media are encouraged algorithmically. Facebook’s internal research from 2016 found extremist content thriving in over a third of large German political groups on the platform. These groups were swamped with racist, conspiracy-minded and pro-Russian content, and 64% of new members of extremist groups joined because of Facebook’s recommendation tools (Horwitz & Seetharaman, 2020, May 26). Leaked Facebook documents from 2019 include a report titled ‘Carol’s Journey to QAnon’ (a cult that holds that a cabal of Satanic cannibals operates a global child sex trafficking ring and conspired against Donald Trump while he was US president). The report examines how Facebook’s recommendation algorithms affected the feed of an experimental account representing a conservative mother in North Carolina. It finds that rapid polarisation was an entrenched feature of the platform’s operation: the first QAnon page landed in the conservative user’s feed in just five days, even though the account set out to follow conservative political news and humour content and began by following high-quality conservative pages (Timberg et al., 2021, October 22).

Such social media polarisation, in turn, can skew the actual political offer. A study of US Twitter politicians and their followers from 2010 finds that politicians with more extreme ideological views had more followers than those with less extreme views. If politicians use social media feedback to inform their political stance, and if social media represents polarised views back to politicians, this can escalate polarised political offerings (Hong & Kim, 2016). Indeed, Chap. 2 points to leaked Facebook documents that show that political parties in Poland, Spain, India and Taiwan objected to Facebook’s change to its algorithm in 2018 (that rewarded more emotionalised engagement and reshares) on the grounds that it forced them into more negative, extreme policy positions in their communications on Facebook to reach wider audiences (Hagey & Horwitz, 2021, September 15; Pelley, 2021, October 4).

Such affective content may also fuel partisan misperceptions. Politically motivated reasoning is thought to be driven by automatic affective processes that establish the direction and strength of biases (Taber & Lodge, 2006, p. 756), with people updating their beliefs towards political objects using their existing affective evaluations (Flynn et al., 2017). Indeed, Chap. 5 discusses several American studies that show that there are notable increases in belief in fake news as audience emotionality increases and that people are more likely to believe fake news political headlines that align with their existing beliefs (also see Weeks, 2015).

A further problem arising from the affective nature of false information online is the relationship between affective content and incivility. If civility constitutes political argumentation characterised by speakers who present themselves as reasonable, courteous and respectful of those with whom they disagree (Berry & Sobieraj, 2014), incivility involves ‘speech that is impolite, insulting, or otherwise offensive’ (Ott, 2017, p. 62). Online incivility levels differ greatly worldwide, according to Microsoft (2021), and worsened during the first year of COVID-19, especially in public (rather than private) interactions and for women. While passionate politics is lauded by some, for others incivility is the antithesis of the norms of a well-functioning democracy, which requires citizens and politicians to engage respectfully, even on controversial topics. As with media effects research in general, studies on the extent and impacts of mediated incivility on politics are contradictory and mixed (for overviews, see Otto et al., 2019). For instance, American studies show that exposure to mediated political incivility (namely, violation of social norms in the media) erodes political trust and decreases the perceived legitimacy of political figures (Fridkin & Kenney, 2008; Mutz, 2007). A study in the Netherlands, UK and Spain shows that mediated political incivility reduces political participation intention and policy support (Otto et al., 2019). Yet, more positively, incivility and negative political speech can enable social engagement and information diffusion, leading to higher participation and voter turnout (Geer & Lau, 2006; Lu & Myrick, 2016).

While incivility can be democratically beneficial as well as harmful, scholarship on hate crimes is less equivocal. Several studies show that social media usage has measurable causal effects on hate crimes. One study isolates the causal effect of anti-refugee social media posts (on the Facebook page of Germany’s far-right AfD party) on hate crimes against refugees by examining associations with local Internet and Facebook outages. The association between Facebook posts and attacks disappears in localities where Internet outages prevented access to Facebook (Müller & Schwarz, 2020). Similar results are found in a longitudinal study (2007–2018) on the causal effects of Russia’s most popular social media platform, VKontakte (VK), on ethnic hate crimes and xenophobic attitudes in Russia. According to the study, conducted by the US-based, non-partisan National Bureau of Economic Research, the presence of this platform (measured by its extent of penetration across Russian cities) significantly increases hate crime in areas with pre-existing support for the nationalist and xenophobic political party, Rodina (Bursztyn et al., 2019).

Harm 4: Contagion

In 2012, Facebook demonstrated that emotional expression is contagious on its platform (although it should be noted that an expression and the emotion a person is actually undergoing can be quite different; McStay, 2018). Studies have since confirmed similar contagion of emotional expression on social media platforms from the USA (Facebook and Twitter) and beyond (China’s Weibo) (see Chap. 5). Although there is little consensus on what types of emotion expression lead to stronger contagion, especially considering different languages and cultures (Goldenberg & Gross, 2020), joy, moral-emotional words and especially anger appear to be front runners. Indeed, according to leaked Facebook documents, when Facebook tweaked its News Feed algorithm in 2018 in search of increased user engagement, it made Facebook an angrier place (Hagey & Horwitz, 2021, September 15).

It has also been shown that false information is contagious online, influencing mainstream news and wider social media, thereby spreading its pollutants far and wide. Chapter 4 documents big data studies on Twitter that find that falsehood diffuses significantly farther, faster, deeper and more broadly than the truth, inspiring fear, disgust and surprise; that misinformation spreads faster and more widely than fact-checking content; and that low-credibility content is equally or more likely to spread virally as fact-checked articles.

Emotional and deceptive contagion online and offline also works through careful organisation by architects of disinformation. For instance, following Donald Trump’s loss of the 2020 presidential election to Joe Biden in November 2020, in the run-up to the 6 January 2021 congressional certification of electoral votes, Trump riled up his supporters via Twitter and Facebook. He repeatedly summoned them to Washington to protest, culminating in pro-Trump supporters storming the Capitol on 6 January 2021. In its internal analysis several months later, titled ‘Stop the Steal and Patriot Party: The Growth and Mitigation of an Adversarial Harmful Movement’, Facebook notes that 67% of Stop the Steal joins came through group invites, and that 30% of invites came from just 0.3% of inviters. Such activity also helps evade enforcement of Facebook’s content moderation, as backup groups replace disabled groups. The Facebook report highlights how it was unable to cope with this level of growth:

In response [to the rapid growth of anti-quarantine Groups], a cap of 100 invites/person/day was implemented. We released an additional new invite rate limit of 30 adds/hour (now deprecated) during the growth of Stop the Steal Groups for users adding new friends (<3 days) to new groups (<7 days) to Groups with some certain ACDC properties. However, all of the rate limits were effective only to a certain extent and the groups were regardless able to grow substantially. (Mac et al., 2021, April 26)
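As a rough illustration of the kind of rate limit the report describes, the sketch below (Python) enforces a per-user daily cap on group invites. The figure of 100 invites per person per day is taken from the quoted passage; everything else (class and function names, data structures) is hypothetical, and real platform enforcement would be far more complex (hourly limits, account-age and group-age conditions, and so on), which is partly why, as the report concedes, such limits only slowed the groups’ growth.

```python
from collections import defaultdict
from datetime import date

DAILY_INVITE_CAP = 100  # figure quoted in the leaked report; illustrative only

class InviteRateLimiter:
    """Tracks group invites per user per day and blocks invites over the cap."""

    def __init__(self, daily_cap=DAILY_INVITE_CAP):
        self.daily_cap = daily_cap
        self.counts = defaultdict(int)  # (user_id, date) -> invites sent that day

    def allow_invite(self, user_id, today=None):
        today = today or date.today()
        key = (user_id, today)
        if self.counts[key] >= self.daily_cap:
            return False  # cap reached: further invites are blocked today
        self.counts[key] += 1
        return True

limiter = InviteRateLimiter()
allowed = sum(limiter.allow_invite("user_42") for _ in range(150))
print(allowed)  # 100 -> invites beyond the daily cap are rejected
```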

In terms of false information on social media infecting the press, big data studies of the American press and social media landscape in the 18 months prior to the 2016 presidential election conclude that while highly partisan and clickbait news sites existed on both sides of the partisan divide, especially on Facebook, on the right these sites were amplified and legitimated through an ‘attention backbone’ that tied the most extreme conspiracy sites to bridging sites such as Breitbart (Faris et al., 2017, August 16; also see Benkler et al., 2017). Another computational study investigating the role of fake news in the online media landscape from 2014 to 2016 finds that not only is fake news particularly responsive to the agendas of partisan media across many issues, but it also has a relatively stable ability to influence the entire mediascape. Across all three years, fake news set the agenda for the key issue of international relations, and for two years, it set the agenda on the economy and religion (Vargo et al., 2018). Mainstream news media pay attention to fake news because exposing and correcting lies is a basic imperative of the journalistic profession. Less charitably, covering fake news stories is made much easier by the growth of independent fact-checkers whose fact-checking provides information subsidies for news organisations (Tsfati et al., 2020). Arguably, according to a study from Data and Society (an independent, non-profit, US research organisation that seeks evidence-based public debate about emerging technology), mainstream news outlets also amplify false information in media environments where there is low public trust in media; a proclivity for sensationalism; a lack of resources for fact-checking and investigative reporting; and a lack of media pluralism resulting from corporate consolidation (Marwick & Lewis, 2017). More worryingly, in countries where the state or political parties have undue political or commercial influence over legacy media, disinformation narratives developed online have ready outlets for widespread contagion.

Harm 5: Microtargeting

Profiling and microtargeting practices have been empirically demonstrated in elections in the USA, UK and India (see Chap. 6). They raise three key democratic harms: fragmentation of important national conversations; targeted suppression of voters; and undue influence.

Fragmentation of National Conversations

Data-driven politics is about communicating efficiently, talking to voters who are most useful to a campaign. Microtargeting has potential democratic benefits such as reaching social groups that are hard to contact, increasing knowledge among voters about individually relevant issues, and increasing the efficiency of political parties’ campaigns. However, as Anstead (2017, p. 309) argues, ‘inefficient targeting’ might lead to better democratic outcomes as it could include more people in the electoral conversation. The UK’s data regulator agrees that it is essential that ‘voters have access to the full spectrum of political messaging and information and understand who the authors of the messages are’ (Information Commissioners Office, 2018, November 6).

The opacity of online profiling and targeting provides capacity for ‘dog whistle’ campaigns that emphasise a provocative position only to sympathetic audiences while remaining invisible to others. It also enables targeted, secretive delivery of ‘wedge’ issues (namely, issues that are highly important to specific segments of a voting population) to mobilise small, but crucial, segments (Tufekci, 2014). According to a report for the Electoral Reform Society (a UK-based independent campaigning organisation which promotes electoral reform), such activities could lead to campaigners focusing on voters in marginal seats while ignoring voters considered less politically valuable, such as those in traditionally safe electorates (Dommett & Power, 2020). Indeed, during the 2019 UK General Election, ads tended to be targeted at marginal constituencies and certain demographics. For example, early in the campaign, the Conservatives pitched ads about the National Health Service, schools and police to women, while men received a ‘Get Brexit Done’ message. As observed by First Draft (a now-ceased non-profit coalition that provided practical guidance on how to find, verify and publish content sourced from the social web), such microtargeted matching of content with demographics had not been done in previous British elections (First Draft, 2019).

The importance of, and threat to, shared national conversations must be recognised where microtargeted ads deprive recipients of wider, diverse collective scrutiny of the messages therein. For instance, a study by First Draft during the 2019 UK General Election finds that a significant number of ads from all political parties contained statements flagged as at least partially incorrect by independent fact-checkers (Newman et al., 2020). If such false information disseminates through microtargeting and if this is not scrutinised by mass media (or if citizens are no longer paying attention to such sources), then there is little chance of those elected on such platforms being held to public account. While the UK has some protection from fragmentation of important national conversations in that it has a well-funded and regulated broadcasting sector, and over 50% of its population trust broadcast, local and regional news (Newman, 2022), this is not the case in all parts of the world. Furthermore, such microtargeting makes it difficult for regulators to enforce advertising rules because, by the very nature of the microtargeting, a regulator is unlikely to see those ads. This risk will intensify if algorithmic marketing techniques become available to all political parties, as the UK’s data regulator observes has already happened in the UK (Information Commissioners Office, 2020, November) (see Chap. 6). This would enable parties to routinely run millions of algorithmically tuned messages, on a scale that could overwhelm regulators, with deleterious consequences for the transparency and political accountability of campaigns.

Targeted Suppression of Voters

There are few academic studies on targeted voter suppression online, not least because of methodological difficulties in studying this area. A big data study of American voters and Twitter in the 2018 mid-term elections failed to find evidence of voter suppression, but this may be due to methodological failings (Deb et al., 2019) and does not mean that voter suppression is not attempted. Certainly, parliamentary inquiries, investigative journalists, civil rights groups and think tanks have unearthed multiple offers and efforts to dissuade certain types of people from voting.

For instance, in the UK, evidence submitted to the UK Inquiry into Disinformation and Fake News describes a pitch made during the ‘Brexit’ Referendum campaign by Cambridge Analytica/SCL Group to the Leave.EU group, seeking to be chosen as its provider of electoral data analytics. Part of this pitch offered voter suppression, namely, ‘groups to dissuade from political engagement or to remove from contact strategy altogether’ (Bakir, 2020). Similarly, in the 2016 US presidential campaign, Trump’s digital campaign (called ‘Project Alamo’) involved Cambridge Analytica working with the Republican National Committee. Brad Parscale, the digital director of Trump’s campaign in 2016, reportedly used Facebook’s Lookalike Audiences ad tool to identify voters who were not Trump supporters, to then target them with psychographic, personalised negative messages designed to discourage them from voting. Campaign operatives openly referred to such efforts as ‘voter suppression’ aimed at three targeted groups: idealistic White liberals, young women and African Americans (Green & Issenberg, 2016, October 27). This targeted voter suppression of Black Americans was confirmed in 2020 by investigative journalists, based on leaked data used by Project Alamo on almost 200 million American voters. The investigation found that in 16 key battleground states, millions of Americans were separated by an algorithm into one of eight categories, to then be targeted with tailored ads on social media: one of the categories was named ‘Deterrence’ and disproportionately held 3.5 million Black Americans. While causality cannot be proven, not least as there are numerous sources of voter suppression in the USA beyond online campaigning efforts (Boyd-Barrett, 2020), the 2016 campaign preceded the first fall in Black turnout in 20 years and allowed Trump to win in key states by thin margins (Channel 4 News Investigations Team, 2020, September 28). According to a report from the Center for Democracy and Technology (a US non-profit organisation whose stated aims include enhancing freedom of expression globally and stronger legal controls on government surveillance), attempted targeted suppression of Spanish-language-dominant voters in the 2020 US presidential elections has also been observed, with disinformation about basic voting details and messaging intended to intimidate such voters (Thakur & Hankerson, 2021).

Undue Influence

If profiled and behaviourally driven messages are being used to try to surreptitiously influence people, then this may contravene the right to Freedom of Thought (Alegre, 2017, 2021, May). This right protects our mental inner space. It formally became international law in 1976 as part of Article 18 of the International Covenant on Civil and Political Rights (McCarthy-Jones, 2019). Alegre notes that ‘the concept of “thought” is potentially broad including things such as emotional states, political opinions and trivial thought processes’ (Alegre, 2017, p. 224). It includes the right to keep our thoughts private, the right not to have our thoughts manipulated and the right not to be penalised for our thoughts and opinions (Alegre, 2017, 2021, May). McCarthy-Jones (2019, p. 2) adds that thought includes attentional and cognitive agency, as well as external actions that are arguably constitutive of thought (such as reading, writing and many forms of Internet search behaviour). Unsurprisingly, then, freedom of thought (unlike freedom of expression) is protected as an absolute right in international human rights law: in other words, there are no restrictions allowed (Alegre, 2017). Freedom of thought has been described as ‘the foundation of democratic society’ and ‘the basis and origin of all other rights’ (Alegre, 2017, p. 221). Yet, this right has received little attention in the courts, partly because of an assumption that our inner thoughts were beyond reach (Alegre, 2017; McCarthy-Jones, 2019). This lacuna is problematic as recent developments in technology are providing new ways to access, alter and potentially manipulate our thoughts in ways we had not previously conceived.

While challenges to the right to freedom of thought are well worth exploring, it remains the case that studies on the impact of political messages on political behaviour are, at best, mixed, with more studies finding minimal effects, but with recent studies also finding that targeted, data-driven campaigns have some influence (as discussed in Chap. 6). With few studies examining the actual effects on political behaviour of false information campaigns conducted on social media, Bail et al.’s (2020) US study is instructive in calling into question their effectiveness. Their study uses longitudinal survey data and privileged access to Twitter data to assess the impact in late 2017 of a Russian Twitter false information campaign on the political attitudes and behaviours of frequent American-based Twitter users who identified as either strong or weak partisans. They show that it was users who were already highly polarised who engaged the most with the misinformation content. They also find no evidence that interacting with accounts linked to the false information campaign substantially impacted the issue attitudes, partisan stereotypes or political behaviours that they measured. Proving that undue influence has taken place is hard. Yet, with few studies attempting to disentangle the influence of online disinformation in real-world settings, it would be unwise to dismiss concerns about undue influence at this stage, especially regarding carefully crafted and targeted disinformation. What is more apparent, however, from user-based studies in the USA and UK, is that people dislike the premise of being manipulated via their emotions, especially for political ends (Andalibi & Buss, 2020; also see Chap. 9).

It is also worth reflecting on the conditions that would enable undue influence (or manipulation). In their discussion of the online environment, Susser et al. (2019, pp. 3, 26) define manipulation as using hidden or covert means to subvert another person’s decision-making power, undermining their autonomy. Bakir et al. (2019) argue that persuasive communications, to avoid being manipulative, should be guided by principles akin to informed consent. In short, to ethically persuade (rather than manipulate) people towards a particular viewpoint, the persuadee’s decision should be both informed (with sufficient information provided and none of it of a deceptive nature) and freely chosen (namely, no coercion or incentivisation). Yet, these conditions would be disabled by widespread false information (deception) driven by affect and emotion (which prompts gut reactions, thereby raising questions about the extent to which the decision was freely chosen), and by profiling and targeting (which might exclude people from exposure to sufficient information).

Harm 6: Seeding Distrust in the Civic Body

False information seeds distrust in important civic processes and institutions, from health messaging to democratic processes.

Where the knowledge base is uncertain, people are more susceptible to false information, as evidenced by the COVID-19 pandemic (explored in Chap. 5). The impacts of false COVID-19 information on trust in government vary across the globe. Several months into the pandemic, Newman et al.’s (2020) survey of six countries (conducted in April 2020) finds high levels of trust in news and information about COVID-19 from scientists and doctors (83%), national health organisations (76%) and global health organisations (73%). However, only a small majority trusts the national government (59%) and news organisations (59%), raising concerns about the impact of public health messaging where behaviour change is needed across the entire population. In some countries, the situation is improving or deteriorating depending on how governments have responded. In Vietnam, for instance, initially confusing governmental responses, amid a chaotic sphere of online false information and incivility, greatly heightened public anxiety and fear. However, this then forced Vietnam’s one-party state to become unusually transparent in responding to public concerns across 2020, leading to every new COVID-19 case being immediately published on governmental websites, mainstream and social media (Nguyen & Nguyen, 2020). By contrast, in many African countries, governmental denial, secrecy and misinformation, together with the fact that mainstream media are state controlled, encouraged alternative narratives of the COVID-19 crisis, especially online (where WhatsApp and Facebook are the two most common platforms). It also encouraged public distrust, apprehension and ambivalence towards public health messaging by governments in many parts of the continent. This builds on a long-standing cultural practice in which rumour is routinely used by the public, civil society and religious groups to challenge and subvert state narratives (Ogola, 2020).

Disinformation also seeds distrust in wider democratic processes—an information warfare aim of Russia and, to a lesser extent, China, but also engaged in by populist domestic actors (see Chap. 2). While the impact of such efforts is unclear, a study by the Australian Strategic Policy Institute (a government-funded defence and strategic policy think tank) suggests that the very perception of interference could be enough to threaten democratic outcomes if, for instance, people refuse to accept that an election result is legitimate (Hanson et al., 2019). For example, during the 2020 US presidential election, President Trump repeatedly made false statements attacking the integrity of the USA’s voting process, spawning diverse false claims online (Clayton et al., 2020; Lytvynenko & Silverman, 2020, November 3). Surveys indicate that among Trump’s supporters, the cumulative impact of such claims erodes trust and confidence in elections and increases belief that the election is rigged (Clayton et al., 2020; also see Pennycook & Rand, 2021). The strength of conviction behind such false beliefs is evident in the violent mob that descended on Capitol Hill on 6 January 2021 seeking to overturn the election results. It is telling that Facebook’s internal analysis notes that Stop the Steal and Patriot Party were harmful at the network level: ‘as a movement, it normalized delegitimization and hate in a way that resulted in offline harm and harm to the norms underpinning democracy’ (Mac et al., 2021, April 26). Indeed, some psychological studies find that exposure to anti-government conspiracy theories lowers intention to vote and decreases political trust among American and British citizens (although in other countries such as Germany, it increases intention to engage in political action) (Douglas et al., 2019, p. 20; Kim & Cao, 2016).

Conclusion

There are numerous social and democratic harms to the civic body arising from false information online. Across the six core harms that we have identified in this chapter, false information attacks our shared knowledge base, our togetherness, our democratic institutions and processes, and perhaps even our individual agency (although more studies are needed on this aspect).

In terms of harm 1 (wrongly informed citizens), people have trouble recognising fake news and deepfakes; and misperceptions, once formed, are difficult to correct. Furthermore, there is some evidence from the USA that, due to data voids, marginalised Spanish-speaking communities are exposed to poor-quality, false information on voting. More research is needed into the extent to which this harm is present in countries beyond the USA, as well as the extent to which it is unequally distributed across civic bodies.

In terms of harm 2 (remaining wrongly informed in echo chambers), on balance, most scholarship agrees that digital echo chambers and filter bubbles exist for certain communities (right-wing, anti-vax and conspiracy groups, and on controversial topics), in certain countries (the USA and Italy, themselves polarised societies) and on some platforms (on Twitter and on Facebook, the world’s biggest social media platform). This incubates conspiracy theories, rumours and fake news, and makes users resistant to debunking. Overall, however, digital echo chambers and filter bubbles are inhabited by a small proportion of national populations; social networks and recommendation algorithms lead to greater exposure to diverse ideas and news; and the effect of personalisation on news exposure on multiple platforms is smaller than often assumed. Yet, it is concerning that some communities remain wrongly informed in echo chambers and that this helps drive false information online. Furthermore, whether this is an improving or deteriorating situation is difficult to determine: how platforms are used and who is on them changes, as do platforms’ algorithms (as detailed in Chap. 2), yet, to date, platforms have largely not made their internal data or algorithms available to researchers. This is especially so for researchers in small economies such as Guatemala or Honduras, a situation that journalist Luis Assardo (2021, August 27) terms ‘the disinformation backyard’. Clearly, more research is needed into digital echo chambers and filter bubbles, ideally with access to the platforms’ data and algorithms. The research should be conducted across a wider range of countries than those examined to date (largely Western democracies, especially the USA and Italy, both of which are polarised countries whose audiences prefer more partial news). We know very little, for instance, about digital echo chambers in countries that depend on Facebook to access the internet (via Free Basics). It is also vital to consider the wider information ecology, and people’s overall consumption of news, rather than focusing on single platforms.

Multiple countries have experienced harm 3, where the deliberately affective nature of false information online encourages affective polarisation and extreme views (found in the USA, Germany, Poland, Spain, India and Taiwan); fuels partisan misperceptions (found in the USA); promotes hate crimes (found in Germany and Russia); and promotes mediated incivility (found in the USA, Netherlands, UK and Spain). While most of these impacts are viewed as unequivocal harms, the rise in mediated incivility has a mixed reception because, as well as generating harms (eroding political trust, decreasing the perceived legitimacy of political figures and reducing political participation intention and policy support), it can also lead to increased social engagement, information diffusion, higher participation and voter turnout, all of which are democratically valuable. More research on social media’s role in the various aspects of harm 3, and how this harm manifests in countries beyond the USA, would be worthwhile.

Big data studies evidence harm 4, contagion. Emotional expression, especially anger, joy and moral-emotional words, is contagious on social media platforms based in the USA and China. Deception is also contagious on Twitter, inspiring fear, disgust and surprise, and spreading faster and more widely than fact-checking. Despite their content moderation actions, social media platforms have been unable to prevent the growth of harmful adversarial movements such as Stop the Steal in the USA. Studies, especially from the USA, show that false information on social media infects the wider press. More studies are needed to explore the extent to which deception is contagious on platforms other than Twitter and the extent to which false information online influences wider media in countries other than the USA.

In terms of the various harms stemming from microtargeting (harm 5), studies so far are indicative rather than conclusive. On fragmentation of important national conversations, a study shows that in the 2019 British General Election, for the first time, ads (many containing false information) tended to be targeted at marginal constituencies and certain demographics. While the UK has some protection from fragmentation of important national conversations in that it has a well-funded and regulated broadcasting sector, and over half the population trust mainstream news outlets, this is not the case in all parts of the world. Fragmentation of important national conversations through the routine running of millions of algorithmically tuned messages, thereby damaging the transparency and political accountability of campaigns, remains feasible in all countries running digital political campaigns, especially where those countries lack media outlets with broad reach and trust. On the harm of targeted suppression of voters arising from profiling and microtargeting, such suppression has already been offered as a service (in the UK) and implemented (in the USA). Lastly, in all countries where social media platforms have a presence, the potential for undue influence of citizens is present. Although studies have yet to prove undue influence, the idea of emotional manipulation is disliked by (American and British) populations. More studies across the world are needed to examine to what extent targeted groups are subjected to insufficient, deceptive and affective content during important periods of civic activity (such as voting periods, census periods and vaccination drives).

Harm 6, seeding distrust in the civic body, is evident in some countries. The impacts of false COVID-19 information on trust in government vary worldwide. In African countries where governmental denial, secrecy and misinformation are common, alternative narratives of the COVID-19 crisis flourish online, thereby encouraging public distrust, apprehension and ambivalence towards public health messaging. Political disinformation about rigged elections and exposure to anti-government conspiracy theories also seed distrust in wider democratic processes, lower intention to vote and decrease political trust in the USA and UK. More studies across the world are needed that examine the impact of false information on trust in important civic processes and institutions.

Various stakeholders and countries have put forward solutions to counter false information online. It is to these that the next chapter turns, focusing on globally dominant digital platforms as the prime incubator of optimised emotions prevalent in the world today. We follow this with our final chapter, which reflects more broadly on emergent forms of emotional AI, outlining the harms visible on the horizon, but also drawing out the scope for stakeholders, countries and larger regions to act.