Opening Part II of this book on how to strengthen the civic body against the rising tide of optimisation of emotion and its incubation of false information online, this chapter examines six core social and democratic harms arising from false information online. Firstly, (1) it produces wrongly informed citizens who, (2) in certain circumstances, for certain communities, are likely to stay wrongly informed in digital echo chambers and, (3) more widely, be emotionally provoked (given the affective nature of much false information), thereby fuelling polarisation, partisan misperceptions, incivility and hatred. Added to this is a fourth problem: (4) contagion, where false, emotive information incubated in digital echo chambers and highly partisan enclaves influences wider social media and mainstream news, thereby spreading its pollutants far and wide. Meanwhile, (5) profiling and microtargeting raise core democratic harms comprising fragmentation of important national conversations; targeted suppression of voters; and undue influence over susceptible citizens, although this last harm is hard to prove directly. Also related (6) is the impact of false information in seeding distrust in important civic processes and institutions, from health messaging to democratic processes.
In Part I, we deconstructed core features of contemporary false information online, exploring its dynamics across the world and synthesising interdisciplinary scholarship on disinformation, misinformation, affect, emotion, profiling, targeting and the increasing datafication and optimisation of emotional life. Starting with the metaphor of the civic body, we highlighted the interconnectedness of bodies (individual and societal) and data about emotions. We identified core incubators of false information online to be the economics of emotion and the politics of emotion—namely, optimising content for economic or political gain. We discussed how different affective contexts worldwide fuel false information, thereby highlighting the need to understand local specificities of affective contexts, as well as their intersections with international information flows (for instance, regarding information warfare, ideological struggles and platform resources for content moderation). We clarified the nature of false information and its occurrence online, drawing out implications for citizen-political communications. We investigated the role of affect, emotion and moods as an energising force in opinion formation and decision-making, which drives false information online. Finally, we delved into profiling and targeting as the core means of delivering emotively charged, false information throughout the civic body, exploring this dynamic in political campaigning in democracies with different data protection regimes and digital literacies. Building on this knowledge, Part II explores how we can strengthen the civic body across dominant and emergent uses of emotional AI.
Opening this discussion, this chapter examines in turn each of the six core social and democratic harms arising from false information on digital platforms.
Harm 1: Wrongly Informed Citizens
Making decisions on the basis of false information cannot be good for the individual or society. Unfortunately, studies indicate that most people have poor information hygiene; most older people are not at all confident that they can recognise false information (see Chap. 1); and people are poor at recognising deepfakes (see Chap. 4). Furthermore, misperceptions, once formed, are difficult to correct (Flynn et al., 2017, p. 130). US experiments show that repeated exposure to fake news headlines increases their perceived accuracy: this ‘illusory truth effect’ for fake news headlines occurs despite low levels of overall believability and even when stories are labelled as contested by fact-checkers or are inconsistent with the reader’s political ideology (Pennycook et al., 2018). US experiments also show that exposure to elite discourse about fake news leads to lower levels of trust in media and less accurate identification of real news (Guess et al., 2017; Van Duyn & Collier, 2019).
Such negative effects may be unequally distributed across the civic body, for instance, if citizens are differentially targeted with, or exposed to, poor-quality, false information. This may be particularly problematic in elections and more broadly among historically marginalised communities (Gandy, 2009). For instance, according to reports from American non-profit organisations, research into disinformation targeted at Spanish-speaking communities in the 2018 US mid-term elections and 2020 presidential elections identifies the problem of ‘data voids’ in search engines (Golebiewski & Boyd, 2018, May). With little high-quality Spanish-language content online on political candidates or on the voting rights of those of Latin American cultural or ethnic identity, disinformation actors fill this gap (Thakur & Hankerson, 2021).
Of course, questions of facts, reason and evidence are not the only pertinent factors in ensuring a well-functioning democracy. Farkas and Schou (2020) argue that democracy should aspire to popular sovereignty and rule by the people if it is to be true to its Greek roots: demos (people) and kratos (rule). This requires interlocking exchanges between the individual and the people, with resulting competing political ideas about how society should be structured. However, Farkas and Schou (2020, p. 8) also acknowledge that facts, reason and evidence should not be decoupled from exchange of competing political ideas. Unfortunately, this is precisely what has happened for some communities, as shown in the next harm that we discuss.
Harm 2: Remaining Wrongly Informed in Digital Echo Chambers
A second harm from contemporary false information online is that if it goes uncorrected, it could lead citizens to remain wrongly informed in echo chambers. Echo chambers exist where information, ideas or beliefs are amplified and reinforced by communication and repetition inside a defined system where competing views are under-represented. Sunstein (2002, p. 176) describes this group polarisation phenomenon, where ‘members of a deliberating group predictably move toward a more extreme point in the direction indicated by the members’ predeliberation tendencies’. He ascribes this group polarisation to two forces. Firstly, ‘social comparison’: namely, people’s desire to maintain their reputation within the group and self-conception. The second force is ‘persuasive arguments’: there are limited ‘argument pools’ within a group whose members are already inclined in a certain direction, with a disproportionate number of arguments supporting that same direction, so the result of discussion will be to move individuals further in the direction of their initial inclinations. Echo chambers would be problematic for democracy, because, to make informed decisions, citizens need access to, and engagement with, a sufficiently diverse body of information about public life (Sunstein, 2002, 2017). Sunstein (2002, p. 195) concludes that for deliberation to be valuable as a social phenomenon, we should ‘create spaces for enclave deliberation without insulating enclave members from those with opposing views, and without insulating those outside of the enclave from the views of those within it’. Certainly, across the world, people state that they do not desire echo chambers in their news diet. In 2021, Reuters Institute surveyed the digital news consumption of people in 46 countries, finding that most (74%) prefer news that reflects a range of views (Newman et al., 2021).
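Sunstein’s ‘persuasive arguments’ force lends itself to a toy simulation. The sketch below is illustrative only: the pool sizes, scoring rule and parameters are assumptions invented for this sketch, not drawn from Sunstein’s work. It models a group whose shared argument pool leans one way; once deliberation pools everyone’s arguments, each member ends up more extreme than the pre-deliberation average.

```python
import random

def simulate_group_polarisation(n_members=10, n_pro=12, n_con=4,
                                known_fraction=0.4, seed=1):
    """Toy model of the 'persuasive arguments' mechanism.

    The group's argument pool leans one way (more pro than con
    arguments). Each member starts knowing only a random sample of the
    pool; deliberation then shares every argument with everyone.
    Opinion is scored as (#pro known - #con known), so pooling a
    one-sided argument set pushes every member further in the group's
    initial direction.
    """
    rng = random.Random(seed)
    pool = [+1] * n_pro + [-1] * n_con              # +1 = pro, -1 = con
    k = int(len(pool) * known_fraction)
    members = [rng.sample(pool, k) for _ in range(n_members)]

    before = sum(sum(m) for m in members) / n_members  # mean pre-deliberation opinion
    after = sum(pool)                                  # everyone now knows the full pool
    return before, after

before, after = simulate_group_polarisation()
```

Because the pool contains more pro than con arguments, sharing it can only push members further in the direction of their initial inclination, which is the core of the persuasive-arguments account of limited ‘argument pools’.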
It is much debated whether echo chambers are natural psycho-social phenomena, the product of the digital media ecology, or even exist at all. On the side of nature, a long line of research highlights the role of people’s natural biases and cognition processes. Selective exposure, where people prefer and tune into information that supports their existing beliefs, is an old and consistent finding in communication research, but operates mainly among a small minority of highly partisan individuals (Arguedas et al., 2022; Lazarsfeld et al., 1944). A closely related psychological phenomenon is confirmation bias, or people’s tendency to search for, interpret, notice, recall and believe information that confirms their pre-existing beliefs (Wason, 1960). Another related phenomenon is motivated reasoning—an information processing theory that holds that citizens are more accepting of false information that matches their pre-existing worldview (Kunda, 1990; Walter et al., 2020).
Fears have been expressed that when selective exposure, confirmation bias and motivated reasoning are combined with false information fed into self-reinforcing algorithmic systems (namely, filter bubbles), there is little chance of citizens correcting the false information and hence they will remain within their digital echo chamber. Pariser (2011) posits that ‘filter bubbles’ arise when algorithms applied to online content selectively gauge what information users want to see based on information about the users, their connections, browsing history, purchases and what they post and search. This results in users becoming separated from exposure to wider information that disagrees with their views.
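Pariser’s feedback loop can be sketched as a toy recommender. The scoring and update rules below are invented for this illustration and do not describe any real platform’s algorithm; the point is only the loop itself, in which ranking feeds clicks and clicks feed ranking.

```python
import random

def filter_bubble_rounds(n_rounds=40, feed_size=5, lean=0.8, seed=7):
    """Minimal sketch of a filter-bubble feedback loop.

    Items carry a viewpoint in [-1, +1]. The ranking step favours items
    close to the platform's inferred preference (built only from past
    clicks); the user clicks the shown item closest to their true lean;
    each click sharpens the inferred preference. Returns the mean
    viewpoint of each round's feed.
    """
    rng = random.Random(seed)
    preference = 0.0                       # platform's estimate, starts neutral
    feed_means = []
    for _ in range(n_rounds):
        items = [rng.uniform(-1, 1) for _ in range(50)]
        # rank by similarity to the inferred preference, show the top of the feed
        feed = sorted(items, key=lambda v: abs(v - preference))[:feed_size]
        feed_means.append(sum(feed) / feed_size)
        clicked = min(feed, key=lambda v: abs(v - lean))   # user picks the congenial item
        preference = 0.5 * preference + 0.5 * clicked      # estimate updates on clicks
    return feed_means

means = filter_bubble_rounds()
```

Early feeds hover around neutral content, but because every click sharpens the inferred preference, later feeds cluster around the user’s own lean: the separation from disagreeable information that Pariser describes.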
Whether digital echo chambers and filter bubbles exist, and whether they are a democratic problem, has been vigorously debated. Synthesising these studies, the following sections present empirical evidence indicating, on the one hand, that digital echo chambers exist on some social media platforms for some communities and are damaging and, on the other, that digital echo chambers are minimal and do not pose a threat.
Digital Echo Chambers Exist for Some and Are Damaging
A number of studies suggest that digital echo chambers exist on some social media platforms for some countries and communities. For instance, a field experiment conducted in 2018 randomly offered over 17,000 American participants subscriptions to conservative or liberal news outlets on Facebook. It then examined the causal chain of media effects (subscriptions to outlets, exposure to news on Facebook, visits to online news sites, sharing posts, and changes in political opinions and attitudes). It finds that news sites visited through Facebook are associated with more segregated, pro-attitudinal and extreme news, compared to other news sites visited, and that Facebook’s content-ranking algorithm may limit users’ exposure to news outlets offering viewpoints contrary to their own (Levy, 2021).
Big data studies also find evidence of digital echo chambers. Analysis of the USA’s press and social media landscape across 18 months leading up to the 2016 presidential election shows that the right-wing media ecosystem (dominated by Breitbart and Fox News) was more insulated than the left-wing media ecosystem and so was susceptible to disinformation (Faris et al., 2017, August 16). Computational approaches from other countries also empirically demonstrate that digital echo chambers exist on specific platforms for some communities and result in limited exposure to, and lack of engagement with, different ideas and other people’s viewpoints (Bessi et al., 2016; Cinelli et al., 2021; Cossard et al., 2020; del Vicario et al., 2016; Milani et al., 2020). For instance, Milani et al.’s (2020) social network analysis of how vaccination-related images are shared on Twitter (over 9000 English-language tweets from 2016) finds that pro- and anti-vaccination users formed two polarised networks that hardly interacted with each other and disseminated images among their members differently. Bessi et al.’s (2016) examination of the information consumption patterns of 1.2 million Italian Facebook users shows that their engagement with verified content (science news) or unverified content (conspiracy news) correlates with the number of friends having similar consumption patterns (homophily). While there is a scarcity of comparative studies across platforms on digital echo chambers, one such analysis of over 100 million pieces of content on controversial topics (including gun control, vaccination and abortion) from Facebook, Twitter, Reddit and Gab (sampling different time periods across 2010–2017) finds differences between the platforms. Digital echo chambers dominate online interactions on Facebook and Twitter (platforms that did not have a feed algorithm tweakable by users), but not on Reddit and Gab (platforms whose feed algorithm was tweakable by users). The study’s comparison of news consumption on Facebook and Reddit also finds higher segregation on Facebook (Cinelli et al., 2021).
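One common way network studies quantify this kind of segregation is Krackhardt and Stern’s E-I index: external minus internal ties, divided by all ties. The sketch below applies it to invented data; the accounts and ties are hypothetical and not taken from the studies cited above.

```python
def ei_index(edges, group):
    """E-I index: (external ties - internal ties) / total ties.
    Values near -1 indicate an echo-chamber-like structure (ties stay
    within camps); values near +1 indicate mostly cross-camp ties."""
    external = sum(1 for a, b in edges if group[a] != group[b])
    internal = len(edges) - external
    return (external - internal) / len(edges)

# Hypothetical retweet ties between pro- (P) and anti- (A) vaccination accounts
group = {"p1": "P", "p2": "P", "p3": "P", "a1": "A", "a2": "A", "a3": "A"}
edges = [("p1", "p2"), ("p2", "p3"), ("p1", "p3"),
         ("a1", "a2"), ("a2", "a3"), ("p1", "a1")]
score = ei_index(edges, group)  # 5 internal ties, 1 external: (1 - 5) / 6
```

With five within-camp ties and one cross-camp tie, the index is strongly negative, the signature of two networks that hardly interact with each other.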
Evidence from computational approaches shows that users accept confirmatory information on Facebook even when it contains deliberately false claims (Bessi et al., 2014, 2016). For instance, Bessi et al.’s (2016) Italian Facebook study finds that users who are polarised towards conspiracy are the most inclined to spread unverified rumours. Other studies show that dissenting information is mainly ignored or might even increase group polarisation. For instance, Zollo et al. (2017) examine the effectiveness of debunking through a quantitative analysis of 54 million US Facebook users across five years (2010–2014), comparing how users interact with proven (scientific) and unsubstantiated (conspiracy-like) information. They find that attempts at debunking are largely ineffective: only a small fraction of consumers of unsubstantiated information interact with the debunking posts; those few are often the most committed conspiracy users; and, rather than internalising debunking information, they often react to it negatively by retaining, or even increasing, their engagement with the unsubstantiated information.
Digital Echo Chambers Are Minimal and Not a Threat
Yet, a sizeable body of research suggests that the extent and threat of digital echo chambers and filter bubbles have been overblown. While search engines provided the anecdotal evidence for filter bubbles offered by the concept’s originator, Eli Pariser (2011), studies of personalisation in Google News in the USA and Germany find only small differences between news stories suggested to different ‘profiles’ (Haim et al., 2018; Nechushtai & Lewis, 2019). Research by Facebook into how 10.1 million American Facebook users interacted with socially shared news across 2014–2015 finds that users’ clicking behaviour on its platform plays a larger role than algorithmic ranking in limiting exposure to contrary content (Bakshy et al., 2015). Studies on Twitter that look beyond inherently ideological or polarised communities also find less homophily and polarisation in non-political contexts, observing considerable cross-connections between political groups (Bruns, 2019). For instance, a network analysis mapping Australian Twitter’s follower connections from 2015 to 2016 finds many interconnections around topics from politics to sports, although also finding that for some topics (hard-right politics, education and porn) followers have very few interconnections with others (Bruns et al., 2017). A network analysis mapping Norwegian Twitter’s follower connections in 2016 also suggests that digital echo chambers did not exist there at that time (Bruns & Enli, 2018). A big data study conducted in 2020 to evaluate the effectiveness of rumour rebuttal about COVID-19 on China’s Weibo concludes that there might not be a significant digital echo chamber effect on community interactions (Wang & Qian, 2021).
Some studies further show that social networks lead to greater exposure to diverse ideas (Flaxman et al., 2016; Messing & Westwood, 2012). For instance, a study of the web browsing histories of 50,000 American users who regularly read online news, across a three-month period in 2013, finds that news accessed via social media and search engines is associated both with higher ideological segregation (namely, echo chambers) and (counterintuitively) with greater exposure to diverse perspectives (Flaxman et al., 2016). An online survey of incidental exposure to news on social media in Australia, Italy, the UK and USA in 2015 finds that incidentally exposed users use significantly more online news sources than people who never use social media (Fletcher & Nielsen, 2018).
Surveys that ask users about their overall media diet (rather than their activity on a single social media platform) also find that echo chambers are very small. Surveys on samples representative of Internet users in Denmark, France, Germany, Greece, Italy, Poland, Spain, UK and USA between 2015 and 2018 find that social media mostly do not constitute digital echo chambers or filter bubbles, as most users see a mixture of political content with which they agree and disagree (Vaccari & Valeriani, 2021). Fletcher et al.’s (2021) study of online survey data in 2020 from seven countries (Austria, Denmark, Germany, Norway, Spain, UK, USA) finds that while politically partisan online news echo chambers exist, in most countries, only a minority (about 5% of Internet users) inhabit them. The figure for the USA is slightly higher: on average, 10% are in a left-wing online news echo chamber, and 3% in a right-wing online news echo chamber.
On balance, then, the research indicates that digital echo chambers and filter bubbles exist on some social media platforms for some communities, but do not exist for search engine results, other social media communities, or for most people.
Harm 3: Affective Content, Polarisation, Partisan Misperceptions, Incivility and Hate
A third harm from false information online is that it is often deliberately affective, as explained in Chaps. 2, 3 and 5. This promotion of content with high emotional appeal can generate various harms including encouraging affective polarisation and extreme views, fuelling partisan misperceptions, promoting incivility and increasing hate crimes.
In March 2021, Facebook executives circulated a memo to employees to discredit the idea that its social media platforms contribute to political polarisation. In testimony before a US House of Representatives subcommittee that month, Mark Zuckerberg instead blamed the USA’s media and political environment. Indeed, Chap. 3 highlights the long-standing affectively polarised media and politics of the USA. Yet, this does not absolve social media platforms, as studies demonstrate their role in shaping both affective and ideological polarisation. A recent review of empirical studies on social media and polarisation (most of them US-based) concludes that social media shapes affective and ideological polarisation through partisan selection, message content, platform design and algorithms (Van Bavel et al., 2021). Bail’s (2021) study of thousands of US-based social media users concludes that although the source of political tribalism on social media lies deep inside Americans, tapping their fears and resentments, social media distorts and amplifies these already strong emotions, fuelling status-seeking extremists and muting moderates who see little point in discussing politics on social media.
Indeed, leaked Facebook documents confirm that extreme positions on social media are encouraged algorithmically. Facebook’s internal research from 2016 found extremist content thriving in over a third of large German political groups on the platform. Swamped with racist, conspiracy-minded and pro-Russian content, 64% of new members joined extremist groups because of Facebook’s recommendation tools (Horwitz & Seetharaman, 2020, May 26). Leaked Facebook documents from 2019 include a report titled ‘Carol’s Journey to QAnon’ (a cult that holds that a cabal of Satanic cannibals operates a global child sex trafficking ring and conspired against Donald Trump while he was US president). The report examines how Facebook’s recommendation algorithms affected the feed of an experimental account representing a conservative mother in North Carolina. It finds that rapid polarisation was an entrenched feature of the platform’s operation: the first QAnon page landed in the conservative user’s feed in just five days, even though the account set out to follow conservative political news and humour content and began by following high-quality conservative pages (Timberg et al., 2021, October 22).
Such social media polarisation, in turn, can skew the actual political offer. A study of US Twitter politicians and their followers from 2010 finds that politicians with more extreme ideological views had more followers than those with less extreme views. If politicians use social media feedback to inform their political stance, and if social media represents polarised views back to politicians, this can escalate polarised political offerings (Hong & Kim, 2016). Indeed, Chap. 2 points to leaked Facebook documents that show that political parties in Poland, Spain, India and Taiwan objected to Facebook’s change to its algorithm in 2018 (that rewarded more emotionalised engagement and reshares) on the grounds that it forced them into more negative, extreme policy positions in their communications on Facebook to reach wider audiences (Hagey & Horwitz, 2021, September 15; Pelley, 2021, October 4).
Such affective content may also fuel partisan misperceptions. Politically motivated reasoning is thought to be driven by automatic affective processes that establish the direction and strength of biases (Taber & Lodge, 2006, p. 756), with people updating their beliefs towards political objects using their existing affective evaluations (Flynn et al., 2017). Indeed, Chap. 5 discusses several American studies that show that there are notable increases in belief in fake news as audience emotionality increases and that people are more likely to believe fake news political headlines that align with their existing beliefs (also see Weeks, 2015).
A further problem arising from the affective nature of false information online is the relationship between affective content and incivility. If civility constitutes political argumentation characterised by speakers who present themselves as reasonable, courteous and respectful of those with whom they disagree (Berry & Sobieraj, 2014), incivility involves ‘speech that is impolite, insulting, or otherwise offensive’ (Ott, 2017, p. 62). Online incivility levels differ greatly worldwide, according to Microsoft (2021), and worsened during the first year of COVID-19, especially in public (rather than private) interactions, and for women. While passionate politics is lauded by some, for others incivility is the antithesis of the norms of a well-functioning democracy, which requires citizens and politicians to engage respectfully, even on controversial topics. As with media effects research in general, studies on the extent and impacts of mediated incivility on politics are contradictory and mixed (for overviews, see Otto et al., 2019). For instance, American studies show that exposure to mediated political incivility (namely, violation of social norms in the media) erodes political trust and decreases the perceived legitimacy of political figures (Fridkin & Kenney, 2008; Mutz, 2007). A study in the Netherlands, UK and Spain shows that mediated political incivility reduces political participation intention and policy support (Otto et al., 2019). Yet, more positively, incivility and negative political speech can enable social engagement and information diffusion, leading to higher participation and voter turnout (Geer & Lau, 2006; Lu & Myrick, 2016).
While incivility can be democratically beneficial as well as harmful, scholarship on hate crimes is less equivocal. Several studies show that social media usage has measurable causal effects on hate crimes. One study isolates the causal effect of anti-refugee social media posts (on the Facebook page of Germany’s far-right AfD party) on hate crimes against refugees by examining associations with local Internet and Facebook outages: the association between Facebook posts and attacks disappears in localities where Internet outages prevented access to Facebook (Müller & Schwarz, 2020). Similar results are found in a longitudinal study (2007–2018) on the causal effects of Russia’s most popular social media platform, VKontakte (VK), on ethnic hate crimes and xenophobic attitudes in Russia. According to the study, conducted by the US-based, non-partisan National Bureau of Economic Research, the presence of this platform (measured by its extent of penetration across Russian cities) significantly increases hate crime in areas where there is pre-existing support for the nationalist and xenophobic political party Rodina (Bursztyn et al., 2019).
Harm 4: Contagion
In 2012, Facebook demonstrated that emotional expression is contagious on its platform (although it should be noted that expressions and the emotion that a person may actually be undergoing can be quite different (McStay, 2018)). Studies have since confirmed similar contagion of emotional expression on social media platforms from the USA (Facebook and Twitter) and beyond (China’s Weibo) (see Chap. 5). Although there is little consensus on what types of emotional expression lead to stronger contagion, especially considering different languages and cultures (Goldenberg & Gross, 2020), joy, moral-emotional words and especially anger appear to be front runners. Indeed, according to leaked Facebook documents, when Facebook tweaked its News Feed algorithm in 2018 in search of increased user engagement, it made Facebook an angrier place (Hagey & Horwitz, 2021, September 15).
It has also been shown that false information is contagious online, influencing mainstream news and wider social media, thereby spreading its pollutants far and wide. Chapter 4 documents big data studies on Twitter that find that falsehood diffuses significantly farther, faster, deeper and more broadly than the truth, inspiring fear, disgust and surprise; that misinformation spreads faster and more widely than fact-checking content; and that low-credibility content is equally or more likely to spread virally as fact-checked articles.
Emotional and deceptive contagion online and offline also works through careful organisation by architects of disinformation. For instance, after Donald Trump lost the 2020 presidential election to Joe Biden in November 2020, in the run-up to the 6 January 2021 congressional certification of electoral votes, Trump riled up his supporters via Twitter and Facebook, repeatedly summoning them to Washington to protest; pro-Trump supporters then stormed the Capitol on 6 January 2021. An internal Facebook analysis several months later, titled ‘Stop the Steal and Patriot Party: The Growth and Mitigation of an Adversarial Harmful Movement’, notes that 67% of Stop the Steal joins came through group invites, and that 30% of invites came from just 0.3% of inviters. Such invite-driven growth also helps evade Facebook’s content moderation, as backup groups replace disabled groups. The report highlights how Facebook was unable to cope with this level of growth:
In response [to the rapid growth of anti-quarantine Groups], a cap of 100 invites/person/day was implemented. We released an additional new invite rate limit of 30 adds/hour (now deprecated) during the growth of Stop the Steal Groups for users adding new friends (<3 days) to new groups (<7 days) to Groups with some certain ACDC properties. However, all of the rate limits were effective only to a certain extent and the groups were regardless able to grow substantially. (Mac et al., 2021, April 26)
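The caps mentioned in the excerpt (100 invites per person per day, 30 adds per hour) behave like layered sliding-window rate limits. The sketch below is a generic illustration of how such layered caps work, not Facebook’s actual implementation; it also suggests why such limits are ‘effective only to a certain extent’: they throttle individual inviters, but do nothing about growth spread across many inviters.

```python
from collections import deque

class InviteRateLimiter:
    """Generic layered sliding-window limiter, illustrating caps like
    those described in the leaked report (30 adds/hour, 100/day).
    Each cap keeps a log of recent invite timestamps; an invite is
    allowed only if every window is under its cap."""

    def __init__(self, per_hour=30, per_day=100):
        self.caps = [(3600, per_hour, deque()), (86400, per_day, deque())]

    def allow(self, now):
        for window, cap, log in self.caps:
            # drop timestamps that have aged out of this window
            while log and now - log[0] >= window:
                log.popleft()
            if len(log) >= cap:
                return False
        for _, _, log in self.caps:
            log.append(now)
        return True

limiter = InviteRateLimiter()
# one inviter sending a burst: 30 invites pass, the 31st within the hour is blocked
results = [limiter.allow(t) for t in range(31)]
```

Note that each limiter instance tracks one inviter: a movement recruiting through thousands of inviters, each staying under the cap, grows substantially regardless, which matches the report’s observation.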
In terms of false information on social media infecting the press, big data studies of the American press and social media landscape in the 18 months prior to the 2016 presidential election conclude that while highly partisan and clickbait news sites existed on both sides of the partisan divide, especially on Facebook, on the right wing these sites were amplified and legitimated through an ‘attention backbone’ that tied the most extreme conspiracy sites to bridging sites such as Breitbart (Faris et al., 2017, August 16; also see Benkler et al., 2017). Another computational study, investigating the role of fake news in the online media landscape from 2014 to 2016, finds that fake news is not only particularly responsive to the agendas of partisan media across many issues but also has a relatively stable ability to influence the entire mediascape. Across all three years, fake news set the agenda for the key issue of international relations, and for two years, it set the agenda on the economy and religion (Vargo et al., 2018). Mainstream news media pay attention to fake news because exposing and correcting lies is a basic imperative of the journalistic profession. Less charitably, covering fake news stories is made much easier by the growth of independent fact-checkers, whose fact-checking provides information subsidies for news organisations (Tsfati et al., 2020). Arguably, according to a study from Data and Society (an independent, non-profit, US research organisation that seeks evidence-based public debate about emerging technology), mainstream news media also amplify false information in media environments where there is low public trust in media; a proclivity for sensationalism; a lack of resources for fact-checking and investigative reporting; and a lack of media pluralism at the hands of corporate consolidation (Marwick & Lewis, 2017).
More worryingly, in countries where the state or political parties have undue political or commercial influence over legacy media, disinformation narratives developed online have ready outlets for widespread contagion.
Harm 5: Microtargeting
Profiling and microtargeting practices have been empirically demonstrated in elections in the USA, UK and India (see Chap. 6). They raise three key democratic harms: fragmentation of important national conversations; targeted suppression of voters; and undue influence.
Fragmentation of National Conversations
Data-driven politics is about communicating efficiently: talking to the voters who are most useful to a campaign. Microtargeting has potential democratic benefits, such as reaching social groups that are hard to contact, increasing knowledge among voters about individually relevant issues, and increasing the efficiency of political parties’ campaigns. However, as Anstead (2017, p. 309) argues, ‘inefficient targeting’ might lead to better democratic outcomes as it could include more people in the electoral conversation. The UK’s data regulator agrees that it is essential that ‘voters have access to the full spectrum of political messaging and information and understand who the authors of the messages are’ (Information Commissioner’s Office, 2018, November 6).
The opacity of online profiling and targeting provides capacity for ‘dog whistle’ campaigns that emphasise a provocative position only to sympathetic audiences while remaining invisible to others. It also enables targeted, secretive delivery of ‘wedge’ issues (namely, issues that are highly important to specific segments of a voting population) to mobilise small, but crucial, segments (Tufekci, 2014). According to a report for the Electoral Reform Society (a UK-based independent campaigning organisation which promotes electoral reform), such activities could lead to campaigners focusing on voters in marginal seats while ignoring voters considered less politically valuable, such as those in traditionally safe electorates (Dommett & Power, 2020). Indeed, during the 2019 UK General Election, ads tended to be targeted at marginal constituencies and certain demographics. For example, early in the campaign, the Conservatives pitched ads about the National Health Service, schools and police to women, while men received a ‘Get Brexit Done’ message. As observed by First Draft (a now ceased, non-profit coalition that provided practical guidance on how to find, verify and publish content sourced from the social web), such microtargeting matching of content with demographics had not been done in previous British elections (First Draft, 2019).
The importance of, and threat to, shared national conversations must be recognised where microtargeted ads deprive recipients of wider, diverse collective scrutiny of the messages therein. For instance, a study by First Draft during the 2019 UK General Election finds that a significant number of ads from all political parties contained statements flagged as at least partially incorrect by independent fact-checkers (Newman et al., 2020). If such false information disseminates through microtargeting and if this is not scrutinised by mass media (or if citizens are no longer paying attention to such sources), then there is little chance of those elected on such platforms being held to public account. While the UK has some protection from fragmentation of important national conversations in that it has a well-funded and regulated broadcasting sector, and over 50% of its population trust broadcast, local and regional news (Newman, 2022), this is not the case in all parts of the world. Furthermore, such microtargeting makes it difficult for regulators to enforce advertising rules because, by the very nature of the microtargeting, a regulator is unlikely to see those ads. This risk will intensify if algorithmic marketing techniques become available to all political parties, as the UK’s data regulator observes has already happened there (Information Commissioner’s Office, 2020, November) (see Chap. 6). This would enable parties to routinely run millions of algorithmically tuned messages, on a scale that could overwhelm regulators, with deleterious consequences for the transparency and political accountability of campaigns.
Targeted Suppression of Voters
There are few academic studies on targeted voter suppression online, not least because of methodological difficulties in studying this area. A big data study of American voters and Twitter in the 2018 mid-term elections failed to find evidence of voter suppression, but this may be due to methodological failings (Deb et al., 2019) and does not mean that voter suppression is not attempted. Certainly, parliamentary inquiries, investigative journalists, civil rights groups and think tanks have unearthed multiple offers and efforts to dissuade certain types of people from voting.
For instance, in the UK, evidence submitted to the UK Inquiry into Disinformation and Fake News describes a pitch during the ‘Brexit’ Referendum campaign to the Leave.EU group from Cambridge Analytica/SCL Group to choose their company for electoral data analytics. Part of this pitch offered voter suppression, namely, ‘groups to dissuade from political engagement or to remove from contact strategy altogether’ (Bakir, 2020). Similarly, in the 2016 US presidential campaign, Trump’s digital campaign (called ‘Project Alamo’) involved Cambridge Analytica working with the Republican National Committee. Brad Parscale, the digital director of Trump’s campaign in 2016, reportedly used Facebook’s Lookalike Audiences ad tool to identify voters who were not Trump supporters, to then target them with psychographic, personalised negative messages designed to discourage them from voting. Campaign operatives openly referred to such efforts as ‘voter suppression’ aimed at three targeted groups: idealistic White liberals, young women and African Americans (Green & Issenberg, 2016, October 27). This targeted voter suppression of Black Americans was confirmed in 2020 by investigative journalists, based on leaked data used by Project Alamo on almost 200 million American voters. Their investigation found that in 16 key battleground states, millions of Americans were separated by an algorithm into one of eight categories, to then be targeted with tailored ads on social media: one of the categories was named ‘Deterrence’ and disproportionately held 3.5 million Black Americans. While causality cannot be proven, not least as there are numerous sources of voter suppression in the USA beyond online campaigning efforts (Boyd-Barrett, 2020), the 2016 campaign preceded the first fall in Black turnout in 20 years and Trump’s thin-margin wins in key states (Channel 4 News Investigations Team, 2020, September 28).
According to a report from the Center for Democracy and Technology (a US non-profit organisation whose stated aims include enhancing freedom of expression globally and stronger legal controls on government surveillance), attempted targeted suppression of Spanish-language-dominant voters in the 2020 US presidential elections has also been observed, with disinformation about basic voting details and messaging intended to intimidate such voters (Thakur & Hankerson, 2021).
If profiled and behaviourally driven messages are being used to try to surreptitiously influence people, then this may contravene the right to Freedom of Thought (Alegre, 2017, 2021, May). This right protects our mental inner space. It formally became international law in 1976 as part of Article 18 of the International Covenant on Civil and Political Rights (McCarthy-Jones, 2019). Alegre notes that ‘the concept of “thought” is potentially broad including things such as emotional states, political opinions and trivial thought processes’ (Alegre, 2017, p. 224). It includes the right to keep our thoughts private, the right not to have our thoughts manipulated and the right not to be penalised for our thoughts and opinions (Alegre, 2017, 2021, May). McCarthy-Jones (2019, p. 2) adds that thought includes attentional and cognitive agency, as well as external actions that are arguably constitutive of thought (such as reading, writing and many forms of Internet search behaviour). Unsurprisingly, then, freedom of thought (unlike freedom of expression) is protected as an absolute right in international human rights law: in other words, there are no restrictions allowed (Alegre, 2017). Freedom of thought has been described as ‘the foundation of democratic society’ and ‘the basis and origin of all other rights’ (Alegre, 2017, p. 221). Yet, this right has received little attention in the courts, partly because of an assumption that our inner thoughts were beyond reach (Alegre, 2017; McCarthy-Jones, 2019). This lacuna is problematic as recent developments in technology are providing new ways to access, alter and potentially manipulate our thoughts in ways we had not previously conceived.
While challenges to the right to freedom of thought are well worth exploring, it remains the case that studies on the impact of political messages on political behaviour are, at best, mixed, with more studies finding minimal effects, but with recent studies also finding that targeted, data-driven campaigns have some influence (as discussed in Chap. 6). With few studies examining actual effects on political behaviour of false information campaigns conducted on social media, Bail et al.’s (2020) US study is instructive in calling into question their effectiveness. Their study uses longitudinal survey data and privileged access to Twitter data to assess the impact in late 2017 of a Russian Twitter false information campaign on political attitudes and behaviours of frequent American-based Twitter users who identified as either strong or weak partisans. They show that it was those users who were already highly polarised that engaged the most with the misinformation content. They also find no evidence that interacting with accounts linked to the false information campaign substantially impacted the issue attitudes, partisan stereotypes or political behaviours that they measured. Proving that undue influence has taken place is hard. Yet, with few studies attempting to disentangle the influence of online disinformation in real-world settings, it would be unwise to dismiss concerns about undue influence at this stage, especially regarding carefully crafted and targeted disinformation. What is more apparent, however, from user-based studies in the USA and UK, is that people dislike the premise of being manipulated via their emotions, especially for political ends (Andalibi & Buss, 2020; also see Chap. 9).
It is also worth reflecting on the conditions that would enable undue influence (or manipulation). In their discussion of the online environment, Susser et al. (2019, pp. 3, 26) define manipulation as using hidden or covert means to subvert another person’s decision-making power, undermining their autonomy. Bakir et al. (2019) argue that persuasive communications, to avoid being manipulative, should be guided by principles akin to informed consent. In short, to ethically persuade (rather than manipulate) people towards a particular viewpoint, the persuadee’s decision should be both informed (with sufficient information provided and none of it of a deceptive nature) and freely chosen (namely, no coercion or incentivisation). Yet, these conditions would be disabled by widespread false information (deception) driven by affect and emotion (which prompt gut reactions, thereby raising questions about the extent to which the decision was freely chosen), and profiling and targeting (which might exclude people from exposure to sufficient information).
Harm 6: Seeding Distrust in the Civic Body
False information seeds distrust in important civic processes and institutions, from health messaging to democratic processes.
Where the knowledge base is uncertain, people are more susceptible to false information, as evidenced by the COVID-19 pandemic (explored in Chap. 5). The impacts of false COVID-19 information on trust in government vary across the globe. Several months into the pandemic, Newman et al.’s (2020) survey of six countries (conducted in April 2020) finds high levels of trust in news and information about COVID-19 from scientists and doctors (83%), national health organisations (76%) and global health organisations (73%). However, only a small majority trusts the national government (59%) and news organisations (59%), raising concerns about the impact of public health messaging where behaviour change is needed across the entire population. Whether this situation improves or deteriorates depends on how governments have responded. In Vietnam, for instance, initially confusing governmental responses, amid a chaotic sphere of false information online and incivility, greatly heightened public anxiety and fear. However, this then forced Vietnam’s one-party state to become unusually transparent in responding to public concerns across 2020, leading to every new COVID-19 case being immediately published on governmental websites, mainstream and social media (Nguyen & Nguyen, 2020). By contrast, in many African countries, governmental denial, secrecy and misinformation, together with state-controlled mainstream media, encouraged alternative narratives of the COVID-19 crisis, especially online (where WhatsApp and Facebook are the two most common platforms). It also encouraged public distrust, apprehension and ambivalence towards public health messaging by governments in many parts of the continent. This builds on a long-standing cultural practice where rumour is how state narratives are routinely subverted in challenges by the public, civil society and religion (Ogola, 2020).
Disinformation also seeds distrust in wider democratic processes—an information warfare aim of Russia and, to a lesser extent, China, but also engaged in by populist domestic actors (see Chap. 2). While the impact of such efforts is unclear, a study by the Australian Strategic Policy Institute (a government-funded defence and strategic policy think tank) suggests that the very perception of interference could be enough to threaten democratic outcomes if, for instance, people refuse to accept that an election result is legitimate (Hanson et al., 2019). For example, during the 2020 US presidential election, President Trump repeatedly made false statements attacking the integrity of the USA’s voting process, spawning diverse false claims online (Clayton et al., 2020; Lytvynenko & Silverman, 2020, November 3). Surveys indicate that among Trump’s supporters, the cumulative impact of such claims erodes trust and confidence in elections and increases belief that the election is rigged (Clayton et al., 2020; also see Pennycook & Rand, 2021). The strength of such false beliefs is evident in the violent mob that descended on Capitol Hill on 6 January 2021 in an attempt to overturn the election result. It is telling that Facebook’s internal analysis notes that Stop the Steal and Patriot Party were harmful at the network level: ‘as a movement, it normalized delegitimization and hate in a way that resulted in offline harm and harm to the norms underpinning democracy’ (Mac et al., 2021, April 26). Indeed, some psychological studies find that exposure to anti-government conspiracy theories lowers intention to vote and decreases political trust among American and British citizens (although in other countries such as Germany, it increases intention to engage in political action) (Douglas et al., 2019, p. 20; Kim & Cao, 2016).
There are numerous social and democratic harms to the civic body arising from false information online. Across the six core harms that we have identified in this chapter, false information attacks our shared knowledge base, our togetherness, our democratic institutions and processes, and perhaps even our individual agency (although more studies are needed on this aspect).
In terms of harm 1 (wrongly informed citizens), people have trouble recognising fake news and deepfakes, and misperceptions, once formed, are difficult to correct. Furthermore, there is some evidence from the USA that, due to data voids, marginalised Spanish-language communities are exposed to poor-quality, false information on voting. More research is needed into the extent to which this harm is present in countries beyond the USA, as well as the extent to which it is unequally distributed across civic bodies.
In terms of harm 2 (remaining wrongly informed in echo chambers), on balance, most scholarship agrees that digital echo chambers and filter bubbles exist for certain communities (right-wing, anti-vax and conspiracy groups, and on controversial topics), in certain countries (the USA and Italy, themselves polarised societies) and on some platforms (on Twitter and on Facebook, the world’s biggest social media platform). This incubates conspiracy theories, rumours and fake news, and makes users resistant to debunking. Overall, however, digital echo chambers and filter bubbles are inhabited by a small proportion of national populations; social networks and recommendation algorithms lead to greater exposure to diverse ideas and news; and the effect of personalisation on news exposure on multiple platforms is smaller than often assumed. Yet, it is concerning that some communities remain wrongly informed in echo chambers and that this helps drive false information online. Furthermore, whether this situation is improving or deteriorating is difficult to determine because how platforms are used, and who uses them, changes, as do platforms’ algorithms (as detailed in Chap. 2), but, to date, platforms have largely not made their internal data or algorithms available to researchers. This is especially so for researchers in small economies such as Guatemala or Honduras, a situation that journalist Luis Assardo (2021, August 27) terms ‘the disinformation backyard’. Clearly, more research is needed into digital echo chambers and filter bubbles, ideally with access to the platforms’ data and algorithms. The research should be conducted across a wider range of countries than those examined to date (largely Western democracies, especially the USA and Italy, both of which are polarised countries with a preference for partisan news). We know very little, for instance, about digital echo chambers in countries that depend on Facebook to access the internet (via Free Basics).
It is also vital to consider the wider information ecology, and people’s overall consumption of news, rather than focusing on single platforms.
Multiple countries have experienced harm 3, where the deliberately affective nature of false information online encourages affective polarisation and extreme views (found in the USA, Germany, Poland, Spain, India and Taiwan); fuels partisan misperceptions (found in the USA); promotes hate crimes (found in Germany and Russia); and promotes mediated incivility (found in the USA, Netherlands, UK and Spain). While most of these impacts are viewed as unequivocal harms, the rise in mediated incivility has a mixed reception because, as well as generating harms (eroding political trust, decreasing the perceived legitimacy of political figures, and reducing political participation intention and policy support), it can also lead to increased social engagement, information diffusion, higher participation and voter turnout, all of which are democratically valuable. More research on social media’s role in the various aspects of harm 3, and how this harm manifests in countries beyond the USA, would be worthwhile.
Big data studies evidence harm 4, contagion. Emotion expression, especially anger, joy and moral-emotional words, is contagious online on social media platforms based in the USA and China. Deception is also contagious on Twitter, inspiring fear, disgust and surprise, and spreading faster and more widely than fact-checking. Despite their content moderation actions, social media platforms have been unable to prevent the growth of harmful adversarial movements such as Stop the Steal in the USA. Studies, especially from the USA, show that false information on social media infects the wider press. More studies are needed to explore the extent to which deception is contagious on platforms other than Twitter and the extent to which false information online influences wider media in countries other than the USA.
In terms of the various harms stemming from microtargeting (harm 5), studies so far are indicative rather than conclusive. On fragmentation of important national conversations, a study shows that in the 2019 British General Election, for the first time, ads (many containing false information) tended to be targeted at marginal constituencies and certain demographics. While the UK has some protection from fragmentation of important national conversations in that it has a well-funded and regulated broadcasting sector, and over half the population trust mainstream news outlets, this is not the case in all parts of the world. Fragmentation of important national conversations from routinely running millions of algorithmically tuned messages, thereby damaging the transparency and political accountability of campaigns, remains feasible in all countries running digital political campaigns, especially where those countries lack media outlets with broad reach and trust. On the harm of targeted suppression of voters arising from profiling and microtargeting, this service has already been offered (in the UK) and implemented (in the USA). Lastly, in all countries where social media platforms have a presence, the potential for undue influence of citizens is present. Although studies have yet to prove undue influence, the idea of emotional manipulation is disliked by (American and British) populations. More studies across the world are needed, to examine to what extent targeted groups are subjected to insufficient, deceptive and affective content during important periods of civic activity (such as voting periods, census periods and vaccination drives).
Harm 6, seeding distrust in the civic body, is evident in some countries. The impacts of false COVID-19 information on trust in government vary worldwide. In African countries where governmental denial, secrecy and misinformation are common, alternative narratives of the COVID-19 crisis flourish online, thereby encouraging public distrust, apprehension and ambivalence towards public health messaging. Political disinformation about rigged elections and exposure to anti-government conspiracy theories also seeds distrust in wider democratic processes, lowers intention to vote and decreases political trust in the USA and UK. More studies across the world that examine the impact of false information on trust in important civic processes and institutions are needed.
Various stakeholders and countries have put forward solutions to counter false information online. It is to these that the next chapter turns, focusing on globally dominant digital platforms as the prime incubator of optimised emotions prevalent in the world today. We follow this with our final chapter that reflects more broadly on emergent forms of emotional AI, outlining the harms that are visible on the horizon, but also drawing out the scope for stakeholders, countries and larger regions to act.
Alegre, S. (2017). Opinion. Rethinking freedom of thought for the 21st century. European Human Rights Law Review, 3, 221–233. Retrieved April 13, 2022, from https://www.doughtystreet.co.uk/sites/default/files/media/document/Rethinking%20Freedom%20of%20Thought%20for%20the%2021st.pdf
Alegre, S. (2021, May). Protecting freedom of thought in the digital age. Policy Brief No. 165. Centre for International Governance Innovation. Retrieved April 13, 2022, from https://www.cigionline.org/publications/protecting-freedom-of-thought-in-the-digital-age/
Andalibi, N., & Buss, J. (2020). The human in emotion recognition on social media: Attitudes, outcomes, risks. CHI '20: Proceedings of the 2020 CHI conference on human factors in computing systems, April, pp. 1–16. https://doi.org/10.1145/3313831.3376680.
Anstead, N. (2017). Data-driven campaigning in the 2015 United Kingdom General Election. The International Journal of Press/Politics, 22(3), 294–313. https://doi.org/10.1177/1940161217706163
Arguedas, A. R., Robertson, C. T., Fletcher, R., & Nielsen, R. K. (2022). Echo chambers, filter bubbles, and polarisation: A literature review. Reuters Institute and the Royal Society. Retrieved April 13, 2022, from https://reutersinstitute.politics.ox.ac.uk/echo-chambers-filter-bubbles-and-polarisation-literature-review
Assardo, L. (2021, August 27). The disinformation backyard. Medium. Retrieved April 13, 2022, from https://luisassardo.medium.com/?p=5643ad671bd5
Bail, C. (2021). Breaking the social media prism: How to make our platforms less polarizing. Princeton University Press.
Bail, C. A., Guay, B., Maloney, E., Combs, A., Sunshine Hillygus, D., Merhout, F., Freelon, D., & Volfovsky, A. (2020). Assessing the Russian Internet Research Agency’s impact on the political attitudes and behaviors of American Twitter users in late 2017. Proceedings of the National Academy of Sciences, 117(1), 243–250. https://doi.org/10.1073/pnas.1906420116
Bakir, V. (2020). Psychological operations in digital political campaigns: Assessing Cambridge Analytica’s psychographic profiling and targeting. Frontiers in Communication, 5, 67. https://doi.org/10.3389/fcomm.2020.00067
Bakir, V., Herring, E., Miller, D., & Robinson, P. (2019). Organized persuasive communication: A conceptual framework. Critical Sociology, 45(3), 311–328. https://doi.org/10.1177/0896920518764586
Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132. https://doi.org/10.1126/science.aaa1160
Benkler, Y., Faris, R., Roberts, H., & Zuckerman, E. (2017). Study: Breitbart-led right-wing media ecosystem altered broader media agenda. Columbia Journalism Review. Retrieved April 13, 2022, from https://www.cjr.org/analysis/breitbart-media-trump-harvard-study.php
Berry, J. M., & Sobieraj, S. (2014). The outrage industry: Political opinion media and the new incivility. Oxford University Press.
Bessi, A., Scala, A., Rossi, L., Zhang, Q., & Quattrociocchi, W. (2014). The economy of attention in the age of (mis)information. Journal of Trust Management, 1(1), 1–13. https://doi.org/10.1186/s40493-014-0012-y
Bessi, A., Petroni, F., Del Vicario, M., Zollo, F., Anagnostopoulos, A., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2016). Homophily and polarization in the age of misinformation. The European Physical Journal Special Topics, 225, 2047–2059. https://doi.org/10.1140/epjst/e2015-50319-0
Boyd-Barrett, O. (2020). Russiagate. Disinformation in the age of social media. Routledge.
Bruns, A. (2019). Filter bubble. Internet Policy Review, 8(4), 1–14. https://doi.org/10.14763/2019.4.1426
Bruns, A., & Enli, G. (2018). The Norwegian Twittersphere: Structure and dynamics. Nordicom Review, 39(1), 129–148. https://doi.org/10.2478/nor-2018-0006
Bruns, A., Moon, B., Münch, F., & Sadkowsky, T. (2017). The Australian Twittersphere in 2016: Mapping the follower/followee network. Social Media + Society, 3(4), 1–15. https://doi.org/10.1177/2056305117748162
Bursztyn, L., Egorov, G., Enikolopov, R., & Petrova, M. (2019). Social media and xenophobia: Evidence from Russia. Technical report, National Bureau of Economic Research. Retrieved April 13, 2022, from https://home.uchicago.edu/bursztyn/SocialMediaXenophobia_December2019.pdf
Channel 4 News Investigations Team. (2020, September 28). Revealed: Trump campaign strategy to deter millions of Black Americans from voting in 2016. Channel 4 News. https://www.channel4.com/news/revealed-trump-campaign-strategy-to-deter-millions-of-black-americans-from-voting-in-2016
Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. PNAS, 118(9), e2023301118. https://doi.org/10.1073/pnas.2023301118
Clayton, K., Davis, N. T., Nyhan, B., Porter, E., Ryan, T. J., & Wood, T.J. (2020). Does elite rhetoric undermine democratic norms? Retrieved April 13, 2022, from https://www.dartmouth.edu/~nyhan/democratic-norms.pdf
Cossard, A., De Francisci Morales, G., Kalimeri, K., Mejova, Y., Paolotti, D., & Starnini, M. (2020). Falling into the echo chamber: The Italian vaccination debate on Twitter. Proceedings of the international AAAI conference on web and social media, 14, 130–140. Retrieved April 13, 2022, from https://ojs.aaai.org/index.php/ICWSM/article/view/7285
Deb, A., Luceri, L., Badaway, A., & Ferrara, E. (2019). Perils and challenges of social media and election manipulation analysis: The 2018 US midterms. Companion of the web conference 2019, pp. 237–247. https://doi.org/10.1145/3308560.3316486
del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrociocchi, W. (2016). The spreading of misinformation online. PNAS, 113(3), 554–559. https://doi.org/10.1073/pnas.1517441113
Dommett, K., & Power, S. (2020). Democracy in the dark: Digital campaigning in the 2019 general election and beyond. Electoral Reform Society. Retrieved April 13, 2022, from https://www.electoral-reform.org.uk/latest-news-and-research/publications/democracy-in-the-dark-digital-campaigning-in-the-2019-general-election-and-beyond/
Douglas, K. M., Uscinski, J. E., Sutton, R. M., Cichocka, A., Nefes, T., Ang, C. S., & Deravi, F. (2019). Understanding conspiracy theories. Advances in Political Psychology, 40(1), 3–35. https://doi.org/10.1111/pops.12568
First Draft. (2019, November 14). UK Election: How political parties are targeting voters on Facebook, Google and Snapchat ads. First Draft. Retrieved April 13, 2022, from https://firstdraftnews.org/articles/uk-election-how-political-parties-are-targeting-voters-on-facebook-google-and-snapchat-ads/
Faris, R., Roberts, H., Etling, B., Bourassa, N., Zuckerman, E., & Benkler, Y. (2017, August 16). Partisanship, propaganda, and disinformation: Online media and the 2016 U.S. presidential election. Berkman Klein Center for Internet and Society at Harvard University. Retrieved April 13, 2022, from https://cyber.harvard.edu/publications/2017/08/mediacloud
Farkas, J., & Schou, J. (2020). Post-truth, fake news and democracy: Mapping the politics of falsehood. Routledge.
Flaxman, S., Goel, S., & Rao, J. M. (2016). Filter bubbles, echo chambers, and online news consumption. Public Opinion Quarterly, 80(1), 298–320. https://doi.org/10.1093/poq/nfw006
Fletcher, R., & Nielsen, R. K. (2018). Are people incidentally exposed to news on social media? A comparative analysis. New Media and Society, 20(7), 2450–2468. https://doi.org/10.1177/1461444817724170
Fletcher, R., Robertson, C. T., & Nielsen, R. K. (2021). How many people live in politically partisan online news echo chambers in different countries? Journal of Quantitative Description: Digital Media, 1, 1–56. https://doi.org/10.51685/jqd.2021.020
Flynn, D. J., Nyhan, B., & Reifler, J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Political Psychology, 38, 127–150. https://doi.org/10.1111/pops.12394
Fridkin, K. L., & Kenney, P. (2008). The dimensions of negative messages. American Politics Research, 36, 694–723. https://doi.org/10.1177/1532673X08316448
Gandy, O. H. (2009). Coming to terms with chance engaging rational discrimination and cumulative disadvantage. Ashgate.
Geer, J., & Lau, R. (2006). Filling in the blanks: A new method for estimating campaign effects. British Journal of Political Science, 36, 269–290. https://doi.org/10.1017/S0007123406000159
Goldenberg, A., & Gross, J. J. (2020). Digital emotion contagion. Trends in Cognitive Sciences, 24(2), 316–328. https://doi.org/10.1016/j.tics.2020.01.009
Golebiewski, M., & Boyd, D. (2018, May). Data voids: Where missing data can easily be exploited (pp. 1–8). Data & Society. Retrieved June 22, 2022, from https://datasociety.net/wp-content/uploads/2018/05/Data_Society_Data_Voids_Final_3.pdf
Green, J., & Issenberg, S. (2016, October 27). Inside the Trump bunker, with days to go. Bloomberg. https://www.bloomberg.com/news/articles/2016-10-27/inside-the-trump-bunker-with-12-days-to-go
Guess, A., Nyhan, B., & Reifler, J. (2017). “You’re fake news!” Findings from the Poynter media trust survey. Retrieved April 13, 2022, from https://poyntercdn.blob.core.windows.net/files/PoynterMediaTrustSurvey2017.pdf
Hagey, K., & Horwitz, J. (2021, September 15). Facebook tried to make its platform a healthier place. It got angrier instead. Wall Street Journal, 16. https://www.wsj.com/articles/facebook-algorithm-change-zuckerberg-11631654215?mod=articleinline
Haim, M., Graefe, A., & Brosius, H.-B. (2018). Burst of the filter bubble? Effects of personalization on the diversity of Google News. Digital Journalism, 6(3), 330–343. https://doi.org/10.1080/21670811.2017.1338145
Hanson, F., O’Connor, S., Walker, M., & Courtois, L. (2019). Hacking democracies: Cataloguing cyber-enabled attacks on elections, Policy Brief 16. Australian Strategic Policy Institute. Retrieved April 13, 2022, from https://www.aspi.org.au/report/hacking-democracies
Hong, S., & Kim, S. H. (2016). Political polarisation on Twitter: Implications for the use of social media in digital governments. Government Information Quarterly, 33(4), 777–782. https://doi.org/10.1016/j.giq.2016.04.007
Horwitz, J., & Seetharaman, D. (2020, May 26). Facebook executives shut down efforts to make the site less divisive. The Wall Street Journal. https://www.wsj.com/articles/facebook-knows-it-encourages-division-top-executives-nixed-solutions-11590507499
Information Commissioner’s Office. (2018, November 6). Investigation into the use of data analytics in political campaigns: A report to Parliament. Retrieved April 13, 2022, from https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf
Information Commissioner’s Office. (2020, November). Audits of data protection compliance by UK political parties: Summary report. Retrieved April 13, 2022, from https://ico.org.uk/media/action-weve-taken/2618567/audits-of-data-protection-compliance-by-uk-political-parties-summary-report.pdf
Kim, M., & Cao, X. (2016). The impact of exposure to media messages promoting government conspiracy theories on distrust in the government: Evidence from a two-stage randomized experiment. International Journal of Communication, 10, 3808–3827. https://ijoc.org/index.php/ijoc/article/view/5127/1740
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480–498. https://doi.org/10.1037/0033-2909.108.3.480
Lazarsfeld, P. F., Berelson, B., & Gaudet, H. (1944). The people’s choice: How a voter makes up his mind in a presidential campaign. Columbia University Press.
Levy, R. (2021). Social media, news consumption, and polarization: Evidence from a field experiment. American Economic Review, 111(3), 831–870. https://doi.org/10.1257/aer.20191777
Lu, Y., & Myrick, J. G. (2016). Cross-cutting exposure on Facebook and political participation: Unravelling the effects of emotional responses and online incivility. Journal of Media Psychology: Theories, Methods, and Applications, 28(3), 100–110. https://doi.org/10.1027/1864-1105/a000203
Lytvynenko, J., & Silverman, C. (2020, November 3). Here’s a running list of false and misleading information about the election. BuzzFeed News. https://www.buzzfeednews.com/article/janelytvynenko/election-rumors-debunked?bfsource=relatedmanual
Mac, R., Silverman, C., & Lytvynenko, J. (2021, April 26). Facebook stopped employees from reading an internal report about its role in the insurrection. You can read it here. BuzzFeed News. https://www.buzzfeednews.com/article/ryanmac/full-facebook-stop-the-steal-internal-report
Marwick, A., & Lewis, R. (2017). Media manipulation and disinformation online. Data & Society Research Institute. http://www.chinhnghia.com/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf
McCarthy-Jones, S. (2019). The autonomous mind: The right to freedom of thought in the twenty-first century. Frontiers in Artificial Intelligence, 2(19), 1–17. https://doi.org/10.3389/frai.2019.00019
McStay, A. (2018). Emotional AI: The rise of empathic media. Sage.
Messing, S., & Westwood, S. J. (2012). Selective exposure in the age of social media: Endorsements trump partisan source affiliation when selecting news online. Communication Research, 41, 1042–1063. https://doi.org/10.1177/0093650212466406
Microsoft. (2021). Microsoft digital civility index. Retrieved April 13, 2022, from https://www.microsoft.com/en-us/online-safety/digital-civility
Milani, E., Weitkamp, E., & Webb, P. (2020). The visual vaccine debate on Twitter: A social network analysis. Media and Communication, 8(2), 364–375. https://doi.org/10.17645/mac.v8i2.2847
Müller, K., & Schwarz, C. (2020). Fanning the flames of hate: Social media and hate crime. Retrieved April 13, 2022, from https://ssrn.com/abstract=3082972
Mutz, D. C. (2007). Effects of ‘In-Your-Face’ television discourse on perceptions of a legitimate opposition. American Political Science Review, 101(4), 621–635. https://doi.org/10.1017/S000305540707044X
Nechushtai, E., & Lewis, S. C. (2019). What kind of news gatekeepers do we want machines to be? Filter bubbles, fragmentation, and the normative dimensions of algorithmic recommendations. Computers in Human Behavior, 90, 298–307. https://doi.org/10.1016/j.chb.2018.07.043
Newman, N. (2022). United Kingdom. In N. Newman, R. Fletcher, C. T. Robertson, K. Eddy, & R. K. Nielsen (Eds.), Reuters Institute digital news report 2022 (pp. 62–63). Retrieved June 20, 2022, from https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2022-06/Digital_News-Report_2022.pdf
Newman, N., Fletcher, R., Schulz, A., Andı, S., & Nielsen, R. K. (2020). Reuters Institute digital news report 2020. Retrieved April 13, 2022, from https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2020-06/DNR_2020_FINAL.pdf
Newman, N., Fletcher, R., Schulz, A., Andı, S., Robertson, C. T., & Nielsen, R. K. (2021). Reuters Institute digital news report 2021. Retrieved April 13, 2022, from https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2021-06/Digital_News_Report_2021_FINAL.pdf
Nguyen, H., & Nguyen, A. (2020). Covid-19 misinformation and the social (media) amplification of risk: A Vietnamese perspective. Media and Communication, 8(2), 444–447. https://doi.org/10.17645/mac.v8i2.3227
Ogola, G. (2020). Africa and the Covid-19 information framing crisis. Media and Communication, 8(2), 440–443. https://doi.org/10.17645/mac.v8i2.3223
Ott, B. L. (2017). The age of Twitter: Donald J. Trump and the politics of debasement. Critical Studies in Media Communication, 34, 59–68. https://doi.org/10.1080/15295036.2016.1266686
Otto, L. P., Lecheler, S., & Schuck, A. R. T. (2019). Is context the key? The (non-) differential effects of mediated incivility in three European countries. Political Communication, 37(1), 88–107. https://doi.org/10.1080/10584609.2019.1663324
Pariser, E. (2011). The filter bubble: What the internet is hiding from you. Penguin Press.
Pelley, S. (2021, October 4). Whistleblower: Facebook is misleading the public on progress against hate speech, violence, misinformation. 60 Minutes. https://www.cbsnews.com/news/facebook-whistleblower-frances-haugen-misinformation-public-60-minutes-2021-10-03/
Pennycook, G., & Rand, D. G. (2021). Research note: Examining false beliefs about voter fraud in the wake of the 2020 Presidential Election. The Harvard Kennedy School Misinformation Review, 2(1). https://doi.org/10.37016/mr-2020-51
Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, 147(12), 1865–1880. https://doi.org/10.1037/xge0000465
Sunstein, C. R. (2002). The law of group polarization. The Journal of Political Philosophy, 10(2), 175–195. https://doi.org/10.1111/1467-9760.00148
Sunstein, C. R. (2017). #Republic: Divided democracy in the age of social media. Princeton University Press.
Susser, D., Roessler, B., & Nissenbaum, H. (2019). Online manipulation: Hidden influences in a digital world. Georgetown Law Technology Review, 1, 1–45. https://doi.org/10.2139/ssrn.3306006
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769. https://www.jstor.org/stable/3694247
Thakur, D., & Hankerson, D. L. (2021). Facts and their discontents: A research agenda for online disinformation, race, and gender. Center for Democracy & Technology. https://osf.io/3e8s5/
Timberg, C., Dwoskin, E., & Albergotti, R. (2021, October 22). Inside Facebook. Jan. 6 violence fueled anger, regret over missed warning signs. Washington Post. https://www.washingtonpost.com/technology/2021/10/22/jan-6-capitol-riot-facebook/
Tsfati, Y., Boomgaarden, H. G., Strömbäck, J., Vliegenthart, R., Damstra, A., & Lindgren, E. (2020). Causes and consequences of mainstream media dissemination of fake news: Literature review and synthesis. Annals of the International Communication Association, 44(2), 157–173. https://doi.org/10.1080/23808985.2020.1759443
Tufekci, Z. (2014). Engineering the public: Big data, surveillance and computational politics. First Monday, 19(7). Retrieved April 13, 2022, from https://firstmonday.org/ojs/index.php/fm/article/view/4901/4097
Vaccari, C., & Valeriani, A. (2021). Outside the bubble: Social media and political participation in western democracies. Oxford University Press.
Van Bavel, J. J., Rathje, S., Harris, E., Robertson, C., & Sternisko, A. (2021). How social media shapes polarization. Trends in Cognitive Sciences, 25(11), 913–916. https://doi.org/10.1016/j.tics.2021.07.013
Van Duyn, E., & Collier, J. (2019). Priming and fake news: The effects of elite discourse on evaluations of news media. Mass Communication and Society, 22(1), 29–48. https://doi.org/10.1080/15205436.2018.1511807
Vargo, C. J., Guo, L., & Amazeen, M. A. (2018). The agenda-setting power of fake news: A big data analysis of the online media landscape from 2014 to 2016. New Media & Society, 20(5), 2028–2049. https://doi.org/10.1177/1461444817712086
Walter, N., Cohen, J., Holbert, R. L., & Morag, Y. (2020). Fact-checking: A meta-analysis of what works and for whom. Political Communication, 37(3), 350–375. https://doi.org/10.1080/10584609.2019.1668894
Wang, D., & Qian, Y. (2021). Echo chamber effect in rumor rebuttal discussions about COVID-19 in China: Social media content and network analysis study. Journal of Medical Internet Research, 23(3), e27009. https://doi.org/10.2196/27009
Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12(3), 129–140. https://doi.org/10.1080/17470216008416717
Weeks, B. E. (2015). Emotions, partisanship, and misperceptions: How anger and anxiety moderate the effect of partisan bias on susceptibility to political misinformation. Journal of Communication, 65, 699–719. https://doi.org/10.1111/jcom.12164
Zollo, F., Bessi, A., Del Vicario, M., Scala, A., Caldarelli, G., Shekhtman, L., Havlin, S., & Quattrociocchi, W. (2017). Debunking in a world of tribes. PLoS One, 12(7), e0181821. https://doi.org/10.1371/journal.pone.0181821
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
© 2022 The Author(s)
Cite this chapter
Bakir, V., McStay, A. (2022). Harms to the Civic Body from False Information Online. In: Optimising Emotions, Incubating Falsehoods. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-031-13551-4_7
Publisher Name: Palgrave Macmillan, Cham
Print ISBN: 978-3-031-13550-7
Online ISBN: 978-3-031-13551-4
eBook Packages: Literature, Cultural and Media Studies (R0)