Introduction

We have established that false information online harms the civic body, driven by the economics of emotion and the politics of emotion. What should be done about this? Global and regional surveys conducted in 2018 indicate public appetite for interventions to stop ‘fake news’ but reveal little clarity about where primary responsibility lies (Eurobarometer, 2018, February; Newman et al., 2018). Accordingly, multi-stakeholder solutions have been proffered by various countries’ governmental inquiries into disinformation and fake news, and by supranational bodies including the United Nations (UN), European Union and Commonwealth. This chapter assesses seven solution areas: namely, (1) government action, (2) cybersecurity, (3) digital intermediaries/platforms, (4) advertisers, (5) professional political persuaders and public relations, (6) media organisations and (7) education. Each of these areas is intrinsically difficult to address individually, let alone in concert, and in every country. We conclude that such solutions merely tinker at the edges as they do not address a fundamental incubator for false information online: namely, the business model for social media platforms built on the economics of emotion.

Solution Area 1: Governmental Action

Across the world, governmental approaches to tackling false information online range from those that respect freedom of expression (non-coercive responses) to those that do not (coercive responses).

Non-coercive Responses

In seeking to prevent negative democratic impacts of false information online, supranational reports recognise the importance of balancing measures to combat false information with the right to freedom of expression. For instance, on 3 March 2017, a Joint Declaration on ‘Freedom of Expression and “Fake News”, Disinformation and Propaganda’ was adopted by the United Nations Special Rapporteur on Freedom of Opinion and Expression, alongside other organisations. While noting the growing prevalence of disinformation, the Joint Declaration reaffirms the right to freedom of expression (Principle 1) and stipulates standards on disinformation and propaganda (Principle 2), including that states should not ban the ‘dissemination of information based on vague and ambiguous ideas, including “false news” or “non-objective information”’ (United Nations Special Rapporteur on Freedom of Opinion and Expression et al., 2017).

Instead, as will be developed in later sections, non-coercive governmental responses include media monitoring and development of early warning detection systems as part of cybersecurity operations against disinformation and as part of digital literacy programmes. Control over diffusion of false information is difficult in today’s media ecology, but there is some evidence that rumours on social media can be stopped by early, strong corrections by officials, as seen in studies in Japan (Takayasu et al., 2015) and Germany (Jung et al., 2020).
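
The intuition behind early, strong corrections can be illustrated with a toy simulation. This is a minimal sketch in Python; the model, parameter values and population size are illustrative assumptions, not drawn from the cited Japanese or German studies. The earlier an authoritative correction halts onward sharing, the fewer people are ever exposed to the rumour.

```python
import random

def simulate_rumour(correction_step, population=10_000, seeds=10,
                    spread_prob=0.35, contacts=4, steps=30, seed=42):
    """Toy rumour-diffusion model: each believer tells `contacts` random
    people per step; once officials issue a correction, newly contacted
    people stop passing the rumour on. Returns total number ever exposed."""
    rng = random.Random(seed)
    believers = set(range(seeds))          # people currently spreading the rumour
    ever_exposed = set(believers)
    for step in range(steps):
        corrected = step >= correction_step  # official correction has been issued
        new_believers = set()
        for _ in believers:
            for _ in range(contacts):
                target = rng.randrange(population)
                if target in ever_exposed:
                    continue
                ever_exposed.add(target)
                # Before the correction, contacts may become spreaders themselves.
                if not corrected and rng.random() < spread_prob:
                    new_believers.add(target)
        believers = new_believers
        if not believers:
            break
    return len(ever_exposed)

if __name__ == "__main__":
    for t in (2, 5, 10):
        print(f"correction at step {t:>2}: {simulate_rumour(t):>6} people exposed")
```

In this toy model, total exposure grows rapidly with every step the correction is delayed, consistent with the broad pattern that the cited studies report.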

Non-coercive governmental responses also include a preference for self-regulation rather than regulation of digital platforms and intermediaries. Significantly, the European Commission created the European Union Code of Practice on Disinformation in 2018, where, for the first time worldwide, industry agreed on a voluntary basis to self-regulatory standards to fight disinformation (European Commission, 2018b). Similar Codes of Practice have since been developed in other jurisdictions, such as Australia (Digital Industry Group Inc., 2021). The European Union Code of Practice was signed by Facebook, Google, Twitter, Mozilla and parts of the advertising industry in 2018, with Microsoft and TikTok signing in 2020, and further signatories in 2021 as the Code of Practice was strengthened (European Commission, 2021c). While the Code of Practice ensured greater transparency and accountability of signatories’ disinformation policies, the European Commission concluded that more needed to be done in consistently applying and monitoring the Code across platforms and Member States, in providing access to platforms’ data for disinformation research and in increasing participation from the advertising sector (European Commission, 2021b, c). As we shall see below, this has since resulted in more coercive legislation in the European Union.

Coercive Responses

Given the failure of self-regulation to prevent widespread circulation of false information, many nations have resorted to more coercive responses. Arrests and shutting down the entire Internet are arguably the most coercive actions taken by governments. Between June 2020 and May 2021, Freedom House (2021), a non-profit, majority US government-funded organisation that conducts research and advocacy on democracy, political freedom and human rights, observed that in 56 of the 70 states covered by its annual study of human rights in the digital sphere, officials arrested or convicted people for their online speech, and that governments suspended Internet access in at least 20 countries, usually during political turmoil around elections and protests.

Other actions which do not respect freedom of expression include enacting legislation on false information online. Authoritarian China, keen to maintain control over its diverse population of 1.4 billion, was among the first to legislate in this area. In 2016, China criminalised creating or spreading online ‘rumours’ that ‘undermine economic and social order’. A 2017 law requires Internet news providers to reprint information published by government-acknowledged news organisations without ‘distorting or falsifying news information’ (Repnikova, 2018, September 6). In 2018, Chinese authorities required microblogging sites to highlight and refute rumours on their platforms. Across 2020–2021, China’s Internet regulator introduced new rules to restrict independently operated social media accounts that publish current affairs, leading to many accounts being removed (Freedom House, 2021). China’s State Council published guidelines for building a ‘civilised’ Internet in September 2021, stating that the web should be used to promote education about the ruling Communist Party and its achievements. Beyond China, by 2019, over 40 national laws to combat disinformation had been chronicled worldwide (Marsden et al., 2020). Some are intended to eliminate critical reporting. For instance, as reported by the international non-governmental human rights organisation Amnesty International (2022, March 10), during Russia’s invasion of Ukraine in 2022, Russia’s parliament criminalised spreading ‘false information’ about the Russian Armed Forces or ‘discrediting’ Russian troops (punishable by up to 15 years in prison). In other countries, such laws are intended to pressurise social media platforms to take action. For instance, in Germany, the ‘Netzwerkdurchsetzungsgesetz’ (Network Enforcement Act) (NetzDG) was introduced in 2018 to reduce the spread of hate speech and false information (Netzwerkdurchsetzungsgesetz, 2017). Online platforms must remove ‘obviously illegal’ posts within 24 hours or risk fines of up to €50 million. While well-intentioned, as observed by Human Rights Watch (an international non-governmental organisation that conducts research and advocacy on human rights), this law damages free speech by tasking companies that host third-party content with making difficult determinations of when user speech violates the law. Even courts can find these determinations challenging, as they require nuanced understanding of context, culture and law. Faced with short review periods and steep fines, companies have little incentive to err in favour of free expression (Human Rights Watch, 2018, February 18).

Elsewhere, rather than targeting false information online generally, legislation seeks to protect elections while trying to respect freedom of expression. For instance, in 2018, France passed a law that establishes an expedited judicial procedure for adjudicating complaints about fake news preceding elections, imposing heightened transparency obligations on platforms during these periods (Fukuyama & Grotto, 2020, p. 204). In 2019, the USA approved proposals requiring online paid political ads to be appropriately labelled and to clearly display or link to key information (McNeice, 2019, November 5).

Appreciating that self-regulation has not sufficiently tackled false information online, in April 2022 the European Union agreed the broad terms of the Digital Services Act to make technology companies take greater responsibility for content appearing on their platforms. Expected to come fully into force by 2024 at the latest, new obligations on platforms include new strategies for dealing with misinformation during crises (such as a pandemic or war); explaining clearly why they have removed illegal content; giving users the ability to appeal takedowns; explaining how their recommender algorithms work; offering a recommender system not based on profiling (for instance, chronological listing); prohibiting ‘dark patterns’ (namely, confusing or deceptive user interfaces designed to steer people into decisions they may not otherwise have made); banning targeted ads based on an individual’s religion, sexual orientation, ethnicity, health information or political beliefs or targeted at children; allowing European Union governments to request removal of illegal content; and dissuasive sanctions of up to 6% of global turnover. The online platforms will also have to identify and tackle ‘systemic risks’ stemming from the design and use of their services including those that adversely impact fundamental rights or seriously harm users’ physical or mental health, and manipulation of services that impact democratic processes and public security (Council of the EU, 2022, April 23; Goujard, 2022, April 23; Vincent, 2022a, April 23).

Also noteworthy is that the nurturing policy environment evident in many countries across recent decades that encouraged big technology platforms to innovate and grow now appears to be shifting against monopoly power. For instance, alongside the Digital Services Act, the European Union is advancing the Digital Markets Act. Its broad details, agreed in March 2022, aim to curb the dominant big technology platforms and enable future anti-trust actions. Its proposed penalties for infringement include fines of up to 10% of total worldwide turnover in the preceding financial year and 20% for repeated infringements, and a time-limited ban on acquiring other companies in the case of systematic infringements (Vincent, 2022b, March 24). Whether this policy shift against monopolistic big technology platforms will endure remains to be seen, not least as the technology sector intensively lobbies parliaments to water down proposed legislation. Indeed, a study from Corporate Europe Observatory and Lobby Control (an independent research and campaign group working to challenge the privileged influence of corporations and their lobby groups in European Union policy-making) finds that the technology sector is the biggest lobby sector in Europe (Bank et al., 2021). Across 2021, the biggest spenders lobbying the European Union were Apple (€6.5 million), Google (€6 million) and Facebook (€6 million). A major target was to protect their surveillance advertising business model from an outright ban (Lomas, 2022, April 22). Beyond the European Union, in 2021 China, too, ended its stance of minimal regulation of its own big technology companies (Au, 2021, September 27). For instance, wishing to curb capitalist excess, and increase national security, China’s Central Commission for Cybersecurity and Informatization (2021, December 28) issued its 14th Five-Year Plan for National Informatisation. Its plans to build technology norms and digital governance systems include reducing its technology industry’s ‘disorderly expansion’ and monopolistic business practices; launching ‘technical algorithm regulation’; and clarifying the responsibility that Internet platforms bear over the content they publish.

Solution Area 2: Cybersecurity

The spread of false information online, especially through information warfare conducted via social media platforms, is a significant cybersecurity issue. Information warfare includes coordinated, deceptive efforts to manipulate public debate, often spreading hate speech or populism; trying to undermine faith in democracy; or trying to manipulate electorates through negative campaigning, fear and divisions.

In December 2020, the European Commission recognised that more effort was needed on cybersecurity to strengthen European democracies. It noted that only by pooling existing knowledge on hybrid threats across different sectors (such as disinformation, cyber operations and election interference) can the European Union respond effectively to disinformation and influence operations (European Commission, 2020, December 3, p. 20). While the European Union and selected countries are addressing cybersecurity and social media platforms, the response is more uneven worldwide (Brown et al., 2020). For democratic governments, responding to foreign interference can be difficult as methods used by adversaries typically exploit democratic principles such as free speech, trust and openness. Detection can be hard both because the methods are difficult to identify and because democracies tend to avoid surveillance of their own domestic populations and debates, with most intelligence resources directed towards external collection to actively monitor foreign disinformation campaigns (Bakir, 2019 [2018]; Hanson et al., 2019).

The digital platforms have also adopted cybersecurity measures. For instance, since Russian attempts to influence the 2016 US presidential election were exposed, Facebook has built a team of over 200 people globally (experts in cybersecurity, disinformation, digital forensics, law enforcement, national security and investigative journalism) focused on combating such operations. Its approach detects and removes violating content, known bad actors and coordinated deceptive behaviour. It is designed to be flexible, recognising that tactics evolve as bad actors take evasive action (Facebook, 2020a, April). Indeed, rapid technological change and adaptive tactics by disinformation purveyors have spurred platforms, news outlets and researchers to find automated ways of detecting deceptive forms such as fake news online and deepfakes.

In terms of research on automatically detecting fake news online to help fact-checkers, existing approaches mainly rely on training classifiers, for which past events or claims are gathered and labelled as real or fake, and significant features are extracted to generate appropriate data representations (Cha et al., 2020). However, problems abound with using AI for fake news detection. Firstly, unlike detection of hateful, sexist or hyperpartisan language, linguistic classifiers alone cannot detect fake news, and propagandists exploit such weaknesses. For instance, Russia’s Facebook ads used to try to disrupt the 2016 US presidential election typically posted words superimposed on images, which allowed them to evade Facebook’s machine-learning algorithms for detecting fake news (Levy, 2020, p. 375). A second problem with using AI for fake news detection is that linguistic classifiers need humans in the loop, such as fact-checkers, to keep the models updated; otherwise accuracy rapidly degrades, even within one week. Thirdly, building blacklists of websites spreading false information is not scalable for content produced every minute and will produce bias towards specific websites in the database. Fourthly, removal of fake accounts is problematic because of the vast scale at which fake accounts are produced. Fifthly, stakeholders require models to combat fake news that provide explainable outcomes, highlighting which users and publishers are creating fake news, on which topics and through what types of textual and social behaviour, but automated AI solutions do not lend themselves to explainability (Cha et al., 2020; Ghulati, 2020, November 27). Nonetheless, the field continues to advance. For instance, to learn feature representations from multiple aspects, deep neural networks have been successfully applied to tasks such as visual question answering and image captioning, and to a deep learning-based fake news detection model that extracts multimodal and social context features and fuses them with an attention mechanism (Wang et al., 2018).
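
To make the classifier-based approach concrete, the sketch below is a minimal illustration assuming a small labelled corpus of claims marked real or fake; scikit-learn is used purely as an example toolkit, and the dataset, features and model choice are assumptions rather than any system cited above. It trains a linear model on TF-IDF n-gram features, the kind of purely linguistic representation whose limits are discussed above.

```python
# A minimal sketch of classifier-based fake news detection, assuming a
# labelled corpus of claims (1 = fake, 0 = real). Illustrative only: real
# systems add social-context and multimodal features and keep humans in
# the loop to re-label and retrain as topics shift.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical labelled examples standing in for a fact-checked dataset.
texts = [
    "Official figures show unemployment fell last quarter",
    "SHOCKING: celebrity reveals miracle cure doctors hide",
    "City council approves new budget for road repairs",
    "Secret memo proves the election was rigged by insiders",
] * 50                      # repeated only so the toy train/test split works
labels = [0, 1, 0, 1] * 50  # 1 = fake, 0 = real

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0, stratify=labels)

# TF-IDF n-grams as the 'significant features'; a linear classifier keeps
# the model cheap to retrain whenever fact-checkers supply fresh labels.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

Because the feature space is purely linguistic, the sketch also illustrates the first weakness noted above: text rendered inside an image never reaches the vectoriser at all, and accuracy degrades quickly unless fact-checkers keep supplying fresh labels for retraining.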

Alongside detecting fake news through automated means, various countermeasures against burgeoning deepfake technology have been developed in collaborations between the US military and dominant platforms. These have provided tools and datasets of manipulated and non-manipulated videos to help develop identification techniques (Vizoso et al., 2021). They include the US Defense Advanced Research Projects Agency establishing the Media Forensics programme (Langguth et al., 2021; Vizoso et al., 2021) and Facebook’s Deepfake Detection Challenge, a competition run across 2019–2020 with support from companies such as Microsoft and Amazon Web Services and from university researchers (Facebook, 2020b, June 25). Many tools have been created to automatically detect deepfakes based on intrinsic inconsistencies left by the synthesis algorithms, such as a lack of eye blinking or lip movements that do not match speech. Some systems use a convolutional neural network to extract frame-level features, which are then used to train a recurrent neural network that learns to determine whether a video has been manipulated. Google also created a tool called Assembler that helps journalists identify manipulated images (Pérez Dasilva et al., 2021). However, Langguth et al. (2021) warn that the success of such approaches depends ultimately on their mode of deployment. Furthermore, recent research using adversarial strategies indicates that even the best detectors can be fooled. Adversarial strategies consist of adding noise (imperceptible to the human eye) to a video or image to confuse a fake news detector. They conclude that many of these systems are likely to prove flawed in application because they do not offer 100% detection accuracy, and because, if they are available to the public, they will also be available to disinformation creators.
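
The convolutional-plus-recurrent design described above can be sketched as follows. This is an illustrative PyTorch outline under assumed architecture choices (a ResNet-18 backbone, a GRU and eight sampled frames), not a reproduction of any specific published detector.

```python
# Illustrative sketch: per-frame CNN features aggregated by a recurrent
# network into a single manipulated/authentic score. Architecture and
# sizes are assumptions, not a specific published detector.
import torch
import torch.nn as nn
from torchvision import models

class DeepfakeDetector(nn.Module):
    def __init__(self, hidden_size=256):
        super().__init__()
        backbone = models.resnet18()            # CNN frame-feature extractor
        feat_dim = backbone.fc.in_features      # 512 for ResNet-18
        backbone.fc = nn.Identity()             # keep features, drop the classifier
        self.cnn = backbone
        self.rnn = nn.GRU(feat_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)   # one logit: manipulated vs authentic

    def forward(self, video):                   # video: (batch, frames, 3, H, W)
        b, t, c, h, w = video.shape
        feats = self.cnn(video.reshape(b * t, c, h, w))  # per-frame features
        feats = feats.reshape(b, t, -1)
        _, last_hidden = self.rnn(feats)        # temporal aggregation over frames
        return self.head(last_hidden[-1])       # (batch, 1) logits

if __name__ == "__main__":
    clip = torch.randn(2, 8, 3, 224, 224)       # two clips of eight sampled frames
    print(torch.sigmoid(DeepfakeDetector()(clip)))  # probability each clip is fake
```

As noted above, such detectors are brittle: adding adversarial noise that is imperceptible to the human eye can flip the output, one reason why publicly released detectors tend to be circumvented quickly.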

Solution Area 3: Digital Platforms/Intermediaries

Under pressure from regulators and bad publicity arising from whistleblowers and political inquiries, globally dominant digital platforms have undertaken design reforms and algorithmic tweaks to address some of the harms arising from their existence while preserving their core business model of maximising user engagement (see Chap. 2). As noted earlier in this chapter, the European Union Code of Practice on Disinformation, regarded as a landmark document and signed by dominant digital media platforms, sets out voluntary commitments including those on better platform transparency, digital literacy and content moderation (European Commission, 2018b, 2021b). Such voluntary commitments were not all successfully fulfilled (European Commission, 2021c) and have since been hardened into the landmark Digital Services Act. Later in this chapter, we address some of these commitments and their shortcomings, but here we focus on the thorny issue of content moderation.

In efforts to promote rapid growth of Internet platforms, US federal legislation passed in 1996 (Section 230 of the Communications Decency Act) freed Internet intermediaries from almost all liability for user-generated content, placing the burden of content curation on the platforms themselves (see Chap. 2). In the USA, where dominant digital platforms are based, freedom of speech (especially political or ideological speech) is a constitutional right, with exceptions for narrow speech categories of obscenity, defamation, fraud, incitement, fighting words, true threats, speech integral to criminal conduct and child pornography (Killion, 2019, January 16). It is only in exceptional cases that platforms censor politicians. For instance, in an unprecedented move, in January 2021 Facebook banned then outgoing US President Trump until at least 2023 for inciting the deadly January 6 insurrection at the US Capitol (Hendrix, 2021, January 7).

While political speech is protected, digital platforms are more forthcoming in banning deliberately misattributed or manipulated content (with exceptions for satire). For instance, YouTube’s (2021) misinformation policies prohibit misattributed content, namely, content ‘that may pose a serious risk of egregious harm by falsely claiming that old footage from a past event is from a current event’. It also bans manipulated content, namely, content that is ‘technically manipulated or doctored in a way that misleads users (beyond clips taken out of context) and may pose a serious risk of egregious harm’. Its examples of manipulated content include videos that are technically manipulated to make it appear that a government official is dead or to fabricate events where there is serious risk of egregious harm. Similarly, TikTok (2021) prohibits ‘digital forgeries (Synthetic Media or Manipulated Media) that mislead users by distorting the truth of events and cause harm to the subject of the video, other persons, or society’.

While platforms ban certain types of content to prevent harms to the civic body, they prefer to promote authoritative sources and demote borderline content. Most commonly, such content moderation occurs in areas of health messaging and elections. For instance, harms to public health from false COVID-19 information prompted platforms to moderate content via their recommendation algorithms. Google prioritises information from the World Health Organization in its search rankings, even if this information is not optimised for Google (O’Donovan, 2020, November 27). According to a civil rights audit of US Facebook, Facebook shows messages in News Feed to people who interacted with harmful COVID-19 misinformation that was later removed as false, using these messages to connect people to the World Health Organization’s COVID-19 mythbuster website (Murphy, 2020, July 8, p. 53). On content moderation during elections, in June 2020, US Facebook announced its planned Voting Information Center, modelled after the COVID-19 Information Center that Facebook uses to connect users to trusted information from health authorities.

Despite a plethora of ever-evolving policies and community guidelines on false media forms and content, digital platforms perform poorly at enforcing intermediary liability laws (which tell platforms what responsibility they have for unlawful content posted by users) consistently at scale. One reason may be divergence in national laws on (a) how neutral platforms must be to qualify for immunity from legal claims arising from users’ unlawful speech and (b) the degree of content moderation they can perform without being exposed to liability. Another reason is that most intermediary liability laws oblige platforms to take down illegal content once they ‘know’ about it, but laws vary in what counts as ‘knowledge’. Under some national rules, platforms can only be legally required to take down users’ speech if a court adjudicates it unlawful; elsewhere, platforms can decide themselves (Keller & Leerssen, 2020). Because platforms’ rulesets (Community Guidelines) are privately defined and enforced, platforms’ decisions are generally not subject to review by courts, there is little transparency in how their policies are applied, and these appear inconsistent internationally (Ajder & Glick, 2021; Keller & Leerssen, 2020; McNamee, 2019). Arguing against further transparency, Facebook states that it fears that explaining its algorithms would give people with bad intentions a playbook (Merrill & Oremus, 2021, October 26). If digital platforms were more explicit about their algorithms’ workings, this could also give competitors an easy means of duplicating and surpassing their service (Gillespie, 2014). At the time of writing (April 2022), the European Union Digital Services Act (referred to earlier in this chapter), which will make platforms explain their content moderation policies, practices and decisions more clearly, including how their recommender algorithms work, looks promising on paper, but time will tell whether sufficient resources have been ringfenced to ensure compliance or whether lobbying will dilute the law and provide platforms with workarounds that allow them to avoid significantly altering their behaviour. Certainly, in the run-up to the agreement of this act, a major lobbying target for Google and Spotify (the world’s largest music streaming service provider) was to limit researchers’ access to data on algorithmic content ranking systems (Lomas, 2022, April 22).

A further problem with content moderation of online disinformation campaigns is that this requires action from dominant and minor platforms alike to prevent those censored on one platform from simply moving onto other platforms (Siegel, 2020, p. 73). For instance, according to technology journalist Sarah Emerson, in response to dominant social media platforms’ efforts to quell Trump’s claims of election fraud across the 2020 US presidential election campaign, movements such as Stop the Steal, QAnon and right-wing militia groups moved to platforms such as MeWe, where they encouraged violent responses to post-election events (Emerson, 2021, January 14). Very unusually, following the US Capitol Hill riot in January 2021, censorship was enforced across the entire platform ecosystem in the following fortnight, including YouTube, Facebook, Instagram, Twitter, Google, TikTok, Amazon, Apple and Airbnb, but also lesser-known platforms such as Gab, Parler, 4chan, Stripe, Twitch, Zello and MeWe.

The US Capitol Hill riot also highlights two intrinsic technical difficulties of content moderation on any platform. Firstly, it requires skilled detective work to understand the nature of posts (Emerson, 2021, January 14). As Zello, which is often encrypted end to end for privacy reasons, points out: ‘This makes the task of proactive monitoring for compliance with our terms of service unrealistic: we simply cannot just “search our data” for specific keywords in conversations’ (Zello Staff, 2021, January 13). Secondly (and relatedly), content moderation requires resources, and there is a divergence between the capabilities of dominant platforms and smaller digital intermediaries. As of 2021, MeWe employed fewer than 100 moderators, while Facebook employed 15,000 people reviewing content in over 70 languages. However, even in Facebook, this resource is unevenly distributed worldwide (see Chap. 2), relies heavily on automation geared towards English-language communities and, to date, has often fallen short of what is needed (Simonite, 2021, October 25; Thakur & Hankerson, 2021).

Solution Area 4: Advertising

There are various advertising-driven causes of, and solutions to, false information online. As Chap. 2 observes, the very successful Google-Facebook duopoly in online behavioural advertising means that news outlets are deprived of advertising funds, with the resulting news desert ultimately, perhaps, the biggest challenge to tackling false information online: we address how to combat this in the section below on Media Organisations. In Chap. 2, we also discussed how datafied emotional content is optimised to generate Facebook shares for Internet traffic and advertising income (clickbait audiences), generating fake news, hate speech and deceptive, emotive political campaigning: we address how to combat this in the coming section on Professional Political Persuaders and Public Relations. In this section, we focus specifically on the problem of commercial ads online inadvertently funding false information via adtech.

Adtech is used to profile and target people in order to serve behaviourally targeted ads. It serves commercial and political ads alike and, in the process, funds fake news sites. Here, the prime actor is Google’s ad network, DoubleClick, but there are other behavioural and programmatic ad networks, including seemingly countless lesser-known networks such as OpenX, Tribal Fusion and 33Across. By March 2018, the European Commission (2018a) reported that online platforms were tackling disinformation by disrupting the business model for its production and amplification. Disruptions included ad networks not placing ads on websites identified as purveyors of disinformation, thereby directly reducing income to disinformation providers, and ad networks not disbursing revenues to sites and partners until they could confirm that they operate within relevant terms and conditions. However, by late 2020 Konrad Shek (Deputy Director, Policy and Regulation, UK Advertising Standards Association) observed that although brands are incentivised to choke funds to fake news websites, the volume and speed of the supply chain makes this difficult: for instance, brands already employ negative lists but must keep these updated (Shek, 2020, November 27).

Brand safety is an ongoing issue within the digital advertising industry, and false information online adds political and public impetus to resolve it: reputable advertisers are unlikely to want their advertising associated with content that by its very nature cannot be trusted. Various efforts have been made to help advertisers identify (and avoid) false information providers online. For instance, the British-based non-profit organisation Global Disinformation Index deploys its assessment framework to rate news domains’ risk of disinforming their readers, aiming to generate neutral ratings for advertisers, adtech companies and platforms to redirect their online ad spending in line with their brand safety and disinformation risk mitigation strategies (Global Disinformation Index, 2021). Other initiatives from various ad networks and programmatic companies promise to deliver brand-safe ads. Rubicon, for example, claims it can identify undesirable publishers before the ads are released and can track activity during and after the campaign to see who clicked on which ads and where. However, to be effective, all ad networks need to be involved to prevent undesirable sites (such as fake news sites) that have been ejected from one ad network from simply moving to less discriminating ones. With greater transparency in the system, non-fake news publishers and advertisers could be encouraged to stop using less discriminating ad networks. Given that ad networks benefit from economies of scale, the departure of reputable advertisers and publishers would be harmful and possibly terminal to that ad network. Indeed, a study that tracked ads served in a sample of fake, low-quality and traditional news outlets over 12 weeks in 2019 (1.32 million ads served by 565 unique ad servers on 1600 news sites) finds that fake news publishers were still strongly reliant on credible ad servers: the top ten credible ad servers alone accounted for 67% and 56% of fake and low-quality ad traffic, respectively (Bozarth & Budak, 2021).
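
As a concrete illustration of the negative-list approach described above, the sketch below filters candidate programmatic placements against a disinformation-risk rating before a bid is made. The domains, scores and threshold are invented for illustration and are not taken from the Global Disinformation Index or any real ad network.

```python
# Hypothetical sketch of pre-bid brand-safety filtering against a
# disinformation-risk list; domains, scores and threshold are invented.
RISK_SCORES = {              # higher score = higher assessed disinformation risk
    "examplenews.com": 12,
    "trusted-daily.org": 8,
    "clickbait-truth.net": 91,
    "totally-real-news.biz": 87,
}
RISK_THRESHOLD = 60          # advertiser's brand-safety cut-off
DEFAULT_RISK = 100           # unrated domains are treated as unsafe until reviewed

def safe_to_bid(domain: str) -> bool:
    """Return True if the publisher domain falls under the advertiser's
    risk threshold; unknown domains are rejected pending manual review."""
    return RISK_SCORES.get(domain, DEFAULT_RISK) < RISK_THRESHOLD

def filter_bid_requests(bid_requests):
    """Drop bid requests whose publisher domain fails the brand-safety check."""
    return [r for r in bid_requests if safe_to_bid(r["domain"])]

if __name__ == "__main__":
    requests = [{"domain": d, "placement": "banner"} for d in RISK_SCORES]
    requests.append({"domain": "unrated-site.example", "placement": "banner"})
    for r in filter_bid_requests(requests):
        print("bid on", r["domain"])
```

As Shek's observation above suggests, the hard part is not the lookup itself but keeping such lists current at the volume and speed of the programmatic supply chain, and ensuring that every ad network, not just the most reputable ones, applies them.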

As the European Union General Data Protection Regulation (GDPR) has taken effect in media markets, the end of the cookie-based behavioural advertising market is increasingly likely, especially as Google and Apple tighten control of third-party cookies for Chrome and Safari users to prevent unwanted tracking. As of 2022, for example, the default for third-party cookies will be ‘off’ in Chrome. What will replace the third-party tracking cookie is not yet clear but is likely to consist of new approaches to identifying a user and targeting by much larger cohorts rather than individual profiles. One approach is Universal IDs, which are based on a person providing their personal details to advertisers, such as logins to sites and details held about their interactions with sites. Similarly, third-party identity management services also exist, which would allow for microtargeting and cross-site tracking (just as third-party cookies do today). With adtech’s trade association, the Internet Advertising Bureau, noting that ‘universal ID solutions work very similarly to third-party cookies’, it remains to be seen how viable this is (Internet Advertising Bureau UK, 2021). Notably, Google will not support Universal IDs, effectively locking out smaller adtech firms. Both Apple and Google prefer topic- and cohort-based approaches, which are built on larger clusters of people. Google’s ‘Topics’ approach, for example, targets by cohorts of people (potentially thousands of people) (Goel, 2022). This, then, would involve what we see as meso-targeting, the middle layer between micro- and macro-targeting. Here, input features to the ad network algorithm, including web history, are kept local on the browser and are not uploaded elsewhere: the browser only exposes the generated topics for that week to the ad network. Yet, notably, if a publisher has subscriber details, they will have access to the cohort a person belongs to. Although Google’s GitHub pages note restrictions on sensitive categories, Google’s policy on political content is based on regional legal compliance, whereas other categories are explicitly restricted (such as targeting by or in relation to personal hardship, systemic discrimination, sexual interests or societal biases). There is then the contextual approach (targeting based on content rather than on who is looking at the content), which is certainly more privacy-friendly, but it remains to be seen whether it addresses the problem of the over-emotionalised civic body, especially as the digital version of contextual advertising seeks to profile the sentiment of the content on the site itself (such as keywords, website content and other metadata), in turn showing ads in relation to what else is displayed on the site at the time. Conceivably, publishers could work to clarify the emotional tone of their sites, to ensure brand-emotion uniformity and that programmatic advertising is in line with this. Yet, this could feed further news and audience polarisation, as publishers avoid being caught in a ‘balanced’ middle ground, which would be of less value to advertisers due to the absence of clarity about which audience is being reached.
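
The meso-targeting logic can be illustrated with a deliberately simplified sketch: browsing history stays on the device, a handful of coarse weekly topics is derived locally, and only those topics, never the underlying history, are shared with the ad network. This is a conceptual illustration only; it is not Google's Topics API, and the site-to-topic mapping, sensitive-category list and parameters are invented.

```python
# Conceptual sketch of cohort/topic-based ('meso') targeting: raw browsing
# history stays local; only coarse weekly topics are exposed. This is an
# illustration of the idea, not Google's Topics API or taxonomy.
from collections import Counter

# Hypothetical mapping from site to a coarse topic category.
SITE_TO_TOPIC = {
    "cyclingweekly.example": "Sports/Cycling",
    "bikeparts.example": "Sports/Cycling",
    "recipes.example": "Food & Drink",
    "travel-deals.example": "Travel",
}
SENSITIVE_TOPICS = {"Health", "Religion", "Politics", "Sexual Orientation"}

def weekly_topics(local_history, k=3):
    """Run locally on the device: reduce a week's browsing history to at
    most k coarse, non-sensitive topics. The history itself never leaves."""
    counts = Counter(
        SITE_TO_TOPIC[site] for site in local_history if site in SITE_TO_TOPIC
    )
    topics = [t for t, _ in counts.most_common() if t not in SENSITIVE_TOPICS]
    return topics[:k]

def ad_request(topics):
    """Only the coarse topics are sent to the ad network, not the history."""
    return {"topics": topics}   # e.g. {'topics': ['Sports/Cycling', 'Food & Drink']}

if __name__ == "__main__":
    history = ["cyclingweekly.example", "bikeparts.example", "recipes.example"]
    print(ad_request(weekly_topics(history)))
```

Even in this stripped-down form, the trade-off described above is visible: the ad network never sees the history, but a publisher that can tie the request to a subscriber account still learns which cohort that person belongs to.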

Solution Area 5: Professional Political Persuaders and Public Relations

As discussed in Chap. 1, an international survey of digital news consumption across 40 countries finds that it is domestic politicians that are regarded as by far the most responsible for false and misleading information online (Newman et al., 2020, p. 18). As such, this section focuses on professional persuaders and public relations in the political domain, addressing two problematic areas in incubating false information: the use of political online ads and broader use of strategic communications.

Political Online Ads

Across the world, electoral laws greatly lag developments in the digital media ecology. Regulating online political ads is challenging due to the borderlessness of online space; the difficulty of recognising seemingly organic, but paid-for, political material and distinguishing it from other political content; and microtargeting and behavioural profiling techniques that can rely on improperly obtained data, which in turn may be misused to direct polarising narratives (European Commission, 2020, December 3, p. 4). Stakeholders looking for solutions are divided on the value of microtargeting but are more united on increasing the transparency of online political ads, so enabling advertisers to be held accountable for what they say and for breaking rules. Of interest, given its global focus, are recommendations from the Kofi Annan Commission on Elections and Democracy in the Digital Age (2020) discussed below.

The Kofi Annan Commission urges countries to adapt their political advertising regulations to the online environment and recommends that relevant public authorities should define in law what is considered to be a political ad. Such a move would enable digital intermediaries and platforms to know what to include in their own policies on ads about elections and politics. The European Commission’s (2021a) proposed rules on political ads, published in November 2021, took a broad definitional approach to political ads, to include those concerning political actors and issue-based ads liable to influence voting behaviour. Of course, digital political campaigning is far broader than simply paid-for advertising and may include branded content, influencers and other activities that look like ads.

The Kofi Annan Commission recommends that countries should compel social media platforms to make public all information involved in the purchase of an ad, including the advertiser’s real identity, the amount spent, targeting criteria and the actual ad creative. Since the 2016 US presidential elections, some social media platforms have introduced measures to verify the identity of people purchasing political ads. Facebook, for instance, requires those running ads about ‘social issues, elections or politics’ to have their identity verified using documents issued by the country they want to run ads in (Facebook, n.d.), although this is not active in every country (Facebook, 2021). In 2021, the European Commission (2021a, November 25) proposed transparency rules that would require political ads and electorally relevant issue ads to be clearly labelled, including information such as who paid for them and how much. In the USA, proposed legislation that would have created an archive of purchased online political ads maintained by the Federal Election Commission prompted Twitter, Google and Facebook to provide publicly accessible, searchable libraries of election ads and spending on their US platforms in 2018, with rollouts in certain other countries since then. Although Twitter stopped accepting most political ads in October 2019, Google continues to allow users to see election-related ads, showing statistics on audience demographics. On Facebook, users can click on political, electoral or social issue ads to access information about the ad’s reach, who was shown the ad and the entity responsible; and Facebook took further measures to increase the transparency of political ads and its Public Ad Library following criticisms of its explainability and functionality (Edelson et al., 2018; Murphy, 2020, July, pp. 836–837). While such political ad archives have enabled journalists to call attention to influence networks and monitor ad content for disinformation and hate speech, they remain minimally useful for electoral regulators (Gorwa & Ash, 2020; Leerssen et al., 2021). Meaningful political ad archives need to archive ads accurately, rapidly (ideally, in real time) and over long time periods (ideally, all), provide granular information about spending and targeting, and provide the precise names of the organisations that paid for the ads (Dommett & Power, 2020; Leerssen et al., 2021). Such archives are needed in every country where digital platforms allow political advertising. Leerssen et al. (2021) argue that these should be publicly regulated; otherwise, journalists are reliant on voluntary, incomplete access frameworks controlled by the very platforms they aim to scrutinise.
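
To make these archive requirements concrete, the sketch below defines a minimal record for an archived political ad and checks whether it carries the payer, spend, targeting and timeliness details that researchers and regulators say they need. The schema and field names are hypothetical illustrations, not those of any platform's actual ad library or API.

```python
# Hypothetical schema for an archived political ad; field names are
# illustrative and do not correspond to any platform's actual ad library.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class ArchivedPoliticalAd:
    ad_id: str
    payer_name: Optional[str]            # precise organisation that paid
    spend_exact: Optional[float]         # exact spend, not a broad band
    targeting_criteria: dict = field(default_factory=dict)  # granular targeting
    creative_text: str = ""
    archived_at: Optional[datetime] = None
    served_at: Optional[datetime] = None

    def meets_research_needs(self, max_archival_delay_hours=24) -> list:
        """Return the transparency requirements this record fails to meet."""
        gaps = []
        if not self.payer_name:
            gaps.append("missing payer name")
        if self.spend_exact is None:
            gaps.append("spend reported only as a band or missing")
        if not self.targeting_criteria:
            gaps.append("no granular targeting data")
        if self.archived_at and self.served_at:
            delay = (self.archived_at - self.served_at).total_seconds() / 3600
            if delay > max_archival_delay_hours:
                gaps.append(f"archived {delay:.0f}h after serving (not near real time)")
        return gaps

if __name__ == "__main__":
    ad = ArchivedPoliticalAd(
        ad_id="123", payer_name=None, spend_exact=None,
        creative_text="Vote for change on Thursday",
        served_at=datetime(2022, 4, 1, 9, 0),
        archived_at=datetime(2022, 4, 5, 9, 0),
    )
    print(ad.meets_research_needs())
```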

The Kofi Annan Commission recommends that countries should specify by law the minimum audience segment size for an ad. Since 2017, digital platforms have started to limit the level of detail campaigns can use to target voters. In November 2019, Google said that while it had ‘never offered granular microtargeting of election ads’, it was further limiting election ad targeting to general categories of age, gender and general location (postal code level), as well as to contextual targeting (Spencer, 2019); that advertisers would no longer be able to target political messages based on users’ interests inferred from their browsing or search histories (Glazer, 2019, November 21); and that this approach would be enforced worldwide from 2020. A review of Google’s policy in 2022 shows that this is not globally uniform, as Google has different requirements for political and election ads based on region (Google, 2022). Disclosure requirements (that an ad is political) and targeting restrictions (low granularity) are applied only in regions where election ad verification is required. (According to Google, disclosure and restrictions apply in Australia, Brazil, the European Union, India, Israel, New Zealand, Taiwan, the UK and the USA.)

Facebook continues to microtarget, arguing that advertising is an important part of free speech, especially when it comes to political messaging. However, given increased legislative scrutiny of these practices, Facebook’s parent company (Meta) announced that from January 2022, it would no longer let advertisers target people based on how interested the social network thinks they are in ‘sensitive’ topics including political affiliation, religion, sexual orientation, health, race and ethnicity. This would apply across Meta’s apps, including Facebook, Instagram and Messenger, and its audience network, which places ads on other smartphone apps (Bond, 2021, November 9). This is in line with the European Commission’s (2021a) proposed rules on political ads and electorally relevant issue ads, published in November 2021, which stipulate that targeting and amplification would be banned when using sensitive personal data (such as ethnic origin, religious beliefs or sexual orientation) without the explicit consent of the individual. The proposed rules also stipulate that political targeting and amplification techniques would need to be explained publicly in unprecedented detail, including clear information on the basis on which a person is targeted, which groups of people were targeted, according to which criteria, and with what amplification tools or methods.

Strategic Communications

As Chap. 6 shows, it is not just paid-for political advertising that promotes harmful disinformation. Rather, professional persuaders have been joined by data management companies and data brokers, spawning self-regulated strategic communications consultants whose aim is audience influence and behaviour change, often leveraged through localised influencers, bots and trolls.

As a case in point, an ethnographic study across 2016–2017 in the Philippines problematises the work hierarchies and institutions that professionalise and incentivise ‘paid troll’ work (see Chap. 3). The study stresses the importance of understanding local contexts of how architects of disinformation evade responsibility and entice young creative professionals in need of paid employment to join them. Similar processes have been documented in Guatemala (Currier & Mackey, 2018, April 7). The Philippines study recommends greater industry self-regulation and the development and enforcement of stronger codes of ethics to encourage transparency and accountability in digital marketing, political marketing (including a requirement to disclose political consultancies) and the digital influencer industry (where undisclosed paid sponsorships and collaboration with anonymous digital influencers enable people to evade accountability) (Ong & Cabañes, 2018). More prescriptively, the Final Report from the UK Inquiry into Disinformation and Fake News recommends that the government move beyond self-regulation to consider new regulations on transparency in strategic communications companies, with a public record of all campaigns that they work on domestically and abroad (Digital, Culture, Media and Sport Committee, 2019, February 14, pp. 83–84). Globally, however, the strategic communications industry remains largely unregulated and opaque, with self-regulation failing to stymie the architects of disinformation.

Solution Area 6: Media Organisations

Media organisations can raise awareness of disinformation and how it works, propagate true stories that connect with audiences and hold powerholders to account. However, this requires a healthy media ecology. Where the media ecology is unhealthy, steps should be taken to strengthen it. This section considers two macro solutions (namely, restoring competitive balance and rebuilding trust in mainstream news) and one solution that has become globally prominent in recent years (fact-checking).

Restoring Competitive Balance

In most liberal democracies, print media content is not extensively regulated because these markets are usually decentralised and competitive, whereas broadcast media are highly regulated because of their formerly oligopolistic or monopolistic position. Today, the scale and reach of dominant Internet platforms means that they occupy a position similar to that of legacy television networks (Fukuyama & Grotto, 2020). Furthermore, as Chap. 2 details, the impact of digital platforms on the business model of legacy news has been profoundly damaging, siphoning ad revenue and discouraging people from paying for news, generating news deserts where it has become uneconomic to provide news.

Previous democratic crises of media pluralism involving new technologies (from radio onwards) saw parliaments legislate to increase pluralism by, for instance, funding new sources of trusted local information (notably, public service broadcasters) and introducing media ownership laws to prevent existing monopolists reaching into new media (Marsden et al., 2020). Competitive balance in the digital platform-dominated media ecology could be restored by breaking up the platforms to diminish their influence (McNamee, 2019) or by demanding that technology platforms divert more of their profits to finance local news, investigative journalism and public service journalism. Since the 2016 fake news furore captured public and political attention, Google and Facebook have voluntarily paid publishers around the world hundreds of millions of dollars to sponsor news-related projects (Benton, 2022). Some criticise such voluntary efforts as too minimal, suggesting instead that platforms redistribute a small percentage of their revenue as part of a new social contract to address the loss of public service journalism (Pickard, 2020). More recently, governments have also started to apply pressure, as evidenced in Australia, which passed the Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Act 2021, requiring social media companies to pay media outlets for using their content. However, such regulatory measures risk dominant digital platforms shifting their investment away from news altogether, especially where news is not core to their product; on Facebook, for instance, only one in every 250 News Feed content views in the first quarter of 2022 was of an external link to a news site (Benton, 2022). As Chap. 2 reminds us, algorithmic tweaks on the dominant platforms have huge impacts on the fortunes of news outlets. Moreover, news outlets struggle to make a profit in the digital environment, so if dominant digital platforms roll back their recent overtures towards financially supporting news outlets, then the onus will fall on others to step in.

Rebuilding Trust in Mainstream News

Trust in the truthfulness of journalism has been low for decades, predating the social media era, as Chap. 4 reminds us, fuelled by long-standing political and commercial processes of manipulation and commodification that shape how news stories are constructed. Trust remains low across the world, as shown in a 2020 survey of the digital news consumption of over 80,000 people in 40 countries: it finds overall levels of trust in news at their lowest point since the survey began tracking such data, with only 38% saying they trust most news most of the time (Newman et al., 2020). A similar figure (42%) was found in a 2022 survey of 46 countries, with only 19% saying all or most news organisations put what is best for society ahead of their own commercial or political interests. Notably, it is public service broadcasting organisations with a strong track record of independence that attract the highest trust ratings (Newman et al., 2022). Ultimately, then, it would seem apposite to invest in public service broadcasting across the world. Beyond such large-scale investment, various solutions addressing false information online have been proposed to rebuild trust in journalism.

One proposed solution involves creating guidelines for journalists for reporting false information. A 2018 survey of 803 American and British journalists finds that such reporting guidelines are not widespread. This is problematic because highlighting insignificant fake stories can draw extra attention to false information, giving those propagating the false story credibility because they can point to mainstream media engagement with it (Persen et al., 2021). A study for the Council of Europe (an organisation that seeks to develop throughout Europe common and democratic principles based on the European Convention on Human Rights and other reference texts) argues that newsrooms need policies of strategic silence on fake news to inform decisions about which stories to debunk and which to ignore (namely, those not gaining traction) (Wardle & Derakhshan, 2017, p. 19).

Other proposed solutions involve greater journalistic transparency of online news sources and journalistic processes. For instance, the European Commission (2018a) suggests that platforms should integrate source transparency indicators into their ranking algorithms to better signal trustworthy and identifiable sources in search engine results and social media news feeds. This challenge has been taken up by the Coalition for Content Provenance and Authenticity (led by Adobe, ARM, the British Broadcasting Corporation, Intel, Microsoft, TruePic and Twitter). It is developing technical specifications for provenance-enhancing technologies that help users decide whether content has been manipulated by applying the content’s metadata to determine who created it, how and when (The Royal Society, 2022, January). However, this does nothing to reduce the scale of false information online. Matters are not helped by the fact that large influence operations on social media often use established media outlets as camouflage (as discussed in Chap. 4). Conversely, a more radical solution to rebuilding trust would be to discard the unobtainable professional ideal of impartial and objective journalism. Winston and Winston (2021) propose that a more openly subjective, biased journalism would be better at providing a public forum, analysing context, mobilising citizens and building empathy between communities while being no worse at providing new information and holding power to account. These diverse solutions to improving journalistic transparency could be worth trying, but currently lack empirical evidence on their ability to rebuild trust, itself a complex phenomenon that, once lost, is difficult to regain. These solutions also do nothing to address the root cause of distrust in news, namely, the long-standing political and commercial processes of manipulation and commodification.
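
A simplified sketch of the provenance idea mentioned above follows. It is illustrative only: the Coalition's actual specifications use public-key signatures and manifests embedded in the media file, whereas this sketch uses a shared-key HMAC purely to stay dependency-free. The publisher signs a digest of the content together with its creation metadata, and anyone holding the corresponding key can later check that neither has been altered.

```python
# Conceptual sketch of content provenance: sign a digest of the media bytes
# plus creation metadata, then verify it later. Real provenance standards
# use public-key signatures and embedded manifests; an HMAC with a shared
# key stands in here only to keep the sketch self-contained.
import hashlib, hmac, json

def make_provenance_record(content: bytes, metadata: dict, signing_key: bytes) -> dict:
    payload = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "metadata": metadata,              # who created it, how and when
    }
    serialized = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(signing_key, serialized, hashlib.sha256).hexdigest()
    return payload

def verify_provenance(content: bytes, record: dict, verification_key: bytes) -> bool:
    claimed = dict(record)
    signature = claimed.pop("signature")
    serialized = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(verification_key, serialized, hashlib.sha256).hexdigest()
    untampered_record = hmac.compare_digest(signature, expected)
    untampered_content = (
        hashlib.sha256(content).hexdigest() == claimed["content_sha256"]
    )
    return untampered_record and untampered_content

if __name__ == "__main__":
    key = b"newsroom-demo-key"
    image = b"raw image bytes"
    record = make_provenance_record(
        image, {"creator": "Example Newsroom", "captured": "2022-03-01T10:00Z",
                "tool": "staff camera, no edits"}, key)
    print(verify_provenance(image, record, key))                 # True
    print(verify_provenance(image + b"tampered", record, key))   # False
```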

Fact-Checking

Fact-checking is the practice of systematically publishing assessments of the validity of claims made by public bodies to identify whether a claim is factual (Walter et al., 2020). Political fact-checking emerged in the late 1980s, following deceptive, unchallenged ads in the 1988 US presidential race. In 2015, the Poynter Institute established the International Fact-Checking Network to bring together fact-checkers from around the world. By 2020 there were 290 active fact-checking sites in 83 countries, although most are in Europe, North America and Asia, with fewer in South America or Africa (Stencel & Luther, 2020).

A key challenge to fact-checking is its resource-intensiveness and hence expense: for example, the fact-checker PolitiFact requires three editors to judge whether a piece of news is false (Oshikawa et al., 2020). Fact-checking therefore tends to be reserved for important moments where the civic body requires protection, such as elections (Rodríguez-Pérez et al., 2021). In countries such as Brazil, Argentina and Mexico, journalists have successfully collaborated with each other during elections to reduce the costs of fact-checking, prevent duplication of newsrooms debunking the same content and ensure that quality information reaches larger audiences. Successful collaborations also counter false pandemic information, such as the 91 verification units from 70 countries that feed The CoronaVirusFacts/DatosCoronaVirus Alliance database, supported by the International Fact-Checking Network (Palomo & Sedano, 2021). Yet, given its resource-restricted factual focus, fact-checking will fail to identify gendered disinformation, where the claims being made are couched in value judgements or are about people’s character (Judson et al., 2020, October).

Furthermore, fact-checking itself is not immune to the influence of powerful actors. For instance, fact-checkers are beholden to being recognised by dominant digital platforms: Google applies a series of tests, including that the fact-checking organisation must qualify for inclusion in Google News, itself an opaque and controversial process, and that publishers must be algorithmically determined to be an authoritative source of information (Graves & Anderson, 2020).

Ultimately, the efficacy of fact-checking may be minimal as those who most need to see the fact-check do not (Moreno-Gil et al., 2021). Guess et al. (2018) estimate that about one in four Americans visited a fake news website around the 2016 US presidential election and that fact-checking failed to counter fake news because consumption of fact-checks was concentrated among non-fake news consumers. Beyond the USA, a study of public attitudes towards fact-checking in Europe finds greater acceptance of fact-checking in Sweden and Germany than in Italy, Spain, France and Poland. Dissatisfaction with democracy and the European Union also predicts negative feelings towards fact-checkers in five of the countries examined (although not France) (Lyons et al., 2020). Furthermore, fact-checking sites do not seem to influence the issue agenda of other media. Vargo et al.’s (2018) computational study of the role of fake news in the online news media landscape from 2014 to 2016 finds that fact-checking websites had half the influence of fake news in 2016. A report by the Atlantic Council (a US non-partisan think tank) observes that fact-checking is also ineffective where telecommunications companies’ zero-rating policies incentivise social media users to remain in a closed online space within platforms, making it hard for them to verify claims using external resources (Bandeira et al., 2019). There are also psychological factors that can limit the effectiveness of fact-checking (explored in the following section).

Solution Area 7: Education

There have been multi-stakeholder efforts to improve citizens’ digital literacy around false information. Such efforts are increasingly promoted by the globally dominant digital platforms: in 2017, Facebook launched its ‘Facebook Journalism Project’ and announced that news literacy would be a priority. Beyond financially supporting non-profits working in this space, it also rolled out a Public Service Announcement-type message at the top of the News Feed in 14 countries, linking to a post with tips for spotting ‘false news’ (Murphy, 2020, July 8, p. 29). As Chap. 6 observes, however, even among highly educated audiences in India, this media literacy campaign had only short-term effects in improving discernment between mainstream and false news headlines (Guess et al., 2020). Reportedly more successful efforts consider how best to reach the digitally illiterate. In India, for instance, to counter digitally illiterate village communities reacting with terrified mob violence towards false information on WhatsApp (Bali & Desai, 2019), the police in Telangana state in 2018 used ‘Janapadam’, namely, folklore that establishes a connection with locals. This involved short skits, a form in which primarily lower-caste communities share religious tales and important news; the skits typically feature a man and two women sitting together to narrate a story, ending with a message promoting digital literacy. This audience-targeted approach reportedly generated broad reach and acceptance among local communities (Singh, 2019, January 9).

In much more digitally literate Finland, the government launched an anti-fake news initiative in 2014 to teach citizens, journalists and politicians how to counter false information designed to sow division. Finland was attuned to Russian propaganda, having faced it since declaring independence from Russia a century earlier. As online trolling increased in 2014, after Moscow annexed Crimea and backed rebels in eastern Ukraine, Finland reformed its education system in 2016 to emphasise critical thinking. However, Finland may have unique features that make media literacy efforts more likely to succeed. As well as a long history of dealing with foreign propaganda, it is a small, homogeneous country that consistently tops international indexes on happiness, press freedom, gender equality, social justice, transparency, education and trust in national media, making it hard for external actors to find social fissures to exploit (Mackintosh, 2019, May; Newman et al., 2018).

Such media literacy governance solutions (policies, funding, tools) may be beneficial when conducted under appropriate conditions attuned to local contexts. However, they may have only short-term effects and are unevenly rolled out worldwide. They also run into complex psychological and sociological issues of how and why people spread and remember false information, which we discuss below.

Correcting False Information Does Not Change Beliefs

On whether fact-checking messages influence what we believe, Walter et al. (2020, pp. 17–18) present optimistic and pessimistic conclusions from their meta-analysis of 30 studies. Their optimistic interpretation is that people’s beliefs become more accurate and factually consistent, even after a single exposure to a fact-checking message. Their pessimistic interpretation is that fact-checking has weak impacts on beliefs that become negligible the more the study resembles real-world scenarios of exposure to fact-checking. Chan et al.’s (2017) meta-analysis of the psychological efficacy of messages countering misinformation finds that debunking effects were weaker when audiences generate reasons in support of the initial misinformation, supporting what we know about the power of confirmation bias. Correcting misinformation therefore does not necessarily change people’s beliefs (Flynn et al., 2017).

Moreover, a near-universal finding is ‘the continued influence effect’ whereby, even after its correction, misinformation continues to influence people’s attitudes and beliefs (Wittenberg & Berinsky, 2020, p. 174). Experiments show that repeated exposure to fake news headlines increases their perceived accuracy: this occurs despite a low level of overall believability and even when stories are labelled as contested by fact-checkers or are inconsistent with readers’ political ideology. These results suggest that platforms help incubate belief in false information and that tagging such stories as ‘disputed’ is ineffective, as any repetition of misinformation, even in the context of refuting it, may be harmful (Pennycook et al., 2018).

Given this state of affairs, psychological research shows that inoculating people with information before their minds are made up on an issue may better ensure that false information does not circulate (Cook et al., 2017). Inoculation theory (McGuire, 1964) was pioneered to induce attitudinal resistance against propaganda and persuasion. It holds that activating people’s ‘mental antibodies’ through a weakened dose of the infectious agent can confer resistance against future attempts to persuade them. A decade-old meta-analysis of studies finds that inoculation is effective at conferring resistance (Banas & Rains, 2010). Recent studies find that inoculating people with facts against misinformation works for a highly politicised issue (global warming), regardless of prior attitudes (Cook et al., 2017; van der Linden et al., 2017). Studies applying inoculation theory to fake news find that inoculation has some effect in making participants more sceptical and more attuned to deception (Roozenbeek & van der Linden, 2019).

Nudges

Experiments have deployed nudges to make people more careful about what they circulate online. Theories suggest that ‘social norm’ nudges work by informing people how others behave, so triggering desire to conform; by reminding people what the norms are, thereby changing behaviour to avoid social sanctions from norm-breaking; and by indicating what the ‘best’ course of action is, so changing behaviour (Legros & Cislaghi, 2020). ‘Confront nudges’ try to pause unwanted actions by instilling doubt, attempting to break mindless behaviour and prompting reflective choices (Caraban et al., 2019).

Numerous nudging experiments have been conducted on social media to test whether nudges can reduce harms. For instance, Andı and Akesson (2021) designed a social norm-based message that nudges people towards better sharing behaviour. Their study placed the nudge above a thumbnail link to a false news article, reminding participants that false news is prevalent online and that most responsible people think twice before sharing news. Participants exposed to the nudge were 5% less likely to say that they were willing to share the article. Such nudges could form a firebreak in online emotional contagion. A ‘confront nudge’ that provides multiple viewpoints to overcome our confirmation bias is NewsCube: it collects different points of view and offers an unbiased, clustered overview in evenly distributed sections, while identifying unread sections, to nudge users to read all viewpoints (Park et al., 2009). Levy’s (2021) US-based field experiment conducted in 2018 (>17,000 participants) provides the first experimental evidence that exposure to counter-attitudinal news on Facebook decreases affective polarisation, demonstrating that nudges diversifying social media news exposure could be effective.
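To make this mechanism concrete, the minimal Python sketch below imagines how a social-norm nudge of the kind tested by Andı and Akesson (2021) might be wired into a sharing flow. The flagging signal, message wording and function names are illustrative assumptions rather than a description of any platform’s actual code.

```python
# A minimal, hypothetical sketch of a social-norm nudge in a sharing flow.
# The flagging signal, message wording and function names are illustrative
# assumptions, not a description of any platform's actual implementation.

SOCIAL_NORM_MESSAGE = (
    "False news is widespread online. "
    "Most responsible people think twice before sharing news."
)


def is_flagged_as_questionable(article_url, flagged_urls):
    """Stand-in for whatever signal (fact-check feed, classifier score)
    a platform might use to decide when to display the nudge."""
    return article_url in flagged_urls


def render_share_card(article_url, flagged_urls):
    """Return the elements of a share card, placing the social-norm
    reminder above the article thumbnail when the article is flagged."""
    card = []
    if is_flagged_as_questionable(article_url, flagged_urls):
        card.append("NUDGE: " + SOCIAL_NORM_MESSAGE)
    card.append("THUMBNAIL: " + article_url)
    card.append("BUTTON: Share")
    return card


if __name__ == "__main__":
    flagged = {"https://example.com/dubious-story"}
    for element in render_share_card("https://example.com/dubious-story", flagged):
        print(element)
```

In this sketch the reminder appears only when an article is flagged and is rendered above the article thumbnail, mirroring the placement used in the experiment; whether such a nudge sustains its modest effect at scale is, as noted below, an open question.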

However, researchers increasingly note the inability of behaviour change technologies to sustain user engagement. Furthermore, few studies examine the long-term effects of nudging, and most do not examine possible backfiring and unexpected effects. Reasons why nudges fail include that techniques tapping into the automatic mind lack any educational effect, and hence their impact may cease when nudges are removed; that reminders might cause reactance after repeated exposure; and that graphic warnings can lose resonance over time (Caraban et al., 2019).

Reason and Emotion

Scholarship suggests a positive role for reasoning in resisting false information. Ross et al. (2021) conducted two studies asking 1973 Americans to assess true, false and hyperpartisan news headlines from Facebook. They find that analytical thinking was mostly associated with an increased tendency to distinguish true headlines from false and hyperpartisan headlines, and that analytical thinking was not generally associated with increased willingness to share hyperpartisan or false headlines. Pennycook and Rand’s (2019) study of 3446 Mechanical Turk workers concludes that analytical thinking is used to assess the plausibility of headlines, regardless of whether stories are consistent with one’s political ideology. Their findings suggest that susceptibility to fake news is driven more by lazy thinking than by partisan bias. As such, training people to think more analytically, or nudging them to pause for an analytical breath, could be fruitful.

Also important is the need to educate people on the power of emotive content to manipulate, as well as on the power of emotion when deployed in AI-driven behavioural prediction models that can be used for influence (McNamee, 2019, p. 260). Wardle and Derakhshan (2017, p. 70) argue that any media literacy curriculum should include techniques for developing emotional scepticism to override our brain’s tendency to be less critical of content that provokes our emotions. Of course, this may fail where disinformation is crafted to provoke more subtle emotional responses. Indeed, as Bennett and Livingston (2020) observe, recommendations that focus on educating people about detecting false information avoid the question of why so many people easily exchange facts for deeper emotional truths. Sociologically informed research, for instance, suggests that sharing fake news might be an expression of group identity or of dissatisfaction with the current political system. As such, it is important for educators to address the impact of past disinformation campaigns, as well as current inequalities, on people’s willingness to believe falsehoods (Nisbet & Kamenchuk, 2019).

Conclusion

In assessing seven solution areas to false information online, we conclude that each has an important role in strengthening the civic body, but also faces intrinsic challenges. Some solutions trample on human rights; others come up against the limits of technological fixes; and others are stymied by commercial imperatives, lack of political will, or the complexity of our interactions with false information online. The unrelenting scale, speed and spread of false information online; the unpreparedness of automation to detect and address all false information online; the lack of transparency and ethics in digital political advertising and wider strategic communications in the political sphere; the unhealthy media ecology dominated by global digital platforms, decreasing trust in news and under-resourced fact-checking and journalism; and the practical, psychological and sociological limits to increasing people’s digital literacy truly make this a ‘wicked’ problem.

The first solution area, governmental action, varies from non-coercive to coercive responses. Supranational, and many national, declarations urge better self-regulation of platforms, but more coercive responses include arrests, Internet shutdowns, legislation on false information online that stifles dissenting views, targeted legislation to protect key moments for the civic body such as elections, and broader legislation and actions to make dominant big technology platforms more responsible for the content that they host and to curb their monopoly power. Many of the coercive responses contravene the human right to freedom of speech, are often abused by authoritarian states and require significant resourcing for compliance. However, non-coercive responses have not solved the problem either.

The second solution area, cybersecurity, involves countries and supranational networks (such as the European Union) actively monitoring and combating foreign disinformation campaigns; social media platforms detecting and removing disinformation content and networks; and multi-stakeholder approaches to develop appropriate technology such as automated recognition of deceptive media forms. However, cybersecurity responses are uneven worldwide; and there are many methodological and practical problems with using AI for fake news and deepfake detection.

The third solution area, digital platforms and intermediaries, has found globally dominant platforms signing up to self-regulatory approaches with multiple commitments, but it is efforts by platforms around content moderation that have attracted sustained criticism. As well as freedom of speech issues (a right unevenly enforced worldwide), content moderation raises the issue of lack of transparency about what content has been removed or promoted, and why; and there is inconsistency as platforms’ policies differ in application regarding what content is removed or promoted, their stance changing over time. A related problem is that platforms perform poorly at enforcing intermediary liability laws consistently at scale. Meanwhile, the media ecology enables those censored on one platform to simply move onto others. There are also intrinsic technical difficulties of content moderation on any platform, but especially in end-to-end encrypted systems, as it requires skill and resources to detect the nature of posts. While the forthcoming European Union Digital Services Act has demonstrated legislative will to make platforms explain their content moderation policies, practices and decisions more clearly, it is too soon to know if sufficient resources are being ringfenced to ensure compliance, whether platform lobbying will dilute the law, or whether similar legislation will be passed outside of the European Union.

The fourth solution area, advertising, has seen dominant digital platforms, ad networks, programmatic companies and non-profit organisations acting to disrupt business models for producing and amplifying disinformation. However, their activities continue to be challenged by the volume and speed of the supply chain for fake news outlets. The end of the cookie-based behavioural advertising market looms as GDPR takes effect in media markets throughout the European Union, but what will replace it, and with what impact on the civic body, remains unclear.

The fifth solution area, professional persuaders and public relations in the political domain, finds broad stakeholder agreement and legislative activity in certain regions (such as the European Union) on the need to greatly increase transparency of online political ads in terms of who purchased them, to whom they are targeted, and on what basis, and to enable advertisers to be held accountable. However, such legislation is needed in every country where digital political campaigning occurs. Ad libraries remain under the control of the dominant platforms and, in their current form, are minimally useful for electoral regulators. Finding solutions to broader strategic communications (a self-regulated area) that disseminate disinformation worldwide has proven harder, given lack of transparency, absence of professional ethics and localised conditions that entice creative professionals to engage with paid troll work. Diverse countries recommend greater transparency, self-regulation and regulation of strategic communications companies (including political marketing, digital marketing and the digital influencer industry).

The sixth solution area, media organisations, could play a vital role in combating online disinformation campaigns, but this requires a healthy media ecology. This in turn requires restoration of competitive balance, with suggestions ranging from breaking up dominant digital platforms to making them redistribute more of their advertising revenue back to media organisations. Such solutions, however, require uncompromising legislative intent and action by governments worldwide and also risk provoking dominant digital platforms to pivot away from news altogether (further damaging the revenue streams of news outlets). There are also proposed solutions to rebuild trust in journalism such as through newsroom policies on when to debunk false information online and greater journalistic transparency regarding news story construction. While such actions may help, empirical studies on efficacy are lacking. Declining trust in news is a long-standing, complex issue and unlikely to be solved any time soon given that news stories are a construct and that journalism remains beholden to long-standing political and commercial processes of manipulation and commodification. Ultimately, it would seem apposite to invest in independent public service broadcasting across the world, as it is such news outlets that currently garner greatest trust, but this would require large-scale investment. A dominant solution globally is promotion of fact-checking, but obstacles include resource-intensiveness and expense; that fact-checking itself is not immune to the influence of powerful actors; and that the efficacy of fact-checking may be minimal as those who most need to see the fact-checks do not.

The seventh solution area, education, has been embraced by many countries, which have adopted campaigns to improve their citizens’ digital literacy and awareness of online disinformation. Those deemed successful have carefully considered how best to reach the digitally illiterate, or operate in small, relatively homogeneous, progressive countries with a history of dealing with disinformation. However, while media literacy solutions may work when conducted under appropriate conditions attuned to local contexts, they may have only short-term effects and are unevenly rolled out worldwide. They also run into complex psychological and sociological issues of how and why people spread and remember false information. Scholarship shows limited impact on people’s beliefs from correcting false information (although inoculation can prove useful); the potential of nudging to make people more careful about what they circulate online (but such nudging may only have short-term effects); and various roles played by reason (training people to think or act analytically) and emotion (developing emotional awareness and scepticism towards content and algorithms). Fundamentally, however, literacy approaches alone cannot address why so many people easily exchange facts for deeper emotional truths. The task then broadens to educators (especially of history, sociology and communications) to address the impact of past disinformation, combined with present-day inequalities, on people’s current willingness to believe falsehoods.

Where does this leave us? Reducing the overall volume, and impacts, of false information circulating online would seem paramount. However, more than six years of intensive governance activity and multidisciplinary academic interest in tackling false information online have not yet fixed the problem. We conclude that the ultimate solution would be to alter the business models of platforms, so that they do not seek maximal user engagement and do not design algorithms that make emotional and deceptive content go viral (see Section I). In lieu of directly addressing the innate dynamics of informational capitalism and the economics of emotion, we are left to tinker at the edges with imperfect solutions. Ultimately, when set against business models that promote emotive, false information, any proposed solution faces an uphill task. As Chap. 2 explains, leaked Facebook documents show that Facebook’s News Feed ranking algorithm has prioritised emotional, engaging reactions, with posts sparking ‘Angry’, ‘Wow’ and ‘Haha’ Reaction emoji disproportionately likely to include misinformation, toxicity and low-quality news. The power of this algorithmic promotion undermined efforts by Facebook’s content moderators and integrity teams to reduce toxic, harmful content. Yet, Facebook has the power to address matters at source. In 2020, Facebook cut the weight of all Reactions to one and a half times that of a ‘Like’ and, in September 2020, cut the weight of the ‘Angry’ Reaction to zero. As a result, Facebook users began to get less misinformation, less ‘disturbing’ content and less ‘graphic violence’ (Merrill & Oremus, 2021, October 26). Twitter also has the power to reduce viral false information and occasionally does so to protect the civic body, as discussed in Chap. 4. For instance, in preparation for the 2020 US presidential elections, Twitter temporarily introduced friction to slow the spread of misleading information by reducing the overall amount of sharing on the platform (Gadde & Beykpour, 2020, November 12). Whether social media platforms will address their business models to permanently dampen false information online remains to be seen.
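As a purely illustrative aside, the short Python sketch below shows how re-weighting Reaction signals within an engagement score can change which posts rise to the top of a ranked feed. The ‘after’ weights mirror the reported 2020 changes described above; the ‘before’ weights use the widely reported earlier five-point weighting for emoji Reactions; and the scoring function itself is a hypothetical simplification, not Facebook’s actual ranking code.

```python
# A toy engagement score illustrating how re-weighting Reactions changes
# what rises in a ranked feed. The 'after' weights mirror the reported
# 2020 changes (all Reactions cut to 1.5x a 'Like'; 'Angry' cut to zero);
# the 'before' weights and the scoring function are illustrative
# simplifications, not Facebook's actual ranking code.

WEIGHTS_BEFORE = {"like": 1.0, "angry": 5.0, "wow": 5.0, "haha": 5.0}
WEIGHTS_AFTER = {"like": 1.0, "angry": 0.0, "wow": 1.5, "haha": 1.5}


def engagement_score(reactions, weights):
    """Weighted sum of reaction counts: a stand-in for one component of a
    feed-ranking score."""
    return sum(count * weights.get(kind, 0.0) for kind, count in reactions.items())


if __name__ == "__main__":
    outrage_post = {"like": 100, "angry": 400, "haha": 50}
    calm_post = {"like": 600, "wow": 20}
    for name, post in (("outrage_post", outrage_post), ("calm_post", calm_post)):
        print(name,
              "before:", engagement_score(post, WEIGHTS_BEFORE),
              "after:", engagement_score(post, WEIGHTS_AFTER))
```

Under the earlier weights the outrage-heavy post outranks the calmer one; under the later weights the ordering reverses, which is consistent with the reported drop in misinformation once ‘Angry’ was zero-weighted.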

Eager to prevent regulation along these lines, globally dominant digital platforms regularly point to their many mitigation efforts and to the good that their platforms enable, including the large amount of money and creativity that their presence creates in countries. As such, a redesign of algorithms to make platforms less engaging is unlikely to happen without either (a) a mass exodus of users (which is unlikely given how strongly imbricated the dominant digital platforms are into people’s daily lives) or (b) strong governmental and coordinated intergovernmental intervention to regulate algorithms that promote emotive, false information (care would be needed not to sacrifice the benefits of free speech). At stake is whether it is acceptable for globally dominant digital platforms to be deciding, ultimately, what is optimal, optimisable, or optimised in a public sphere shaped by datafied emotion, given the many harms to the civic body that we have identified.

Importantly, false information online has been incubated to date by globally dominant digital and social media platforms. But they are just the currently most prevalent use case of emotional profiling, with many more emergent forms of emotional AI being trialled and rolled out globally. As such, we need to consider near-horizon possible futures and formulate principles to strengthen the civic body when faced with the rising tide of emotional AI. It is to this task that we turn in the following, and final, chapter.