Introduction

Emotion plays a vital role in modern societies, especially given the circulation of false information, spread both knowingly and unwittingly. This book assesses how this has come to be, how we should understand it, why it matters, what comes next and what we should do about it. We start with three observations.

Firstly, false information is prevalent online and causes real-world civic harms. Multiple concepts associated with false information achieved linguistic prominence across the early twenty-first century, indicating the scale of the problem. In 2006, ‘truthiness’ was Merriam-Webster’s Word of the Year: it refers to ‘a truthful or seemingly truthful quality that is claimed for something not because of supporting facts or evidence but because of a feeling that it is true or a desire for it to be true’ (Merriam-Webster, 2020). A decade later, ‘post-truth’ became Oxford Dictionaries’ Word of the Year, defined as ‘relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief’ (Oxford Dictionaries, 2016). By 2017, a year after Donald Trump became US president, ‘fake news’ was the Collins English Dictionary’s Word of the Year, defined as ‘false, often sensational, information disseminated under the guise of news reporting’ (Collins English Dictionary, 2017). In 2018, ‘misinformation’ was Dictionary.com’s Word of the Year, defined as ‘false information that is spread, regardless of whether there is intent to mislead’ (Dictionary.com, 2018). This linguistic infiltration indicates the prominence of false information in recent years, as well as widespread public concern.

Such concern built across the second and third decades of the twenty-first century. By 2013, digital misinformation was so pervasive in social media that the World Economic Forum listed it as a major societal threat. As the COVID-19 pandemic broke out in 2020, the World Health Organization (2020) expressed concerns about an ‘infodemic’, namely, ‘too much information including false or misleading information in digital and physical environments during a disease outbreak’ causing ‘confusion and risk-taking behaviours that can harm health’ and ‘mistrust in health authorities’. COVID-19 conspiracy theories led to preventable deaths as many refused to be vaccinated against this highly infectious, novel disease. Yet, we should resist the idea that false information is something that only other people believe. Although some viral claims were outlandish (such as the theory that Bill Gates, co-founder of Microsoft, masterminded the pandemic to implant microchips into humans alongside the vaccine), others were conceivable, though false (such as the vaccine’s potential impact on fertility). Given limited understanding, persuasive and professional-looking ‘news’, and the lingering ‘what if’ question, people reached their judgements. Philosophically, we take the view that although facts certainly exist, people mostly do not passively observe the world. Rather, we are coping and adapting, and while this involves rational decision-making, judgements are rightly informed by emotions.

Indeed, our second observation is that emotion is fundamental to civic life. Governments seek to influence their population’s behaviour using insights from behavioural economics and cognitive psychology into the role of emotions in decision-making, for instance, to encourage compliance with COVID-19 biosecurity rules. Political parties and campaign groups use civic emotions to invigorate democratic life, drive voter mobilisation and nudge opinion formation. Direct action and protest against polluting corporations run on hope for change, channelling anger to shame decision-makers into addressing the climate emergency. Celebrations of hard-won civic rights invoke collective pride among marginalised communities while provoking others who feel unseen and left behind. Public efforts to help refugees are fuelled by pity and empathy, but equally, opposition to economic immigrants runs on suspicion and fear.

Our third observation is that profiling and optimisation of emotions using automated systems (AI and machine learning algorithms) are escalating features of daily life. Such systems, present in social media and elsewhere, record and analyse people’s psychological and behavioural characteristics. They do this to profile, label, classify and judge people, largely for the purposes of refining, targeting, boosting and otherwise optimising the messaging that people are exposed to. This, today, is mostly automated and algorithmic, involving sets of instructions for computers to label, classify and inform how a system will ‘decide’ (Kitchin, 2017). Traditionally, algorithms are rules predefined by human experts to perform a task and make sense of data, although today’s digital platforms use machine learning techniques to look for patterns in big data. Machine learning is supervised when the computer is told what to look for, and unsupervised when it is left to inductively create patterns and labels; digital platforms use both approaches to process vast amounts of information about human subjectivity and to offer personalised services, content and advertisements (hereafter, ads) (van Dijck et al., 2018). Increasingly, the datafied emotions of consumers, users and citizens are also algorithmically gauged and profiled by private companies and governments worldwide for diverse purposes, including to persuade, influence and monetise.
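To make the supervised/unsupervised distinction concrete, the following minimal sketch (ours, in Python using the open-source scikit-learn library; the example posts and labels are invented) contrasts the two approaches on toy text data. It illustrates the principle only and does not depict any platform’s actual pipeline.

```python
# Minimal sketch contrasting supervised and unsupervised machine learning
# on toy emotional text data. Illustrative only: platforms operate at vast
# scale on behavioural and biometric signals, not on six example posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

posts = [
    "I love this, what wonderful news!",
    "This makes me so happy today",
    "Absolutely furious about this decision",
    "This is outrageous and disgraceful",
    "So proud of our community",
    "Angry and disappointed, again",
]
labels = ["joy", "joy", "anger", "anger", "joy", "anger"]  # human-supplied

vec = TfidfVectorizer()
X = vec.fit_transform(posts)

# Supervised: the computer is told what to look for (the labels).
clf = LogisticRegression().fit(X, labels)
print(clf.predict(vec.transform(["what a happy day"])))  # likely ['joy']

# Unsupervised: the computer inductively groups the posts, unlabelled.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)  # two clusters that may echo the joy/anger split
```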

At the established end of the spectrum, this involves profiling the behaviour and emotions of social media users to increase their engagement with the platform and their value to online advertisers. As we discuss in Chap. 2, the world’s largest suite of social media platforms, Meta Platforms Inc. (called Facebook until 2021), comprising Facebook, WhatsApp, Instagram and Messenger, continuously tweaks its machine learning algorithms to maximise user engagement. These technologies have real-world consequences, as shown, for example, by internal Facebook documents leaked in 2021. These reveal how tweaks to its News Rank algorithm (which determines what posts users see in their News Feed) promoted politically and socially destructive, extremist, viral, false information. This, in turn, impacts real-world politics. Another leaked Facebook report states that political parties across the world felt that Facebook’s algorithmic change forced them into more extreme, negative policy positions and communications. Such algorithmic tweaks have global reach. Facebook had 2.9 billion monthly active users in 2021. Its parent company, Meta, claims 3.6 billion monthly active users in 2022 across Facebook, WhatsApp, Instagram and Messenger, reaching almost half of the world’s population.
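To illustrate how such a tweak can tilt what users see, here is a deliberately simplified, hypothetical engagement-weighted scoring function of our own devising. Leaked reporting suggested that reaction emoji were at one point weighted several times more than likes; the exact weights below are invented to show the mechanism and are not Facebook’s actual algorithm.

```python
# Toy engagement-weighted feed ranking. Weights are hypothetical, chosen
# only to show the mechanism: when emotive signals (reactions, comments,
# shares) outweigh quiet approval (likes), emotive posts rise to the top.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    reactions: int  # emoji reactions such as 'angry' or 'love'
    comments: int
    shares: int

WEIGHTS = {"likes": 1.0, "reactions": 5.0, "comments": 15.0, "shares": 30.0}

def engagement_score(p: Post) -> float:
    return (WEIGHTS["likes"] * p.likes
            + WEIGHTS["reactions"] * p.reactions
            + WEIGHTS["comments"] * p.comments
            + WEIGHTS["shares"] * p.shares)

feed = [
    Post("Calm local news item", likes=120, reactions=5, comments=3, shares=1),
    Post("Outrage-bait falsehood", likes=40, reactions=80, comments=60, shares=25),
]
for p in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(p)), p.text)
# The emotive post (2090) outranks the calm one (220) despite fewer likes.
```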

At the more emergent end of the spectrum, the profiling of behaviour and emotions involves a wide range of biometric profiling, variously trialled and deployed worldwide: to generate more persuasive ads and call centre interactions, more reactive voice assistants and toys, cars that compensate for drivers’ mental and emotional states, border guards that detect travellers’ deception, police that anticipate dangerous situations in crowds and teaching systems that monitor children’s concentration. In short, social life is becoming increasingly profiled in efforts to assess emotional and psychological disposition for the purposes of profiteering, persuasion, wellbeing, influence, resource allocation, social engineering, safety and security. The adequacy of the methods is more than a little debatable, but we see this as an ongoing development of a process that began several decades ago with experimentation with biosensors in affective computing and the rise of social media platforms. It is now spreading to encompass diverse biometric data capture across existing media and emergent services.

In this book we focus on how current emotional profiling fuels the spread of false information online, consider the implications of emergent emotion profiling and suggest what can be done about these developments. Straddling our three opening observations, there is now evidence that emotion profiling incubates false information online, causing significant harms worldwide. This chapter frames these developments in terms of a civic body increasingly affected by processes of optimised emotion.

Optimising Emotion

Emotions are powerful drivers of decision-making and behaviour. As such, there is commercial, political, ideological and discursive power in understanding and influencing emotions and collective feeling. This has long been known by advertisers seeking to influence consumer behaviour, political campaigners seeking power, governments seeking behavioural management of populations, journalists seeking to maximise readership, trade unions seeking solidarity and social movements seeking social change. Unsurprisingly, then, the formation and manipulation of irrational (emotional) publics has long been of concern across multiple vectors of interest, most obviously, the Frankfurt School’s mass society thesis (for instance, Marcuse (1991 [1964])) and public sphere theorisation and critiques (such as Habermas (1992 [1962]) and Calhoun (2010)).

Concomitantly, recent years have seen increasing datafication of emotion and the rise of emotional artificial intelligence (‘emotional AI’) and so-called empathic technologies. These technologies use machine training to read and react to human emotions and feeling through text, voice, computer vision and biometric sensing, thereby simulating understanding of affect, emotion and intention (McStay, 2018, 2022). The roots of such processes that convert emotional life into data lie in ‘affective computing’ (Picard, 1997) which, in the 1990s, measured biometric signals such as heart fluctuations, skin conductance, muscle tension, pupil dilation and facial muscles to assess how changes in the body relate to emotions. Today, the emergent social picture is one where datafied emotion is optimised to form a fundamental component of personalisation, communication and experience.
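By way of illustration only, the sketch below shows the basic affective-computing move: mapping raw physiological readings onto a coarse emotional inference. The signal names and thresholds are invented for this example, and the body-to-emotion leap itself is precisely what the methodological scepticism discussed below calls into question.

```python
# Illustrative-only sketch of the core affective-computing inference:
# estimating arousal from physiological signals. Thresholds are invented;
# real systems are more elaborate, and the inference itself is contested.

def infer_arousal(heart_rate_bpm: float, skin_conductance_us: float) -> str:
    """Crude arousal estimate from heart rate and electrodermal activity
    (skin conductance in microsiemens)."""
    score = 0
    if heart_rate_bpm > 90:        # elevated heart rate
        score += 1
    if skin_conductance_us > 8.0:  # elevated skin conductance
        score += 1
    return ["low", "medium", "high"][score]

print(infer_arousal(heart_rate_bpm=72, skin_conductance_us=4.2))   # low
print(infer_arousal(heart_rate_bpm=105, skin_conductance_us=11.3)) # high
```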

Only a few years ago, practical use cases of biometric emotional AI were rare, such as outdoor ads that, enabled by emotional AI, changed themselves over time to elicit more smiles from passers-by (captured by cameras embedded above the ad). Emotion optimisation is now becoming more mainstream, at least in certain consumer-facing sectors (McStay, 2018). For instance, major car manufacturers worldwide are deploying in-car cameras and affect and emotion tracking systems to profile drivers’ emotional behaviour, personalise the in-cabin experience and improve safety (McStay & Urquhart, 2022). Emotion-focused wearables are increasingly popular, with Amazon’s Halo (a fitness, mood and wellness tracker) and Garmin systems (tracking stress and the body’s ‘battery’) helping users manage their mental health and overall day (Dignan, 2020, December 14; Garmin, 2022). Legacy companies such as Unilever and IBM in the USA, and SoftBank in Japan, use emotional analytics for recruitment purposes (Richardson, 2020). Although there is extensive academic scepticism about the over-simplistic methodological approaches used by the emotional AI industry to translate biometric signals into emotional inferences (McStay, 2018, 2019), this has not prevented the trialling and development of biometric forms of emotional AI in even more sensitive domains, often in countries with very different data protection and privacy regimes. These include border security, policing (Wright, 2021), smart cities (McStay, 2018), education (Article 19, 2021; McStay, 2019) and children’s toys (McStay & Rosner, 2021).

However, mass datafication and optimisation of emotion was pioneered and honed by social media platforms, which we observe as the dominant use case of emotional AI worldwide today. This follows two decades of continuous development of their emotional profiling and targeting tools, premised on surveillance and sharing of users’ data for the purposes of modifying user behaviour and maximising user engagement with their platforms. We will return to this point in Chap. 2, but for now, we observe that many accounts link dominant social media platforms’ quest for greater user engagement to the viral spread of false information. Indeed, a survey in 2020 covering all five continents reveals that globally, people see social media as the biggest source of concern about misinformation (40%), well ahead of news sites (20%), messaging apps (14%) and search engines (10%). Overall, the greatest concern is with the world’s biggest social media platform, Facebook (29% are most concerned about Facebook), followed by Google-owned YouTube (6%) and Twitter (5%) (Newman et al., 2020). As such, it is social media that predominate throughout this book when discussing false information online. In the final chapter, we move beyond social media to embrace more emergent emotional AI forms as we delve into near-horizon futures and assess implications for civic bodies that are being profiled and optimised in increasingly novel ways. This entails use of AI technologies and organisational claims to see, read, listen, judge, classify and learn about emotional life through biometric and human state data (McStay, 2018).

Seen one way, ‘optimisation’ is the language of efficiency, making the best or most effective use of a situation or resource. Yet, when applied to datafied human emotion and civic functions (such as a public sphere of news, debate and information flow shaped by social media and search engines), ‘optimisation’ cannot fail to become something more political, more contentious. This is particularly so when set against critical understanding of Silicon Valley’s neoliberal, free market worldview and ‘technocapitalism’ where globalised, powerful corporations profit from intangibles such as new knowledge, intellectual property, research creativity and technological infrastructure (Suarez-Villa, 2012). Such critique raises classic questions of exploitation and choice: who decides, or has a say in, what is optimal, optimisable, or optimised in a public sphere shaped by datafied emotion? Furthermore, who benefits from these decisions, who is harmed and what is lost along the way? We hope, in this book, to provide some answers.

The ‘Civic Body’

We advance the notion of the civic body to capture the various ways by which datafied emotion is collected, processed and optimised, especially as it relates to information flowing between individuals and collectives. The civic body has an antecedent in the ‘body politic’, a principle originating with Plutarch, the Greek Middle Platonist philosopher who regarded the polity as akin to a body having a life (Rigby, 2012). This has been through many iterations, focusing on different aspects of the body. Head-oriented accounts of the body politic focus on the head of the body, be this monarchs, rulers or decision-makers, reflecting hierarchical conceptions. Other bodily metaphors focus on limbs (such as the long arm of the law) or organs (typically the heart, belly and bowel) (Musolff, 2010). Still other bodily metaphors conceptualise the polity as based on equilibrium and interdependence. Just as a body requires balance to ensure homeostasis, social harmony is required among key stakeholders for the wellbeing of the entire social and political organism.

Although the ‘body politic’ is a classical way of understanding the relative importance and ecological interactions of government, monarchs, police, military and other heads, limbs and organs of the polity, the ‘civic body’ (as advanced in this book) is a more citizen-oriented concept in drawing attention to the datafied emotion of individuals and collectives. The civic body is also a less metaphorical concept than the body politic. Quite literally, we deploy the concept of feeling-into and optimising the civic body as a way to account for the variety of modalities by which interested parties seek to understand not only citizens’ expressed and inferred preferences but increasingly their biometric correlates. This, for us, is a key change, in that the profiling of human behaviour increasingly involves information about the body, be this our faces, voices, or biometrics collected by body-worn technologies. This allies well with (but is not reliant on) biopolitical writing, crystallised in Rose’s definition of biopolitics as the capacity ‘to control, manage, engineer, reshape, and modulate the very vital capacities of human beings as living creatures’ (Rose, 2006, p. 3) and the general interest in integrating bodies into systems (Rabinow & Rose, 2006). We also recognise biopolitical interest in organisational proclivity towards life sensitivity and what biopower scholars phrase as the ‘molecular’ (Deleuze & Guattari, 2000 [1972]; Foucault, 1977; Lazzarato, 2014; McStay, 2018) that represents a shift from macro- to micro-interests of a biological sort in populations, subjectivity and governance thereof.

There are a variety of modalities by which interested parties use technologies to feel-into the civic body. Today, this is still typically done through polling, interviews and focus groups. For instance, since 2017, Microsoft has compiled a Digital Civility Index through an online survey of adults and teens across more than 20 countries and most continents to explore their perceptions of online incivility. It covers topics such as being ‘treated mean’, trolling, hate speech, online harassment, hoaxes, frauds, scams, discrimination and sexual solicitation. Its 2021 survey finds that the most civil country online is the Netherlands, followed by Germany, the UK, Canada and Singapore. The most uncivil country online is Colombia followed by Russia, Peru, Argentina, India and Brazil (Microsoft, 2021). Complementing such standard tools for feeling-into the civic body, emotional AI and wider empathic technologies are also deployed. Modalities may include (1) sentiment analysis, (2) psycho-physiological measures and (3) urban data, each discussed below.

The first modality, sentiment analysis, focuses on online language, emojis, images and video for evidence of moods, feelings and emotions regarding specific issues. Sentiment is often inferred through social media, and studies have been conducted on Twitter to measure life satisfaction in Turkey (Durahim & Coşkun, 2015); to create a ‘hate map’ to geolocate racism and intolerance across the USA (Stephens, 2013); and to measure and predict electoral outcomes (Ceron et al., 2017). Beyond social media sentiment analysis, music players such as Spotify sell data about specified user groups, providing insights on moods, psychology, preferences and triggers. ManTech, a US government contractor, has developed a model that uses open-source intelligence to predict and track foreign influence operations in any country, including covert actions by foreign governments to influence political sentiment or public discourse. Its main data source is the Google-supported Global Database of Events, Language and Tone (GDELT), which monitors the world’s broadcast, print and Web news in over 100 languages and identifies emotions, as well as people, locations, organisations, themes, sources, counts, quotes, images and events (Erwin, 2022, April 25).
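The simplest form of sentiment analysis is lexicon based: score a post by looking up its words and emojis in a dictionary of valence values. The sketch below uses a tiny invented lexicon for illustration; production tools rely on far larger lexicons or trained models, but the principle is the same.

```python
# Minimal lexicon-based sentiment scorer. The lexicon is a tiny invented
# sample; real tools use thousands of scored words, phrases and emojis.
LEXICON = {
    "love": 2.0, "great": 1.5, "happy": 1.5, "good": 1.0,
    "bad": -1.0, "hate": -2.0, "terrible": -1.5, "angry": -1.5,
    "😊": 1.5, "😡": -2.0,
}

def sentiment(post: str) -> float:
    """Mean valence of lexicon hits; above 0 is positive, below 0 negative."""
    tokens = post.lower().split()
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment("I love this great city 😊"))      # about 1.7 (positive)
print(sentiment("angry about terrible roads 😡"))  # about -1.7 (negative)
```

Aggregated over millions of geotagged posts, such scores are what allow researchers to map moods across regions or track sentiment around elections, as in the studies cited above.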

The second modality, psycho-physiological measures, entails more focused assessment of bodies themselves, such as laboratory-based tracking of facial expressions to gauge reactions when people are shown political ads. Other psycho-physiological means include wearable devices that sense various responses (such as skin conductivity, moisture and temperature; heart rate and rhythms; respiration rate; and brain activity). Voice analytics tries to parse not just what people say but how they say it, drawing on elements such as rate of speech, increases and decreases in pauses, and tone (McStay, 2018). Indeed, Amazon has long sought to have its ubiquitous Alexa profile users’ voice behaviour to gauge emotion, despite technical and methodological challenges.
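As a rough illustration of the paralinguistic features involved, the sketch below counts pauses in a toy amplitude envelope of a voice recording. The frame rate, thresholds and signal are all invented; real voice analytics pipelines extract many more features (pitch, jitter, spectral qualities) with far greater sophistication.

```python
# Illustrative sketch of one paralinguistic feature: pause detection in
# an amplitude envelope. All values below are invented toy data.
import numpy as np

def pause_stats(envelope, frame_rate=100, threshold=0.05, min_pause_s=0.3):
    """Count silences longer than min_pause_s in an amplitude envelope."""
    silent = envelope < threshold
    pauses, run = [], 0
    for s in silent:
        if s:
            run += 1
        else:
            if run >= min_pause_s * frame_rate:
                pauses.append(run / frame_rate)
            run = 0
    if run >= min_pause_s * frame_rate:  # trailing pause
        pauses.append(run / frame_rate)
    return {"pause_count": len(pauses),
            "mean_pause_s": sum(pauses) / len(pauses) if pauses else 0.0}

envelope = np.concatenate([
    np.full(150, 0.30),  # 1.5 s of speech
    np.full(60, 0.01),   # 0.6 s pause
    np.full(200, 0.25),  # 2.0 s of speech
])
print(pause_stats(envelope))  # {'pause_count': 1, 'mean_pause_s': 0.6}
```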

Finally, the third modality, urban data, moves beyond sentiment analysis and laboratory settings, to feel-into ‘living labs’ (Alavi et al., 2020). Expanding beyond laboratory walls, this involves research, analysis and surveillance in more open settings, for example, a city or a larger polity. This harnesses insights gathered from traditional research techniques, online media and laboratory-based response analysis, but also feels-into the civic body through a vast array of public and private means. These include footfall, transport usage, data from mobile phones, spending patterns, urban cameras (that may register numbers, identities and expressions of emotion), health data and citizen complaints. This involves ‘big data’ logics to identify patterns in massive volumes of unstructured data from multiple sources and react quickly (Ceron et al., 2017). However, feeling-into the civic body goes further than this: quantity is used to help deal with political ‘why’ questions.
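To show in the crudest terms what fusing such heterogeneous signals might look like, here is a purely hypothetical sketch that combines several normalised urban readings into a single composite index. Signal names, ranges and weights are all invented; the sketch gestures at the ‘big data’ logic rather than depicting any deployed system.

```python
# Purely hypothetical sketch of 'feeling-into' a city by fusing datafied
# signals into one composite index. Names, ranges and weights are invented.
def normalise(value, lo, hi):
    """Scale a reading to the 0..1 range given its expected bounds."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def civic_strain_index(footfall, transport_delays_min,
                       negative_sentiment_share, complaints_per_1000):
    """Higher = more strain on the civic mood (hypothetical weighting)."""
    return round(
        0.2 * (1 - normalise(footfall, 0, 100_000))     # unusually quiet streets
        + 0.3 * normalise(transport_delays_min, 0, 60)  # disrupted transport
        + 0.3 * negative_sentiment_share                # online negativity, 0..1
        + 0.2 * normalise(complaints_per_1000, 0, 50),  # citizen complaints
        2)

print(civic_strain_index(footfall=42_000, transport_delays_min=25,
                         negative_sentiment_share=0.4, complaints_per_1000=12))
# -> 0.41 on a 0..1 scale
```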

We posit that a healthy civic body requires a healthy media system. Unfortunately, so far, the datafication and optimisation of individual and civic emotion by digital platforms has helped incubate and amplify an ecology of false information throughout the civic body.

Incubating False Information in the Civic Body

There are many processes that fuel the ecology of false information among ‘networked publics’ (namely, publics restructured by networked technologies) (Boyd, 2010). These include epistemological processes (such as the rise of ‘post-truth’); cultural processes (such as the decline of trust in political elites, experts and journalists); political processes (that fan emotion, such as nationalism and populism); economic processes (such as increasing competitive pressures on news outlets, generating tendencies to produce ever more engaging content); regulatory processes (such as uneven data privacy protections); and media and technological processes (such as the global rise of social media platforms and their interplay with legacy and alternative media systems). The view from Central and Latin America, for example, finds many of these processes underpinning widespread disinformation in elections held in 2018 in Brazil, Colombia and Mexico and in 2011 and 2015 in Guatemala. According to reports by the news outlet The Intercept and by the Atlantic Council (a non-partisan think tank that seeks to galvanise American leadership to address global challenges), these processes include structural political corruption, mistrust in politicians and a desire for change; economic downturns and unemployment; an absence of data protection cultures; political challengers adept at using social media; and features within social media platforms that enabled virality (for instance, in Brazil, each WhatsApp user could create up to 9999 groups, each with up to 256 people, and could forward a message to 20 contacts simultaneously) (Bandeira & Braga, 2019; Bandeira et al., 2019; Currier & Mackey, 2018, April 7).

Mindful of these broader processes, we focus on the digital media and technological element. This is centrally important, as false information online is greatly facilitated by the affordances of digital networked environments, namely, what these technological systems enable to happen and how they are used in particular contexts (Rice et al., 2017), and the sifting, sorting and judging processes therein. With few resources, entirely fake news websites hosting totally made-up stories can be created and made to look like genuine news outlets. Digital manipulation tools can ever more easily be bent towards changing images and video (via deepfakes and shallowfakes), thereby deepening the rupture between recorded image and reality (a problem long discussed by Media Studies). These deceptive messages can have a long shelf life. As well as faking content, identities can be easily disguised online (the phenomenon of ‘sock puppets’). These deceptive accounts and messages can be made to appear popular through amplification by campaigners and bots, and by targeting influential humans to manipulate online conversation. Indeed, in 2017, Facebook estimated that 2–3% of its worldwide monthly active users were ‘user-misclassified and undesirable accounts’, with a far higher percentage in developing markets such as India, Indonesia and the Philippines (Facebook, 2017, September 20).

People are concerned about false information online, as repeatedly shown in global surveys. Annual surveys by the University of Oxford’s Reuters Institute (funded by the Thomson Reuters Foundation, Google, Facebook and other donors), conducted across scores of countries and all five continents from 2018 to 2022, find that over half (around 54% to 58%) of respondents are concerned about what is real and fake online (Newman et al., 2018, 2021, 2022). There are large variations between regions. In 2021, there was most concern in Africa (74%), followed by Latin America (65%), North America (63%) and Asia (59%), with the lowest concern in Europe (54%) (Newman et al., 2021). A survey of 27 countries in 2020 finds that only 29% agree that they have ‘good information hygiene’: engaging with news, avoiding echo chambers, verifying information and not amplifying unvetted information (Edelman, 2021).

Alongside national differences in concern over false information, there are demographic differences. Surveys representative of the digital populations of Argentina, Chile and Spain across 2018 and 2019 show that concern increases with age and is higher among women, self-identified left-leaning users and those with a high interest in political news (Rodríguez-Virgili et al., 2021). Age is also a factor in a 2018 survey of 28 European Union Member States, with older respondents less confident than other age groups in their ability to identify fake news (Eurobarometer, 2018).

Regions with the highest levels of concern over false information online (Africa, Latin America) correspond closely with high levels of use of social media for news. Different platforms also engender different levels of concern worldwide. Globally, the greatest concern is with Meta-owned Facebook (identified by 29% in 2020): this is unsurprising given that it is the most used social network worldwide. In parts of the Global South, such as Brazil, Chile, Mexico, Malaysia and Singapore, people are more concerned about closed messaging apps like (Meta-owned) WhatsApp, where false information is less visible and harder to counter: for instance, 35% are concerned in Brazil. Twitter is seen as the biggest problem in Japan and YouTube in South Korea (Newman et al., 2020, 2021, 2022). Even in China, where the communication environment is tightly controlled and American digital platforms are absent, people are concerned about fake news. This is especially so on social networking site Sina Weibo (Twitter’s equivalent in China, with over 307 million monthly active users in 2021) and on Tencent’s popular messaging app WeChat (with over 1 billion monthly active users in 2021). A 2018 survey finds that about seven in ten respondents believe that fake news poses ‘a great deal’ or ‘a fair amount’ of threat to Chinese society and 12% think that ‘most’ of the news on social media is made up (Tang et al., 2021).

Not all types of false information are of equal concern. Surveys from Argentina, Chile and Spain across 2018 and 2019 show that participants are most concerned by stories where facts are twisted to push a particular agenda, followed by those completely made up for political or commercial reasons. They are least concerned by poor journalism (factual mistakes, dumbed-down stories, misleading headlines and clickbait) (Rodríguez-Virgili et al., 2021). While there remains widespread media coverage of attempts by outside powers to undermine elections abroad, Newman et al.’s (2020) survey across 40 countries finds that domestic politicians are seen as by far the most responsible for false and misleading information online (40%), followed by political activists (14%), journalists (13%) and ordinary people (13%), with only 10% most concerned about foreign governments. In some countries the figure holding domestic politicians most responsible for false information is even higher (for instance, in Brazil, the Philippines, South Africa and the USA) (Newman et al., 2020, p. 18).

Recognition of the harms that false information can inflict on the civic body has generated a search for solutions at national and supranational levels. A global response is necessary given the economic power, political value and transnational nature of dominant digital platforms. Especially where platforms operate in jurisdictions without data protection legislation, corporations may decide what data is collected, who can access and use the data, and why. This is not simply a privacy issue, but one of fairness and justice. For instance, in 2020, Facebook blocked an international investigation into the use of hate speech on its platform to incite genocide against Rohingya Muslims in Myanmar in 2018 (Smith, 2020, August 18). This obstructs ‘data justice’, a term advanced by Taylor (2017) to advocate fairness in how people are made visible, represented and treated arising from their production of digital data.

Optimising Society and Subjectivity

Despite such concerns, optimising emotional data could be a force for civic good. Journalists have long appreciated the need to emotionally engage audiences: worthy stories that go unread have little value. Consider, too, the increase in engagement, mobilisation and togetherness across civic practices, where citizens care enough to go out and vote, or where they reach out to each other in solidarity and empathy. In a situation where data about emotion is increasingly ubiquitous, powerholders may seek to feel-into localised emotions and civic moods to form their policies and to help govern and better care for their wards. Already, since 2011, the UK’s Office for National Statistics has tracked national and local authority-level average ratings of life satisfaction, happiness, anxiety and whether things feel worthwhile. For instance, it finds that in the build-up to the first national COVID-19 lockdown in March 2020, average anxiety jumped to its highest level since measurements began, and average happiness levels declined steeply (Office for National Statistics, 2020). More finely grained emotional data would prove useful to governments seeking to model and manage population behaviour at aggregate and localised levels, especially during upheavals like pandemics.

Yet, there is clearly scope for harms. These include the rise of ‘empathically-optimised automated fake news’ that exploits users’ outrage, tribalism and preconceived ideas (Bakir & McStay, 2018, 2020); computational propaganda that attempts manipulation of public opinion through an assemblage of social media platforms, autonomous agents and big data (Woolley & Howard, 2018); information warfare that provokes intense anxiety among targeted populations (Bolton, 2021); and political campaigning that profiles how we secretly feel in order to push anti-social emotional buttons (such as resentment towards specific groups) (Bakir, 2020).

For better or worse, the psychological and emotional behaviour of individuals and groups is increasingly quantified and datafied for the purposes of monetisation and influence (McStay, 2018). These processes are opaque but draw upon influential ‘behavioural sciences’ (Thaler & Sunstein, 2008) that downplay rationality in favour of a neo-behaviourist outlook. Furthermore, users are kept from seeing much of how their behaviour is monetised and shaped, whether by hidden trackers and behavioural advertising pixels that follow them around the Web, or by the secret processes that determine content moderation on platforms (Gorwa & Ash, 2020). While all these developments are observable, their impacts on human subjectivity and autonomy are more debatable.

Following a long tradition in Media and Cultural Studies that laments the loss of human agency, creativity and ability to think for oneself in the face of commercial or propagandistic mass communications (as in mass society studies), critical and biopolitically inclined scholars similarly object that neo-behaviourism and seeing people in psycho-physiological terms disregards (or denies) agency and civic autonomy. For instance, Andrejevic (2020, p. 2) highlights the peril of ‘automated media’ creating an ‘automated subject’ whose wants and needs have been anticipated and whose anti-social desires pre-empted, thereby diminishing the subject, politics and citizenship. After all, we are not the sum of our past preferences but engage in dialogue and community to reach collective decisions and to cultivate ‘a willingness to adopt the perspective of others’ (Andrejevic, 2020, p. 19). On a similar theme, Zuboff (2015, 2019) argues that ‘surveillance capitalism’ (exemplified by Google’s AdWords) uses personal data to target consumers more precisely, thereby exploiting and controlling human nature and damaging the social fabric. For Zuboff (2015, p. 86), this replaces the ‘rule of law and the necessity of social trust as the basis for human communities with a new life-world of rewards and punishments, stimulus and response’. Also decrying loss of human autonomy, Couldry and Mejias (2019, p. 346) advance the notion of ‘data colonialism’ as a commercially motivated form of data extraction that advances particular economic and governance interests. They argue that we must protect ‘the integrity of the self as the entity that can make and reflect on choices in a complex world’; that this is ‘essential to all Western liberal notions of freedom’ (p. 345); and that it ‘cannot be traded away without endangering the basic conditions of human autonomy’ (p. 345). Bösel (2020) argues that the blackboxing of media-assisted, automatic affect regulation of individuals and populations might lead to serious disempowerment of moral and political subjects.

Although multiple critics decry the attack on human subjectivity, agency and autonomy, a cautionary note is needed when discussing impacts of any media text, system or technology on our beliefs, thoughts and actions. Historically, when new media technologies emerge, so do dystopian worries about their harms, alongside a desire to understand how to harness the new medium for social engineering. This was the case with the emergence of printing, radio, film, television, video games, the Internet and social media. Subsequent empirical studies tend to find that audience impacts are less pronounced, more difficult to interpret and more varied, with active rather than passive audiences, some of whom resist and reappropriate content rather than succumb to manipulation (Livingstone, 1998). Certainly, the empirical reality concerning false information online is messy. For instance, a representative online survey of Spanish adults finds (a) active users who are concerned about false news, are more aware of difficulties detecting it, and so make more effort to check news veracity; and (b) confident, passive users who feel less concerned about false news, view it as less difficult to detect, and so verify content less (Almenar et al., 2021). We acknowledge the long tradition of media research that engages with uses and gratifications and the politics of pleasure, finding active, oppositional and interactive audiences, as well as modes of resistance often mobilised by personal experience, and cultural differences and competences (for instance, Ang (1996), Morley (1992)). Indeed, agency can be evident even in encounters with algorithmic systems (Savolainen & Ruckenstein, 2022; Velkova & Kaun, 2021). Ultimately, however, people (or audiences or users) do not get to design or set the rules on how such systems work, including technological systems of emotional optimisation. Furthermore, these algorithmic systems are abstract, opaque, personalised and of recent provenance, making it harder for people to develop an awareness of how they operate and whether they can be resisted or gamed.

Rather than starting from highly critical perspectives (as, for instance, adopted by those from the biopolitical or mass society camps), this book seeks out empirical insights and patterns to diagnose and evaluate the harms to the civic body from false information online. Most of this book focuses on false information incubated by digital platforms and especially social media (the globally dominant use case of emotional AI today). We appreciate that this is just part of the wider media system and that our focus neglects other areas such as the role of monopolistic, commercial legacy media systems and state-captured media systems in disseminating false and distorted information, but this is well-trodden ground in the political economy of media studies (McChesney, 2008; Túñez-López et al., 2022).

Throughout this book, we document strong currents seeking to optimise human emotion on behalf of platforms and influencers. Mindful to be even-handed rather than alarmist, we have sought out user-based studies to understand to what degree people’s agency is undermined across three component areas: false information, emotional information and microtargeting. Unfortunately, these studies show that most people are bad at recognising deception, especially in novel digital media forms (see Chap. 4), that emotions spread virally online and that (some) people prefer news that bolsters their own worldview (see Chap. 5). The area least settled is microtargeting (see Chap. 6): while this practice is on the rise, more user-based studies in this area are warranted.

One may rightly ask, who benefits from such emotional optimisation? As we will develop (especially in Chap. 2), it is the globally dominant digital platforms who ultimately profit from algorithmic optimisation of emotions, as this is driven by their business model that maximises user engagement. Justifiable and unjustifiable anger are fuelled by the algorithms, but so are joy, sadness and other emotions. As emotions drive user engagement, platforms can profit from any of these emotions; hence they are clear beneficiaries of this socio-technological arrangement. By contrast, the benefits to individuals and societies are mixed, affected by multiple contexts and accompanied by harms (for instance, proliferation of extremism, hate speech and false information online).

In seeking empirical insights into the causes and social consequences of globally dominant forms of emotional AI, we form a robust empirical base, from which we divine what may arise from more emergent forms of emotional AI. With interest in mediated emotion and datafied behaviour on the increase in biometric and in-the-wild contexts, we are interested in the horizon line of emotionalised civic bodies. Ultimately, what should be done to socially prepare ourselves for the impact on the civic body of automated profiling, emotional AI and applications that simulate properties of empathy? We conclude by aligning with AI expert Stuart Russell who observes the pressing need to protect our ‘mental integrity’ from the global rise of AI and its profiling and predictive capacities (McStay, 2022; Russell, 2021). We argue that human mental integrity is not something to be lightly tossed aside in a technological, commercial, political or bureaucratic quest for something better, more efficient and optimised.

Aims, Approach and Argument

This is not a pessimistic book but one written in exceptional circumstances. People’s growing concerns about false information online across the past decade have been taken up by governments and transnational bodies, producing political inquiries into, and legislation concerning, online fake news and disinformation. The COVID-19 pandemic also underscores the harmful impacts of widespread false information. Indeed, as the book progresses, we will suggest routes and means to address the problems we diagnose.

Given the rising tide of optimised emotion fuelling networked false information, we have six core aims:

  1. To understand the significance of societal level profiling of human emotion through digital means.

  2. To understand how the media and technological environment is (and will be) constructed to incubate false information, affect, emotions and user profiling and targeting.

  3. To understand the economic and political incentives that drive the emotional profiling of society.

  4. To understand the implications of societal profiling of emotion in a range of different societies and political arrangements.

  5. To evaluate multi-stakeholder solutions to false information online proffered by supranational bodies, various sectors of society and multiple academic disciplines.

  6. To generate principles necessary to strengthen the civic body while also looking forward to near-horizon futures.

Our approach draws on a multidisciplinary literature covering contemporary misinformation, disinformation, digital marketing, digital advertising and emotional analytics. This scholarship is rooted in Communication Studies, embracing the disciplines of Advertising, Economics, History, Information Science, International Relations, Journalism, Law, Marketing, Media, Philosophy, Politics, Psychology, Public Relations, Science and Technology Studies and Sociology. Throughout, we have focused on studies with implications for the flow of false information throughout the civic body. There is growing evidence on the extent of false information on specific platforms and wider media, the techniques and pathways for its creation and spread, and how it may be tackled. There are also studies on its impacts, most of which detail behavioural impacts on platforms, national levels of concern and smaller-scale experiments into communication processes around false information.

As well as these academic studies, we draw on reports from national and supranational governmental bodies, regulators, non-governmental organisations, digital platforms, technology companies, think tanks (variously claiming to be independent, non-partisan, security focused, policy solutions oriented or technology based), research institutes, cybersecurity organisations, fact-checkers, journalists and, occasionally, bloggers. As the topic of this book is false information, it is pertinent to flag that some of these sources are focused on revealing and solving specific types of disinformation (for instance, emanating from some countries rather than others or deemed problematic for ‘important’ topics or groups of people). Even in countries such as the USA, where there is a considerable research effort into understanding disinformation, evidence is lacking on whether disinformation is about, or targeted at, people based on categories such as race and gender and whether it is effective (Thakur & Hankerson, 2021). Consequently, there are fewer finer-grained demographic insights into the phenomenon of false information online in this book. This would be important to address in future studies, as disinformation campaigns often rely on exploiting existing or historical narratives of discrimination to build credibility for the falsehoods being shared. Also, while we have cast our net widely geographically, some countries are well represented in terms of empirical studies, while others are lacking. This would also be important to address in future studies, as social media are global phenomena; as ‘digital divides’ are rapidly being breached in many countries, but digital literacies have not kept pace; and as digital means of engaging in information warfare and electoral influence can encompass any country. Notwithstanding these empirical blind spots, we have aimed at a global approach that includes, but also looks beyond, the (comparatively) well-trodden ground of the USA.

As this book is primarily empirically based, most studies are on established use cases of optimised emotion, namely, social media and search engines, but we consider more emergent forms of emotional AI where there are supporting empirical studies. This includes substantive insights and trends emerging from the Emotional AI lab, where we are tracking cross-cultural developments in the fast-moving area of false information, datafied emotion and technological change. We flag here the difficulty of researching this emerging sector, not least because algorithms and datasets of the emotional AI industry remain largely off-limits to independent researchers, echoing the stance of dominant social media platforms.

The book has two parts. Part I (this chapter and Chaps. 2, 3, 4, 5 and 6) provides conceptual tools and contextual knowledge to understand the nature of false information online worldwide. We have devoted this chapter to introducing the metaphor of the civic body, to highlight the interconnectedness of bodies (individual and societal) and data about emotions. We also introduced the notion that this is leading to efforts to optimise emotions, raising classic questions of exploitation and choice. Namely, who decides, or has a say in, what is optimal, optimisable, or optimised in a public sphere shaped by datafied emotion? Who benefits from these decisions, and who is harmed? What is lost along the way, and is there scope for resistance and reappropriation? In Chap. 2, we identify the two core incubators of false information as the economics of emotion and the politics of emotion—namely, the optimisation of content for economic or political gain. We discuss, in Chap. 3, how different affective contexts worldwide fuel false information. This highlights the need to understand specificities of affective contexts and civic engagement, as well as their intersections with wider international information flows such as information warfare, ideological struggles and platforms’ resources for content moderation.

Three chapters then each separately discuss a core component of contemporary false information online, covering false information (Chap. 4), affect and emotion (Chap. 5) and profiling and targeting (Chap. 6). In Chap. 4, we clarify the nature and forms of false information online (focusing on fake news and deepfakes, as well as wider misinformation) and its occurrence online (noting its prevalence, who spreads it and why). Observing that we are bad at recognising deception, especially new forms, we draw out implications for citizen-political communications, including that rulers should not be deceptive, because deception erodes social trust and democratic foundations.

Chapter 5 investigates the role of affect, emotion and moods as an energising force in opinion formation and decision-making that drives false information online across social media and news, potentially creating post-truth environments. The chapter examines the resulting harms to the civic body by highlighting the challenges this poses to governmental efforts to manage populations’ feelings and behaviour during the COVID-19 pandemic, when uncertainty, anxiety, false information and conspiracy theories proliferated. As both mental harms (hate speech) and physical harms (reduced vaccine uptake) were evident, we conclude that we live in an informational environment that is sub-optimal for a healthy civic body.

In Chap. 6, we delve into profiling and targeting as the core means of delivering emotively charged, false information throughout the civic body, exploring this dynamic in political campaigning in democracies with different data protection regimes and digital literacies: the USA, UK and India. We find that political parties know ever more about their profiled, target audiences and adapt their campaigning accordingly. Worryingly, politicians and political parties have utilised platforms and built apps to mobilise electorates via delivery of inflammatory and deceptive messages targeted at profiled users. Less worryingly, the few empirical studies on profiling and microtargeting of voters find modest impacts on specific types of audience, and mixed findings regarding the accuracy and prevalence of microtargeting. We conclude that more studies are needed on the effects of continuously refined profiling and targeting techniques on voting behaviour, especially as mobilisation of just a small sliver of the population (the persuadables) may generate decisive results. We also find that digital literacy, and awareness of profiling and microtargeting technologies used for political purposes, are uneven across the world, but where people are aware, most do not want such targeting.

Building on this knowledge, Part II explores how we can strengthen the civic body across dominant and emergent uses of emotional AI. Opening this discussion, Chap. 7 identifies the following six civic harms arising from false information online. (1) It creates wrongly informed citizens that (2) in certain circumstances, for certain communities, may stay wrongly informed in digital echo chambers and (3) more widely, be emotionally provoked, leading to (4) contagion, where false, emotive information incubated online influences wider social media and mainstream news. Meanwhile, (5) profiling and microtargeting raise core democratic harms comprising fragmentation of important national conversations; targeted suppression of voters; and even (potentially) undue influence over susceptible citizens. Also related (6) is the impact of false information in seeding distrust in important civic processes and institutions.

Chapter 8 evaluates solutions so far proffered by diverse stakeholders and by the multiple academic disciplines that embrace Communication Studies. It assesses seven solution areas, namely: (1) government action, (2) cybersecurity, (3) digital intermediaries/platforms, (4) advertisers, (5) professional political persuaders and public relations, (6) media organisations and (7) education. Noting that these are intrinsically difficult areas to solve individually, let alone in concert and in every country, we conclude that such solutions merely tinker at the edges as they do not address a fundamental incubator of false information online: namely, the business model of social media platforms built on the economics of emotion.

The final chapter (Chap. 9) looks forward to near-horizon futures—an important angle given the rapid onset, scale and nature of false information online, and the rising tide of deployment of emotional analytics across all life contexts. While noting that false information, emotion, profiling and targeting are hardly new phenomena in citizen-political communications, we observe that the scale of contemporary profiling is unprecedented. We argue that a prime site of concern is the automated, industrial, psycho-physiological profiling of the civic body to understand affect and infer emotion for the purposes of changing behaviour. Exploring this, we look to near-horizon futures. This allows us to distil our core principle: protecting mental integrity. This is necessary to strengthen the civic body to withstand false information in a future where optimised emotion has become commonplace. How to have less of the harms and more of the positive elements is a difficult conundrum for policymakers. We hope that this book contributes to this ongoing global debate.