We now understand the nature of the complex, interconnected environment where communication and technologies operate to spread false information, impacting individuals and society. We diagnose that it is incubated especially by the economics of emotion (namely, the optimisation of datafied emotional content for financial gain) and the politics of emotion (namely, the optimisation of datafied emotional content for political gain). To reach this understanding, we integrated and shaped a wealth of literature from numerous disciplines on the deployment of false information, emotion, profiling and targeting. We illustrated this with case examples from across the world while reflecting on arising social and democratic harms to the civic body and multi-stakeholder solutions. Throughout, we have focused on global digital platforms, especially social media platforms, as these are the dominant purveyors of emotional AI globally today. Yet, far greater datafication of emotion is presaged worldwide through a plethora of more emergent emotional AI technologies. In this final chapter we draw out more substantive answers to strengthen the civic body as the bandwidth for the datafication, and optimisation, of emotion expands.

First, we tease out core shifts discernible from a backward glance. This allows us to identify that while false information, emotion, profiling and targeting are hardly new phenomena in citizen-political communications, the scale of contemporary profiling is unprecedented. As such, a prime site of concern is the automated industrial psycho-physiological profiling of the civic body to understand affect and infer emotion for the purposes of changing behaviour. Exploring this, we look to near-horizon futures. This is an important angle given the rapid onset, scale and nature of contemporary false information online; the rising tide of deployment of emotional analytics across all life contexts; and what we see as the greater role that biometrics will play in everyday life. Peeking over the horizon line allows us to distil our core principle: protecting mental integrity. This is necessary to strengthen the civic body to withstand false information in a future where optimised emotion has become commonplace.

Looking Backwards: Core Shifts

Reflecting on the rise of false, emotive information online, our chapters on false information (Chap. 4) and affect, emotion and mood (Chap. 5) highlight that such phenomena are enduring features of citizen-political communications, spanning thousands of years and attuned to shifts in media environments. Yet, if contemporary false information online simply makes use of classical propaganda techniques, why the current furore? The most obvious changes to the wider environment have been wrought by the introduction of new forms of media, profiling techniques, systems that judge humans and their behaviour, and the search to monetise these phenomena. Indeed, the scale of contemporary profiling is unprecedented, notwithstanding the fact that profiling itself has a long history.

As shown in our discussion of adtech and corporate profiling in political communication (especially Chap. 6), the private sector led improvements in the classification and quantification of populations, using a panoply of approaches to identify audiences and record feedback. Originating in the USA over a century ago, and subsequently adopted by media owners internationally (McStay, 2011), this close monitoring of behaviour, consumption and geo-demography brought order to the understanding of preferences, attitudes, civic feeling and disposition. Indeed, the fundamental principles of societal management and control through data had essentially been established by the late 1930s through ad-testing, retail patterns, surveys and media engagement trends, among other data sources (Beniger, 1986).

Similarly, pre-empting the automated real-time A/B testing used in commercial and political digital campaigning, key figures in the history and practice of advertising, such as Daniel Starch (1914) and Claude Hopkins (1998 [1923]), were insistent that advertising should be treated as a science, using feedback to understand and identify those techniques that worked. The championing of datafied campaigning and voter profiling increasingly evident worldwide thus has discursive roots a century old. Indeed, while the feedback logics of contemporary false information online might be said to have neo-behaviourist characteristics, the earliest large advertising agencies (such as the J. Walter Thompson agency) were hiring behaviourist scientists to study their advertising, their audiences and those audiences' states of mind, emotion and reactivity, in a systematic, data-first manner (McStay, 2011).

While we are wary of technological determinism, the use of technology does alter things, as witnessed through numerous seismic changes to mediated life. The printed press, radio, television, Internet, mobile telephony and their modalities of audience profiling are of profound importance. As we look towards the horizon line for how the media ecology might evolve, we regard as a prime site of concern the rise of emotional AI (McStay, 2018) and its psycho-physiological profiling of the civic body to understand affect and infer emotion.

Looking Forward: Near-Horizon Futures

Social media platforms developed and honed the practice of profiling and targeting individual desires and vulnerabilities, but they are now being joined by more emergent forms of emotional AI that are being trialled by governments worldwide as well as by globally dominant digital platforms themselves. When these are assisted by technologies that can turn human-state signals into fungible electronic data, identify patterns in small and large datasets, and apply and test rules from one situation to other situations, and when this can be done increasingly cheaply, the result is hitherto unseen scale. This portends nothing less than the automated industrial psychology of emotional life, one already attuned for changing behaviour. 'Emotional AI' claims to read and react to emotions through text, voice, computer vision and biometric sensing. It simulates understanding of human emotions by sensing words and images (as in sentiment analysis) and by sensing various bodily behaviours, including facial expressions, gaze direction, gestures, voice, heart rate, body temperature, respiration and dermal electrical properties (McStay, 2018). Applicable machine learning and AI techniques deliver outputs that are named emotional states. These are then used for given purposes, such as predicting behaviour.
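To make the text strand of such a pipeline concrete, the following minimal sketch shows raw posts going in and a named emotional state plus a confidence score coming out. It assumes the open-source Hugging Face transformers library and uses invented example posts; it illustrates sentiment classification in general, not any specific system discussed in this book.

```python
# Minimal sketch of the text strand of an emotional AI pipeline: short posts
# go in, a named emotional state (here, a sentiment label) and a confidence
# score come out. Assumes the Hugging Face `transformers` library; the posts
# are invented for illustration.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # loads a default pretrained model

posts = [
    "They are lying to us again and nobody seems to care!",
    "Proud of our community for coming together today.",
]

for post in posts:
    result = classifier(post)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    print(f"{result['label']:>8}  {result['score']:.2f}  {post}")
```

The biometric strands described above (face, voice, heart rate and so on) follow the same basic pattern: a signal is converted to features, a model assigns a named emotional state, and that label is then used for prediction or targeting.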

This is not at all far-fetched, and we do not seek to be alarmist or dystopian as a means of attracting attention. Rather, whether through the wearable on our wrist, the cameras and microphones in our mobile phones, home digital assistants, in-car cameras and telematics, or more, affect- and emotion-aware systems can not only provide novel means of engagement but also profile us (as introduced in Chap. 1). In addition to the human-technology touchpoints, we should also consider the commercial and political motives to better understand feeling and emotion (or at least to be able to claim to do so), as elucidated in Chap. 2. Indeed, the body is already playing a role in political profiling, including testing with emotional AI and wider technologies, where ad-testers and political communications specialists use facial coding, electroencephalography (EEG) and other intimate means of analysis to assess bodies and brains for reactions to political messages and advertising. This entails reactions to propositions, types of attention, the role of contrasts and reactions to colour, music and narrative within a given ad (McStay, 2018). In this vein, it should not be missed that microtargeting in politics stems from technological 'innovation' in advertising, so it is reasonable to assume that political communicators will continue to utilise techniques from the commercial advertising sector.

Extending longstanding practices of sentiment analysis and classification of online emotion-type and disposition, we point to the increasing inclusion of data about bodies. For example, Spotify (the world's largest music streaming service provider, with over 381 million monthly active users in 2021) has long profiled emotions and moods and has arrangements with advertising conglomerates (McStay, 2018). Signalling intention, Spotify's patent logged in 2021 to register taste attributes from audio signals is important, given the ubiquity of Spotify's service. The goal is to improve speech-enabled recommender services by potentially simultaneously processing data from voice tone, speech content, user history, explicit indications of taste, profiles of Spotify user friends and environmental data inferred from background noise. With this example alone, one easily sees how biometrics (through voice and speech) can begin to inform targeting processes for coming iterations of political advertising. Similar can be said for in-world profiling in Meta's foray into the metaverse (discussed further below). This will be dependent on physical profiling, especially of the face (via cameras or worn lenses with sensors around the mask), thereby rendering emotion expressions for in-world interactions (McStay, 2022).

Whether emotional AI technologies can deliver on their promises to accurately gauge human emotion has attracted much scholarly and industrial attention. This critique has focused particularly on the methodological flaws of determining emotions from biometrics, especially from facial coding (McStay, 2019). For instance, Barrett et al. (2019) demonstrate in an authoritative meta-analysis that the 'basic emotions' approach, which sees emotions as universal and informs much of the emotional AI industry, fails to capture how people convey, or interpret, emotion on faces. Illustrating problems of both accuracy and systemic racist bias, Rhue (2018), for example, compares the emotional analysis components of Chinese face recognition company Megvii's Face++ software to Microsoft's Face Application Programming Interface when applied to a database of headshots of White and Black male professional basketball players in the USA. The study finds that facial recognition software interprets emotions differently based on race: Black players are interpreted as angrier than White players by Megvii's Face++, while Microsoft interprets Black players as more contemptuous than White players when their facial expressions are ambiguous.
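To illustrate the general shape of this kind of audit (not Rhue's actual code or data), a minimal sketch follows; the file name, column names and vendor labels are hypothetical.

```python
# Minimal sketch of a bias audit in the spirit of Rhue (2018): compare the
# emotion scores two vendors assign to the same headshots, grouped by the
# players' race. The CSV file and its column names are hypothetical.
import pandas as pd

# Expected columns: player_id, race, vendor, anger, contempt
scores = pd.read_csv("emotion_scores.csv")

summary = (
    scores.groupby(["vendor", "race"])[["anger", "contempt"]]
    .mean()
    .round(3)
)
print(summary)
# Systematically higher anger or contempt scores for one racial group within a
# single vendor's output would indicate the kind of disparity Rhue reports.
```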

Yet, Barrett et al.’s (2019) damning and authoritative methodological critique of the use of facial coding to determine emotions also suggests solutions that engage more with context. Such context could be a ‘cultural context, a specific situation, a person’s learning history or momentary physiological state, or even the temporal context of what just took place a moment ago’ (Barrett et al., 2019, p. 47). Indeed, industry leaders, such as Microsoft, are now advocating a turn to social context to more accurately gauge users’ emotions. As signalled in the Spotify example above, McStay and Urquhart (2019) predict that this will inevitably involve a turn to more data so that the profiling analyst can know contextually more about a person and the scenario. In countries where profiling to infer sensitive attributes such as sexual orientation or political opinions is not well regulated, or where being of the ‘wrong’ sexuality or political tribe can be dangerous to life chances, or even to life itself, this increased optimisation of emotional life is alarming. Furthermore, we observe (and expand later in this chapter) that this sort of contextual data is precisely what globally dominant social media and technology platforms are very good at supplying through their profiling technologies. The suggestion, then, is not that biometric emotional AI will be foolproof (it will not be). Yet, in-house testing through biometric reactions, and potentially multimodal collection of biometric data about reactivity to stimuli, will make a significant difference to how civic bodies are understood, profiled, represented and targeted.

Despite methodological concerns, emotional AI is being used worldwide in a wide variety of governance contexts that impact the civic body. Its deployment for the purposes of governance varies according to different countries' societal goals, social organisation, and regulatory and cultural norms of privacy and agency. For instance, since 2016, in the authoritarian United Arab Emirates, the smart city initiative of Smart Dubai has used sensors and analytics that feed a centralised monitoring and management layer to tell city analysts how residents, visitors, commuters and tourists feel about municipal matters, from transport to shopping and health. Premised on opening personal data silos to the state, the Smart Dubai programme presents this as 'a globally unique, science-based approach to measuring and impacting people's happiness, fuelling the city's transformation' (McStay, 2018, p. 156). Notably, although Dubai's citizens have privacy rights, they constitute a small proportion of the overall population: residents and tourists have no such rights. Also noteworthy is that Dubai is well positioned to export its smart city model and emotional capture technologies globally (McStay, 2018). Beyond emotionally profiling populations, emotional AI is also deployed worldwide to tell if we are lying. US firm Converus' EyeDetect, which uses software to track involuntary eye movements to detect lies, is already deployed in 50 countries, used by over 65 American law enforcement agencies and nearly 100 worldwide (Lisbona, 2022, January 31). Universities are meanwhile developing lie detectors that rely on speech (content and tone of voice), body language and other physiological measures such as changes in facial muscle movements (Shuster et al., 2021). Although facial emotion expressions are far from universal, emotional AI technology companies have already sold facial recognition cameras across the world to surveil and police schools and cities (Article 19, 2021; McStay, 2018).

What then might the near future hold, and what does it portend for the spread of false information online? We consider three near-horizon futures as the bandwidth for profiled, datafied emotions expands.

Scenario 1: The Ministry of Optimised Moods

As a vehicle to consider connections between emotional AI technologies and the civic body, we could ask, 'What would political strategists such as Dominic Cummings make of them?' Cummings was a data-focused campaign strategist for Vote Leave, a campaigning organisation that, against all expectations and through what was widely regarded as a disinformation-heavy campaign, won the 2016 referendum for Britain to leave the European Union. After Boris Johnson became UK Prime Minister in July 2019, Cummings was appointed to the new role of Chief Adviser to the Prime Minister. As COVID-19 ravaged the UK across 2020, Cummings was on hand to advise on adaptive strategies that (the government emphasised) followed the data and science (see Chap. 5).

Cummings embraces the role of data, engineering and management. In his blog, he proudly claims that Vote Leave innovated 'the first web-based canvassing software that actually works properly in the UK and its integration into world-leading data science modelling to target digital advertising and ground campaigning' (Cummings, 2017, January 30). From the heart of the government, rather than relying on stories and authority, Cummings championed data-informed politics and novel modes of visualising complex information (across time, as well as contemporary complexity) to enhance decision-making. This includes a high-level interest in data and computer science, systems theory, the psychology of persuasion, game theory, AI and machine learning, and the intersection of technology and storytelling. Cummings also champions sciences of prediction that are dynamic in nature (such as those from weather forecasting and epidemiology), new technologies and interface design, difficult-to-control modern communications and cybernetic government (error-correction paths and prediction). His championing of interface design draws heavily on (and supports) Bret Victor, whose company, Dynamicland, builds computers and interfaces that people can handle. Cummings laments the UK government's Cabinet Room, where important decisions are made without data-informed insight or dynamic representation of ongoing events and longitudinal trends. This contrasts with his enthusiasm for Dynamicland, where computing (not just data representations) is embedded in the surfaces of walls and objects. These new 'cognitive technologies' provide 'a new way of seeing and thinking' (Cummings, 2019a, June 26). Cummings (2019a, June 26) posits: 'Imagine discussing … possible post-Brexit trading arrangements with the models running like this for decision-makers to interact with'.

Beyond such 'Seeing Rooms' (Cummings, 2019a, June 26), given the rising tide of interest across society in emotional AI, it is not a stretch to see how citizen feeling might be modelled with multiple predictive scenarios of novel variables to consider outcomes and policies. The UK's Office for National Statistics (2021) already tracks national well-being data, but consider this dynamically visualised at granular levels in real time in the Cabinet Room, using multiple sensors across cities, transport, workplaces, wearables, mainstream media and social media. One has to be careful not to overreach, but there is a clear appetite for being able to gauge the civic body, predict it (and its parts), know what the public will accept (such as restrictions on specific freedoms for the civic good) and use these insights to model public infrastructure initiatives. Arguably, before COVID-19, such datafication of the emotionalised civic body might have seemed unthinkable in liberal democracies, but COVID-19 has shown there to be a keen appetite to know the public mood for governance purposes. As surveillance systems become even more normalised to protect the public and to police desired behaviour changes during pandemics, governments have a vested interest in understanding how the nation or specific groups are feeling, in order to hone targeted messages and other behavioural interventions and to cultivate a desired emotional state among the population (see Chap. 5). Add this to Cummings' interest in more intuitive forms of computing that facilitate new ways of doing politics, such as through 'neural interfaces' (Cummings, 2019a, June 26).

One might write off Cummings as an eccentric, as someone who does not actually understand technology, or as someone who misunderstands social complexity and the irreducibility of qualitative life to quantitative form. Such dismissal ignores that these beliefs themselves matter, given Cummings' prominent positions within UK politics and government at momentous times (as architect of the official Vote Leave referendum campaign, and as a central figure in governing the UK during the first year of COVID-19, until leaving office in November 2020). Previous UK government advisors such as Alastair Campbell (Downing Street Press Secretary (1997–2000) and Downing Street Director of Communications and Strategy (2000–2003) for Prime Minister Tony Blair) perhaps belong to the age of news and rhetoric. By contrast, Cummings exists in a discourse of neuroscience, biohacks, datafication and predictive analytics. The test of whether this is a serious proposition is one of value: if there is deemed to be commercial or political value in optimising the mood of the civic body for the purposes of governing, it is a proposition that an engaged citizenry should take seriously, however outlandish it may seem for liberal democracies. As a minimum, the convergence of emotion, commercial biometrics and politics is something that should be recognised and guarded against. Again, if the connection between emotional AI and political discourse seems too tenuous, we might remember that a central architect of Brexit, and advisor to the British government, saw emotion and data as key to his successes, albeit in this case using online behavioural technologies built for advertising.

Turning from liberal democracies to the one-party state of China, of note is its 14th Five-Year Plan for National Informatisation. Aiming to promote innovation in, and at-scale application of, AI, it plans to 'launch cutting-edge intersectional research on artificial intelligence and basic disciplines such as neuroscience, cognitive science, psychology, social science' (Central Commission for Cybersecurity and Informatisation, 2021, December 28, p. 48). When this is married with its planned projects to experiment with AI for social governance purposes, covering areas like public health, urban management and education, and to build 'social governance big data and virtual inference scientific research platforms' (p. 34), it is likely that emotional AI will play an increasing role in governance. Indeed, through experimenting with facial recognition technologies in schools and for policing, China has already started down this route, as observed by the international human rights organisation Article 19 (2021).

Scenario 1, then, is where the civic body is empathically optimised so that governments may better manage populations. It offers potential to be in touch with the disposition and emotional state of the civic body of one’s country (or even that of another country). This scenario may appeal to those desiring more compliant populations (for instance, to instil prosocial public health behaviour during pandemics). However, those who prioritise individual agency above being dictated to by a wider, or leading, group are unlikely to view this scenario positively. The potential for honing disinformation by bad actors and for information warfare is also profound: it would super-charge the ability of an adversarial state or bad actor to achieve its goals by better understanding how to manipulate the emotions of targeted individuals or groups in other countries.

Scenario 2: Campaigns That Optimise Embodied Emotions

How would political or advocacy groups seeking to win elections or referenda, or promote their cause, behave in this brave new world of automated industrial psycho-physiological profiling of the civic body? Recent history shows sometimes psychopathic levels of political desire to win, and willingness to break rules and to use all available data and new technologies to exploit psycho-emotionally sensitive points of the civic body. We posit that many campaigners would embrace this profiling to empathically optimise their messages to resonate with target audiences, regardless of what social, cultural and technological norms are broken. Indeed, Chap. 4 already documents advocacy groups worldwide making powerful demands by putting words we want to hear into political leaders' mouths (such as apologising for failing to avert climate change) and resurrecting the dead (such as bringing back a murdered journalist to demand that state-backed violence against the press ends). Chapters 3, 5 and 6 document optimised emotive political campaigning and information warfare, where emotive, deceptive, microtargeted political campaigns have been offered, attempted or delivered, taking advantage of the affordances of social media and mobile apps. Chapter 6 highlights linguistically optimised deepfakes, with politicians seeking to generate closer emotional connections with targeted voters by using AI to speak their dialects.

Such profiling and targeting opportunities and claims continue to develop. Of note is recent research by Kosinski, given the prior interest of (now defunct) political consultancy Cambridge Analytica in his work on using Facebook 'Likes' to predict psychological characteristics and draw political inferences (see Chap. 6). Arguing that he is exposing societal threats rather than building new tools for harm, Kosinski (2021) claims that an open-source facial recognition algorithm can expose individuals' political orientation from a single naturalistic facial image taken from US Facebook profiles or from a popular dating website in the USA, UK and Canada. According to Kosinski, facial expression, self-presentation and facial morphology contain potential cues. For instance, in the US Facebook sample, Kosinski reports that liberals tend to face the camera more directly, are more likely to express surprise and are less likely to express disgust. Political orientation was correctly classified in 72% of liberal-conservative face pairs. Kosinski (2021) posits that even higher accuracy would likely arise from using higher resolution and multiple images per person; training custom neural networks aimed at political orientation; or including non-facial cues such as hairstyle. He also notes that even modestly accurate predictions can be impactful when applied to large populations in high-stakes contexts, such as elections. Unsurprisingly, given its biologically deterministic bent, similar research by Kosinski (for instance, that AI can distinguish gay from straight people in photos (Wang & Kosinski, 2018)) has attracted stinging critiques, rightly invoking the racist and junk science of physiognomy, especially Kosinski's connecting of personality with facial morphology. From the point of view of physiognomy and the political civic body, the warning from history could not be any louder, given the keen interest of Nazism in morphology, anthropometrics and physiognomy (Gray, 2004; McStay, 2022). Regardless of whether Kosinski's research on AI's ability to expose political or sexual orientation from a facial image is realistic, that the question is being asked at all means political strategists and advocacy groups will be interested. This portends a direction of travel towards biometric profiling of the political civic body.
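For readers unfamiliar with how a pair-based figure of this kind is computed, the following minimal sketch uses invented scores: for each (liberal, conservative) pair of faces, the classifier is counted as correct if it assigns the conservative face the higher conservatism score, a quantity equivalent to the area under the ROC curve.

```python
# Minimal sketch of a pairwise-accuracy metric of the kind behind the reported
# 72%: for every (liberal, conservative) pair of faces, the classifier counts
# as correct if it gives the conservative face the higher conservatism score.
# The scores below are invented for illustration.
import itertools

liberal_scores = [0.21, 0.35, 0.40, 0.62]        # model scores for liberal faces
conservative_scores = [0.55, 0.30, 0.70, 0.81]   # model scores for conservative faces

pairs = list(itertools.product(liberal_scores, conservative_scores))
correct = sum(1 for lib, con in pairs if con > lib)
print(f"pairwise accuracy: {correct / len(pairs):.0%}")  # equivalent to ROC AUC
```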

We also note that, beyond multiple emotional AI start-ups, several globally dominant companies already offer emotion recognition services based on analysis of facial expressions, including Microsoft, Amazon (Rekognition), Facebook, Apple and Google (Cloud Vision API) (McStay, 2018; Wright, 2021). Social media platforms already offer granular profiling and microtargeting tools to influence unsuspecting users, as Chap. 6 demonstrated. Their deployment of biometric emotion recognition services can only add further layers of granularity and, presumably, accuracy to their suite of services for influence.

Manipulation of embodied emotions by political and advocacy groups is of particular concern where such groups engage in deceptive practices. For instance, deepfake synthetic media can elicit stronger emotional responses, as well as collapse language barriers and reach the illiterate (as deepfakes can deliver messages in any language or dialect that the deepfaker desires). While providing short-term wins for the campaigning group that has persuaded people by establishing greater personal connection, this is likely to further damage belief in the indexicality of the audiovisual image. Already, public figures are denying the authenticity of past incriminating video clips, allowing them to avoid accountability (see Chap. 4). If deepfakes, or the very idea of them, become more commonplace, then people will likely demand further proof of veracity, as seeing will no longer be believing. Given the biometric turn, this may involve biometric indicators to prove (a) that the campaigner is who they say they are and (b) that they mean what they are saying. This would represent a societal shift whereby would-be persuaders 'prove' their authenticity (of self or message) by strapping themselves up to biometric lie detectors or other indicators of affect and emotion. An arms race, not just to increase citizens' digital literacy to spot false information but also to identify the authenticity of emotions, and from that to infer the persuader's intent, may be on the near horizon too, despite concerns about the accuracy of such technology.

Scenario 2, then, is one where biometrics are gauged as a proxy for the civic body's emotions so that campaigning groups can better connect with target audiences to influence votes, donations or behaviour. With the rise of machine learning on bodies and disposition, and as industry leaders advocate a turn to ingesting and understanding social context (namely, wider forms of data) so that profiling analysts can know more about a person and the scenario, optimisation endeavours are likely to become both more effective and more affective. This lays the ground for undue influence and manipulation at important moments in the life of the civic body. This is of particular concern where campaigning groups engage in deceptive practices to achieve their aims.

Scenario 3: Profiting from Optimising Fellow-Feeling

As this book has demonstrated, emotional profiling is already deployed to manipulate us for profit by 'feeling-into' online conversations and creating content and headlines on social media to resonate with, or trigger, specific groups within the civic body (see Chaps. 2 and 3). Furthermore, automated journalism can already, with little human intervention beyond the initial programming phase, dig into reams of data to find patterns, such as using algorithms to sift through the leaked Panama Papers (Schapals & Porlezza, 2020); and it can offer insights to journalists on what the most important story element is (Cools et al., 2021). On top of this, the ability to automatically generate tone-optimised and geo-tailored news stories is already at hand for newsrooms willing to experiment. Using automated insights, algorithms can determine the emotional tone of a story and can tailor news stories for local audiences, for instance, on local sports results or local election outcomes, enabling highly personalised news feeds (Bakir & McStay, 2018; Graefe, 2016). Indeed, the phenomenon of empathically optimised automated news (of fake and real events alike) is on the near horizon, given the current state of automated journalism, sentiment analysis and language modelling.
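To give a concrete (and deliberately benign) sense of how tone-varied, geo-tailored automated news works, the following minimal sketch renders the same structured data with different tones for different local audiences; the teams, scores and template wording are invented and do not correspond to any system cited above.

```python
# Minimal sketch of template-based automated local news: the same structured
# match data is rendered with a different tone for different local audiences.
# Teams, scores and template wording are invented for illustration.
TEMPLATES = {
    "celebratory": "{winner} delighted home fans with a {w_score}-{l_score} win over {loser}.",
    "neutral": "{winner} beat {loser} {w_score}-{l_score} on Saturday.",
}

def render(match: dict, tone: str) -> str:
    """Fill the chosen tone's template with the match data."""
    return TEMPLATES[tone].format(**match)

match = {"winner": "Northtown United", "loser": "Southvale Rovers", "w_score": 3, "l_score": 1}
print(render(match, tone="celebratory"))  # version for the winning town's local feed
print(render(match, tone="neutral"))      # version for a regional round-up
```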

To create empathically optimised automated fake news, the process would be to understand key trigger words and images among target groups; create fake news (itself normally comprising shorter and less informative content oriented towards disgust and anger [as discussed in Chap. 4]) and measure its engagement; and then have machines learn in an evolutionary capacity from this experience to create stories with more potency to increase engagement and thereafter advertising revenue (Bakir & McStay, 2018).

Should this appear unrealistic, consider the practices of Open AI, an American company whose mission is to ensure that artificial general intelligence (namely, highly autonomous systems that outperform humans at most economically valuable work) benefits all of humanity (Open AI, 2022). In 2020, Open AI launched GPT-3, which uses deep learning to produce humanlike text. Within a year, over 300 applications were delivering GPT-3-powered search, conversation, text completion and other advanced AI features through its Application Programming Interface, involving tens of thousands of developers worldwide (Open AI, 2021, March 25). Such capacity has been noticed by political strategists. Dominic Cummings regularly wore an Open AI T-shirt and cites Open AI on his blog, noting, for instance, how output from its large-scale unsupervised language model 'feels close to human quality' (Cummings, 2019b, March 1).
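To indicate how low the barrier to such text generation is, a minimal sketch of calling a GPT-3-style completion endpoint follows. It assumes the pre-1.0 openai Python client of that period; the model name, prompt and API key are placeholders for illustration rather than details drawn from the sources cited above.

```python
# Minimal sketch of requesting machine-generated text from a GPT-3-style
# completion API. Assumes the pre-1.0 `openai` Python client; the model name,
# prompt and API key are placeholders for illustration only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-era model name, used here illustratively
    prompt="Write a short, upbeat paragraph about a local park re-opening.",
    max_tokens=80,
    temperature=0.7,
)

print(response.choices[0].text.strip())  # humanlike text, plausible at a glance
```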

While one might counter that people would not be fooled by AI-generated text, this cannot be assumed. By way of illustration, Google engineer Blake Lemoine published transcripts in June 2022 that seemed to indicate that the AI chatbot generator system he was working on (Google's LaMDA (Language Model for Dialogue Applications)) had become sentient, with Lemoine claiming that it has the perception of, and ability to express, thoughts and feelings equivalent to those of a seven- or eight-year-old human (Tiku, 2022, June 11). Google disagrees with Lemoine's assessment: LaMDA's abilities are based on pattern recognition rather than understanding meaning, and those familiar with chatbots can easily detect LaMDA's chatbot qualities, such as speaking in general ways that lack specificity, depth or originality (Ray, 2022, June 18). Reading the LaMDA transcripts (see Lemoine, 2022, June 11), if the reader has no awareness that the AI is using machine learning (transformer-based neural language models) to put the right words in the right order, drawing on vast amounts of training data (trillions of words from the Internet) and on human crowd workers conscripted to engage in thousands of chats with the programme, the conversation looks convincingly humanlike.

Despite Open AI’s and Google’s stated commitments to Responsible AI, dangers to the civic body are in plain sight if it becomes impossible to distinguish human-generated text from AI-generated text. Google’s research paper on LaMDA acknowledges that ‘adversaries could potentially attempt to tarnish another person’s reputation, leverage their status, or sow misinformation by using this technology to impersonate specific individuals’ conversational style’ (Thoppilan et al., 2022, p. 18). The architects of disinformation would surely add this tool to their arsenal if there is monetary or other gain to be made in doing so.

We have already seen how profiting from optimising fellow-feeling manifests throughout the contemporary disinformation supply chain. Money is made by digital influence mercenaries and trolls supplying false content (financed by propagandists or their clients); by creators of fake news websites (from associated online advertising on their sites); by clickbait-oriented news organisations (who earn money from more click-throughs of misleading headlines); and by the dominant digital platforms themselves (who sell profiles of engaged audiences to advertisers). Unfortunately, whistleblowing accounts detailed in Chap. 2 show that, through the design of algorithms that gave outsize weight to emotional Reactions and engaging posts, Facebook generated and consolidated communities sharing false, extremist information. Chapter 2 also observes that other social media platforms are similarly emotional by design, and Chap. 5 documents studies of the virality of emotional content on multiple social media platforms.

In this empirically grounded book, we have focused primarily on globally dominant digital platforms (especially social media); how their exploitation of datafied emotions maximises user engagement that can be monetised; and how this drives viral, false information. Looking to the future, however, the world's globally dominant social media platform, Facebook (rebranded as Meta in late 2021), is also turning to wider bandwidths of data collection, including biometrics. In late 2021 Mark Zuckerberg outlined plans for Meta as a metaverse company, a realisation of cyberspace where people move between virtual reality, augmented reality and familiar web-based platforms. Although the so-called metaverse is subject to much scepticism from well-placed commentators, this would see the capacity for emotional profiling and targeting already afforded by social media platforms connect with that afforded by biometrics. Keeping in mind that, alongside Facebook, Instagram and WhatsApp, Meta also owns Oculus (which produces virtual reality devices) and that it has long been researching in-world detection of emotion in virtual reality, one begins to discern Meta's direction of travel. As early as 2014, Zuckerberg regarded virtual reality as the next globally significant platform, capable of sharing precious, personal experiences (Levy, 2020, p. 328). Seven years later, Facebook Reality Labs Research predicted that virtual reality and augmented reality will 'become as universal and essential as smartphones and personal computers are today' and that they will involve 'optics and displays, computer vision, audio, graphics, brain-computer interface, haptic interaction, full body tracking, perception science, and true telepresence' (Tech@FACEBOOK, 2021, March 18).

This portends a profoundly granular control system built on an expanded bandwidth of data collection. As a minimum, in-world profiling will include data about facial expressions and reactivity to stimuli, among other signals (whether generated from desktop cameras or worn sensors, such as those around a virtual reality head unit mask tracking muscle movement). Neural input technology is steadily moving towards everyday experience, such as Facebook's haptic wristband that measures hand and finger gestures (Tech@FACEBOOK, 2021, March 18). As such, there is clear scope for ocular- and affect-based interactions to create and track engagement with virtual objects. Meta, of course, is not the only company seeking to realise long-promised visions of the neuro-enhanced 'human-machine', but unlike start-up companies such as Elon Musk's Neuralink, which is developing brain chips, Meta has global scale. The significance of augmentation is the scope to sense and measure, or feel-into, electrical impulses in the body (such as through electromyography) to gauge human intention.

Scenario 3, then, is one where individuals and companies profit by feeling-into the civic body and creating content to resonate with specific groups, thereby increasing engagement and thereafter advertising revenue. This has already proven lucrative to the architects of disinformation across emotional-by-design social media platforms. The nature of future instantiations of profiting by feeling-into the civic body is not at all clear, given hype and the diverse technologies and practices in play, but we foresee a near-horizon future where citizens' online and offline behaviour is registered by much more granular means, representing a biometric future for communication with and through the civic body. That we may be turned into perpetually targeted data pools to be exploited and managed by architects of disinformation and influence does not accord with human dignity and flourishing.

Protecting Citizens in the Coming Era of Optimised Emotions

That citizens could be more intensely emotionally profiled and targeted for manipulation by individuals, pressure groups, companies, political parties, governments and nation-states has raised concerns at the highest levels. Published attention sharpened in 2021, for example, with the United Nations Committee on the Rights of the Child publishing 'General Comment 25', which addresses children's rights in the digital age. This contains multiple mentions of emotion analytics (see §42, 62, 68), finding them to interfere with children's rights to privacy and to freedom of thought and belief. It also flags the importance of ensuring 'that automated systems or information filtering systems are not used to affect or influence children's behaviour or emotions or to limit their opportunities or development' (United Nations Convention on the Rights of the Child, 2021, March 2, §62). Moreover, also in 2021, the United Nations Human Rights Council formally adopted the Resolution titled 'Right to privacy in the digital age', where §3 notes the need for safeguards for emotion recognition (United Nations General Assembly, 2021). The Council of Europe (2021) likewise called for strict limitations and bans regarding emotion profiling in the areas of education and the workplace. Also in 2021, the European Data Protection Board and the European Data Protection Supervisor issued a joint statement declaring the use of AI to infer the emotions of a natural person highly undesirable and stating that it should be prohibited, except in specified cases, such as some health purposes (European Data Protection Board, 2021). Relatedly, 2021 also saw the release of a draft of the proposed European Union AI Act, a risk-based piece of legislation that classifies emotion recognition as both risky and high risk, depending on the use case (European Commission, 2021, April 21).

Indeed, beyond interest in emotion recognition systems, the proposed European Union AI Act is unequivocal about the need to protect against the capacity of AI (especially that using biometric data) for undue influence and manipulation. To create an ecosystem of trust around AI, the proposed regulation bans the use of AI for manipulative purposes; namely, AI that 'deploys subliminal techniques … to materially distort a person's behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm' (European Commission, 2021, April 21, Title II Article 5, p. 43). While it is not yet clear what current applications this might include, it is highly likely to cover neural and in-world environmental manipulation of the sort that would be facilitated if Meta's and Neuralink's developments are realised.

Furthermore, in April 2022, proposed amendments to the draft AI Act included the proposal from the Committee on the Internal Market and Consumer Protection, and the Committee on Civil Liberties, Justice and Home Affairs, that ‘high-risk’ AI systems should include AI systems used by candidates or parties to influence, count or process votes in local, national or European elections (to address the risks of undue external interference and of disproportionate effects on democratic processes and democracy). Also proposed as ‘high risk’ are machine-generated complex text such as news articles, opinion articles, novels, scripts and scientific articles (because of their potential to manipulate, deceive or expose natural persons to built-in biases or inaccuracies) and deepfakes representing existing persons (because of their potential to manipulate the natural persons that are exposed to those deepfakes and harm the persons they are representing or misrepresenting) (European Parliament, 2022, April 20, Amendments 26, 27, 295, 296, 297). Classifying them as ‘high risk’ would mean that they would need to meet the Act’s transparency and conformity requirements before they could be put on the market: these requirements, in turn, are intended to build trust in such AI systems.

Mindful that people have generally low awareness of emotion profiling, since 2015 we at the Emotional AI Lab have carried out studies into the British public's views on established and emergent emotional AI use cases. Our recent survey shows that a majority of British adults dislike the use of emotional AI technologies where there is capacity for undue influence in situations that they are powerless to control and where it affects important moments in civic life or a person's own life chances. This demographically representative omnibus online survey (n > 2000 adults, conducted in January 2020 by ICM Unlimited) explores levels of concern about five use cases for emotion-sensing technologies in everyday life (see Table 9.1). Of these, it finds that people are most concerned about social media profiling in political campaigns, utilised to find out which political ads or messages are most engaging for specific audiences and to personalise and target what political ads we see (66% are 'not OK' with any form of such data collection). A majority (58%) are also concerned about biometrics in the workplace to track employees' emotions. A small majority are concerned about biometrics in schools to track students' facial expressions to work out their emotional states and attention levels in order to tailor teaching. Large minorities are concerned about automated understanding of the emotional and affective behaviour of drivers (45%) and usage in out-of-home advertising to gauge reactivity to ads (45%) (see Table 9.1). We include this survey snapshot because it is notable that people appear to be more concerned about undue political influence and manipulation through social media than about biometric profiling. Given sensitivities around the body, especially in relation to workplaces, this finding was unexpected.

Table 9.1 UK adult attitudes to different forms of emotional profiling

It remains to be seen if and how uses of emotion recognition will scale and whether seemingly low-stake emotional AI interactions with people (such as via outdoor ads and in cars) will increasingly feature without significant societal pushback (see McStay & Urquhart, 2022). For now, people (at least in the UK) are clearly not keen on higher-stake emotional AI interactions (such as those for political influence or those affecting the workplace and schools). One might safely wager that people would not be 'OK' with emotion-based biometric insights from their engagements with devices being used for political purposes, such as data generated by longitudinal profiling of interaction with home voice assistants, facial expression data collected by phones, or in-world tracking of emotion and behaviour. In addition to well-known problems of embedded values and biases in sociotechnical systems, and the methodological and conceptual flaws of emotional AI technologies (AI Now Institute, 2018; McStay, 2018; Russell, 1994; Stark & Hutson, 2021), we suggest a need for greater recognition of the potential for biometric profiling to spill into political profiling. This recognition would alert us to the need not just for individual protections but also for those of a collective and civic sort. This would involve being alert to organisational justifications for the aggregation of biometric affinity data, where profiling does not occur directly but through people's 'affinity' with a group defined by such data (Wachter, 2020) and their biometrics and reaction types. The consequence of this is that, paradoxically, while the data points collected about a person may be relatively few, when they are assembled alongside indirect inferences and assumed dispositions, profiling and targeting become, and may feel, much more personal.

Mindful that proposed transparency obligations, bans on undue influence and specification of what is deemed 'high risk' may be diluted via lobbying before the European Union AI Act is passed, we also note the increasing clamour among industry critics for human-centric design for emotional AI and empathic technologies, the basic tenet of which is to design to benefit humankind rather than to exploit it (Institute of Electrical and Electronics Engineers, 2019; McNamee, 2019; McStay & Pavliscak, 2019). Yet, seen most charitably, as this book has shown, companies cannot always foresee, nor are they always prepared to adequately remedy, real-world harmful uses of their technologies, especially if such remedies damage their engagement-driven business model (as evident in the case of false information and digital platforms). As such, the global technology standards body, the Institute of Electrical and Electronics Engineers (IEEE), comprising multinational volunteers from academia, industry and government, formed the IEEE P7014 Working Group in 2019 to try to standardise the ethical design of empathic technologies and their associated tools, frameworks and processes (Soper et al., 2020). However, while useful as a means of identifying and promoting good behaviour, adherence to standards is voluntary and so lacks the force of law. Mindful of ongoing legislative activity and of the weakness of technological standards-based initiatives in protecting citizens in the coming era of optimised emotions, we try to crystallise the social problem: the need to protect mental integrity.

Protecting Mental Integrity

In the 2021 Reith Lectures, AI expert Stuart Russell observes the pressing need to protect our 'mental integrity' (a right in the Charter of Fundamental Rights of the European Union (European Union, 2012, Article 3)) from the profiling and predictive capacities of AI (Russell, 2021). Neuro-ethicist Andrea Lavazza (2018) defines mental integrity as an 'individual's mastery of his [sic] mental states and his [sic] brain data so that, without his [sic] consent, no one can read, spread, or alter such states and data in order to condition the individual in any way'. While Lavazza is concerned to protect mental integrity from devices capable of directly interfering with it, such as brain implants and neuro-prostheses, McStay (2022) urges that we should be similarly concerned with plans, models, processes and potentially ubiquitous systems that seek to automate empathy. This includes emotional AI tools that monitor and condition human emotion.

Emotional AI technologies claim to be able to gauge human emotions for the purposes of influencing, predicting and controlling human behaviour. Yet, if these technologies were judged in human terms, they would be considered psychopathic (McStay, 2022). Despite being marketed under the auspices of empathy and sensitivity to emotion, they do not actually understand our emotions: they only process signals (such as biometrics) and predict outputs (named emotional states). The judgements of emotional AI may display deeply cold-hearted behaviour (such as playing on an audience’s fears to maximise engagement with specific content). Ultimately, our relationships with them will be inauthentic and fake (such as deepfaking a political actor’s dialect to establish closer connections with target electorates).

To subject people to profiling by emotional AI systems, we argue, is not just psychopathic but also highly invasive. Emotional AI clearly does not 'understand' the first-person outlook, phenomenology or lifeworld of the individual. However, that it can discern and predict proxies of mental life to some degree should raise concerns about human privacy and dignity. If such systems become anywhere near as accurate as their developers claim, we would be stripped of our privacy and dignity as our inner life and feelings would be exposed and mined. As Alegre (2021, May, p. 4) puts it, we must rapidly work out where we draw the line 'between what we choose to reveal about ourselves and what is being unlawfully inferred about the absolutely protected space inside our heads'.

Beyond the individual, what of collective mental integrity? Feeling-into the collective may well be useful to optimise societal moods and behaviour change in time of national emergency (such as pandemics). More collectivist societies, such as China, may prefer a more permanent arrangement of feeling-into their society, in the name of social cohesion, order and harmony. For them, the social good of such emotional optimisation may outweigh the social harms of an overzealous surveillance state, including its chilling effects on freedom of thought, expression and association. However, such emotional optimisation capabilities can be abused by bad actors, not least hostile states conducting information warfare on unsuspecting populations by fomenting division, dissent and ontological insecurity. Furthermore, if freedom of thought is a fundamental human right that underpins all other human rights (as argued in Chap. 7), then this should lead even collectivist societies to step back from endeavours to optimise the datafied emotions of their collective.

Whether at the macro-level (such as protecting elections or health drives) or at the micro-level (such as protecting an individual’s freedom to privately think and feel whatever they like without interference), the civic body across the world is highly exposed to attempts at undue influence. We suggest that the principle of protecting mental integrity can be applied by individualistic societies (such as the USA) and collectivist societies (such as China) alike. Whether it is individual or collective mental integrity that is prioritised by governments, we argue that both are necessary to protect the civic body.

The Last Word

In dissecting how emotions are optimised to fuel contemporary false information online, we have reached an understanding of the twin incubators of the politics of emotion and the economics of emotion; the harms to the civic body that have ensued; and the many solutions proposed by diverse stakeholders. Yet, society has yet to tackle the false information media ecology head on as the underpinning business model driving it on social media remains intact. We suggested in Chap. 8 that all other solutions are merely tinkering at the edges.

As emotional AI expands from being the purview mainly of globally dominant social media platforms to a wide range of biometrically oriented forms, we see far greater potential for manipulation and exploitation of the civic body. If we are still not prepared to combat global disinformation and misinformation, we are far from ready for the coming era of emotional AI. This chapter outlined three near-horizon futures emanating from the coming automated industrial psycho-physiological profiling of the civic body to understand affect and infer emotion for the purposes of changing behaviour. None of them are without concern.

Scenario 1, where the civic body is empathically optimised so that governments can better manage populations, will concern those who prioritise individual agency above being dictated to by a wider, or leading, group. It will also raise concerns about its enhanced potential for information warfare where an adversarial state or group manipulates the emotions of citizens in its target country.

Scenario 2, where the civic body is empathically optimised so that campaigning groups can better connect with their target audiences to influence votes, donations or behaviour, will raise concerns in countries where profiling is poorly regulated and where campaigning groups engage in deceptive practices to achieve their aims. With the rise of machine learning ingesting wider forms of data, the accuracy of such optimisation is likely to increase, and with this, manipulation of the civic body.

Scenario 3, where individuals and companies profit by feeling-into the online and offline behaviour of the civic body, raises the spectre of perpetual surveillance. While perhaps tempting for some (e.g. via the metaverse), it will be difficult for anyone to resist given the coming ubiquity of smart and augmented environments and the difficulty of fooling context-aware, affect-based recognition tools utilised by complex assemblages of actors that may be monitoring emotions in public spaces. That we may be turned into perpetually targeted data pools does not accord with principles of human dignity and flourishing.

To prevent the perpetuation or intensification of false information as the global civic body becomes increasingly awash with datafied, optimised emotion, urgent preventative action is needed. Although emotional AI has raised concerns in the United Nations, and is achieving regulatory attention in the European Union, elsewhere AI and data privacy are far less regulated and demand immediate attention so that states can protect their own citizens and those of other countries (for instance, from information warfare). Failure to do so will leave the world unprotected from manipulative emotional profiling for commercial and political ends.

The European Union's draft AI regulatory proposals for avoiding undue influence and promoting greater transparency are a good place to start to avoid the harms that may arise where the granularity of online emotional profiling spills offline and becomes the everyday, resigned-to and mundane. However, we propose that this should be underpinned by the principle of protecting mental integrity, both individual and collective. As demonstrated by the ecology of false information, if the business model pushed by the emotional AI industry is one that exploits our emotions to maximise user engagement, then the battle to ensure that emotional AI is not used for harm will be an uphill one.

For the principle of protecting mental integrity to take root across the global civic body, simultaneous effort across stakeholders will be required. These stakeholders range from regional prosocial policymakers, ethically minded technologists, innovators, standards bodies and other international policy influencers, through to educators. And as citizens, we should be prepared to learn about the perils, as well as the promises, of an emotionally datafied and optimised world. We hope this book helps in this task.