Introduction

Having examined the nature of false information and understood the energising role of emotion and related states in its promulgation, in this chapter we examine profiling and targeting in citizen-political communications. Profiling and targeting are how emotion is understood, harnessed, amplified, dampened, manipulated and optimised (by platforms and would-be influencers). This chapter focuses on profiling and targeting in political campaigning because it is an intensively studied area awash with emotion and deception (as previous chapters demonstrate), and one that attracts uneven protections across the world (as we will show below). We examine the targeting and profiling technologies and practices in political campaigning in the USA, the UK and India, thereby highlighting the impact of different data protection regimes as well as uneven digital literacies. In exploring these issues, this chapter also outlines key tools and techniques utilised by digital political campaigners in the big data era to profile and target datafied emotions.

Profiling and Targeting in Citizen-Political Communications

Profiling and targeting have long been apparent in political campaigning. In one of the first detailed analyses of why Americans vote and arrive at their political attachments, Lazarsfeld et al. (1944, p. 15) describe the persuasive advantages that personal face-to-face communication has over mass communication (which, at that time, was radio and print in domestic settings).

But suppose we do meet people who want to influence us and suppose they arouse our resistance. Then personal contact still has one great advantage compared with other media: the face-to-face contact can counter and dislodge such resistance, for it is much more flexible. The clever campaign worker, professional or amateur, can make use of a large number of cues to achieve his end. He can choose the occasion at which to speak to the other fellow. He can adapt his story to what he presumes to be the other’s interests and his ability to understand. If he notices the other is bored, he can change the subject. If he sees that he has aroused resistance, he can retreat, giving the other the satisfaction of a victory, and come back to his point later. If in the course of the discussion he discovers some pet convictions, he can try to tie up his argument with them. He can spot the moments when the other is yielding, and so time his best punches. (Lazarsfeld et al., 1944, p. 15)

Although Lazarsfeld et al. (1944) were writing in the 1940s, the personally tailored and optimised attributes that they ascribe to face-to-face communication all seem achievable, at scale, by today’s digital profiling and targeting. This capability is the result of a century-long journey by advertisers, public relations experts and political campaigners to understand, and target, audiences with persuasive messages based on scientifically derived insights (Herbst, 2016; Hopkins, 1923; Wells, 1975). Even by the time Lazarsfeld et al. (1944) were writing, a vast range of consumer feedback procedures had already been developed in the USA, including testing of ads (1906), systematic collection of retail statistics (1910s), questionnaire surveys (1911), coded mailings (1912), audits of publishers’ circulations (1914), specialised market research departments and house-to-house interviewing (1916), research text books (1919), saturation (1920), dry waste surveys (1926), a census of distribution (1929), sampling theory for large-scale surveys (c. 1930), field manuals (1931), retail sales indices (1933), and national opinion surveys and audimeter monitoring of broadcast audiences (1935) (Beniger, 1986, pp. 378–80).

In terms of political marketing, as mass literacy and mass media rapidly expanded across the 1920s and 1930s in the USA, so did polling of the public using more scientific methods (Herbst, 2016). Opinion polling allowed political parties to merge broad demographic data (socio-economic statistics on population, gender, race, age, income, education and employment) with insights into how to craft messages that resonate with large parts of the population. This led to the development of targeted campaigning and direct mail in the USA in the late 1970s. By the twenty-first century, the rise of ‘big data’ and associated datamining techniques, tools and analytics enabled discovery of hidden patterns in seemingly unrelated data points and provided real-time, automated insights into massive, unstructured, diverse, unconventional datasets such as social media, transactional data and administrative data (Ceron et al., 2017). One common datamining technique is ‘classification’, which assigns items or variables in datasets to predefined groups using linear programming, statistics, decision trees and artificial neural networks. Another is ‘clustering’, which creates meaningful clusters of objects that share the same characteristics. Unlike classification, which puts objects into predefined classes, clustering algorithms dynamically correlate seemingly unrelated data points into unnamed and undecipherable ‘clusters’. These are then translated back into a limited number of describable categories that, in turn, depend on the values assigned to them by the people who buy and use them. Such processes are unlikely to yield explainable algorithms, where people can understand why a certain insight has been reached about them from the data (Kotliar, 2020). Nonetheless, using such (and other) datamining techniques, political campaigners can now combine public voter files with commercial information from data brokers to develop detailed, comprehensive voter profiles (Bartlett et al., 2018, p. 27; Perloff, 2018, pp. 246–247) that enable microtargeting (Dobber et al., 2019).
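To make the clustering technique concrete, the minimal sketch below (in Python, with entirely hypothetical features and values rather than any campaign’s actual pipeline) groups simulated voter records using k-means, one common clustering algorithm. As described above, the resulting clusters are unnamed: an analyst must translate them back into describable categories.

```python
# Minimal sketch of 'clustering': k-means groups voter records into
# unnamed clusters based on shared characteristics. All features and
# values below are hypothetical, generated at random for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical per-voter features: [donations_made, rallies_attended,
# political_posts_per_week] -- seemingly unrelated data points.
voters = rng.random((1000, 3)) * [10, 5, 20]

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(voters)

# Each voter is assigned to one of four unnamed clusters; labelling
# them (e.g. 'disengaged', 'activist') is left to human analysts.
for cluster_id in range(4):
    members = voters[kmeans.labels_ == cluster_id]
    print(f"cluster {cluster_id}: {len(members)} voters, "
          f"centroid {kmeans.cluster_centers_[cluster_id].round(2)}")
```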

Profiling is defined in the European Union General Data Protection Regulation (GDPR) as: ‘any form of automated processing of personal data … to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements’ (European Union General Data Protection Regulation, 2016, Recital 71). Profiling enables people to be targeted with honed, controlled messages to create adaptive ads; to provide location-based services; or to increase the efficiency and personalisation of marketing messages in the ‘Internet of Everything’ (which brings together things, people, processes and data) (Petrescu et al., 2020). While a regular targeted message does not account for audience heterogeneity, a microtargeted audience receives a message tailored to one or more specific characteristics that the advertiser perceives as making the audience member susceptible to that message (Dobber et al., 2019).

Political marketing with such granular targeting is not inherently bad and could even serve democracy. As noted by the UK’s data regulator, it can better engage electorates and citizens on issues of particular importance to them (Information Commissioners Office, 2018, November 6, p. 18). Where conducted openly and honestly, it can manifest voters’ desires, concerns and policy preferences to politicians, thereby helping elected leaders develop programmes that meet voters’ demands (Perloff, 2018, p. 250). However, critics point to more nefarious practices of profiling and microtargeting messages designed to bypass thoughtful deliberation in favour of emotionalised engagement and deception (as detailed in previous chapters). These are more difficult to guard against as political microtargeting is a form of political communication: as such, it is an exercise of the right to freedom of expression, which is guaranteed by Article 11 of the European Union Charter of Fundamental Rights and Article 10 of the European Convention on Human Rights (ECHR). Furthermore, such microtargeting practices can be highly innovative, as exemplified in 2018 when the Dutch pro-immigrant party DENK microtargeted people who use a special sim card (one used mostly by immigrants to phone abroad), thereby efficiently reaching traditionally hard-to-reach people. To scare its own base into voting, DENK also experimented with fear appeals in the form of a false ad made to look like it came from the anti-immigration Party for Freedom, with the statement that after election day ‘we are going to cleanse the Netherlands’ (Dobber et al., 2019).

Unsurprisingly, data regulators have expressed concerns about voter profiling and microtargeting (Information Commissioners Office, 2018, November 6, 2020, November). In December 2020, reflecting on the situation in the European Union, the European Commission warned:

Existing safeguards to ensure transparency and parity of resources and airtime during election campaigns are not designed for the digital environment. Online campaign tools have added potency by combining personal data and artificial intelligence with psychological profiling and complex micro-targeting techniques. Some of these tools, such as the processing of personal data, are regulated by EU law. But others are currently framed mainly by corporate terms of service, and can also escape national or regional regulation by being deployed from outside the electoral jurisdiction. (European Commission, 2020, December 3, p. 2)

Such developments have generated concepts like the ‘automated public sphere’ (Andrejevic, 2020) and ‘computational politics’ (Chester & Montgomery, 2017). Care should be taken not to overstate the impact of these developments on voting behaviour, as the scholarly field examining the impact of political advertising is divided. For instance, there is a long tradition that finds ‘minimal effects’ of campaign interventions (Berelson et al., 1954; Dobber et al., 2020; Klapper, 1960). Reinforcing these long-standing findings, Kalla and Broockman’s (2018) meta-analysis of field experiments shows that the effects of campaign contact and advertising (mainly via mail, phone calls and canvassing) on candidate choices of Americans in general elections are, on average, zero. However, the meta-analysis cautions that there is less evidence regarding online and television advertising, which are also the areas of largest spend. It also concedes that issue-based persuasion remains possible when campaigns have the resources to identify and target relevant issue cross-pressures. Furthermore, Jacobson’s (2015) review of scholarship on US elections concludes that campaigns do influence voters. More recent studies also find that targeted, data-driven campaigns have some influence on American voters. For instance, in the 2012 US presidential campaign, Republicans influenced Democrats’ voting behaviour when targeting them with issues where they and the Republican candidate shared common ground (such influence was minimal when targeting Democrats with incongruent issue messages or when targeting Republicans with either incongruent or congruent issue messages) (Endres, 2020). A field experiment study of a municipal election in Dallas, Texas, in 2017 finds that individually targeted banner ads generate a modest, statistically significant increase in turnout among Millennial voters in competitive districts (Haenschen & Jennings, 2019).

What cannot yet be ascertained are the direct effects of continuously refined profiling and targeting techniques on unsuspecting populations’ voting behaviour. It would be difficult to find a linear relationship between exposure to political microtargeting and political participation outcomes, as it is difficult to separate microtargeting inputs and outputs from other forms of campaign data and communication (Schäwel et al., 2021). Nonetheless, several studies are instructive. Dobber et al.’s (2020) experiment using a microtargeted deepfake on Dutch respondents finds that political microtargeting can amplify the effects of the deepfake, but for a much smaller portion of their sample than expected. Also of interest is a study of the campaigning tactics that took Jair Bolsonaro, a far-right legislative backbencher, to Brazil’s presidency in 2018. This big data study of Twitter during the 2016 Rio de Janeiro municipal election concludes that Bolsonaro used that election to prepare the communications strategy for his successful, subsequent presidential campaign by testing potential targets and narratives, experimentally disseminating divisive narratives and microtargeting potential voters who shared a common range of diffused values, capturing anti-systemic tendencies and criticising corruption in financial, moral and religious terms (Santini et al., 2021).

Even in parts of the world lacking infrastructure for fixed-line Internet connections, much higher mobile penetration exposes connected populations to datamining, targeting and profiling during election campaigns. For instance, in Africa, fixed-line Internet connections are primarily an urban phenomenon and, in many African countries, lag the rest of the world. Yet Africa leads the world in daily time spent on social media (on average, 3 hours 10 minutes compared to the global average of 2 hours 27 minutes in 2022), largely driven by users in Nigeria, Ghana, South Africa, Egypt, Kenya and Morocco (Kemp, 2022). Profiling and targeting for electoral gain in Africa are concerning given that the continent also suffers from low digital literacy, extensive false information online and poor data privacy regimes. Many countries on the continent are weak democracies with largely unregulated political funding or are governed by autocracies that digitally surveil the political opposition, journalists and activists (Dzisah, 2020; Mare & Matsilele, 2020; Ndlela, 2020; Nothias, 2020). Indeed, an overview of electoral cybersecurity in Commonwealth countries (funded by the UK’s Foreign and Commonwealth Office) concludes that the increase in highly targeted digital advertising, often using data obtained via insecure transmission and brokerage, could disrupt electoral campaigning (Brown et al., 2020, p. 28).

For better or for worse, adoption of datamining, profiling and targeting technologies in political campaigning is accelerating worldwide. The following sections examine these developments in the USA (where many of the globally dominant social media platforms are headquartered), followed by the UK and India (democracies with different data protection regimes and occupying different places on the digital literacy spectrum). In doing so, we outline some key tools and techniques utilised by digital political campaigners in the big data era.

Profiling and Targeting in US Political Campaigning

Although the Fourth Amendment to the US Constitution [1791] upholds the right to privacy, the USA currently lacks any comprehensive privacy framework (unlike Europe). Only the states of California, Virginia and Colorado have comprehensive consumer privacy laws. Instead, privacy protections are embedded in sector-specific laws and regulations, such as the Health Insurance Portability and Accountability Act 1996 (for health-related data) and the Fair Credit Reporting Act 1970 (for credit-related data) (Fukuyama & Grotto, 2020, p. 200). This means that, beyond several state and sectoral limitations, the government has largely left it to online companies to set their own privacy policies, which have evolved into increasingly broad authorisations for the companies to extract data. The government can take action against the companies if they violate their own privacy policies and deceive consumers, but this does not guarantee institutional change (Starr, 2020). This absence of a comprehensive privacy framework helped spawn the profiling technologies and practices of the globally dominant US technology platforms, which are then exploited in each election cycle to microtarget and mobilise voters. This takes place in a wider media context of low levels of trust in mainstream media, polarisation of mainstream media and a weakening of journalism (including local news deserts) created by digital platforms, leading the USA to be ranked only 42nd (out of 180 countries) on press freedom in 2022 (Reporters without Borders, 2022b).

Compared to traditional advertising companies that only track user browsing behaviours via opaque cookies, social media platforms access much richer data sources. For instance, they know users’ personally identifiable information and often allow advertisers to target users based on this (Andreou et al., 2018). All Facebook users have some 200 ‘traits’ attached to their profile. These include dimensions submitted by users or estimated by machine learning models, such as race, political and religious leaning, socio-economic class and education level (Hao, 2021). To reconcile conflicting goals of protecting the privacy of users’ personal information but also profiting from microtargeted advertising, in 2007 Facebook implemented a targeted online advertising system that provides a layer between individual user data and advertisers. The advertising system collects from advertisers the ads they want to display and their targeting criteria and then delivers the ads to people fitting those criteria. Rather than ‘selling’ information about their users, the business model is to sell space to advertisers, giving them access to people based on their demographics and interests (Facebook, 2007, November 6; Korolova, 2010). Why a user received a particular ad is therefore the result of a complex process depending upon many inputs including: what the platform thinks the user is interested in; characteristics of users the advertiser wants to reach; the set of advertisers and parameters of their campaigns; the bid prices of all advertisers; active users on the platform at a particular time; and the algorithm used to match ads to users (Andreou et al., 2018).
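The layered delivery model just described can be sketched schematically. The following hypothetical Python fragment (not Facebook’s actual code; every field name and value is invented) shows the core logic: advertisers submit ads, targeting criteria and bids, while the platform alone holds user traits and selects which eligible ad a given user sees.

```python
# Schematic sketch of layered ad delivery: advertisers never see user
# data; they submit ads with criteria and bids, and the platform
# matches users to ads internally. All names/values are hypothetical.
# (Requires Python 3.10+ for the `Ad | None` annotation.)
from dataclasses import dataclass, field

@dataclass
class Ad:
    sponsor: str
    bid: float                                    # price offered per impression
    criteria: dict = field(default_factory=dict)  # e.g. {"region": "FL"}

@dataclass
class UserProfile:
    traits: dict   # platform-inferred traits, never shown to advertisers

def eligible(ad: Ad, user: UserProfile) -> bool:
    # An ad is eligible if the user matches every targeting criterion.
    return all(user.traits.get(k) == v for k, v in ad.criteria.items())

def select_ad(user: UserProfile, ads: list[Ad]) -> Ad | None:
    # Simplification: among eligible ads, the highest bid wins.
    candidates = [ad for ad in ads if eligible(ad, user)]
    return max(candidates, key=lambda ad: ad.bid, default=None)

ads = [Ad("campaign_a", 0.50, {"region": "FL", "interest": "immigration"}),
       Ad("campaign_b", 0.30, {"region": "FL"})]
user = UserProfile({"region": "FL", "interest": "immigration"})
print(select_ad(user, ads).sponsor)   # campaign_a: eligible and highest bid
```

In reality, the matching step also weighs predicted engagement, auction mechanics and who happens to be online at a given moment, which is why, as Andreou et al. (2018) note, the reasons any one user saw any one ad are so hard to reconstruct.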

Given these legal and platform affordances, it is unsurprising that intensive datamining in political elections is well documented in the USA, with each election cycle adopting technological innovations to microtarget and mobilise voters (Stromer-Galley, 2014). Political parties and aligned political consultancies maintain political technologies (such as canvassing applications) and databases that candidates use, and electoral campaigns have many potential data sources (Kreiss, 2016). As well as their lists of donors and voter rolls (provided by local or state offices, typically containing each voter’s party registration and electoral voting history), campaigns can rent lists from other candidates (Edelson et al., 2019). From the 1960s to 2004, campaigns targeted broad demographic groups (such as gender-based) by purchasing television spots (e.g. daytime spots for female voters) (Fowler et al., 2016). Big data and digital targeting in political campaigns were first utilised at scale in the 2008 US presidential election (Barack Obama v. John McCain) to work out voter sentiments, target key market segments and design messages to mobilise voters in core electoral areas (Kreiss, 2016; Owen, 2014; Tufekci, 2014). Since 2012, digital platforms have advertised their wares to politicians, teaching candidates how to use their platforms during elections to reach new voters using data such as demographics, behaviour, interest and attention measures that represent the public in new ways, and facilitating digital advertising buys (Kreiss, 2016; Kreiss & McGregor, 2018). The amount spent on US digital political advertising increased significantly from $159 million in 2012 to $2847 million in 2020 (Statista, 2021). Edelson et al.’s (2019) analysis of over 1.3 million ads with political content from over 24,000 sponsors archived by Facebook, Twitter and Google in the USA (coinciding with the 2018 US midterm elections) finds that most political ads cost less than $100, confirming the prevalence of small, likely highly targeted, ads that can contain custom political messaging. They also find a significant amount of advertising by quasi for-profit media companies that appear to exist solely to create deceptive, online astroturf communities to target different demographics and interests via paid and organic political messaging. These arise because regulations that require disclosure of the business that paid for an ad on broadcast stations or via direct mail do not apply to online advertising, largely because the laws mandating such disclosures were drafted before these platforms were ubiquitous.

Across the past decade, then, a complex, opaque digital marketing ecosystem has emerged, encompassing data brokers and data analytics companies alongside the usual professional persuaders. This has enabled the rise of influence activities in digital political campaigning. The targeting tools discussed below comprise those offered by social media platforms; those using social media platforms’ affordances; and bespoke campaign mobile phone apps that bypass social media platforms. This list is far from exhaustive.

Social Media Platforms: Targeting Tools

Social media platforms offer many forms of targeting, and these are utilised by political campaigns. For instance, ‘A/B’ testing is used by social media companies to rapidly model users’ attention and behaviour and interactively nudge them. It compares two versions of a single variable, typically by testing a subject’s response to variant A against variant B and determining which is more effective. Although an old technique, the past decade has seen an exponential increase in the deployment of rapid A/B testing using AI. In the 2012 presidential election (Barack Obama v. Mitt Romney), Obama’s digital team ran 500 A/B tests on their web pages (Formisimo, 2016). By the 2016 US presidential election (Donald Trump v. Hillary Clinton), Trump’s digital team was testing around 50,000–60,000 ad variations a day (Beckett, 2017, October 9). According to a report by Demos (a British cross-party, independent think tank), this utilised Facebook’s Dynamic Creative tool, which uses predefined design features to construct thousands of ad variations, presents them to users and finds optimal combinations based on engagement metrics (Bartlett et al., 2018, p. 33).
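The statistical core of a single A/B comparison is simple, as the following sketch shows: compare the click-through rates of two hypothetical ad variants with a standard two-proportion z-test. Campaign tooling automates thousands of such comparisons; all counts below are invented.

```python
# Minimal A/B test sketch: is variant B's click-through rate (CTR)
# significantly higher than variant A's? All counts are hypothetical.
from math import sqrt
from statistics import NormalDist

clicks_a, impressions_a = 540, 10_000   # ad variant A
clicks_b, impressions_b = 610, 10_000   # ad variant B

p_a, p_b = clicks_a / impressions_a, clicks_b / impressions_b
p_pool = (clicks_a + clicks_b) / (impressions_a + impressions_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / impressions_a + 1 / impressions_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided test

print(f"CTR A={p_a:.2%}, CTR B={p_b:.2%}, z={z:.2f}, p={p_value:.4f}")
# A campaign would keep the winning variant and iterate again.
```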

A second important digital marketing tool is targeted advertising. Launched in 2012, Facebook’s ‘Custom Audiences’ product enables marketers to upload their own data files (using personally identifiable information that they hold about their own customers, such as email addresses and names) which can be matched to specific Facebook users (Andreou et al., 2018; Chester & Montgomery, 2017; Martínez, 2018, February 23). In January 2016, Facebook introduced the audience optimisation tool, which allows marketers and advertisers to set preferences to target specific audiences based on a broad range of demographic data, but also interests, languages spoken, relationship status, work status, place of employment, ‘ethnic affinity’, life events, Facebook connections, tracked behaviours online, politics, likelihood to engage with political content and ideology (Kreiss & McGregor, 2019). Facebook has also allowed advertisers to use provocative targeting criteria, such as ‘interested in “pseudoscience”’, thereby grouping users by their vulnerabilities (Angwin, 2020, April 25). As Chap. 8 documents, it was not until 2022 that Facebook’s parent company, Meta, took steps to prevent advertisers targeting people based on how interested Facebook thinks they are in ‘sensitive’ topics including political affiliation (Bond, 2021, November 9). Such targeted advertising has been used to try to dissuade target groups from voting. For instance, to dissuade people from voting for Hillary Clinton, the 2016 Trump campaign targeted families of immigrants from Haiti living in South Florida to remind them that her husband, former US president Bill Clinton, had failed to sufficiently aid Haiti as president and as head of a relief effort after a major earthquake in 2010 (Vaidhyanathan, 2018, p. 171).
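List-matching products of this kind are generally understood to operate on normalised, hashed identifiers, so that raw email addresses need not change hands. The sketch below illustrates that general approach with invented data; it is not Facebook’s actual implementation.

```python
# Sketch of hashed list matching for a custom audience: the advertiser
# uploads hashed emails; the platform compares them against hashes of
# its own users' emails. All addresses and IDs are hypothetical.
import hashlib

def normalise_and_hash(email: str) -> str:
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

advertiser_list = ["Alice@example.com ", "bob@example.com"]   # advertiser's customers
platform_users = {normalise_and_hash(e): uid                  # platform's own records
                  for uid, e in [(101, "alice@example.com"),
                                 (102, "carol@example.com")]}

audience = [platform_users[h]
            for h in map(normalise_and_hash, advertiser_list)
            if h in platform_users]
print(audience)   # [101] -- only Alice matches and becomes targetable
```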

A third important digital marketing tool is lookalike modelling. This uses big data analytics to acquire information about individuals without directly observing their behaviour or obtaining consent (Chester & Montgomery, 2017). Facebook offers various lookalike modelling tools through its ‘Lookalike Audiences’ ad platform which allows advertisers to reach new people on Facebook who are likely to be interested in their business, or political candidate, because they are similar to existing audiences (Bartlett et al., 2018, p. 10). Antonio García Martínez, the original product manager for Facebook’s Custom Audiences, describes ‘Lookalike Audiences’ as ‘the most unknown, poorly understood, and yet powerful weapon in the Facebook ads arsenal’ (Martínez, 2018, February 23). Up until 2020, both Google and Twitter offered political or cause-based advertisers similar targeting criteria to Facebook, including custom audiences and lookalike audiences (Edelson et al., 2019; Hotham, 2021). More broadly, political digital marketing firms offer lookalike modelling to identify potential supporters and voters, by matching millions of voters to hundreds of data points to create detailed voter profiles.
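The general idea behind lookalike expansion can be sketched as a similarity search: score the wider user base against a ‘seed’ audience of known supporters and keep the closest matches. The feature vectors, similarity measure and audience sizes below are hypothetical simplifications, not any platform’s actual model.

```python
# Minimal lookalike-modelling sketch: rank candidate users by cosine
# similarity to the centroid of a seed audience. Data is random and
# purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
seed_audience = rng.random((200, 8))   # trait vectors of known supporters
candidates = rng.random((5000, 8))     # trait vectors of the wider user base

centroid = seed_audience.mean(axis=0)  # 'typical supporter' profile

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = np.array([cosine(c, centroid) for c in candidates])
lookalikes = np.argsort(scores)[::-1][:500]   # top 500 most similar users
print(f"best match similarity: {scores[lookalikes[0]]:.3f}")
```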

Psychographic and Neuromarketing Tools

Political campaigners also use the affordances of social media platforms to deploy automated psychographic and neuromarketing tools. Psychographics, emotional testing and mood measurement have long been central to political campaigns (Jamieson, 1996) to understand voter values, attitudes, motivations, interests, opinions and lifestyles, but the rise of big data analysis and modelling enables access to psychological characteristics and political inferences beyond the reach of traditional databases (Bakir, 2020; Tufekci, 2014).

For instance, research by controversial psychologist and business academic Michal Kosinski finds that Facebook ‘Likes’ (a fraction of the data available to data brokers) may accurately predict highly sensitive personal attributes, including political party affiliation, religious and political views, sexual orientation, ethnicity, intelligence, happiness, use of addictive substances, parental separation, age and gender (Kosinski et al., 2013). The same research also claims to predict the ‘Big Five’ (also called OCEAN) personality traits, which have widespread acceptance among personality researchers: Openness to experiences, Conscientiousness, Extroversion, Agreeableness and Neuroticism (Gosling et al., 2003). A meta-analysis of this young field also finds that digital footprints may be used to predict these ‘Big Five’ personality traits of social media users, and that prediction accuracy for each trait is stronger when more than one type of digital footprint is analysed (Azucar et al., 2018, p. 157). Scholars disagree about the effectiveness of psychological targeting on Facebook. Some argue that it is so effective that its use should be regulated (Matz et al., 2017, 2018a, b), while others remain unconvinced (Eckles et al., 2018; Sharp et al., 2018).
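The kind of model behind these prediction studies can be illustrated with a minimal sketch: a linear classifier over a user-by-‘Like’ matrix predicting a trait label. Because the data below is randomly generated (with the trait synthesised from the Likes), the printed accuracy illustrates only the method, not real-world predictive power.

```python
# Sketch of trait prediction from 'Likes': logistic regression over a
# binary user-by-Like matrix. All data is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_users, n_likes = 2000, 300
likes = rng.integers(0, 2, size=(n_users, n_likes))   # 1 = user Liked page j

# Synthetic ground truth: a trait loosely driven by Like patterns.
weights = rng.normal(size=n_likes)
extrovert = (likes @ weights + rng.normal(size=n_users) * 5) > 0

X_train, X_test, y_train, y_test = train_test_split(
    likes, extrovert, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.1%}")
```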

Although scholarship disagrees about its effectiveness, some political marketing companies have been quick to deploy this tool. Indeed, it was research such as that by Kosinski et al. (2013) on use of Facebook ‘Likes’ to predict psychological characteristics and political inferences that attracted the attention of political data analytics and behaviour change company, Cambridge Analytica (Federal Trade Commission, 2019b, p. 3). Cambridge Analytica has since sent out mixed messages on whether it used this data for its psychographic profiling in the 2016 Trump presidential campaign (Bakir, 2020). Furthermore, the UK’s data regulator, the Information Commissioner’s Office (ICO), observes (after investigating Cambridge Analytica and parent company SCL) that the real-world accuracy of its algorithmic predictions ‘was likely much lower’ than the company claimed (Denham, 2020, October 2, p. 17).

Whether or not psychographics was used, or was effective, privacy violations led to the collapse of Cambridge Analytica and its parent companies, SCL Elections and SCL Group. They went into administration in May 2018, after public allegations made by whistleblower Christopher Wylie that Cambridge Analytica had exploited the personal data of Facebook users (Wylie, 2018, p. 14). Following its collapse, in July 2019, as well as levying a record US$5 billion civil penalty against Facebook for failing to protect users’ privacy, the US Federal Trade Commission filed an administrative complaint against Cambridge Analytica LLC (the US arm of the company) for deceptive harvesting of personal information from tens of millions of Facebook users for voter profiling and targeting. This personal information had been collected in 2014 from users of a Facebook app (the ‘GSRApp’, developed by Aleksandr Kogan). It had exploited Facebook’s now notorious (and, since 2015, ended) data portal (‘Friends API’) that enabled app developers to access not only users’ data but also that of users’ friends. The information comprised users’ Facebook User ID, which connects individuals to their Facebook profiles, as well as other personal information such as gender, birthdate, location and Facebook friends list (Federal Trade Commission, 2019a, July 24; Wylie, 2019, pp. 112–132). In April 2018, Facebook revealed that the maximum number of unique accounts that directly installed the GSRApp, as well as those whose data may have been shared with the app by their friends, comprised 70,632,350 in the USA, 1,175,870 in the Philippines, 1,096,666 in Indonesia, 1,079,031 in the UK, 789,880 in Mexico, 622,161 in Canada, 562,455 in India, 443,117 in Brazil, 427,446 in Vietnam and 311,127 in Australia (Schroepfer, 2018, April 4).

Describing how such data is put to work in political campaigns for deceptive and emotional manipulation, whistleblower Wylie (2019, p. 121) observes that in the USA, across summer 2014, Cambridge Analytica began developing fake pages on Facebook that looked like real forums, groups and news sources. When users joined these fake groups, Cambridge Analytica would post videos and articles to further provoke them. Cambridge Analytica now had users who self-identified as part of an extreme group and could be manipulated with data. The company did not target that many people as most elections are zero-sum games and it needed ‘to infect only a narrow sliver of the population, and then it could watch the narrative spread’ (Wylie, 2019, p. 122). Once a group reached a certain number of members, Cambridge Analytica would set up physical events across the USA, where people could find a fellowship of anger and paranoia, allowing them to feel part of a broader movement and reinforce each other’s conspiracies. Invitees were selected because of their traits, so Cambridge Analytica knew, generally, how they would react to one another. Once a county-based group started self-organising, they were introduced to a similar group in the next county, creating ‘a statewide movement of neurotic, conspiratorial citizens. The alt-right’ (Wylie, 2019, p. 123). Those targeted online with test ads had their social profiles matched to their voting records, so Cambridge Analytica knew their names and real-world identities. It then used numbers on the engagement rates of these ads to explore potential impact on voter turnout.

Campaign Mobile Phone Apps

Alongside social media platform targeting tools and psychographic and neuromarketing tools, a third, more recent type of targeting tool is the bespoke campaign mobile phone app. Such apps assumed increased importance as digital marketing tools in the 2020 US presidential election (Joe Biden v. Donald Trump). By 2019, 81% of people in the USA were equipped with a smartphone, almost double the global average of 45% (Taylor & Silver, 2019, February 5). By the 2020 US presidential race, each campaign had a bespoke mobile phone app to target likely voters and to collect massive amounts of user data without needing to rely on social media platforms or expose itself to fact-checker oversight of deceptive messaging.

Trump’s app (‘The Official Trump 2020 App’), developed by Phunware, offers carefully selected tweets and articles that reinforce the campaign’s talking points, often propagating deceptive information without a named author and rarely citing sources beyond government press releases and tweets from Trump’s supporters and White House staff. Like the Trump campaign app, Biden’s ‘Team Joe App’ sends users notifications of upcoming campaign events or training sessions for digital activists. Unlike the Trump app, the Team Joe App was built in-house (to protect users’ privacy) and largely serves a single purpose: relational organising, where volunteers leverage their existing networks and relationships to support Biden. If app users share their contact list, this is cross-referenced with the Democratic Party’s voter files; the system identifies people the app user may have a personal connection with who might be persuaded to vote for Biden; and it prompts the app user to send these potentially undecided voters personalised messages (Gursky & Woolley, 2020).
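A schematic sketch of this relational-organising flow follows; the records, scores and threshold are entirely hypothetical, not the Team Joe App’s actual code or data.

```python
# Sketch of relational organising: match a volunteer's phone contacts
# against a voter file and surface 'persuadable' contacts for outreach.
# All records and modelling scores below are hypothetical.
volunteer_contacts = [
    {"name": "Dana", "phone": "+15550001"},
    {"name": "Eli",  "phone": "+15550002"},
]
voter_file = {  # hypothetical party modelling scores per phone number
    "+15550001": {"support_score": 0.48, "turnout_score": 0.70},
    "+15550003": {"support_score": 0.95, "turnout_score": 0.90},
}

def persuadable(record: dict, low: float = 0.35, high: float = 0.65) -> bool:
    # Mid-range support scores are treated as 'undecided'.
    return low <= record["support_score"] <= high

prompts = [c["name"] for c in volunteer_contacts
           if (rec := voter_file.get(c["phone"])) and persuadable(rec)]
print(prompts)   # ['Dana'] -- the app would prompt a personalised message
```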

Both apps ask users to give the campaigns access to their phone contacts. The campaigns do not ask those contacts for permission to use that information and, in the USA, are not legally required to. Beyond users’ contacts, the Trump campaign app also seeks permission to access a far more extensive list of data to enable profiling and targeting, drawing comparisons to Cambridge Analytica (Gursky & Woolley, 2020). According to a former executive at Phunware, the data collected from Trump’s app can be poured into an information ecosystem designed to replace the Facebook features that made the 2016 Cambridge Analytica scandal possible (Kates, 2020, July 18).

The USA, then, with its weak privacy laws and long history of electoral datamining, is a global leader in developing profiling and targeting tools and applying them to political campaigns. Most of its population dislikes this situation. A poll conducted by Knight Foundation-Gallup across 3–15 December 2019 finds that 72% of Americans say that Internet companies should make no information about their users available to political campaigns for targeting voters with online ads. Only 20% of US adults favour allowing campaigns access to limited, broad details about Internet users, such as their gender, age or postal code; this is in line with Google’s policy, which, in 2019, reined in the scope of information that political campaigns could use for targeting. Only 7% of Americans say that any information should be made available for a campaign’s use; this is in line with Facebook’s targeting policies, which up until January 2022 did not put any such limits on ad targeting (although Facebook does give users some control over how many ads they see) (Bond, 2021, November 9; McCarthy, 2020, March 2).

Profiling and Targeting in UK Political Campaigning

Unlike the USA, the UK ranks fairly highly on press freedom, coming 24th out of 180 countries in 2022 (Reporters without Borders, 2022c), with a well-funded and regulated broadcasting sector and over 50% of the population trusting broadcast news, local news and regional news in 2022 (Newman, 2022). Furthermore, unlike the USA, the UK (as part of the European Union) was protected by comprehensive privacy legislation (the European Union General Data Protection Regulation (GDPR) 2016) and had much stronger data protection laws. Post-‘Brexit’, the UK GDPR came into effect on 1 January 2021, based on the European Union GDPR, with some changes to make it work more effectively in a British context. The GDPR offers data protections on consent (personal data cannot be processed without freely given, specific, informed and unambiguous consent, unless allowed by law); time limits on how long personal data can be kept; and profiling (the data subject has the right not to be subject to a decision based on automated processing, while profiling to analyse or predict behaviours or preferences is legally regulated) (European Union General Data Protection Regulation, 2016/679, Recital 71).

Consequently, compared to the USA, British political parties have far less access to the types of data required to target voters. For instance, many American states have an electoral register that identifies voters by partisan preference, but the UK does not. Nonetheless, digital campaigning has risen sharply in the UK across the second decade of the twenty-first century. The proportion of money that British political campaigners reported spending on digital advertising, as a percentage of their total advertising spend, rose from 2% in 2014 to 42% in 2017 (The Electoral Commission, 2019a), not least because while paid political advertising in broadcasting is prohibited under the Communications Act 2003, the ban does not apply online (Dobber et al., 2019). Indeed, online political advertising in the UK has been characterised as a ‘Wild West’ due to its lack of transparency, deficiencies in monitoring by regulators and civil society, and lack of deterrence for election offences (All Party Parliamentary Group on Electoral Campaigning Transparency, 2020, January).

British digital campaigning has also seen increasing use of data analytics and data management approaches to profile and thereby identify target audiences, including ‘persuadables’ and swing voters. The extent of targeting appears to differ by party. For instance, in the 2015 UK General Election (won by the Conservatives), only the Conservatives seem to have adopted the US model of individual-level targeting (Labour used broader segment-based targeting). The Conservative Party targeted seats based on what the party knew about the types of voters living there, their propensity to swing their vote, their reactions to certain messages and other seat-specific factors (Anstead, 2017). While partisans are unlikely to change their views based on ads, it only takes a small number of ‘persuadables’ to swing close elections. According to Dominic Cummings (campaign strategist for ‘Vote Leave’, the official campaign to leave the European Union in the ‘Brexit’ Referendum), the Referendum result of 52% for Leave and 48% for Remain came down to only ‘about 600,000 people’ (Cummings, 2017, January 30). According to a report by the UK’s data regulator, and a report funded by the UK Foreign and Commonwealth Office, Vote Leave relied heavily on data scientists, using the data management services of Aggregate IQ (a Canadian digital advertising web and software development company). One of Aggregate IQ’s roles was to accumulate data on individuals in order to build and apply predictive models, and to serve heavily targeted messages to the most easily influenced individuals (Brown et al., 2020, p. 46; Denham, 2020, October 2, p. 10). Cummings states that Vote Leave spent 98% of its budget on digital advertising (rather than mainstream media advertising), with most spent on ads that experiments had demonstrated were effective (Cummings, 2017, January 30). The core messages were highly emotive and deceptive, conveying that staying in the European Union would lead to swarms of Middle Eastern immigrants; that Britain could only ‘take back control’ by leaving; and that strained, cherished national resources like the National Health Service would be better financed if Britain left. Cummings estimates that Vote Leave ran around one billion targeted ads before the vote, mostly via Facebook, sending out multiple different versions of messages and testing them in interactive feedback loops (Cummings, 2016, October 29). Additionally, having identified from focus groups that crucial swing voters were confused, and liable to change their voting decision based on whether they had last seen a message from either side of the campaign, Vote Leave ensured that its ads were delivered to swing voters as late as possible in the campaign (Cummings, 2017, January 30; Howard, 2018, November 30).

This growing importance of data brokers (who collect and aggregate data) is noted with concern by the UK’s data regulator, the Information Commissioners Office (2018, November 6). In 2019 the regulator conducted its first data protection audit of seven British political parties to assess compliance with data protection law. It finds that all parties typically obtained data from the full electoral register; the marked register (a copy of the electoral register that has a mark by the name of each elector who has voted); directly from individuals, usually by asking them, but also by collecting information electors place in the public domain about their political views; and publicly available data and other datasets such as census, election results, Land Registry and polling data (Information Commissioners Office, 2020, November, p. 10). Additionally, the three main political parties (Labour, Conservative and Liberal Democrats) obtained lifestyle-type information on individuals from data brokers under commercial agreements (Information Commissioners Office, 2020, November, p. 10). The audit finds that political parties analysed and profiled this data to derive further data, such as the likelihood of individuals voting a certain way and their likelihood of turning out to vote. Parties then used their datasets and analysis to inform the purchase of ads on social media targeting individual social media users; to send targeted emails or make telephone canvassing calls encouraging individuals to vote or to change their voting behaviour; and to decide who to canvass on doorsteps. The Information Commissioner’s Office concludes that ‘there are systemic vulnerabilities in our democratic systems’ (Denham, 2020, October 2, p. 1) and finds only a limited level of assurance that procedures are delivering the necessary data protection compliance (Information Commissioners Office, 2020, November).

Despite this accelerated move towards profiling and microtargeting voters, there are few empirical studies on these practices in the UK, and findings are mixed regarding accuracy and prevalence. One study by Open Rights Group (a UK-based digital campaigning organisation working to protect people’s rights to privacy and free speech online) suggests that the current state of political profiling does not seem particularly accurate (Crowe et al., 2020, June 23, p. 9). A study on Facebook ads during the 2017 General Election campaign (11,421 participants exposed to 783 unique Facebook political ads) finds that rather than evidence of segmentation, messages adhere closely to national campaign narratives (Anstead et al., 2018). Targeted advertising in British general elections tends to draw on well-honed national messages deployed to reach voters who are likely to be most receptive to them and deemed electorally significant (Anstead, 2017). However, a case study of Leave.EU’s campaign (one of the unofficial ‘Leave’ campaign groups) in the ‘Brexit’ referendum points to evidence that Leave.EU’s founder, Arron Banks, used actuaries from his insurance company to copy Cambridge Analytica’s modelling (provided in Cambridge Analytica’s pitch for business to Leave.EU and its initial scoping work) in order to identify the 12 areas of the UK most concerned about the European Union, and to target them with in-person visits from Nigel Farage. Farage, then leader of the UK Independence Party (UKIP), a party that had long campaigned to leave the European Union, was regarded as vital to turning out voters who had never voted before but were passionate about leaving the European Union because of immigration concerns (Bakir, 2020).

More field studies on the practices of profiling and microtargeting are needed, but the growing prominence of analytics companies is concerning, especially regarding the transparency of their activities to the data regulator, the electoral regulator and citizens. The UK’s 2016 ‘Brexit’ referendum saw ‘dark ads’ (online ads only seen by the recipient) being discussed in public for the first time, but three years later, by the time of the 2019 General Election, many were still unaware of these techniques. YouGov survey research commissioned by Open Rights Group shows that although 54% of the British population were aware of how political parties target or tailor ads based on analysis of their personal data (political microtargeting), almost a third (31%) were not aware at all or not very aware. Only 44% of the national sample were very or fairly aware of ‘dark ads’, with a similar figure (41%) not very aware or not at all aware. That there is still relatively low awareness after several years of public discourse on this issue is alarming: it shows that a significant proportion of the electorate are unaware of how parties may try to manipulate them. The survey finds that a majority (58%) said they were against the targeting or tailoring of ads based on analysis of people’s personal data to segment them into groups during elections (Open Rights Group, 2020, January 10). Furthermore, research into campaigning during the 2019 UK General Election finds that three quarters of people said that it was important for them to know who produced the political information they see online, but less than a third knew how to find out who produced it. Almost half (46%) were concerned about why and how political advertising was targeted at them (The Electoral Commission, 2019b).

Profiling and Targeting in India’s Political Campaigning

India, the world’s biggest democracy (with a population of 1.393 billion in 2022), provides a context of rapidly expanding access to digital services, but an inadequate data protection regime. It also ranks poorly in the world press freedom index (150th out of 180 countries in 2022) given its politically partisan media, its concentration of print news and television media ownership, and its violence against, and harassment of, journalists who are critical of the government (Reporters without Borders, 2022a). Political parties can exploit these features when campaigning.

With Internet penetration at 54% in 2022 (Krishnan, 2022), India suffers from a ‘digital divide’ compared to the USA and UK, but this is rapidly changing. India’s 2011 census report reveals that only 19% of Dalits (one of India’s most marginalised castes) had access to water, but 52% of the community owned a phone (61% in urban areas and 42% in rural areas). From December 2016 to July 2017, the number of mobile phone Internet users in India rose rapidly from 389 million to 420 million, fuelled by a decrease in data rates after a price war between the Reliance-owned Jio network (a new entrant in India’s telecom market) and other telecom companies (Gowhar, 2018). In terms of daily time spent using the Internet on mobiles, by 2022, India (at 4 hours 5 minutes) was ahead of the world average (of 3 hours 43 minutes) (Kemp, 2022). By the 2019 General Election, nearly half of India’s 900 million eligible voters had access to the Internet and social media, and there were 300 million Facebook users (Naumann et al., 2019). By 2021, India had 410 million Facebook users, 440 million YouTube users and 530 million WhatsApp users (Ministry of Electronics & IT, 2021). Over half of India’s English-speaking, online news users used Google-owned YouTube (53%) and Meta-owned WhatsApp (51%) for accessing news in 2022 (Krishnan, 2022). The changes in the top ten free apps in the Play Store across 2017–2018 also reflect the growing influence of regional language social media applications that are more effective at targeting local populations. For instance, Facebook and Messenger were replaced in 2018 by more vernacular language apps such as ShareChat and Helo, which operate in up to 15 different languages (mainly Hindi, Tamil and Telugu) and which target the 100–150 million mobile Internet users in rural India and in tier 2 and 3 cities populated by Indian language speakers (Naumann et al., 2019).

India’s data protection regime is inadequate to deal with this rapidly expanding access to digital services. India’s Personal Data Protection Bill was not introduced until 2019 and, at the time of writing (Spring 2022), is still not enshrined in legislation; neither does India have a national regulatory authority for personal data protection. In the meantime, India’s Information Technology Act (2000) gives a right to compensation for improper disclosure of personal information. Furthermore, the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules 2011 imposed extra requirements on commercial entities in India relating to the collection and disclosure of sensitive personal data, with some similarities to the GDPR. For instance, a body corporate collecting sensitive personal data should keep the data provider informed of the fact that the data is being collected, the purposes of collection, the intended recipients, and the contact details of the agency collecting and retaining the data (Linklaters, 2020). In terms of protecting elections, pre-certification of social media content was mandated by India’s Electoral Commission in the 2014 General Election: ads had to be certified as falling within the boundaries of permissible electoral speech, not appealing to caste or religious identity, and not promoting hate speech or bribery. The Electoral Commission also has a Model Code of Conduct to promote good conduct, but this lacks enforceability (Naumann et al., 2019). In 2019, the Electoral Commission issued social media guidelines for campaigning, and Internet companies voluntarily adopted a code of ethics for online campaigning (Rao, 2019).

Given India’s growing access to smartphones alongside the absence of a robust data protection regime, it is unsurprising that across the past decade India’s successful politicians have turned to data-driven campaigning techniques to target electorates. Hindu nationalist and populist Narendra Modi, leader of the Bharatiya Janata Party (BJP), son of a tea-seller and one of a handful of lower-caste politicians to reach the upper echelons of power, was the second most popular politician on Facebook with over 18 million fans (after then US President Obama with over 41 million fans) (Barclay et al., 2015; Shackle, 2018, July 16). According to a report from Tactical Tech (an international non-governmental organisation that engages with citizens and civil society to explore and mitigate the impacts of technology on society), in the 2014 national elections the BJP was among the first of India’s political parties to employ data-driven campaigning techniques, winning a landslide victory. Its techniques included sending global positioning system-enabled video trucks to villages in the most populous and politically weighty state of Uttar Pradesh to ensure digital outreach in remote areas; using 3D hologram technology to hold 1350 3D rallies across India at state and constituency level; and leveraging social media to ensure outreach and engagement prior to rallies, and electoral data and on-the-ground reports to inform each rally speech with local context (Hickok, 2018).

India’s political campaigners claim that they can microtarget India’s citizens, although, given the uneven nature of digital penetration in India, this requires much fieldwork to generate reliable data streams. For instance, in the 2017 Uttar Pradesh state election, through 45,000 telephone calls a day and multiple field visits to all 403 seats in the state, voters’ details including caste, voting pattern and preferred chief minister were fed into a database for the ruling Samajwadi Party. This enabled its candidates to download an app showing voter preferences down to the level of individual booths and along caste, gender and literacy lines. The field visits were necessary because telephone numbers are not always active for long in poorer areas, where it is cheaper to buy a new sim card pre-loaded with call credits than to buy extra credit. The field visits returned not just detailed voter lists but also relationships with local influencers, such as village chiefs, postal workers and teachers, who help report popular sentiment or convey new telephone numbers. According to the Samajwadi Party campaign in Uttar Pradesh, candidates can microtarget messages they know appeal to, for example, young, college-graduate, Muslim women in booths that skew towards those demographics, and can know to call a particular influential member of a village whose support is wavering (Safi, 2017, February 16).
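The booth-level lookup reportedly offered by such an app can be illustrated schematically. The sketch below filters a field-collected voter database by booth and demographic criteria; all records, field names and values are hypothetical.

```python
# Sketch of booth-level microtargeting: filter a voter database down to
# a demographic segment within one polling booth. Data is hypothetical.
voters = [
    {"booth": 12, "age": 22, "gender": "F", "education": "graduate"},
    {"booth": 12, "age": 54, "gender": "M", "education": "secondary"},
    {"booth": 13, "age": 25, "gender": "F", "education": "graduate"},
]

def segment(records: list[dict], **criteria) -> list[dict]:
    # Keep records matching every supplied criterion.
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

target = segment(voters, booth=12, gender="F", education="graduate")
print(len(target))   # size of the segment a booth-level message would reach
```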

The need for a robust data protection regime in India has been repeatedly highlighted by data exploitation in political campaigning. For instance, in early 2017, ahead of state elections in Uttar Pradesh, the ruling party (BJP) used WhatsApp massively for mobilisation, coordination and voter outreach, forming 10,344 WhatsApp groups to coordinate and circulate media among party workers (Gowhar, 2018). However, as elsewhere, social media not only mobilises but spreads disinformation, stokes communal tension and silences dissent. For instance, on 22 April 2018, a fake tweet began circulating under the name of Rana Ayyub, an Indian political and investigative journalist critical of Modi and of the role he allegedly played in anti-Muslim riots while chief minister of Gujarat. The fake tweet expressed support for child rapists and was shared tens of thousands of times, including by BJP legislators. According to the Centre for International Governance Innovation (a Canadian-based, independent, non-partisan think tank on global governance), on 23 April, another false tweet appeared under Ayyub’s name, saying ‘I hate India and Indians’ (Shackle, 2018, July 16). That evening, a deepfake pornographic video with Ayyub’s face morphed onto another woman’s body circulated on WhatsApp groups of the BJP and the Rashtriya Swayamsevak Sangh (an Indian right-wing, Hindu nationalist, paramilitary volunteer organisation) and was made public (European Science Data Hub, 2019, December 4; Shackle, 2018, July 16). Across 2021, social media campaigns from far-right Hindu nationalist activists fomented hatred and called for the murder of Ayyub, with her personal data posted online (Reporters without Borders, 2022a). As well as this gendered disinformation, the 2019 General Election saw a broader spike in online rumours, fake news and polarising content on social media, including on vernacular language apps such as ShareChat and Helo as well as Facebook (Naumann et al., 2019; Krishnan, 2022). As many social media app users in India are first-time Internet users, they may lack the digital literacy skills to spot disinformation, especially as shared content comes from someone known, producing a tendency to trust the source (Gowhar, 2018).

As well as exploiting WhatsApp, Modi launched his NaMo mobile phone app in 2015 to engage supporters (Kazmin, 2018, March 28). The app has no visible content moderation and propagates polarising posts based on fictitious data about the religion of criminals and about voter turnout. The app’s news feed also promotes posts from accounts that share regular political updates on the prime minister’s app and whose Facebook pages openly circulate fake news. The promotion of such accounts on the NaMo app makes its millions of users vulnerable to disinformation (Bansal, 2019, January 27). Pushed via official government channels, and pre-installed on low-cost Jio mobile phones, it has become one of the most widely used politicians’ apps in the world, with over ten million downloads in the Google Play Store. In late 2019 the NaMo app received a makeover that included live events, Instagram-like ‘Stories’ about Modi, gamified engagement strategies, means of accepting micro-donations and promises of a direct line to the prime minister. Also of note is that digital campaigning techniques and practices do not only flow from the USA outwards; they also travel in the opposite direction. For instance, Gursky and Woolley (2020) suggest that The Official Trump 2020 App copied Modi’s tactics.

Modi’s NaMo app also collected large amounts of data for years through opaque phone access requests (Gursky & Woolley, 2020). In 2018, journalists reported that the NaMo app asked users to provide access to 22 personal features on their devices, many more than the 14 data points requested by the official app of the Prime Minister’s Office, the ‘PMO India App’. In March 2018, a day after an anonymous French cybersecurity researcher exposed on Twitter that the app was transferring user details to a third party (a US-based behavioural data analytics company, CleverTap, which helps clients to ‘influence’ app users’ ‘behaviour’ by uncovering insights), the privacy setting was ‘quietly’ changed, drawing accusations of parallels with Cambridge Analytica’s practices (Kazmin, 2018, March 28; NH Political Bureau, 2018, March 25). In India there is no legislative restriction on transborder dataflows of information that is not sensitive personal data (Linklaters, 2020). The NaMo app’s default permission settings gave it nearly full access to the data stored on users’ phones, including photos and videos, contacts, location services and the ability to record audio, although savvy users could opt out by disabling permissions (Kazmin, 2018, March 28).

Consider also that the ruling BJP was the first recorded political party in the world to use a deepfake video in an electoral campaign (the 2020 legislative assembly elections in Delhi), deploying it for targeting rather than for spreading disinformation (MIT Technology Review, 2021). The party hired the political communications company Ideaz Factory to create deepfakes to reach voters in the 22 different languages and 1600 dialects used in India. One that went viral featured Delhi BJP president Manoj Tiwari criticising the incumbent Delhi government; it reached approximately 15 million people across 5800 WhatsApp groups in Delhi and the National Capital Region. Tiwari originally spoke in English: the deepfake simulates convincing mouth movements so that he appears to be speaking Haryanvi, the Hindi dialect of the party’s target voters, in an attempt to dissuade Delhi’s large Haryanvi-speaking migrant worker population from voting for the rival party (Christopher, 2020, February 18).

Despite India’s relentless technology-driven campaigning ‘firsts’, and rapid changes in mobile phone and Internet penetration, critical digital literacy programmes and public awareness campaigns are minimal. Populations with little or no access to new technologies, or with limited skills to use them effectively, are particularly susceptible to falsehoods peddled online: this includes the poor, rural populations, women, the disabled, migrants, internally displaced populations and the elderly (Rao, 2019). Survey experiments on a highly educated online sample in India tested the effectiveness of a media literacy campaign (Facebook’s ‘Tips to Spot False News’, promoted at the top of users’ News Feeds in 14 countries including India in April 2017 and printed in full-page newspaper ads in India) and found that the intervention improved discernment between mainstream and false news headlines by 17.5%. However, this increase in discernment had faded when respondents were retested several weeks later, and there were no measurable effects among a representative face-to-face sample of respondents in a largely rural area of northern India, where rates of social media use are far lower (Guess et al., 2020).

Conclusion

Across the past century, professional persuaders (advertisers, public relations experts and political campaigners) have sought to understand and target audiences with tailored persuasive messages based on scientifically derived insights. This has accelerated globally across the past decade, as data management companies and data brokers joined forces with professional persuaders to exploit the affordances of ‘big data’, profiling technologies and microtargeting. This is evident even in regions that lack the infrastructure for fixed-line Internet connections, where much higher mobile penetration compensates. Although privacy has long been a universal human right, privacy protections and their levels of implementation differ across countries, and the technology continuously advances, facilitating privacy-invasive levels of profiling and targeting. This chapter reviewed key developments in the USA (where the globally dominant social media platforms are headquartered), the UK and India (democracies with different data protection regimes and different digital literacies).

The USA, with its weak privacy protections, has led the way in developing profiling and targeting tools and applying them to political campaigns. US-headquartered social media platforms offer political campaigns many forms of targeting. Political campaigners also use the affordances of social media platforms to deploy psychographic and neuromarketing tools. As the case of Cambridge Analytica and Facebook shows, technological loopholes were exploited for attempted short-term influence during a specific election campaign; once the candidate has won, the outcome cannot be undone by fines issued several years later. More recently, the development of bespoke campaign mobile phone apps, some with invasive tactics for gathering data and reaching voters, allows political campaigns to collect massive amounts of user data without relying on social media platforms or exposing themselves to fact-checker oversight of deceptive messaging. Yet most Americans do not think that information about Internet users should be made available to political campaigns for targeting voters with online ads.

At the time of writing (in Spring 2022), the UK has retained the European Union GDPR in domestic law as the UK GDPR, although it keeps the framework under review. Regarded as one of the strongest and most influential privacy regulations in the world, the GDPR offers protections on the collection and processing of personal data. Yet, even in the UK, digital campaigning, profiling and targeting have risen sharply across the past decade. With the broadcasting ban on paid political advertising not applying online, and digital political campaigning characterised as a ‘Wild West’, almost half the population are concerned about why and how political ads are targeted at them online; and the national data regulator concluded in 2020 that there was only limited assurance that procedures were in place to protect data in digital campaigns.

Although India has been on the wrong side of the digital divide, this is rapidly changing, while digital literacy remains low and India’s data protection regime and culture are still being constructed. Exploiting this situation, political parties have successfully embraced digital political campaigning and continue to push the boundaries of what is permissible and recognisable. Meanwhile, practices developed in India (on the NaMo app) have been copied in the USA. Such apps cater for minimal literacy levels; have no visible content moderation; are shared on personal networks (and hence are arguably more trusted); and greatly enable the delivery of inflammatory and deceptive messages targeted at profiled users.

Despite this accelerated move towards profiling and microtargeting voters, there are few empirical studies of their practices and impacts. Where such studies exist, they find modest impacts on specific types of audience, and report mixed findings regarding the accuracy and prevalence of voter microtargeting. More studies are needed on the effects of continuously refined profiling and targeting techniques on voting behaviour, especially as it may take the mobilisation of only a small sliver of the population (the persuadables) to generate decisive results. Digital literacy, and awareness of profiling and microtargeting technologies for political purposes, is uneven across the world, but where people are aware, most do not want such targeting (also see Schäwel et al., 2021).

Across the world, different types of government operating under different privacy regimes may be more or less inclined and enabled to allow and deploy such emotional AI on their citizens. Given what has been found in Part I of this book, the outlook could be bleak (for instance, widespread use of far richer, microtargeted disinformation and exploitation of divisive, conspiratorial, post-truth narratives that are highly contextually relevant). However, this is not inevitable: for instance, greater mobilisation and political engagement on issues that diverse voters care about is also possible; and regulators and civil society are increasingly alert to the perils of profiling. Part II of this book now turns to the issue of the social and democratic harms arising from false information online and how best to protect us in an era of increasingly optimised emotions.