Introduction

Adopting critical perspectives in digital technology research faces several challenges. The first, if we want to open up thinking about the economic, political, and social issues and consequences of these technologies, is the question of the so-called neutrality of technology. Whether technics is approached through its “ontological role” (Heidegger, 1958), as collective memory (Stiegler, 1994), as a dynamic of individuation (Simondon, 1989), or as a phenomenon of capitalism (Lefebvre, 1971), several contributions have sought to assign to technics attributes that go beyond simple, neutral instrumentalisation and to recognise its role in co-instituting social dynamics. In addition to this continuing challenge, contemporary studies of algorithms and artificial intelligence face further obstacles. Algorithms fundamentally lack transparency (Castets-Renard, 2018), hence the need to audit them (Mittelstadt et al., 2016). This opacity is all the greater because it occurs in a context of social acceleration (Rosa, 2010), which tends to make their presence fleeting: the merchant circulation of personal data (Mondoux & Ménard, 2018) and commercial ownership make their accountability uncertain at best (Watson & Nations, 2019). Add to this that algorithms are heterogeneous in nature and often integrated into larger systems (Kitchin, 2014), and that social media companies are reluctant to open up their services to research, and it is understandable that critical studies are tempted to abandon the empirical dimension and focus on “theoretical” contributions. The aim of this project is to open up “theoretical” reflections on algorithms to the contribution of their empirical study. To do this, we have had to adopt several strategies, which we share in this chapter, along with their anchoring within an analytical framework inspired by critical perspectives.

Political Communication in the Age of Algorithms

The use of algorithmic processes (the automation of the production, circulation, and consumption of data through computational procedures) in political communication is increasing. Assessing the impact of the automatic production, circulation, and delivery of political messages and advertising is challenging because the work carried out by algorithms is still largely hidden. Our current research project is intended to shed light on the contribution made by artificial intelligence, more specifically recommendation algorithms, to political advertising and messages in digital social media.

The essential function of recommender systems is mathematically predicting personal preference. […] Thematically, recommenders aid users along four key dimensions (which may or may not overlap): they help users decide what they could or should do next; they help users explore a variety of contextually relevant options; they help users compare those relevant options; and, perhaps most critically, they help users discover options and opportunities they might not themselves have imagined. (Schrage, 2020: 5)

We will use a methodology designed to meet the challenges currently faced by research on algorithms (they are not neutral and are difficult to study because of their opacity: Bucher, 2012; Diakopoulos, 2014; Kitchin, 2017) and demonstrate that social media have only as much targeting power as their users grant them through the contributions expressed by their actions.

Studies of political communication in industrial societies have traditionally started from the concept of propaganda and its effects on public opinion (Lasswell, 1927; Lippmann, 1922; Maarek, 2008). Whether their perspective was functionalist or critical, classical studies in political communication took as their premise the need to establish a dynamic system ensuring the mass production and circulation of messages that would convince citizens, and inform their political choices, in a context in which they lacked the ability to understand the complexity of social and political dynamics (Ellul, 1962; Herman & Chomsky, 1988; Lippmann, 1922). The concept of propaganda indicates a structural transformation of the modern democratic public sphere (Habermas, 1962), defined by citizens’ ability to rationally discuss the ends that are the basis of society. The media play a key role in this type of instrumental communication, since they provide a way of reaching “the masses” (Herman & Chomsky, 1988; Turner, 2018).

The internet has unfolded around a prophetic discourse announcing the concrete realisation of the ideal of the Habermasian public sphere. Digital social media appeared in the aftermath of postmodernity, which is characterised by two powerful tendencies: a crisis of legitimacy for political institutions and hyperindividualism.

With the collapse of grand narratives (Giddens, 1994; Lyotard, 1979), the Habermasian ideal of rational discussion based on common standards has become a mechanism legitimising a new social dynamic based on the primacy of circulation over content (Dean, 2005, 2009). Arguments based on reason are now relativised as personal opinion, and debates on means—rather than ends—now predominate in the political public sphere. The ideal of political communication based on reason becomes a circular communication process in which deliberation takes second place to “an organizational and systemic logic, centered on efficiency, effectiveness, control over the environment, launching operations with a purely utilitarian or strategic basis” (Freitag, 2002: 43; our translation). This phenomenon has been analysed in critical studies of the digital world (Andrejevic, 2013; Morozov & Haas, 2015; Stiegler, 2015) as a new form of social control described through the concept of algorithmic governmentality:

a form of government essentially fed by raw data (signals that are intrapersonal and a-significant, but quantifiable), operating through the anticipatory configuration of possible events rather than the regulation of behavior, and solely addressing individuals through notifications that trigger reflexes rather than relying on their understanding and will. Thus, the constant reconfiguration, in real time, of individuals’ information and physical environments on the basis of “data intelligence”—whether this is called “personalization” or “security metabolism”—is a new form of government. (Rouvroy, 2012, n.p.; our translation)

“Algorithmic governmentality” (De Filippi, 2016) may be said to embody a break with traditional political communication to the extent that it no longer seeks to persuade through rational discourse, but attempts to provoke responses through signals and stimuli. Processes of political communication are seen as legitimate less in relation to “great aims” than because of their pragmatic, technical, quantifiable, and verifiable effectiveness (Nickerson & Rogers, 2014).

Hyperindividualism (Mondoux, 2011) is part of the same dynamic. Now freed from the “yoke” of ideology and all that is political, individuals have become subjects for whom, ultimately, free will in itself is sufficient to justify their values, express themselves, or build their identity: this leads to processes of personalisation. Digital social media have thus been seen as tools of self-expression and the search for identity (Mondoux, 2011; Papacharissi, 2010), as new, more “democratic” information media and, especially, as sources of digital traces through the production of personal and behavioural data (Ménard, Mondoux, Ouellet & Bonenfant, 2016; Berthier & Teboul, 2018). As part of this new dynamic, political communication has also shifted, with the help of digital tools and traces, towards personalisation and microtargeting (hypersegmentation of a large target audience—Barbu, 2013) through the use of data that is produced by individuals (Barocas, 2012; Woolley & Howard, 2018) and processed by recommendation algorithms (Boyd & Reed, 2016; Shorey & Howard, 2016).

While some may see in this a sign that democracy is being restored or enhanced (one thinks of the promises of the “E-Government” trend: Lee et al., 2011), major problems and challenges undeniably exist. One of them is that algorithms contribute to a dynamic characterised as a form of totalisation without totality (Freitag, 2002): the totalisation is not inscribed in symbolic politico-institutional representations (“totality”) because it is assumed to be immanent and “neutral” (a technical abstraction). Algorithmic governmentality thus tends to conceal ideology and the political realm: if you accumulate “raw” data (Gitelman, 2013) and produce a quantifiable synthesis, you can then claim to have established a direct relationship with the “Real” (Ménard & Mondoux, 2018), giving rise to a supposedly equally objective view of society itself. Deprived of the normative and expressive support of ideology and the political realm, collective reflection and praxis lose their meaning. In this context, the issue of political communication becomes all the more crucial in that a twofold challenge must be met: not only to convince people in terms of ideas, but also to (re)legitimise the political realm itself (Sfez, 1992). Political communication must deal with these new dynamics.

The dynamic of individualised communication contributed to the decline of journalism as the main source of mediation with citizens (gatekeeping) (Entman & Usher, 2018; Public Policy Forum, 2017). This left the door wide open to the production of personalised messages that help reduce all messages (whether political, personal, commercial, etc.) to the same level of legitimacy as opinion, in a plethoric jumble of fake news, journalistic information, sentiments, propaganda, disinformation, and even interference between states (as in the case of Russia during the 2016 American presidential election) (Boyd-Barrett, 2019; Spicer, 2018).

The dynamic of personalisation is also (re)produced by the use of recommendation algorithms that tend to confine individuals to a “personal cocoon” (Bodo et al., 2017) or “echo chamber” (Boutyline & Willer, 2011; Pariser, 2011), in which they receive only what resembles (is correlated with) their “profile”; this profile is nourished by their personal opinions and behaviour. Not only are individuals confined to a dynamic that excludes other opinions (since personalising algorithms send content that “complies” with the individual’s stated values and opinions—Gao et al., 2010; Sha, 2013), but this same dynamic tends to strengthen and radicalise opinions: in fact, this is one of the main challenges facing a number of Western societies today. In our view, the dynamic of personalisation tends to obscure what is political, giving precedence to “facts” (quantitative objectivations) over law (the political realm) and making it all the more difficult to achieve a genuine emancipatory praxis (Rouvroy & Berns, 2013; Ouellet, Ménard, Bonenfant & Mondoux, 2015).

In response to recent Facebook scandals—the integrity of Facebook debates and exchanges in the public sphere is a major issue—we, like others, argue:

Strong arguments support the position that algorithmic agents that operate without proper, or flawed, human oversight; or absent of well-defined governance and ethical frameworks, may have negative effects on greater societal norms and values such as the holy triumvirate of liberté, égalité, fraternité—or to put it in the language of the existing legal frameworks, fundamental human rights and freedoms, equality, and social cohesion. (Bodo et al., 2017: 137)

This raises the important question of the political in the age of artificial intelligence and the need to “reintroduce” what is human—both “politics” and everything that is political—into these processes of automation. Artificial intelligence can designate any computer algorithm or technological method that allows a machine to simulate part of human intelligence, that is, to learn, predict, make decisions, or perceive its surroundings. Algorithms can therefore be used in simple interactive media, in which case control rests entirely with the human who chooses to contribute to the interaction through person-machine communication, often in order to facilitate an arduous or complicated task. Once artificial intelligence is implemented in the process, part of this control is left to the machine, and some of the thought process required by a more complex task is translated into an artificial communication monologue completed by the machine itself. With the arrival of massive data collection and machine-learning capabilities (as can be seen with recommendation algorithms), more and more of this control is delegated to computer and technological systems, which often dialogue among themselves in order to compartmentalise information. This increases the amount of artificial communication being produced, which in turn leaves humans out of most of the process, with little to no means of contributing to, understanding, or interfering with it.

In disclosing the empirical work carried out by recommendation algorithms, this research will raise awareness among members of the public and decision-makers of the issues involved in automating political messages on digital social networks. Such issues extend well beyond the traditional problem of protecting personal data, and our research can contribute to reflections leading to the development of normative and regulatory frameworks. Lastly, access to algorithms in general, and their lack of transparency (Mittelstadt et al., 2016; Pasquale, 2015), is problematic, especially in the context of privatisation and the economic power of GAFAM (Google, Apple, Facebook, Amazon, Microsoft) (Biancotti & Ciocca, 2018). From their interfaces to their functions, social media platforms tend to promote the image of citizens as independent individuals in control of the technology they are using (Bruneault & Laflamme, 2020), which is a problem when the advertising shown in their feeds is by nature anchored in social and political dynamics that require recognising the influence exerted by other individuals and by the medium itself. For these reasons, our research will contribute to emerging reflections on the socio-political contexts of a truly “social” deployment of artificial intelligence, chiefly by providing an innovative empirical corpus showing how recommendation algorithms act on the basis of citizens’ personal profiles on digital social media and how political messages and advertising circulate (what profiles receive what messages, where the messages come from, how frequent they are, etc.).

Research Objectives and Methodology

The chief objective of this research project is to analyse the communicational and socio-political consequences of automating through algorithmisation the production, delivery, and consumption of political messages and advertising, in order to problematise issues related to democracy in a digital social context and their impact on election processes. This objective encompasses four sub-objectives:

  1. Carry out an empirical analysis of algorithmic systems used as tools to produce, circulate, and consume political messages and advertising in digital social media, in order to understand how they work.

  2. Analyse the relationship between user profiles (described in terms of their geographical, sociocultural, and media diversity) and the political advertising and messages they receive, in order to identify processes of microtargeting (personalisation) carried out by algorithms.

  3. Analyse the circulation and targeted delivery of political advertising and messages during the next Canadian federal election campaign (2023) in order to understand how algorithmic political communication can affect election processes.

  4. Develop recommendations about the effects of the algorithmisation and microtargeting of political communication on digital citizenship in the public sphere in digital social media, in order to support reflections that will eventually lead to the establishment of statutory or regulatory frameworks.

In this project, we intend to use a research method that will enable us to shed light on the hidden contribution of recommendation algorithms to the production and circulation of political advertising and messages in digital social media. One of the characteristics of algorithms is that they are not neutral: “algorithms are created for purposes that are often far from neutral: to create value and capital; to nudge behavior and structure preferences in a certain way; and to identify, sort and classify people” (Kitchin, 2017). This is a position shared by a number of authors (Bozdag, 2013; Fleischmann & Wallace, 2010; Gillespie, 2014; Mager, 2012). Algorithms are also difficult to study because of their opacity (“black box”), and this makes it difficult to see how their power and influence are exerted (Bucher, 2012; Diakopoulos, 2014). One of the more promising methods available is reverse engineering: “the process of articulating the specifications of a system through a rigorous examination drawing on domain knowledge, observation, and deduction to unearth a model of how that system works” (Diakopoulos, 2014: 404). This strategy is recommended (Bodo et al., 2017) and used by a number of authors (Bodo et al., 2017; Diakopoulos, 2014; Gambs, Aïvodji, Arai, et al., 2019b; Gambs, Aïvodji, & Ther, 2019a; Hannak et al., 2013; Lazer et al., 2014; Mikians et al., 2012; Mukherjee et al., 2013).

Since the information openly available on the platform (through options such as “why am I seeing this content”) is either too broad or sometimes even cryptic (compared to the precision with which a company can define its targeting requirements), we have to rely on external methods to find answers to our questions. To extract an algorithm from its “black box”, one of the two following variables must be controlled: inputs (the targeted messages defined by producers) or targets (the profile types of those receiving them). Since we cannot control the messages produced by political entities, we need to study their reception by creating a range of possible targets with controlled profiling criteria.

Establishing and Feeding Control Accounts

The digital social network used in this research project is Facebook; this is because Facebook is easy to use and remains the most popular social media platform. Moreover, Facebook has been continually involved in multiple controversies related to electoral advertising. To achieve our objectives, we have chosen not to use the Facebook accounts of actual participants. It would be difficult to recruit hundreds of people who would be willing to provide access to their Facebook account. Their diligence in keeping a diary, and making sure they recorded the right elements, might have been problematic, and there would be ethical problems associated with the circulation of personal data. In addition, this approach would have to deal with the possibility of behaviour changes throughout the participants’ observation period and the introduction of uncontrolled biases. Instead, we have chosen to set up control accounts with profiles managed and fed by automatons (bots). This will facilitate and accelerate operations while making the accounts more uniform (thanks to a controlled environment) and reducing the number of resource persons required to feed active accounts on a daily basis. The automated strategy will also provide for the large-scale capture, categorisation, and archiving of all political advertising and messages received, thus making them complementary to the Big Data infrastructure that we are using.

Methodological criteria used to set up control accounts allow for the following:

  • Virtual accounts set up in a given region (without actual travel)

  • Maximum speed of execution

  • A process that is easily reproduced and taught

  • Ethical monitoring throughout the process

  • Accessibility of tools to automated systems under development

  • The possibility of increasing the number of control accounts and their regional, social, and cultural diversity (personalities, range of behaviours, number of marginalised, LGBTQ, or disabled persons, etc.).

At first glance, the project may seem to raise several of the ethical issues associated with AI, mainly the collection of personal data from Facebook profiles and the application of automated tools for mining and analysing social media (Hilyard et al., 2015; Taylor & Pagliari, 2018). But, as more and more studies are finding, surveys need to go where people are: online (Ouchchy et al., 2020). Our research does and will continue to respect ethical guidelines. Human beings indirectly placed in relation to the control accounts will not be subject to any data collection. It will be necessary to animate the control accounts with content and ensure that they are incorporated into networks of friends while limiting interactions with “real” users to exchanges ensuring participation in a common network. Since users are not themselves the subject of the research, it is not necessary to obtain their consent. No information about users will be compiled and no information, therefore, will be disclosed, whether it is direct, indirect, or related to vulnerable persons. Since interactions with users will be minimal and chiefly limited to the transmission of messages, the control accounts will not cause users any undue loss of time. Impact on the platform (Facebook) will also be minimal to non-existent. Findings will not lead to disclosure of any Facebook security breaches or sensitive information. Loss of resources potentially caused by the control accounts will also be minimal, and the impact on advertisers (and investors) negligible, since 200 control accounts out of more than 2 billion on Facebook will not have any perceptible effect on their data. Also, in order to pursue our research with ethical consideration (Elovici et al., 2013), we made sure to view only ads, and interact only with pages, that already had a large number of subscribers, thereby keeping their cost well under the average of $0.01 per view charged by the Facebook ad centre. This research strategy was approved by our institution's ethics committee in January 2019 for a pilot project focusing on the 2019 federal election campaign (August–October 2019), enabling us to fine-tune the methodology through a pretest based on the creation of approximately 100 control accounts.

Setting up the control accounts proved to be a tedious process that could not be automated. Facebook requires an email address for authentication when an account is created. Microsoft Hotmail was used to satisfy this requirement, since it is currently the only popular email system that does not base registration on association with a cell phone number—a piece of data that cannot easily be obtained or falsified in large numbers. A database was created combining the fields used to open Hotmail and Facebook accounts in order to keep a record of all the information required to open the accounts. Randomly generated last names, first names, and dates of birth (based on Québec population statistics) were used to create email addresses that were undetectable, since email systems themselves suggest combining these elements. Finally, a rule was set up to forward messages from all Hotmail accounts to a single address, in order to simplify the process of monitoring and storing communications generated by the Facebook control accounts.
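The account bookkeeping can be illustrated with a minimal sketch (in Python, for illustration only). The placeholder name lists, field names, and email pattern are assumptions; only the general logic (randomly combining last names, first names, and birth dates into matching Hotmail and Facebook registration records) reflects the procedure described above.

```python
import csv
import random
from datetime import date, timedelta

# Placeholder lists; in the project these were drawn from Québec population statistics.
FIRST_NAMES = ["Marie", "Jean", "Nathalie", "Pierre", "Isabelle", "Michel"]
LAST_NAMES = ["Tremblay", "Gagnon", "Roy", "Côté", "Bouchard", "Gauthier"]

def random_birthdate(min_age=18, max_age=60):
    """Pick a birth date consistent with the targeted age groups."""
    age_days = random.randint(min_age * 365, max_age * 365)
    return date.today() - timedelta(days=age_days)

def make_profile(profile_id):
    """Combine a name, a birth date, and a derived email address into one record."""
    first, last = random.choice(FIRST_NAMES), random.choice(LAST_NAMES)
    birth = random_birthdate()
    email = f"{first.lower()}.{last.lower()}{birth.year}@hotmail.com"  # assumed pattern
    return {"id": profile_id, "first_name": first, "last_name": last,
            "birthdate": birth.isoformat(), "email": email}

if __name__ == "__main__":
    profiles = [make_profile(i) for i in range(1, 101)]
    with open("control_accounts.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=profiles[0].keys())
        writer.writeheader()
        writer.writerows(profiles)
```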

Creating Profiles and Feeding the Control Accounts

Once established, the Facebook control accounts were provided with individual data and information based on categories that had been identified to build a specific profile for each account.

Number assigned to each profile. This was a way of tracking and archiving profiles from creation to elimination.

Control account names. We created random associations of the most popular Québec last names and first names, and then defined email addresses based on these associations and a birthday derived from the age of the profile.

Age. Profiles were randomly distributed between two age groups: 18–35 and 35–60. Since minors cannot be targeted by political ads, we decided to focus on the age groups most likely to receive the desired messages and split them into two, relying on Facebook ad targeting’s available options.

Photographs. To personalise control accounts, we used a bank of royalty-free images for Facebook cover photos (unsplash.com) and a website (thispersondoesnotexist.com) able to generate an endless supply of portraits of non-existent persons, which were used as profile pictures. Photographs were algorithmically generated using general criteria of ethnicity and age. To limit the number of control accounts and the amount of data needing to be analysed during this first phase, and given that Facebook requires that accounts be created in the region in which they will be active, our initial accounts were set up in Montreal, Canada.
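As an illustration of how profile pictures could be gathered and filed per account, the following sketch downloads one generated portrait per control account; the exact URL behaviour of thispersondoesnotexist.com and the file-naming scheme are assumptions, not the script actually used.

```python
import os
import time
import requests

# The site returns a newly generated synthetic portrait on each request;
# this URL/behaviour is an assumption based on the pilot period.
PORTRAIT_URL = "https://thispersondoesnotexist.com/"

def fetch_portrait(profile_id, out_dir="profile_pictures"):
    """Download one generated face and save it under the profile's ID number."""
    resp = requests.get(PORTRAIT_URL, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
    resp.raise_for_status()
    path = os.path.join(out_dir, f"profile_{profile_id:04d}.jpg")
    with open(path, "wb") as f:
        f.write(resp.content)
    return path

if __name__ == "__main__":
    os.makedirs("profile_pictures", exist_ok=True)
    for pid in range(1, 6):
        fetch_portrait(pid)
        time.sleep(2)  # pace requests so each call yields a distinct portrait
```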

All activities, posts, indications that a page was liked, sharing or re-sharing of other Facebook posts, and so on took place according to the following parameters.

Open/closed. Control accounts described as “open” had a network of 100 friends (among the control accounts), without regard for profile type and/or political allegiance, and could “like” most of the major Facebook interest categories (see list below). Posts were written in the first person, contained marks of emphasis (“!”), and were more than 140 characters long. Profiles described as “closed” had at most 40 friends and their interactions were restricted to control accounts with a profile similar to theirs. Posts were less expressive, more neutral (they were not written in the first person), focused on a single Facebook interest category, and fewer than 100 characters long.

Active/passive. “Active” control accounts progressed towards 30 minutes of activity per day, with several different activities every day (liking, posting, sharing, etc.). “Passive” control accounts were restricted to less than 30 minutes of daily activity.

Positive/negative. Control accounts used a majority of words rated “positive” or “negative” in the Harvard IV-4 psychology dictionary (www.wjh.harvard.edu/~inquirer/), which is often used for sentiment analysis (Crossley et al., 2017).

Interests. The control accounts “liked” pages included in the Facebook “interests” that serve as the basis for advertising categories. We used the following categories:

  • Business and industry

  • Food

  • Entertainment

  • Families and relationships

  • Fitness and wellness

  • Shopping and fashion

  • Hobbies and activities

  • Sports and outdoors

  • Technology

Political party affiliation. Control accounts were randomly assigned a “political profile” dictating which political ads and messages they would like, comment on, and (re)share (a sketch combining these parameters follows the list below):

  • Conservative Party of Canada

  • Liberal Party of Canada

  • New Democratic Party

  • Bloc Québécois

  • Green Party

  • Neutral
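The following sketch illustrates how the parameters described above (open/closed, active/passive, positive/negative vocabulary, interests, and party affiliation) can be combined into profile types; the data structure, proportions, and random assignment shown here are illustrative assumptions rather than the exact procedure used.

```python
import random
from dataclasses import dataclass

INTERESTS = ["Business and industry", "Food", "Entertainment",
             "Families and relationships", "Fitness and wellness",
             "Shopping and fashion", "Hobbies and activities",
             "Sports and outdoors", "Technology"]

PARTIES = ["Conservative Party of Canada", "Liberal Party of Canada",
           "New Democratic Party", "Bloc Québécois", "Green Party", "Neutral"]

@dataclass
class BehaviourProfile:
    account_id: int
    open_network: bool   # open (about 100 friends) vs closed (at most 40 friends)
    active: bool         # about 30 minutes of daily activity vs less
    positive: bool       # mostly positive vs mostly negative vocabulary
    interests: list      # Facebook interest categories to "like"
    party: str           # political ads/messages to like, comment on, (re)share

def assign_profile(account_id):
    """Randomly combine the behavioural parameters into one profile."""
    open_network = random.random() < 0.5
    # Closed profiles focus on a single interest category; open profiles like most of them.
    n_interests = len(INTERESTS) if open_network else 1
    return BehaviourProfile(
        account_id=account_id,
        open_network=open_network,
        active=random.random() < 0.5,
        positive=random.random() < 0.5,
        interests=random.sample(INTERESTS, n_interests),
        party=random.choice(PARTIES),
    )

if __name__ == "__main__":
    profiles = [assign_profile(i) for i in range(1, 101)]
    print(profiles[0])
```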

All activities of the Facebook control accounts were preserved and documented as follows (a minimal logging sketch follows the list):

  • Identification number

  • Type of post

  • Text of post

  • Time taken to put up each post and collect associated data

  • Status verification for each post (posted, number of characters, list of words related to sentiments)
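A minimal sketch of this per-post documentation, assuming a shared CSV log and field names that mirror the list above (the format actually used in the project may differ):

```python
import csv
import os

LOG_FIELDS = ["account_id", "post_type", "post_text",
              "seconds_taken", "status", "char_count", "sentiment_words"]

def log_post(account_id, post_type, post_text, seconds_taken,
             posted, sentiment_words, path="post_log.csv"):
    """Append one documented activity to the shared CSV log."""
    row = {
        "account_id": account_id,
        "post_type": post_type,                    # e.g. "status", "share", "like"
        "post_text": post_text,
        "seconds_taken": round(seconds_taken, 1),  # time to put up the post and collect data
        "status": "posted" if posted else "failed",
        "char_count": len(post_text),
        "sentiment_words": ";".join(sentiment_words),
    }
    new_file = not os.path.exists(path)
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

if __name__ == "__main__":
    log_post(42, "status", "Belle journée pour marcher sur le mont Royal!",
             seconds_taken=12.4, posted=True, sentiment_words=["belle"])
```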

These operations allowed us to identify ten profile types, similar to the number involved in traditional targeting grids (Beyer et al., 2014; Lau et al., 2018).

Creating control accounts and feeding them on a daily basis in real time would require significant human resources, leading to prohibitive costs. We therefore chose to use Java-scripted interface-manipulation bots to automate these tedious and voluminous tasks. The bots were able to feed the control accounts automatically through activities (messages, shares, subscriptions, likes, keywords, etc.) that were compatible with their target profile. The bots also automatically captured (through screenshots) the ads and messages received in newsfeeds and stored them in a database, thus establishing a controlled environment. The main automated operations are listed below; a minimal sketch of one such session follows the list.

List of automated operations

  • Variable length of connection and speed of execution

  • Verification of expected connection time for the control account

  • Opening of the mobile Web version of Facebook

  • “Organic” writing of user IDs and passwords (variable and random speed of writing)

  • Skipping Facebook friend suggestions and security recommendations

  • First run through Facebook newsfeed; screenshot (observation of long-term effects)

  • “Organic” writing of Facebook post

  • Second run through Facebook newsfeed; screenshot (observation of short-term effects)

  • Disconnection

  • Clearing trackers and connection history
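The project's bots were Java-scripted; purely to illustrate the session logic listed above, the sketch below reproduces it with Selenium in Python. URLs, field and button names, and timings are assumptions.

```python
import os
import random
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

def type_organically(element, text):
    """Type one character at a time with variable delays, as in the 'organic' writing step."""
    for ch in text:
        element.send_keys(ch)
        time.sleep(random.uniform(0.1, 0.4))

def run_session(email, password, post_text, account_id):
    """One automated session: connect, screenshot the newsfeed, post, screenshot again, disconnect."""
    os.makedirs("screens", exist_ok=True)
    driver = webdriver.Firefox()
    try:
        time.sleep(random.uniform(2, 10))                      # variable connection delay
        driver.get("https://m.facebook.com/")                  # mobile Web version of Facebook
        # Field and button names below are assumptions for illustration.
        type_organically(driver.find_element(By.NAME, "email"), email)
        type_organically(driver.find_element(By.NAME, "pass"), password)
        driver.find_element(By.NAME, "login").click()
        time.sleep(random.uniform(5, 15))
        driver.save_screenshot(f"screens/{account_id}_feed_before.png")  # long-term effects
        # ... skip friend suggestions, then write post_text with type_organically(...) ...
        driver.save_screenshot(f"screens/{account_id}_feed_after.png")   # short-term effects
    finally:
        driver.delete_all_cookies()                            # clear trackers and connection history
        driver.quit()
```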

Maintenance of the control accounts and collection of the messages and content they received were carried out as follows (a minimal sketch of the database step follows the list):

  • Automatic organisation of screenshots in files for each control account

  • Manual downloading of archives and profile information for each control account

  • Compilation of emails sent by Facebook to control accounts

  • Manual overview and sorting of images and information provided by Facebook

  • Incorporation into a database covering the various sources of information and allowing for cross-referencing between a thematic categorisation (the one used to create the control accounts) and personalisation factors for the control accounts, based on the following criteria: interests (Facebook categories), type of post (in own or followed account, sponsored or suggested page), type of source (governmental services, Facebook group or page [sub-category for political parties], business, news).
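A minimal sketch of the final incorporation step, using SQLite as a stand-in for the project's Big Data infrastructure; the table and column names are assumptions that mirror the cross-referencing criteria listed above.

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS received_items (
    account_id   INTEGER,
    screenshot   TEXT,
    interest     TEXT,   -- Facebook interest category of the account
    post_type    TEXT,   -- own/followed account, sponsored or suggested page
    source_type  TEXT    -- governmental, group/page (incl. political party), business, news
)
"""

def store_item(db_path, account_id, screenshot, interest, post_type, source_type):
    """Insert one sorted screenshot so it can be cross-referenced with the profile's criteria."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(SCHEMA)
        conn.execute(
            "INSERT INTO received_items VALUES (?, ?, ?, ?, ?)",
            (account_id, screenshot, interest, post_type, source_type),
        )

if __name__ == "__main__":
    store_item("pilot.db", 42, "screens/42_feed_after.png",
               "Sports and outdoors", "sponsored", "business")
```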

Preliminary Findings

A first test, the pilot project, was carried out between August 10 and December 10, 2019, with 100 control accounts activated and (gradually) fed automatically with daily activities (posts, shares, and re-shares). Daily data collection was also automated.

Our first analyses showed that there is a time lag (lasting several days or weeks) before ads appear in the right-hand column or on the news wall of the control accounts. It can also be shown that this time lag is associated with browser “activity”, both on Facebook and when using a search engine, and therefore with the collection of cookies. As long as the browsing history and cache memory are empty, no ads appear in connected Facebook accounts. Visiting a few websites that use cookies (Amazon, Aldo, Dynamite Clothing, etc.) before connecting to the Facebook account leads to the appearance of ads, initially in the right-hand column (desktop view). The control account then needs to interact with ads in the right-hand column (by clicking on the links) in order to “activate” ads on the news wall (mobile and desktop views).
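The activation sequence observed here can be summarised as a short sketch (again illustrative only, with an assumed list of cookie-setting sites): warm up the browser on a few commercial sites, then connect to Facebook in desktop view, where ads first appear in the right-hand column.

```python
import time
from selenium import webdriver

# Illustrative list of commercial sites that set cookies before any Facebook session.
WARMUP_SITES = ["https://www.amazon.ca/", "https://www.aldoshoes.com/"]

def warm_up_then_connect():
    """Visit cookie-setting sites first, then open Facebook in desktop view."""
    driver = webdriver.Firefox()
    for url in WARMUP_SITES:
        driver.get(url)        # collect cookies before any Facebook connection
        time.sleep(10)
    driver.get("https://www.facebook.com/")  # log in here, then click right-hand-column ads
    return driver                            # to "activate" ads on the news wall
```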

In short, although Facebook can provide advertisers with various targeting options (personalised website audiences, personalised mobile app audiences, personalised audiences based on a client list, personalised interaction audiences), the option that will most quickly reach a new Facebook account, for either political or advertising messages, is the “personalised website audience” using browsing history with cookies. This kind of targeting associates people who visit a website with Facebook accounts; the Facebook pixel incorporated into the webpage is one of the ways this is done (Trevisan et al., 2019). With this kind of targeting, advertisers may, for instance, launch a campaign to reach people who have visited a product page on their website, in order to encourage them to come back to the website and continue shopping. They can also create an “audience” consisting of every person who has visited their site over the previous months, in order to share similar new products with them.

We need to carry out further analysis of this observation: despite all our efforts to ensure the ongoing existence of control accounts and their receptivity to advertising content, with only a few exceptions, the majority of these accounts, even on the day before the election or on election day itself, did not receive any advertising from any of Canada’s five major political parties. It remains difficult to explain why this is, although we can put forward some hypotheses.

A first possible cause is related to advertising targeting options. It is likely that community managers and/or those responsible for digital marketing in political parties such as the Conservative Party of Canada (CPC) or the Liberal Party of Canada (LPC), each of which spent close to a million Canadian dollars on Facebook advertising, decided to target only the following: people who had interacted with their Facebook pages, people who were on their membership lists, or people who had visited their website. This way of activating ads was in fact validated through our research project test accounts. However, if this hypothesis is confirmed, it remains surprising, since it was our assumption that parties would generally try to increase the number of potential voters.

A second, less likely hypothesis is that no control account was targeted by political advertising because the “Montreal” geolocation was not part of the targeting criteria. Given that all of our control accounts were set up in the same region, it is impossible for us to completely eliminate this factor as a potential cause.

A third possibility is that political parties may have chosen the “broad targeting” option. Broad targeting mostly relies on Facebook’s delivery system to find the best people (as defined by Facebook) to show ads to. In other words, the parties might have chosen to let the Facebook algorithm define their targeting. Given that this algorithm is known to create echo chambers, it is likely that control profiles without membership in political groups, or friend networks or browser histories displaying clear political convictions, would not be targeted. From a methodological perspective, despite their divergent ideals, political parties use overlapping keywords to discuss their electoral programme, which means that control profiles could not create politicised posts associated with one party rather than another. In addition, in order to comply with ethical rules governing this kind of research, profiles could not join or participate in Facebook groups because of the requirement to avoid establishing relations with Facebook users.

A fourth hypothesis is simply related to the stages that must be gone through on Facebook before an account is included in targeted advertising based on interests or interactions. Probably to avoid the proliferation of fake accounts, a certain amount of time seems to be required to observe the technical parameters involved in the creation, activation, and activity of a new account, but also to observe its connection network, interactions, activities, and so on. To automate the accounts, therefore, it was not sufficient to deal with technical connection variables; other factors had to be taken into account to respond to Facebook’s scrutiny. After several months, we also noticed that accounts with a “passive” level of activity never received targeted advertisements, regardless of any other criterion and independently of the targeting option or the account’s browsing habits. All of these accounts were therefore eliminated after a certain time, given that it was impossible to collect data from them. Ensuring that a profile was linked to a more active account through friendship (in this case, with the researchers’ account) was also identified as a necessary step for the account to be recognised for targeting.

One last point: these initial tests enabled us to identify the conditions enabling a Facebook account to be “activated”, that is, to receive messages and content from the service provider. According to what we now know, these conditions do not include how many “likes” are given to pages or posts, how much connection time is involved, how many searches are carried out on Facebook, or how many games are played. However, our tests have shown that to receive content through Facebook, an account must have a browsing history with cookies. In the next stages of the research, it will be important to verify, using massive data, whether variables such as posted or shared content, number of posts, or quality of friends (active or passive) affect the reception of messages and ads in general, and in particular political ads and messages.
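The verification planned for this next stage could take the form of a simple cross-tabulation; the sketch below assumes CSV exports of account features and of the received items, with hypothetical column names.

```python
import pandas as pd

accounts = pd.read_csv("control_accounts_features.csv")  # assumed export: activity, posts, friends
received = pd.read_csv("received_items.csv")             # assumed export: one row per ad/message

# Count ads received per account and join with account features.
ads_per_account = received.groupby("account_id").size().reset_index(name="ads_received")
merged = accounts.merge(ads_per_account, on="account_id", how="left")
merged["ads_received"] = merged["ads_received"].fillna(0)

# Simple check: do activity-related variables co-vary with the number of ads received?
print(merged[["daily_minutes", "posts_per_week", "active_friends", "ads_received"]].corr())
```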

Next Steps

Now that the pilot project is finished, we can start preparing for large-scale research to be carried out during the next federal election campaign (fall 2023). Our goal is to enrich the parameters established for control accounts by extending: (a) their geographical scope (to cover all of Québec); (b) their social and cultural scope (by increasing the diversity of control account profiles to include minorities and marginalised or vulnerable groups in terms of ethnicity, sexual orientation, disability, educational level, etc.); and (c) their scope in terms of media (each Facebook profile will be matched with a control account in other digital social media such as Twitter, Instagram, and YouTube). The project will involve the creation of approximately 1500 control accounts.

To prevent hacking, Facebook geolocates account activities, which means that accounts must be automated from the region in which they were set up. To carry this out, we will put together kits consisting of modules of five small computers, already configured with control accounts and profiles specific to the newly targeted regions, and automation scripts (bots) to feed the accounts, gather data, and send it to Montreal. We will rely on our contacts and the Université du Québec network to set up modules in five cities: Chicoutimi (UQAC), Gatineau (UQO), Québec (Université Laval), Sherbrooke (Université de Sherbrooke), and Trois-Rivières (UQTR). Each module will be managed remotely by three high-performance computers in Montreal (UQAM) that will provide the interface for the research group's Big Data architecture. This infrastructure already exists and has been operational since 2015. We will be able to store, analyse, and visualise all of the data from the control accounts in real time.

All political advertising and messages received will be used to create a database. Ads and messages will be compared with the federal government's database of officially “registered” advertising in order to detect any issues of conformity or potential interference. We also intend to establish a database of “unofficial” messages and ads, identifying their sources, in order to pave the way for an analysis of the circulation of fake news or any other type of interference. In a second phase, we plan to identify and analyse, through correlation, which profile types receive political advertising and messages and how often this occurs; we will also identify and analyse, through correlation, whether there is any variation/personalisation of a given message according to the targeted profile (microtargeting).
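The conformity check could look like the following sketch, which treats both the collected ads and the official registry as flat files and matches them on sponsor and ad text; the file names and the matching key are assumptions for illustration.

```python
import pandas as pd

collected = pd.read_csv("collected_political_ads.csv")  # ads captured by the control accounts
registry = pd.read_csv("official_ad_registry.csv")      # officially "registered" advertising

def normalise(series):
    """Lower-case and strip text columns so that matching is not punctuation-sensitive."""
    return series.astype(str).str.lower().str.strip()

collected["key"] = normalise(collected["sponsor"]) + "|" + normalise(collected["ad_text"])
registry["key"] = normalise(registry["sponsor"]) + "|" + normalise(registry["ad_text"])

# Ads seen by control accounts that do not appear in the official registry are
# flagged for manual review (possible non-conformity or interference).
unregistered = collected[~collected["key"].isin(registry["key"])]
unregistered.to_csv("flagged_for_review.csv", index=False)
print(f"{len(unregistered)} collected ads not found in the registry")
```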

Conclusion

One of the main preliminary results is that Facebook's targeting capabilities are not, as in the “hypodermic needle” model of media effects (Bineham, 1988), unilateral and automatic. To be able to target, Facebook needs the traces generated by online activities such as using a search engine or visiting websites. Facebook thus needs a larger ecosystem in which data circulates openly among commercial partners. It should be noted that we had to outsmart Facebook and its strategies for detecting undesirable accounts in order to get even a small glimpse of its algorithms at work. It should also be noted that geolocation by Facebook plays a central role in account creation protocols.

This pilot project gave us a glimpse into Facebook's black box and allowed us to formulate observations that are surprising, to say the least, and that go well beyond the scope of our research. Our purpose was to analyse the communicational and socio-political consequences of automating (algorithmising) the production, delivery, and consumption of political advertising and messages, in order to problematise issues related to democracy in a digital social context and their impact on election processes. However, our preliminary findings convincingly demonstrate that Facebook's ability to detect accounts that fail to comply with community standards is still flawed. This raises an economic question: if “sponsored” posts are seen by all accounts, even duplicates or automated profiles, are advertisers paying a fair price for their targeted ads?

These findings also lead us to formulate observations which, although they are outside the scope of our current project, should be the subject of future work. We believe there is research to be done on Facebook users and the Facebook algorithm. (1) How are suggestions made in regard to other accounts that “you might know”, and how do you become “friends” with other accounts? Some of our control accounts received invitations from other accounts within hours or days of being activated. No response was given to these invitations. In addition, (2) gender seems to have an impact on the number of invitations received from strangers. Control accounts associated with “women” aged 18–35 were the ones that received outside invitations. (3) The more we study Facebook’s targeted advertising, the more obvious it becomes that this advertising is lacking in transparency for advertisers and community managers. The fragmentation of users’ areas of interest appears to be lacking in documentation and clarity, which may make targeting, and even the classification of business pages, less effective. (4) We also deplore the overall lack of transparency and understanding in relation to Facebook’s advertising tools.

The preliminary results also show that it is possible to go beyond the empiricism/theory dichotomy, but at the cost of overcoming several obstacles, chiefly refuting the neutrality of technics without giving in to technical determinism. This allows the “black box” of algorithms (Pasquale, 2015) to be opened, revealing their presence empirically through their effects. It also strengthens the research's case when appearing before ethics committees that are not always up to par with bleeding-edge approaches involving “new technologies”. Nonetheless, several obstacles remain, mainly that algorithms are still private property. This has consequences when it comes to obtaining social media companies' full collaboration. In our project, for instance, we had to play a game of hide-and-seek in order to maintain the presence of our control accounts, with Facebook trying to expunge them as fake accounts. More importantly, revealing the algorithms is the basis for any meaningful audit, whether ethical, political, or social.

This non-visibility of the algorithms also has major repercussions on the political front: our preliminary results allowed us to observe the effects of the “machinisation of politics”, in which values and finalities are concealed by the means (technics): political goals are measured by success itself. In other words, circulation becomes the main goal (Dean, 2009), taking precedence over the message itself and thus creating a void, or loss of symbolic efficiency (Žižek, 2009). We can translate this notion into two main trends: “empowered” individuals are now emancipated from the disciplinarian yoke of ideology, but at the same time they lose the normative contribution of ideology (transcendental symbolic mediations producing “universal” common values), a void that is being filled by algorithmic automation. A look at the state of America during the 2020 elections already shows us a possible future for political communication: all values are reduced to the expression of a personal opinion, so that the individual prevails over institutions and their norms, at best leaving the latter de facto in the hands of technical automation, projected as a neutral and “natural” means—nullifying the need for visibility—to achieve goals that are primarily defined in terms of pragmatic efficiency. This brings to mind the Heideggerian warning: the more Man sees himself as the “lord of the earth”, the more he confuses his destiny with that of modern technics, as Dasein succumbs to the lures of the power of power itself.