Introduction: social media corporations and governmental responsibility

Donald Trump joined Twitter in 2009, using the platform throughout his 2016 campaign for president and his subsequent presidency, until he was banned towards the end of his term in office. From the creation of his Twitter account until his suspension, he posted 57,000 tweets (Madhani & Colvin, 2021). Moreover, he used both his Facebook and Twitter accounts as political tools. Both accounts were suspended in the days following the rioting and violence at the Capitol building on January 6, 2021, while the U.S. Congress met to certify the election results confirming Joe Biden as president. Twitter's official statement read, 'we have permanently suspended the account due to the risk of further incitement of violence' (Twitter, 2021). Similarly, Facebook declared that 'our decision to suspend then-President Trump’s access was taken in extraordinary circumstances: a US president actively fomenting a violent insurrection designed to thwart the peaceful transition of power; five people killed; legislators fleeing the seat of democracy' (Clegg, 2021). In the same statement, Facebook highlighted the growing concerns facing multi-national corporations. Referring specifically to the tech industry, the company noted the growing 'delicate balance private companies are being asked to strike' (Clegg, 2021) in relation to their interaction with state structures and actors and their role in the governance of civil society. This sentiment can also be applied to a broader set of corporations in the current era.

The Donald Trump example prompted several questions regarding the role of the corporation in modern societies. What responsibility do corporations have for the world being shaped by the products they create? When are they obligated to step in to secure the welfare of society if their products threaten it? Do they have the legitimacy to act on behalf of society? Can their intervention be seen as governmental? Where do companies fit into the assemblage of governmental structures? How is technology transformed into the technologies of government? More broadly, have companies adopted a new epistemological view of their role in society, and, if so, will or could it disrupt the current roles of the private and public sectors?

Text from an exposition by the former chief security officer (CSO) of Facebook, Alex Stamos (Berkeley School of Information, 2019), highlighted these challenges and a relatively under-researched area that is essential to explore:

The major tech companies are all acting in a quasi-governmental manner. When I was a CSO of Facebook I had an intelligence team. I had a team of people whose entire job was to track the actions of state governments and their activities online and then to intercede to protect the citizens of other governments. That is a unique time that a private company has had that responsibility, that is pretty unique.

Stamos was not alone in his assertion of corporate responsibility. Brad Smith, president and chief legal officer at Microsoft, stated that ‘the time has come to recognise a basic but vital tenet [for the tech sector]: When your technology changes the world, you bear a responsibility to help address the world that you helped to create’ (Browne & Smith, 2019). While Stamos specifically spoke about tracking the activities of state governments to protect the citizens of other governments, the events surrounding Donald Trump also highlighted that corporations have a responsibility to protect individuals from their own governments. As a result of the private sector intervention in the political speech of the U.S. president, David Kaye, the former United Nations monitor for freedom of expression, posed the following query:

The question going forward is whether this is a new kind of standard they [Facebook and other social media companies] intend to apply for leaders worldwide, and do they have the resources to do it? … There is going to be a real increase in demand to do this elsewhere in the world (Satariano, 2021).

Notably, Facebook had already interceded in other countries, taking down private and state media accounts in Uganda and Iran (Satariano, 2021). Accordingly, the scope of the responsibilities that tech companies have taken on is far-reaching. According to Stamos:

[Facebook] had a child safety team. I had a counterterrorism team. These are governmental responsibilities that have been taken by the companies by the fact that they own the platform, they own where the data is, and they have access to data and resources that the public sector does not. The companies all have people who decide what is acceptable political speech, people that decide what is acceptable advertising standards for people to run ads and democratic elections. These are government decisions that, generally, are being made privately, they have, effectively speech police [emphasis added].

The responsibility that these companies have accepted, or been forced to take on, because of their products has a dualistic quality. First, control over the information provided to the public at large is a technology of government that is well established in authoritarian regimes. The evidence for this has been outlined in the literature surrounding propaganda and information dominance (Kamalipour & Snow, 2004).

Second, when corporations take responsibility for the welfare of citizens vis-à-vis their governments, their ability to counter the negative aspects of state information dominance has the potential to provide a check and balance against authoritarian ambitions. States have an innate desire to maintain their power; for the continuation of the government, they have ‘to arrange things so that the state becomes sturdy and permanent, so that it becomes wealthy, so that it becomes strong in the face of everything that may destroy it’ (Foucault, 2004, p. 4). The state’s desire to project strength in the face of everything that may destroy it can lead it towards repressive action against anything considered a threat to its continuation. Private sector intervention in government action that has the potential for repression, violence, or misinformation can provide a check on this type of action. However, there is a question of legitimacy. Do private sector institutions have the legitimacy to take responsibility for society, to decide what is acceptable speech? Returning to the reflections of Stamos:

Companies are acting like governments, but they don’t have the legitimacy of governments. They don’t have the transparency, they’ve never been elected. People choose to use their products, but they can become so powerful that they are acting at the same level as a government from a power perspective, where people can’t really choose to be free of the indirect impacts of that platform; and that causes a lot of problems, and then, a related issue is while they are acting as their own governments. They are also responsive to the legal requirements of dozens and dozens of countries. (Berkeley School of Information, 2019)

In The Birth of Biopolitics, Foucault suggested that governmental practice may not need to be legitimate, noting that ‘there will either be success or failure; success or failure, rather than legitimacy or illegitimacy’, which ‘now become the criteria for governmental action. So, success replaces legitimacy’ (Foucault, 2004, p. 16). This assertion establishes the premise for this paper, exploring corporations that accept responsibility—or are being forced to take responsibility—because they are successful and, therefore, the potentially legitimate actors to defend individual rights in the digital age.

This paper focuses specifically on the theme of speech practice and the rights surrounding freedom of speech. The paper is organised into the following sections. The first looks at the adoption of technologies as technes of government in the Foucauldian tradition, utilising Dean’s (2010) elements of governmentality. What follows is a contextualisation of applying the concept of governmentality to corporate actors, building on the work of Collier and Whitehead (2021). This section then focuses on one of the three typologies of corporate governmentality, forced governmentality, and identifies and evaluates its specific characteristics. In order to identify these characteristics, the penultimate section empirically evaluates the case of Facebook, which was forced to adopt corporate governmentality within the action space of speech practice. The paper concludes with a discussion of additional areas of study in light of a governmentality view of the corporation, particularly in algorithmic governance.

Governmentality in the twenty-first century: from technologies to technes

‘The literature on governmentality asks: by what means, mechanisms, procedures, instruments, tactics, techniques, technologies and vocabularies is authority constituted and rule accomplished?’ (Mitchell, 2010, p. 42). Similarly, Dean (2010) has categorised four elements that allow for the analysis of what Deleuze would consider regimes of practice, including an examination of the field of visibility of government, the technical aspects of government (techne), the rationality of government (episteme) and the formation of identities. An exploration of all of these aspects of the practices of forced governmentality is beyond the scope of this particular inquiry. However, identifying the characteristics of forced governmentality through Dean’s (2010) lens provides a valuable tool for illuminating the corporate practices captured by this particular form of corporate governmentality. This section will examine how corporations have been forced to take responsibility for the action space surrounding speech practice, demonstrating how technology has become a techne exercised by the corporate sector. Further, this transition develops into the ‘thought, knowledge, expertise, strategies, means of calculation, or rationality that are employed in the practices of governing’ (Dean, 2010): the episteme of governmental practice.

A multitude of investigations by ‘political and social historians … chart the insatiable appetite of modern states for statistics about every aspect of citizens’ lives and deaths’ (Daston, 2007). For example, Ian Hacking’s (2006) account of the emergence of the mathematical technology of statistics and probability in the mid-1600s—and the resultant adoption of the statistical average as a techne and episteme of government in the eighteenth and nineteenth centuries—provided a clear articulation of the phenomenon of technology becoming a techne and the subsequent adoption of the techne into the episteme of governmental practice. In the Foucauldian style, Hacking’s (2006) genealogy of this emergence starts with the philosophical problem of the objective and epistemic probabilities that overlay Dean’s (2010) technes and epistemes of government.

Inherent in Dean’s (2010) evaluation of governmentality is a dialectic relating to the technes and epistemes of government. The tension derives from the direction of flow: from the adoption of technes as a result of a particular episteme to the adoption of a particular episteme as a result of the technes of government. The development of probability and the adoption of the statistical average and the Gaussian distribution as technes of government in the seventeenth and eighteenth centuries (Amoore, 2017) were pre-dated by an epistemological change in the mentality of government that, according to Foucault, began in the sixteenth century (Foucault, 2007a, 2007b). As explored in the conclusion of this paper, this shift may actually have begun in the fifteenth century with the invention of the printing press. Moreover, it is possible that adopting the population as a governmental object (an epistemological change) may have driven the acceptance of statistical government as a techne. Privately developed digital technology, including social media platforms, has undergone (or is undergoing) a similar transition from technology to techne, mirroring the technological and governmental revolution that preceded the development of probability and statistics. However, the directional flow is much more clearly defined in the case of social media, shifting from technes to epistemes. This evolution can be seen in the timeline of social media adoption, technological developments, and the growing use of such technologies by governmental actors (both public and private).

Social media gained mass adoption around 2006. Myspace was then the most prominent social network, but it quickly lost out once Facebook changed its sign-up policy on September 26, 2006, from only allowing university students to create a Facebook page to permitting anyone aged 13 or over with a valid email address to do so. Twitter had also launched its service earlier that year, on March 21. When these services opened to the general public, they pioneered a new technology that quickly gained traction.

Facebook quickly developed the knowledge and use of users’ personal data for customisable advertising, beginning with a programme called Beacon in 2007, which allowed advertisements to be injected into the News Feed without user consent. It launched Custom Audiences in 2012, allowing advertisers to link their databases with Facebook’s to further target advertisements to users, and in 2013 the advertising capabilities were further enhanced with Partner Categories. There were several key moments in 2014, including the publication of a study on emotional contagion and the development of thisisyourdigitallife by Aleksandr Kogan, which led to the harvesting of individuals’ data and the development of psychological profiles of U.S. voters (insights that would ultimately be exploited by Cambridge Analytica). Ted Cruz employed Cambridge Analytica ahead of the 2016 elections to use ‘“psychographic” analysis of voters to try and win them over with narrowly targeted micro-messages’ (Vogel & Parti, 2015). Throughout 2017, Facebook released information on Russian entities’ disinformation campaigns that purchased ads ‘to interfere in U.S. politics and the 2016 presidential election’ (Constine, 2017), and in 2018 the company announced that it had shut down a number of troll farm accounts originating in Russia.

The timeline of the challenges Facebook has gone through with the use of data has one common thread: behavioural and digital territory management. What moved this technology to a techne of government was the way it was used. Cambridge Analytica’s work, developing psychographic profiles of voters, may seem benign, but as Christopher Wylie, a former Cambridge Analytica employee, explained:

Cambridge Analytica could … craft adverts no one else could: a neurotic, extroverted and agreeable Democrat could be targeted with a radically different message than an emotionally stable, introverted, intellectual one, each designed to suppress their voting intention—even if the same messages, swapped around, would have the opposite effect. (Hern, 2018)

The outcome of suppressing voter intention demonstrates the adoption of data analytics and predictive models for governmental aims. This practice falls into the category of technes becoming part of the episteme of government. The focus of this inquiry, however, is the transformation of technologies into technes: in this case, the adoption of algorithmic forms of governance (Amoore, 2017; Cooper, 2020). The key point is that the development of algorithms to shape social media sites created a problem that needed to be governed and managed, and the private sector is being called upon to take responsibility for the problems that have been created. Interestingly, by taking responsibility, social media companies are also using algorithmic methods to solve the problems they created.

The management of speech practice by the private sector provides an interesting insight into its adoption of algorithms as a techne of governmental practice. The Trump example provided a clear picture of social media companies' intervention in speech practice, but it also represented a special case: Donald Trump's speech was moderated by humans, individuals within Facebook and Twitter respectively. This approach is not general practice for content and speech moderation at a wider scale.

The European Parliament report (2020) on algorithms for online filtering and moderation provided a concise model of the socio-technical moderation system generally used by platform companies. The major platforms use a hybrid model for content moderation: as indicated in Fig. 1, uploaded content initially goes through automated filtering performed by algorithmic tools; only items whose classification is uncertain are then passed to a human for assessment, and once a decision is made it is fed back into the training data so that the algorithm can filter similar content in future (a minimal sketch of this loop follows Fig. 1). Thus, algorithms—not people—categorise the majority of content, either as harmful, removing it, or as not harmful, making it visible to users.

Fig. 1 The integration of automated filtering and human moderation (European Parliament, 2020)

Before discussing the marriage between algorithms as a techne of government and speech practice, the next section will explore platform companies’ intervention in speech practice as a form of corporate governmentality.

The historical evolution of governmentality: from states to corporations

In 2020, Collier and Whitehead (2021) outlined a general theory of corporate governmentality that was ‘concerned with the ways in which corporate governmentality challenges established notions of governmentality and how a Foucauldian perspective can itself contribute to the study of the governmentalisation of the corporation and the corporatization of government’. The general theory provided analytical language for understanding the intervention within society through behaviour management for the welfare of populations of individuals. The authors identified three typologies of corporate governmentality: (1) deliberate corporate governmentality, where companies actively seek responsibility for social interventions (Collier et al., 2021); (2) incidental corporate governmentality, where companies take actions within the sphere of governmentality for the production of data, but without interest in societal change (Whitehead & Collier, 2021); and (3) forced corporate governmentality, where corporations are thrust into a governmental role as a result of their technology becoming a techne of government, or of having to engage in governmental action due to a lack of regulation in their industrial space. This paper seeks to develop the concept of forced corporate governmentality.

The first two manifestations of corporate governmentality (deliberate and incidental) are primarily concerned with an ethos of care for a narrow set of stakeholders, predominantly current and potential customers, rather than the population at large. In the case of forced governmentality, there is a trend towards the broader notion of populations and an ethos of care for the totality of potential stakeholders and customers (i.e. the global population). Furthermore, while Stamos highlighted that corporations act governmentally yet remain ‘responsive’ to the legal requirements of the territories in which they operate, their technologies becoming technes of government(al) institutions has given them a potential security apparatus unlike that of the state.

The state uses its security apparatuses (e.g. police, military) to ensure the optimal functioning of vital economic and social processes. In contrast, the private sector provides security through technology, including the removal of tools and utilities necessary for states to govern and, more generally, for modern life, thus embodying the concept of Foucauldian governmentality. Put simply, a select number of private-sector corporations control users’ conduct. Twitter and Facebook established this quite literally by banning Donald Trump from their services (and through their interventions in Iran and Uganda). They arbitrated what conduct was acceptable by political actors and, more remarkably, by the then head of state. Indeed, they were not only called upon to set a precedent and policy for such conduct but also had the ability to enforce it.

While the topic of political conduct is a fascinating manifestation of how corporate governmentality can apply to a particular situation, this paper, like the initial work on corporate governmentality by Collier and Whitehead (2021), sets out a more general theory of forced governmentality. It provides an avenue for further study and an analytical tool that can be used to evaluate specific instances of corporate governmentality related to political speech intervention. As such, this paper establishes some characteristics of forced corporate governmentality in order to provide scope for further research into the source of private-sector responsibility, the industries and sectors where forced governmentality may emerge, and some of the implications of corporations being thrust into a governmental role.

The exploration of this aspect of corporate governmentality presents methodological problems for a full enquiry into the concept. Unlike voluntary governmentality, which originates in the genealogy of the state, the corporation and the discourse surrounding corporate social responsibility, or incidental governmentality, which is more akin to conducting randomised control trials and behavioural experimentation for analytics development or profit generation, forced governmentality lacks a clear origin point; it has governmental characteristics because of the scale and nature of the interventions of multinational corporations. Forced governmentality appears to tend toward newer industries and spaces without established government legislation and intervention. While there are multiple sources for this lack of an origin point, Zuboff (2019), in her work on surveillance capitalism, has described modern technical corporate actors in terms of trespass, stating that ‘trespass is important to data rendition because it enabled surveillance capitalists to expand their data reach rapidly without having to gain legal authority or personal consent’ (Whitehead, 2019, p. 8). Unlike bricks-and-mortar businesses and traditional industries, where planning or general permissions must be granted prior to development, the tech industry stems from the ‘Californian ideology’, which ‘combines the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies’ (Barbrook, 1996, p. 1) and attempts to ‘push beyond the limitations of both the technologies and their own creativity’ (p. 18). The resultant developments chart unknown territories where responsibility is accepted in an ex post facto manner. Consequently, private sector actors are called to accept governmental responsibilities, leading to increased tension between the public and private sectors.

Action space

Each of the observed typologies of corporate governmentality has distinct characteristics. The primary difference between them is the action space (the area where actors take action) of their governmental interventions. First, voluntary governmentality allows corporations to choose the action space for their responsibility. As they are voluntarily taking responsibility for aspects of societal welfare, they can choose a single action space or multiple action spaces from a range of options that match their brand or corporate ambitions (Smith, 2019). One example is the Mexican corporation Bonafont, part of Danone, which has become a champion of women’s rights and gender equality, voluntarily choosing its action space for a specified population: women. It may seem counter-intuitive for a company in the water distribution industry to take on something unrelated to its brand (D'hond, 2020), yet it has built a reputation on a particular action space that it specifically chose.

Second, incidental governmentality allows for governmental interaction in a fluid action space that depends on the experimental focus of the behaviour change trials a specific actor is running. A range of examples demonstrates the forms of incidental governmentality, including the Facebook VoterMegaphone Project (Collier & Whitehead, 2021) and social media companies’ experimental emotional contagion research, which specifies the action space of mental health. The intervention and governance of that particular action space related specifically to the research conducted at the time and shifted from one action space to another. In contrast, forced governmentality takes the choice of action space out of the hands of the actor who implements it.

Finally, within forced responsibility, the action space is chosen by external actors (e.g. stakeholders, civil society or the public sector). In the case study of social media platforms, the enforced action space for corporations related to speech rights and practices. An additional example of a potential action space where corporations may be forced to accept a governmental role is space equity and the information rights of the global population, specifically in relation to Facebook’s project Athena and Elon Musk’s SpaceX (Barbara, 2020; Matsakis, 2018). While information rights might not be as established as freedom of speech, there is increasing discussion of the right to access information, which may become an inalienable right similar to that of private property or speech. Legislation such as the Freedom of Information Act in the UK and the Right to be Forgotten in Europe tends towards this view of information rights. If challenged by private sector intervention from the likes of SpaceX or Athena, this right to information could provide a forced action space of responsibility, much as the advent of social media led to calls for private interventions into the action space of speech practice.

However, forced governmentality also has a dualistic character. There is a difference between legislative intervention, where the private sector is compelled by law to act and accept a governmental role, and forced governmentality, which relates to a non-state compulsion to act governmentally. To clarify, the drive to adopt governmentality is either based upon the internal acceptance of responsibility for the world companies have contributed to creating, or it derives from calls from civil society that force private companies to act, threatening to reject their service or product if the company refuses to accept responsibility. Further, it is not the mere appearance of governing, akin to companies adopting environmental policies that have not been inculcated into their organisational culture, amounting to surface-level change or greenwashing. Instead, forced governmentality is characterised by the meaningful adoption of responsibility that has been forced upon companies by internal or external pressure unrelated to a legislative body. That there is no direct legislative pressure does not exclude the indirect influences resulting from the relationships of corporations to governments, nor does it place forced governmentality outside the broader political-economy discourse on the interactions between corporations and government, government and civil society, and civil society and the corporation. However, the focus of this inquiry is the adoption of responsibility by corporations in the absence of first-order involvement by a legislative body. Connecting this forced adoption of responsibility, and the application of governmentality analysis, with broader political-economy discourses or theoretical perspectives (e.g. Marxist, or the surveillance capitalism of Zuboff) is out of scope for this inquiry, but it provides an interesting avenue for further research.

There is an additional consideration that needs to be taken into account regarding the spectrum of governmentality that corporations can adopt. As noted above, companies may voluntarily take responsibility for a specific action space, such as Uber accepting responsibility and adopting governmentality towards mobility ‘rights’, which led to forced responsibility relating to the welfare of their riders and drivers. As this choice may lead to their inclusion in public sector mobility services through their platform, there may also be an expansion of their responsibility towards the safety of individuals and populations that utilise public mobility services (Collier et al., 2021). This practice leads to a lack of mutual exclusivity between the three typologies of corporate governmentality developed by Collier and Whitehead (2021). Further to this point, once technologies are adopted as technes of government—either by private or public sector actors—there is a component of incidental governmentality, as demonstrated in the case study put forth surrounding the track and trace application in the exploration of incidental governmentality by Whitehead and Collier (2021).

Speech practice: control and curation of knowledge

Speech practice as an object of governmentality is not a novel occurrence, nor is a technological development changing the power-knowledge relationships of governing actors on a global scale. Centuries ago, the invention of the printing press challenged the governmental power of the Catholic Church and made possible liberalisation, capitalism, and democracy (McLuhan, 1962). Prior to the invention of the printing press, the Church was able to operate as the singular curator of religious views, managing the dissemination of knowledge and the interpretation of scripture. However, with the technological development of the printing press, ‘many citizens no longer needed to rely on religious authorities for knowledge, interpretation, and analysis of religious literature’ (McDanie, 2015, p. 30). The Church’s control over language, specifically scholarly Latin, enabled tight control over religious knowledge. The combination of the printing press and Martin Luther’s translation of the Bible into German, the language of the common people (removing the Catholic Church’s sole control over religious knowledge), precipitated a shift in power relations that supplanted the Catholic Church with the secular state.

Interestingly, the transition from Catholicism to Protestantism parallels the underlying debate surrounding social media and speech as an object of governmentality. To use terminology from the modern debate, the Catholic Church operated as a publisher while the Lutherans (via the printing press) operated as distributors or platforms. Underlying the following discussion on speech practice is an ongoing debate that frames the management of speech practice by private sector actors: are social media companies a platform (distributor) or a publisher? (For a full discussion of the nature and nuance surrounding Section 230, the section of the U.S. legal code relating to platforms’ responsibility for third-party content, and its implications for the legal rights of and differences between platforms and publishers, see Kosseff, 2019.) In general, publishers can be held liable for the content they produce, whereas platforms, under Section 230, are not liable for third-party content. Social Media HQ provides practical examples in plain language that highlight the difference:

The easiest way to think about this dichotomy is by thinking about newspaper companies…. These companies make editorial decisions about what news to publish…. If they publish a slanderous or defamatory article about a high-profile individual, they can expect to be sued in court…. Now, think about companies like Verizon, AT&T or Comcast. These are platforms, in that they primarily serve to facilitate communication and distribute information. They can’t ‘ban’ you from using their services, even if you are trafficking in all kinds of conspiracy theories. If you want to talk about ‘false flag’ conspiracy events with your crazy uncle on the phone, nobody is going to block or throttle that conversation. You won’t get a letter from AT&T saying that you are no longer a customer. (Zilles, 2020)

However, this distinction does not mean that all content is created equal on platforms. For example, James Grimmelmann (2015) has detailed how content on social media platforms falls into two categories: ‘information goods’ and ‘information bads’. He further categorises the information bads into four areas: congestion, cacophony, abuse, and manipulation. Forced governmentality in speech practice management relates to the latter two categories: abuse, where ‘the community generates negative-value content—information “bads”’, and manipulation, ‘in which ideologically motivated participants try and skew the information available through the community’ (p. 13). It is important to ‘note that abusive content is not limited to what is prohibited by law: the concept of “information bads” needs to be contextualised to the participants in an online community, to their interests and expectations’ (European Parliament, 2020). This characterisation demonstrates the aforementioned space and characteristics of forced governmentality that fall outside of the legislative sphere.

Like speech practice, content moderation is not a new concept. It can be applied for the benefit of the complete set of users, the platform community, or smaller sets of communities that are able to set stricter or looser moderation standards. Highlighting this feature, the streaming service Twitch has two sets of standards: general platform standards and a set of standards unique to each channel creator. The channel creator can then appoint voluntary channel moderators who moderate the chat and content on the channel in line with the individual community’s guidelines. In small-scale communities, the content review process is reactive: reported content is reviewed by human moderators, who may take it down.

However, as communities expand, the review process changes to ‘proactively enforce policies’ (Zuckerberg, n.d.). It is this ‘proactive’ platform-wide moderation that is of concern in the concept of corporate governmentality. Another primary aspect of using the language of forced corporate governmentality relates to the scale of the platform. While size and scale (as we will see) are important, the reach of a company and its broader engagement with a population is critical. Companies must also adopt certain governmental practices such as behaviour change (as we will see in the section on natural engagement patterns) and the management of territory (in the context of this paper, the management of digital territory) (Collier & Whitehead, 2021).

There are specific characteristics that organisations must possess for them to be considered corporations that have accepted governmentality. The limitations relate to the minimum size and reach of corporations that can ‘meaningfully engage’ with enough individuals within a population. As noted above, smaller communities that engage with fewer users are typically not large enough, or fiscally secure enough, to meaningfully engage in governmentality, forced or otherwise; companies need the fiscal capacity to undergo governmentalisation. Novel industry sectors, such as technology and platform-based corporations, are the most likely to engage in governmentality.

Facebook, Twitter, Uber, and Lyft are examples of companies that meet the reach, size, and fiscal minimums. Furthermore, due to their reach and size, they are able to influence the socio-cultural environment through minimum standards that go beyond legislated compulsion (Collier et al., 2021; Grimmelmann, 2015). This feature is particularly pertinent, as the data show that there were approximately:

4.33 billion social media users around the world at the start of 2021, equating to more than 55 percent of the total global population…. Facebook remains the world’s most widely used social media platform, but there are now six social media platforms that claim more than one billion monthly active users each … [and] at least 17 social media platforms have 300 million or more monthly active users. (Datareportal, n.d.)

While there is potential for large-scale platforms, tech companies, and novel industry companies to become governmentalised, the process is not inevitable. In order to explore the phenomenon of forced governmentality, this paper examines how social media platforms such as Facebook and Twitter have been governmentalised within the action space of speech practice and contrasts this with similar social media platforms that operate in a differing political-cultural landscape and have not undergone governmentalisation: for example, Weibo and QZone.

The choice of these particular case studies was based on their popularity, size, and reach, inspired by the work of map designer Martin Vargic (Ang, 2021) and Halcyon Maps, a yearlong project to concisely ‘but still comprehensively visualize the current state of the World Wide Web, and document the largest and most popular websites over the period of 2020–2021’ (Halycon Maps, n.d.). As shown in Fig. 2, the dominant social media platform is Facebook, while in China and Russia there are alternatives. As part of the analysis of Facebook’s corporate governmentality, we contrast it with an inquiry into Chinese social media platforms and indicate how, as opposed to the Facebook case study, social media platforms in China failed to be governmentalised due to the country’s distinct socio-political landscape.

Fig. 2 Map of the Internet 2021 (Halycon Maps, n.d.)

While the conclusions of this paper do not focus on one specific social media site (e.g. Facebook, Twitter, Sina Weibo, QZone), this examination highlights the characteristics of social media sites that have become, within our definition, governmentalised or that have adopted corporate governmentality. To paraphrase Bruns’s (2018) articulation of the interrelationship between Facebook and Twitter:

these platforms do not exist in isolation from each other; that they share users to a considerable extent; that through automated as well as manual means, information flows between them at considerable volume; and that they both exist as part of a broader, thoroughly interconnected social media network means that—with the necessary adjustments—many of the actions we find on Twitter also translate to Facebook, and vice versa. (p. 9)

There are slight differences among social media platforms in their orientation and use. Facebook orientates itself more towards a private network experience in contrast to Twitter, where most user accounts are public and thus oriented towards a microblogging network. As noted by Larrson and Christensen (2016, as quoted in Bruns, 2018, p. 13), ‘we can perhaps consider Facebook as the news “showroom”—used mostly for broadcasting messages—whilst Twitter is the news “chat room”—used more for interaction’. This difference is mirrored in the Chinese equivalents QZone and Sina Weibo.

While these slight differences may mean that ‘important distinctions’ are missed when discussing the services under the general heading of social media platforms, for the purposes of this paper, which seeks to establish shared characteristics and provide a comparison of macro-level phenomena, they do not disrupt the inquiry. Where specific differences do impact more general conclusions, this will be noted; otherwise, the platforms will be explored in general and specific terms relating to the shared characteristics that apply within the action space, here defined as speech practice.

The backbone of the case study focuses on Facebook, analysing Mark Zuckerberg’s (n.d.) open letter, ‘A Blueprint for Content Governance and Enforcement’. Given Facebook’s global popularity and its nearly 3 billion users, this document on content governance could arguably be seen as one of the most important and far-reaching documents of its kind: no governmental or corporate entity in history has held comparable responsibility for this number of individuals. The content of Zuckerberg’s letter will be supplemented with secondary data sources that address specific themes not covered within it. The letter is laid out in the following sections:

  • Community Standards

  • Proactively Identifying Harmful Content

  • Discouraging Borderline Content

  • Giving People Control and Allowing More Content

  • Addressing Algorithmic Bias

  • Building an Appeals Process

  • Independent Governance and Oversight

  • Creating Transparency and Enabling Research

  • Working Together on Regulation

Broadly, these topics can be separated theoretically using three of Dean’s (2010) four elements of governmentality: field of visibility, technical aspects (techne), and rationality (episteme). The final element, the formation of identities, indicates a broader shift in the view of the world adopted through the governmentalisation process. This final aspect will be addressed in the discussion on the datafication of society, presenting a new view of the world and examining the possible implications of social media governmentality. It will also note the parallels between contemporary changes in the governance of society and the technological advancement embodied by the printing press.

Characteristics of forced governmentality

The overarching purpose of this paper is to explore the characteristics of forced governmentality. To do this, it analyses the process of governmentalisation that Facebook went through regarding the governance of speech practice through algorithmic content moderation. As noted above, some characteristics of forced governmentality can potentially apply across technology and novel industries. However, the process is not deterministic: there are platforms and technology companies similar to Facebook that have not undergone governmentalisation because they operate in different socio-cultural environments. To explore the adoption of governmentality, we address organisations that provide a similar platform but have not undergone governmentalisation, adding a comparative perspective.

In 2009, China blocked Western social media platforms from open access to its citizens. As in other countries around the world, there are legislated limitations on the content that can be distributed on the internet. However, in China, the limitations imposed are articulated in Article 5 of the Computer Information Network and Internet Security, Protection and Management Regulations—1997 (Ministry of Public Security, 1997). These restrictions include:

  (3) Inciting division of the country, harming national unification

  (5) Making falsehoods or distorting the truth, spreading rumours, destroying the order of society

  (6) Promoting feudal superstitions, sexually suggestive material, gambling, violence, murder

  (8) Injuring the reputation of state organs

The restrictions in China and the overall control of the internet within its physical territory are typically referred to as the Great Firewall, ‘a joint effort between government monitors and the technology and telecommunications companies compelled [emphasis added] to enforce the state’s rules’ (Bloomberg News, 2018). The level of control exercised by the centralised Chinese state prevented social media platforms from assuming the role of independent governmental actors and therefore from adopting their own governmentality. Instead, the platforms became part of the techne of government for the state apparatus.

Unlike some Western countries, the Chinese state uses platforms as an extension of its security apparatus. The platform has become a tool for policing, used ‘to punish users who post sensitive content to induce self-censorship and to avoid content being posted’ (Qin et al., 2017, p. 121). This monitoring is carried out across all levels of government by information officers and internet monitors (ibid). The platform is also a tool of control over citizens, highlighting how China deals with contested virtual and physical space and its relationship to the people who engage in it (King et al., 2017). The tools of control include censorship and propaganda. Not only can the government limit access to information, but it can also ‘affect debates and sentiment on social media by actively posting their own content’ (Qin et al., 2017, p. 122).

In contrast to the way social media is used as a techne by state actors in other countries (e.g. Trump using Twitter for political speech or Cambridge Analytica being hired by political actors to influence U.S. voters), the Chinese government (similar to Russian state-sponsored troll farms for the dissemination of information) utilises what is known as the 50c party to strategically draw attention away from controversial topics. This approach is known as ‘“astroturfing” or what might be called reverse censorship [emphasis in original]’ (King et al., 2017, p. 484). If the Chinese government engaged in this activity on a platform such as Facebook, it would contravene the Community Standards relating to Integrity and Authenticity and would be blocked, a compromise that would be unacceptable for a platform operating within an authoritarian regime’s territory.

The power of platforms not controlled by authoritarian states also contrasts with those that operate in China. The recent Facebook publisher controversy with the Australian government demonstrates a critical difference in the ability of these private actors to contest and exert control over the conduct of the states within whose territories they operate. The Australian government attempted to impose controls on Facebook, claiming it was a publisher and therefore required to pay news agencies for curating their content, as part of new legislation called the News Media and Digital Platforms Mandatory Bargaining Code (Morrison, 2021). Facebook’s response was to ban all news (including news links) for Australian users, forcing the Australian government back to the table; the government subsequently amended the legislation in a way, in Facebook’s words, ‘that will allow us to support the publishers we choose to’ (Brown, 2021). Thus, while Facebook will ‘invest’ in journalism, it will ‘retain the ability to decide if news appears on Facebook so we won’t automatically be subject to a forced negotiation’ (Brown, 2021).

This event not only demonstrates the contestation of power dynamics between the public and private sectors but also clarifies how such private companies monitor conduct, designating them as the actors responsible for governing content distribution (and therefore, from a Foucauldian perspective, as legitimate actors who govern the digital conveyance of information via speech). It is inconceivable, however, that platforms within China’s jurisdictional territory would contest a legal mandate issued by the state, and implausible that they could contradict the Chinese government in the way that Facebook moved against the Australian state.

Case study: Facebook as a governmental actor in the speech practice sphere

Note: Any direct quotes in this section that are not specifically attributed to an alternate source are taken from Zuckerberg’s ‘Blueprint for Content Governance and Enforcement’ (Zuckerberg, n.d.)

Mark Zuckerberg begins his ‘Blueprint for Content Governance and Enforcement’ letter with two seemingly contradictory ideas. Firstly, he states his belief that the world is a better place when people can share their experiences and ‘when traditional gatekeepers like governments and media companies don’t control what ideas can be expressed’. The implication of this statement is layered: it conveys a particular view of government and media in relation to the right to speech practice and information freedom, and it implies that governments and media organisations cannot be trusted as ‘gatekeepers’ of speech practice. He then expresses the need for some governance of the space, stating that:

at the same time, we have a responsibility [emphasis added] to keep people safe on our services—whether from terrorism, bullying or other threats. We have a broader social responsibility to bring people closer together—against polarization and extremism. The past two years [2016–2018] have shown that without sufficient safeguards, people will misuse these tools to interfere in elections, spread misinformation and incite violence.

The implications of this second statement are significant in several ways, and they add weight to the first statement. First, it supports the argument surrounding the use of speech and content, noting that both can be abused and manipulated. By extension, it recognises that political actors and governments have adopted the technology of a social media platform as a techne of government. Zuckerberg then states that there must be ‘safeguards’, that is, speech management, in place to curb the ills resulting from the failure to govern such spaces.

Secondly, he states that private actors have a ‘responsibility’ to address the problems that arise from their technologies, implying that they have not voluntarily chosen to take responsibility for a specific action space but are forced to take responsibility for the areas their technology transforms. Further responsibilities arise from this forced responsibility for a specific action space—in this case, welfare and security, resulting from companies’ responsibility for speech practice.

Finally, when combined with the previous sentiment surrounding governments’ inability, ulterior motives, or lack of legitimacy to manage this action space, Zuckerberg argues that, as Foucault indicated, these actors can successfully manage this space and should therefore be the ones to govern it; their successful ability to govern the space makes them its legitimate governors. The management of speech practice and content on the platform is what he has ‘focused more on…over the past couple of years’, indicating that in those prior years Facebook had undergone what we would describe as governmentalisation, or the process of adopting governmentality.

Field of visibility: aspects of speech practice included for moderation

What kind of light illuminates and defines certain objects and with what shadows and darkness obscures and hides others. (Dean, 2010, p. 41).

Within the context of speech practice, Facebook claims that the governance practices that relate to speech are illuminated by the Community Standards. To paraphrase Dean (2010), the Community Standards make it possible to ‘picture’ who and what is to be governed. The ‘who’ is codified in the Additional Information section and is described in terms of ‘stakeholders’:

By ‘stakeholders’ we mean all organizations and individuals who are impacted by, and therefore have a stake in, Facebook’s Community Standards. Because the Community Standards apply to every post, photo, and video shared on Facebook, this means that our more than 2.7 billion users are, in a broad sense, stakeholders. (Facebook, n.d.)

While the term stakeholders here relates to ‘just’ 2.7 billion users, this is an understatement, because it only counts users, whereas stakeholders also include those organisations and individuals who are ‘impacted’ by Facebook. As noted earlier, the minimum standards impact the socio-cultural norms of the territories where the platform operates (Collier et al., 2021). Moreover, the assemblage of social media networks and other platforms do not exist in isolation from one another (Bruns, 2018). Furthermore, the reach of stakeholders necessarily includes the public sector actors who utilise the network as a techne of government, extending the set of stakeholders to the entire assemblage of actors involved in the governance of individuals and populations.

Regarding what is included in the field of visibility within Facebook’s Community Standards on speech practice, the sections of the Community Standards related to speech moderation fall into three categories: legislated, unlegislated, and those that straddle, to a greater or lesser degree, both codes. An example of the last is the Community Standard related to Bullying and Harassment: there is no legal definition of bullying unless it crosses the boundary into harassment, which is unlawful discrimination against a protected characteristic under the law. Similarly, the majority of the section on Integrity and Authenticity falls firmly in the unlegislated category. There is no legal compulsion, for instance, to regulate false news, manipulated media, or misrepresenting oneself online.

The items that were in doubt, or straddled the legislated and unlegislated categories, were discussed between the paper’s authors and were included or excluded through a consensus determination. Those aspects that we concluded went beyond legislated compulsion, and over which Facebook exercises a discretionary level of governance, are detailed in Table 1. The importance of categorising the standards in this way is to reveal the contested space noted in the section on action space. Specifically, it is vital to separate instances when corporations are compelled to act by regulations already in place from instances when they choose to govern an action space independently or are forced to do so.

Table 1 Difference between issues that force companies to act based upon legislative intervention versus forced governmentality

Within the topic of speech practice, the next section will explore the mechanisms, techniques, and technologies used to govern, specifically the algorithmic governance of speech practice on platforms.

Techne: the technologies of and mechanisms of government

Through what means, mechanisms, procedures, instruments, tactics, techniques, technologies and vocabularies is authority constituted and rule accomplished? (Dean, 2010, p. 42).

The technological development that led to the creation of social media and other digital platforms created a problem that required governance. This necessity of governance operates at two specific levels. The first is a local, platform-level problem: managing content that contradicts the platform’s self-interest, be that ‘value’, reputation, or revenue. We introduce this topic here and explore it in the next section on the natural engagement pattern of user engagement. The second is a larger, strategic-level problem: allowing, and not managing, borderline content that tends towards polarising political views has a spill-over effect into the broader society, as demonstrated in the introduction with the Trump example. The ability to access polarising content can become a strategic-level problem that undermines popular values or weakens the authority and stability that states and societies are trying to achieve. As states (in most cases) cannot manage these strategic-level problems, the governance of this action space is left to, or forced upon, the platforms themselves.

At Facebook, content moderation was initially adopted as a process to manage and enforce the Community Standards at the local platform level:

For most of our history, the content review process has been very reactive and manual— with people reporting content they have found problematic, and then our team reviewing that content. This approach has enabled us to remove a lot of harmful content, but it has major limits in that we can’t remove harmful content before people see it, or that people do not report.

In smaller communities, the manual approach can be effective. However, as a community grows beyond a certain threshold to that of a comprehensive network, manual moderation fails to keep up with its size and scale. The Facebook network is at such a scale that uploads to the platform are enormous: over 300 million photos are uploaded per day, and roughly 500,000 comments and nearly 300,000 status updates are posted every minute (Marr, 2020). The sheer number of content moderators needed to review this volume of content is unthinkable and not scalable; a rough calculation below illustrates the point.
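
The following back-of-envelope calculation uses the per-minute figures above together with assumed values for review speed and shift length; the assumptions are only meant to show the order of magnitude involved, not to estimate actual staffing.

```python
# Rough, illustrative calculation of the moderator headcount implied by the
# per-minute volumes cited above (Marr, 2020). Review speed and shift length
# are assumptions chosen only to show the order of magnitude.

comments_per_minute = 500_000
status_updates_per_minute = 300_000
items_per_day = (comments_per_minute + status_updates_per_minute) * 60 * 24   # ~1.15 billion

seconds_per_review = 10                  # assumed time for a human to assess one item
shift_seconds = 8 * 60 * 60              # one eight-hour moderator shift
items_per_moderator_day = shift_seconds // seconds_per_review   # 2,880 items

moderators_needed = items_per_day / items_per_moderator_day
print(f"{items_per_day:,} items/day -> ~{moderators_needed:,.0f} moderators")
# 1,152,000,000 items/day -> ~400,000 moderators, before photos, videos or ads.
```

Even under these generous assumptions, text content alone would require a workforce in the hundreds of thousands, which is why a purely manual approach cannot scale.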

Consequently, the need for a solution that can determine as quickly as possible what content to include and exclude was predicated on technological innovation: ‘moving from reactive to proactive handling of content at scale has only started to become possible recently because of advances in artificial intelligence’. The technology is still not sufficiently developed to manage the totality of decisions, and therefore a hybrid operating model is used, as detailed in Fig. 1.
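A minimal sketch of what such a hybrid operating model could look like is shown below: an automated classifier handles high-confidence cases proactively, while uncertain cases are routed to human reviewers. The thresholds, function names, and heuristic classifier are assumptions for illustration and do not represent Facebook’s actual system.

```python
from dataclasses import dataclass

@dataclass
class ModerationDecision:
    action: str        # "remove", "human_review", or "allow"
    score: float       # model-estimated probability of a policy violation

# Assumed thresholds for illustration only; real systems tune these per policy area.
AUTO_REMOVE_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60

def classify_violation_probability(content: str) -> float:
    """Stand-in for a trained classifier; returns a violation probability."""
    # Hypothetical heuristic purely so the sketch runs end to end.
    flagged_terms = ("attack", "kill")
    hits = sum(term in content.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)

def moderate(content: str) -> ModerationDecision:
    """Proactive AI pass first; uncertain cases fall back to human reviewers."""
    score = classify_violation_probability(content)
    if score >= AUTO_REMOVE_THRESHOLD:
        return ModerationDecision("remove", score)
    if score >= HUMAN_REVIEW_THRESHOLD:
        return ModerationDecision("human_review", score)
    return ModerationDecision("allow", score)

print(moderate("A harmless holiday photo caption"))
print(moderate("A post urging followers to attack and kill"))
```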

Zuckerberg’s letter identified three distinct areas that relate specifically to the techne Facebook adopted in its move towards governmentality: Proactively Identifying Harmful Content, Discouraging Borderline Content, and Giving People Control and Allowing More Content. The development of increasingly sophisticated algorithms and artificial intelligence (AI) that would allow for the totality of decisions and eliminate the human element is implied in several sections of the letter: ‘This work will require further advances in technology’, and Facebook is making ‘multi-billion dollar annual investments we can now fund’ in pursuit of this goal. Handing complete control over content moderation to AI would be a giant leap forward in algorithmic governance, enabling governance at a macro, population-level scale. Yet the ambition of algorithmic governance extends beyond macro-level governance. Zuckerberg’s section on Giving People Control and Allowing More Content provides insight into the platform’s governance ambitions:

For those who want to make these decisions [regarding what content is visible] themselves, we believe they should have that choice since this content doesn’t violate our standards.

Over time, these controls may also enable us to have more flexible standards in categories like nudity, where cultural norms are very different around the world and personal preferences vary. Of course, we’re not going to offer controls to allow any content that could cause real world harm. And we won’t be able to consider allowing more content until our artificial intelligence is accurate enough to remove it for everyone [emphasis added] else who doesn’t want to see it.

This emphasis on individual controls, which govern not only at the macro- but also at the individual or micro-level, has implications for the customised governance of both populations and individuals. There is a distinct link between pastoral care and algorithmic governance. A governmentality that caters simultaneously to a population and to an individual’s needs is something governmentality has previously only been able to aspire to in theory. With the investment in AI and algorithmic governance, actors may suddenly be able to govern both the whole and its parts, and this is not merely a possibility but something being actively pursued.
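One way to picture governance operating at both the macro (population) and micro (individual) level is a sketch in which population-wide defaults can be overridden by individual preferences where the platform permits it. The categories, thresholds, and function below are hypothetical illustrations, not Facebook’s actual controls.

```python
# Hypothetical sketch of governance at both the population (macro) and
# individual (micro) level: a population-wide default threshold per content
# category, optionally overridden by a user's own preference where allowed.
# Category names and thresholds are assumptions for illustration.

POPULATION_DEFAULTS = {"nudity": 0.2, "violence": 0.1, "borderline_political": 0.5}

def visible_to_user(category: str, content_score: float,
                    user_preferences: dict[str, float]) -> bool:
    """Show content only if its score stays under the applicable threshold."""
    threshold = user_preferences.get(category, POPULATION_DEFAULTS[category])
    return content_score <= threshold

# A user who opts in to see more borderline political content than the default.
prefs = {"borderline_political": 0.8}
print(visible_to_user("borderline_political", 0.7, prefs))   # True for this user
print(visible_to_user("borderline_political", 0.7, {}))      # False under the default
```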

The natural engagement pattern

As part of Facebook’s governance strategy, in addition to content moderation, the company is also developing and employing additional governance algorithms. The section on Discouraging Borderline Content declares that one of the ‘biggest’ issues it (and all social networks) faces is ‘borderline’ content, because ‘when left unchecked, people will engage disproportionately with more sensationalist and provocative content’. There are two important points regarding borderline content. The first concerns addressing it at the level of the ‘incentive’ via algorithmic governance. The second is whether it is possible to ‘simply move the line defining what is acceptable’ rather than ‘reducing distribution’.

The borderline content issue raises two critical concerns. The first is that the company has a conflicting business incentive to encourage content that increases engagement on the platform, borderline or otherwise, because of the financial component. Engagement correlates directly with advertising spending on the platform. From a purely monetary perspective, therefore, limiting content that drives engagement works against Zuckerberg’s stated desire to address the problem, despite his concern that ‘it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services’. The issue then becomes an optimisation problem: if the impact of declining quality in services, public discourse, and polarisation can be quantified, it can be mathematically balanced against the financial incentive for non-intervention in the spread of borderline content. This balance mirrors the Examination, Maintenance, Inspection and Testing (EMIT) optimisation model.

Figure 3 models the optimisation problem of the declining quality of content, classified as preventative maintenance, against the financial cost of non-intervention, pictured as the cost of corrective maintenance. The implication is that if Facebook manages its users’ behaviour relating to borderline content, there will be a quantifiable impact, and a business decision must therefore be made about whether to address the ‘incentive’ problem surrounding this type of content. This is discussed further in the episteme section, as the ‘means of calculation, or rationality that are employed in the practices of governing’ (Dean, 2010, p. 42) fall within the episteme of governmentality rather than its techne.

Fig. 3 EMIT optimisation (Risktec, n.d.)
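To make the trade-off concrete, the following is a minimal, purely illustrative sketch of an EMIT-style optimisation: the cost of intervening against borderline content rises with the level of intervention, while the cost of leaving it unmanaged falls, and the total cost has a minimum. The curve shapes and coefficients are assumptions for illustration, not Facebook’s figures or the actual parameters of the Risktec model.

```python
import numpy as np

# Illustrative EMIT-style trade-off: intervention cost (analogous to
# preventative maintenance) rises with the level of intervention, while the
# cost of unmanaged borderline content (analogous to corrective maintenance)
# falls. Both curves are assumed shapes, not empirical data.

intervention_level = np.linspace(0.01, 1.0, 100)   # share of borderline content suppressed

intervention_cost = 8.0 * intervention_level ** 2   # moderation spend and lost engagement revenue
harm_cost = 5.0 / intervention_level                # polarisation, degraded discourse, reputational damage

total_cost = intervention_cost + harm_cost
optimum = intervention_level[np.argmin(total_cost)]

print(f"Cost-minimising intervention level (under these assumed curves): {optimum:.2f}")
```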

The second concern with managing borderline content fits firmly within the framework of corporate governmentality: it addresses the digital ‘territory’ via behavioural management in pursuit of the welfare of its stakeholders (Collier & Whitehead, 2021). Figure 4 shows both the natural engagement pattern for borderline content and the desired engagement pattern achieved through behavioural management, delivered through algorithmic governance, to address the user ‘incentive’ problem. Consequently, Facebook ‘can address [the incentive problem] by penalising borderline content so it gets less distribution and engagement’. By making the distribution curve look like the adjusted graph in Fig. 4, in which distribution declines as content gets more sensational, people are disincentivised from creating provocative content that sits as close to the line as possible. This is achieved by ‘train[ing] AI systems to detect borderline content so we can distribute that content less’.

Fig. 4 Natural engagement pattern and adjusted pattern to discourage borderline content (Zuckerberg, n.d.)
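The penalisation logic described above can be illustrated with a minimal sketch. The function names and curve shapes below are assumptions for illustration; Zuckerberg’s letter describes the intended effect (distribution declining as content approaches the policy line) but not the actual model.

```python
def natural_engagement(borderline_score: float) -> float:
    """Engagement tends to rise as content approaches the policy line.

    `borderline_score` is assumed to run from 0.0 (clearly benign) to 1.0
    (just short of violating policy). The quadratic shape is illustrative.
    """
    return 1.0 + 4.0 * borderline_score ** 2

def distribution_penalty(borderline_score: float) -> float:
    """Assumed penalty multiplier that shrinks distribution near the line."""
    return (1.0 - borderline_score) ** 2

def adjusted_distribution(borderline_score: float) -> float:
    """Penalised distribution: declines as content gets more sensational."""
    return natural_engagement(borderline_score) * distribution_penalty(borderline_score)

for score in (0.0, 0.5, 0.9):
    print(score, round(natural_engagement(score), 2), round(adjusted_distribution(score), 2))
```

Under these assumed curves, natural engagement rises as content nears the line while the penalised distribution falls, which is the inversion of the incentive that the letter describes.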

Control over speech: The ‘goods’ and ‘bads’

The second important aspect of Zuckerberg’s letter relates to a single line that speaks volumes about Facebook’s ability to define its field of visibility regarding speech practice and what is and is not acceptable for the community. Facebook could ‘simply move the line defining what is acceptable. In some cases, this is worth considering, but it’s important to remember that won’t address the underlying incentive problem, which is often the bigger issue’. This single line demonstrates Facebook’s ability to define the field of visibility for governmentality, not just by ‘illuminating’ the objects that are governed but also by determining what remains in the ‘shadows and darkness’ outside that field.

While there is general information on the disincentivisation of borderline content, the specifics of how the algorithm detects this type of content, and the details of how and what is included within the suppression algorithm to reduce engagement, are shrouded by proprietary intellectual property. This negative space held by proprietary information, in addition to bearing on speech practice as the object of governmentality, also presents itself as a techne of government. This reality has implications for accountability and legitimacy that fall outside the scope of this paper but deserve additional inquiry in future research.

While technology companies like Facebook are presently the only actors able to successfully and dynamically govern the action space of speech practice on platforms (a capability that, for Foucault, would take precedence over the legitimacy question), this presents some challenges surrounding accountability that need to be addressed. The governmental assemblage has built-in checks and balances for accountability that are inherent in the democratic process. When the field of visibility is set by a governing actor such as a private platform, and shrouded from view by claims of proprietary intellectual property, companies are not subject to the same level of accountability and legitimacy as state governmental actors, particularly when they cannot be open and transparent and choose not to engage in debate and negotiation.

The obfuscation of the actual technological instruments used to identify borderline content is reinforced by Zuckerberg’s commitment to creating transparency ‘into how our systems are performing so academics, journalists and other experts can review our progress and help us improve’. This statement defines what is illuminated: the resultant ‘performance’ of the algorithm, but not the technology itself.

As indicated in the introductory sections of this paper regarding the movement from technology to techne, the case study and Zuckerberg’s letter provide strong evidence for the direction of travel of governmentality on platforms regarding speech practice. Specifically, technological development created a moment in which governmentality of speech practice became necessary, which in turn necessitated the adoption of a techne for governing. The adoption of AI and algorithms then required an episteme for the governance of that techne. The structure of Zuckerberg’s letter highlights this process: in short, the evolution of technology drove the adoption of a governing mentality. The following section addresses the challenges and changes to the episteme adopted by Facebook as a result of its technes of governance: AI and algorithms.

Episteme

The thought, knowledge, expertise, strategies, means of calculation, or rationality that are employed in the practices of governing. (Dean, 2010, p. 42).

One of the assertions made earlier in this paper concerned the direction of transference with platform technology: from technology to techne to episteme. While Zuckerberg’s letter contains no specific dates from which to form an exact chronological timeline, its statements suggest that this was the case at Facebook. The development of the AI and algorithms to govern content moderation, covered in the sections on Community Standards and Proactively Identifying Harmful Content, described the adoption of technology in terms of the ‘past few years’ and a ‘three year roadmap through the end of 2019’. The advancement to address the ‘incentive’ problem of borderline content, together with the sections on Addressing Algorithmic Bias and Building an Appeals Process, indicated a more recent and near-future timeline: ‘in this last year’, ‘this year’, ‘by the end of this year’, and ‘in the next year’. Issues relating to Independent Governance and Oversight, Creating Transparency and Enabling Research, and Working Together on Regulation indicated an even more distant timeline: ‘late next year’ and ‘in the next couple of years’.

This timeline overview implied that the episteme and techne were not yet completely aligned and that the techne of algorithmic governmentality predated the construction of strategies, means of calculation, and rationality in the practices of governing (see Whitehead and Collier for a discussion on governing practices without a specific governmentality).

The section on Addressing Algorithmic Bias also engaged with ethical considerations, but a discussion of the philosophy of fairness falls outside the scope of this inquiry. The more pertinent feature is that the adoption of AI and algorithms necessitated a rationalisation of their implementation and use as a techne to govern speech practice. Creating an appeals process, establishing an independent oversight board, and implementing regulation all share common features within the discussion of forced governmentality, particularly the implied desire for Facebook to abdicate responsibility for decision making surrounding speech practice while still being able to govern it.

Facebook indicated in the letter that it had begun the process that led to the creation of the Oversight Board (Oversight Board, n.d.), an ostensibly independent body reviewing its governance and moderation policy. The claim of independence is slightly contradicted by the statement that ‘Facebook will fund the trust and will appoint independent trustees’ (Oversight Board, n.d.). Further, concerning regulation, the letter asserted the techne of algorithmic governance for speech practice, stating that an ‘ideal long term regulatory framework would focus on managing the prevalence of harmful content through proactive enforcement’. Moreover, while Facebook is ‘working with several governments to establish these regulations’, Zuckerberg only described the direction he did not want the regulations to take:

It would be a bad outcome if the regulations end up focusing on metrics other than prevalence that do not help to reduce harmful experiences, or if the regulations end up being overly prescriptive about how we must technically execute our content enforcement in a way that prevents us from doing our most effective work. It is also important that the regulations aren’t so difficult to comply with that only incumbents are able to do so.

Under these auspices for the future development of oversight and regulation of practices of governmentality that Facebook is already employing, the evidence suggests that the company is looking for a dynamic that legitimises its approach and practices while limiting its liability as the actor that governs the digital speech practice territory. Consequently, the company can continue the practice of governing speech and have those decisions reviewed by an ‘independent’ body. Further, it is granted a degree of sovereignty similar to that of the nation-state because it is exempt from specific legal regulations or, as Giorgio Agamben would put it, it is gifted the ban, the exemption from legal rules (Barkan, 2013).

The example of Facebook in the field of speech practice governance demonstrates that it can circumvent the content laws of both ‘local’ (national) governments and supra-national governmental structures such as the European Commission. Evidence suggests that this is already underway. The German Network Enforcement Act (NetzDG), enacted in October 2017, obligated social media platforms to remove ‘obviously illegal’ content within 24 hours (Google, n.d.) as a means to address hate speech, yet in practice it only compelled sites to provide a user feedback mechanism and report on removed content. Notably, ‘the law did not specify how social media platforms needed to implement the complaint tools, and how granular their transparency reports should be regarding the removal reason’ (Heldt, 2020). The impact of the regulation was therefore relatively minor, while providing Facebook legitimacy for the means by which it identified and removed illegal content. Due to the loose interpretation of the German Criminal Code in relation to Volksverhetzung (incitement of hatred), the regulation also granted Facebook the ability to determine what constitutes illegal speech and to govern that content with the legitimacy of acting in accordance with the law. Facebook was thus afforded an exemption from liability by the flexibility within the legislation, which aligned with regulation that avoided Zuckerberg’s ‘bad’ outcome and is not ‘overly prescriptive about how we must technically execute our content enforcement’.

The final aspect of governmentality identified by Dean (2010) is the concept of identity formation, which is situated within a broader discourse on the datafication of society. Datafication has changed how society is viewed: it rests upon access to surplus data on individuals, advancements in the ability to change the behaviour of individuals and larger populations through randomised control trials at scale, and the ability to quantify the resultant change through the data access enabled by platform technology.

The comparison between Chinese platforms and Facebook demonstrates that while technology or novel industry corporations can undergo the governmentalisation process, it is not predetermined. Additional factors play into the process, such as the current governmental actors within a particular territory, the degree of control they have, and the power they can exert over other actors operating within their territory. The analysis supports the application of the notion of corporate governmentality as an analytical language and method for analysing practices in the private arena that are not sufficiently described in alternative discourses on corporate practice.

Tracing Facebook’s approach to governance of content moderation and curation shows how making speech practice an object of governmentality, and adopting a governmentalised process articulated through the definition of the field of visibility, led to the adoption of various technologies as a techne of governmental practice and, in turn, to the acceptance of an epistemic rationale that follows from that techne. This action space is not integrated voluntarily by individual corporations; rather, they are expected to, and at times forced to, take responsibility for it. This contrasts with other typologies of corporate governmentality that are chosen for a range of reasons, including brand reputation advancement, specific causes aligned with a corporate purpose, or a desire to investigate the impacts of company experiments.

Discussion: echoes of church control in companies’ management of proprietary algorithms

This paper explored how the creation of a particular technology becomes governmentalised in order to constitute a part of a regime of practice that can be analysed through the lens of corporate governmentality. The action space examined here centred on speech practice. Using the Foucauldian analytical language of governmentality, it is possible to illuminate aspects of corporate governmental ambition that were previously unavailable through the current discourses on the didactic power-knowledge relations between the private and public sectors. This analysis distinguishes itself from, for example, Bartlett’s (2018) work on the impact of technology on the democratic process. Bartlett explores how Big Tech firms are supporting technologies that are destabilising the established democratic process. We, however, do not claim that the current system has been eroded. Instead, we view the rise of corporate and private sector actors within the assemblage of governance as colonising the governmental process, and we compare this process to the effect of the printing press on the assemblage of actors of the sixteenth to eighteenth centuries.

To clarify, Chapter 9 of Foucault’s influential Security, Territory, Population is dedicated to the transformation of political ideology, ‘from the pastoral of souls to the political government of men’ (Foucault, 2007a, p. 227). He traced the ‘break up’ of the Empire and Church ‘complexes’, identifying this as one of the factors that led to the transformation into the governmentality of people and the formation of ‘this thing that would be the state’ (p. 248). Foucault asked, ‘What if the state were nothing more than a way of governing?’, an entity that was fit for purpose at the time. Subsequently, he claimed that ‘the state is only an episode in government, and it is not government that is an instrument of the state’ (p. 248). In The Birth of Biopolitics, he traced the rise of neoliberalism in the wake of World War II and was able to engage meaningfully in the discourse surrounding a revolution that became the dominant regime of practice from the late twentieth century to today. The exploration of corporate governmentality follows the shifting regime of practice in governmentality for the twenty-first century, but there are more significant implications for a broader shift in the assemblages of governmental power.

The current discourses on algorithmic governmentality can be classed as a ‘counter-conduct’ to the implementation of state governmentality. Several authors have explored the concept of algorithmic governmentality and the change in the ‘modes of power’ (Bucher, 2012), noting the displacement of human decision making by machines and exploring how algorithmic governmentality may bridge the individualistic pastoral mentality of the sovereign to the raison d’état of population governmentality (Cooper, 2020). Foucault identified five pastoral counter-conducts that led to ‘the crisis of the pastorate’: asceticism, communities, mysticism, scripture, and eschatological belief (Foucault, 2007a, pp. 191–216). As this inquiry has focused on the theme of speech practice, we will discuss the implications of the counter-conduct centred on a similar theme.

While Foucault acknowledged the impact of the speech practice of priests in relegating scripture to ‘the background of the essential presence, teaching, intervention and speech of the pastor himself’ (Foucault, 2007a, p. 213), he did not explicitly link this relegation of control over speech practice to the contingent technological advancement of the printing press that made the counter-conduct possible. The ecclesiastical authority of the Church, upon which the authority of the sovereign was predicated through the interpretation of the divine right of kings, was eroded by the development of the printing press. Consequently, while the ‘pastor can still comment on scripture, he can explain what is obscure and point out what is important’, printing technology allowed the reader to interpret scripture as well (Foucault, 2007a, p. 213). The technological advancements associated with the digital age and the speech platforms of social media have the potential to be as critical to the future development of governmentality as the printing press was to the adoption of a liberal model of governance and the raison d’état of the seventeenth and eighteenth centuries.

As we speculate on the future implications of the incorporation of technologies into the regimes of practice of government, it is necessary to highlight one additional parallel with the ecclesiastical power of the Church in the context of technology companies’ reliance on algorithmic governance. It is particularly crucial to re-emphasise the shroud of proprietary intellectual property that obscures the mechanism, but not the outcome, of this form of algorithmic governance. Similarly, the Catholic Church controlled the field of visibility of scripture through its episteme of government and its control over dissemination via its representatives. The tradition of holding services in scholarly Latin and controlling the interpretation of the text for the masses mirrors the private sector’s control over the mechanisms of algorithmic governance.

It may be that only a further technological digital revolution, lifting the veil to reveal the code, will change the governmentality of these actors and allow individuals to self-direct governance through the structures in place. At that point, the rise of what we call private governmental actors (and the state) would be only the next episode of government within a limited field of visibility, and alternative, as yet unconceptualised, structures and actors may come to the fore in the assemblage of governmental actors. For instance, the governmental techniques and episteme developed by private sector technology and novel industry actors may, due to their increasing reliance on soft, behavioural-modification approaches to government, be better suited to supranational actors such as the European Parliament. Alternatively, algorithmic governance may come to govern individuals and populations directly, leading hard-governing actor networks such as states to become obsolete. However, lifting the veil would need to be accompanied by advances in algorithmic literacy, just as the increasing literacy that accompanied the printing press was a necessary development in making scripture available to the masses.

The limiting of the field of visibility brings us back to the questions surrounding legitimacy, accountability, and success. As we noted earlier, the globalised world has led to an expansion of digital territory that crosses the physically controlled territories of governments and nation-states. Unless the current state structures are, as in the case of China, in control of the digital space within their physical territory and able to dictate the conduct of private actors, new actors may adopt governmentality over particular action spaces such as social media platforms and speech practice. At present, these social media actors are the only ones who can manage this action space across multiple territories. In the Foucauldian view, this situation would imply they are then the legitimate actors to govern the space. As noted above, this is similar to the Catholic Church’s ability to govern the pastoral care of the individual.

However, as with the decline of the Catholic Church in relation to the state, this shift was predicated on the obscurity of the Church’s foundational structures. The lack of accountability allowed the Church to control the space until a technological revolution lifted the veil of invisibility. That decline led to the rise of the accountability (through transparency) of the thing we call the state. The state was equally successful in managing populations, but with the addition of accountability for its actions through the formation of democratic transparency.

Today, the crux of the issue is that current technological development creates governmental problems that are then governed by the very technologies that created them. This complication, coupled with the lack of territorial alignment on how to govern the spaces created by technological advancement, leaves room for alternative actors to enter. The actors that create this technology are being forced to manage the action space and are trying (in some instances) to put in place, or use existing, liability-limiting structures to address their lack of accountability while reinforcing their legitimacy, without having to abdicate their ability to govern the space. This contradiction was explored in the example of Facebook’s creation of an ‘independent’ oversight board coupled with its ability to influence state regulation through the power it wields. It also parallels the dynamic between the Church and the Empire. The Church conveyed legitimacy to the sovereign, who derived their power from the Church’s interpretation of divine rule. As the Church was legitimated and protected by the sovereign, the sovereign was bestowed legitimacy through the divine right of kings but was accountable in a way that the Church was not. The Church and the social media platforms hold similar positions in this comparison, as do the sovereign and the Facebook Oversight Board.

However, there is an additional aspect to consider: the forced aspect, which may negate, or diminish, the need for accountability. There is an unequal power dynamic when an actor is called upon to take responsibility for governing and there is a lack of alternative governing actors. If companies are being forced to govern the space, and no other actors can successfully govern it, they can determine the conditions of that governance. This brings to mind the old adage, ‘you can tell me what to do or how to do it, but not both’, meaning that sacrificing some accountability to achieve successful implementation may be an appropriate trade-off (or, in the current context, the only option). It must be noted, however, that state actors’ imposition of regulation would, to some degree, have a downstream effect on end users. This is evident in the current de facto internet regulation of third-party content in Section 230(c) of the U.S. Communications Decency Act. Such a regulatory framework may impose its own cultural and societal values on other territories, and further or additional regulatory impositions (such as the German NetzDG law) may affect platforms as they need to comply with an ever-growing, nationally determined set of regulatory requirements.

The final aspect that needs to be considered is the set of changes created by algorithmic governance, and by the actors creating the conditions for this governance technique, which are further changing the way individuals and populations are viewed. As Amoore (2017) noted in reference to algorithms and their impact on the identity of the individual, this process is shifting the binary view of action and governance. Amoore (2017) discussed the work of Agrawal, noting that algorithms, as ‘“thresholds of support and confidence” … actually present the world in a novel way of deciding what matters, which associations can be acted upon, which item sets should be pruned out’ (p. 1). This leads to a host of additional questions and discourses that cannot be covered, even briefly, here, but that represent, and will continue to represent, an area for further study. The critical point for this work is that advancements in technology and the continued datafication of society necessarily involve a new way of looking at the world, society, and the individual, and will therefore continue to shape the formation of identities.
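The ‘thresholds of support and confidence’ that Amoore cites refer to Agrawal-style association rule mining, in which item sets below a support threshold are pruned and rules above a confidence threshold become the associations that can be acted upon. The following minimal sketch, using hypothetical transactions and thresholds, illustrates that calculation.

```python
from itertools import combinations

# Minimal illustration of Agrawal-style association rules: support measures how
# often an item set appears, confidence how often the rule's consequent follows
# its antecedent. Item sets below the thresholds are pruned; rules above them
# are the associations that 'can be acted upon'. The transactions are hypothetical.

transactions = [
    {"protest_video", "political_meme"},
    {"protest_video", "political_meme", "news_article"},
    {"political_meme", "news_article"},
    {"protest_video", "news_article"},
]

MIN_SUPPORT = 0.5
MIN_CONFIDENCE = 0.6

def support(items: set) -> float:
    return sum(items <= t for t in transactions) / len(transactions)

def confidence(antecedent: set, consequent: set) -> float:
    return support(antecedent | consequent) / support(antecedent)

items = {i for t in transactions for i in t}
for a, b in combinations(sorted(items), 2):
    s = support({a, b})
    if s < MIN_SUPPORT:
        continue  # item set pruned out
    c = confidence({a}, {b})
    if c >= MIN_CONFIDENCE:
        print(f"{a} -> {b}: support={s:.2f}, confidence={c:.2f}")
```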

Conclusion

Through an evaluation and critical reading of Mark Zuckerberg’s Blueprint for Content Governance and Enforcement, this paper demonstrated the theoretical utility of Collier and Whitehead’s (2021) theory of corporate governmentality, focusing specifically on the typology of forced governmentality. We conclude that Facebook has undergone governmentalisation: the technology inherent in social media platforms and content moderation has evolved into a techne, a mechanism and instrument of governmental control over the action space of speech practice. This research opens avenues for further inquiry into corporate governmental ambition and establishes the characteristics of forced corporate governmentality for use in future inquiries into private sector actors’ responsibilities towards civil society. It also highlights potential industries and sectors where forced governmentality may emerge, and some of the implications of corporations being thrust into a governmental role.