The internet has come far from Barlow’s infamous “Declaration of the Independence of Cyberspace”, in which he asserted that governments were not welcome on and had no sovereignty over the internet (Barlow 1996). Indeed, Barlow’s libertarian, utopian viewpoint failed to anticipate the predictable emergence of commercially incentivised private forms of order that in some ways match or surpass the order imposed by states (Boyle 2000). Even before Barlow’s declaration, though, some warned that internet utopians risked becoming unwitting advertisers for corporations and others who would financially benefit from societal uptake of new technologies (for example, Rheingold 1993: 305). This has indeed happened—the open internet of the past has to a large extent been enclosed by a small number of commercially operated platforms (Tufekci 2016); the spaces of freedom and flourishing democratic debate envisaged by utopians have been replaced by centralised platforms operating as sites of surveillance and corporate power and control.
Other work by various authors has looked at the power of social platforms, where it comes from, and how it is exercised (for example, Gillespie 2010; Khan 2018; Cohen 2019). As well as through algorithmic censorship, this power is exercised by social platforms in various other ways—for instance, by setting their terms of service and acceptable use policies (Belli and Venturini 2016), through their traditional human moderation processes and practices (Gillespie 2018; Langvardt 2018), and by disseminating and amplifying content through algorithmic personalisation, using recommender systems to shape content feeds and drive user engagement (Tufekci 2015; Cobbe and Singh 2019). In some cases, platforms will direct their various mechanisms in fulfilment of legal or regulatory obligations or responsibilities (Bloch-Wehba 2019; Elkin-Koren and Perel 2019) (and, of course, in authoritarian states political imperatives will likely play a greater—perhaps overriding—role in driving censorship; in China, for example, censorship algorithms are deployed in accordance with political or other priorities set by the state (Ruan et al. 2016; Yang 2018)). In many other cases, the priorities will be commercial: growth, scale, and profit; pacifying policymakers and regulators so as to forestall potential (and potentially costly) regulation; and proactively seeking to limit or exclude liability for illegal content or activity. In the context of content moderation, commercial priorities typically mean platforms intervening to suppress harassment and abuse, hate speech, disinformation, and other forms of undesirable or unlawful communications so as to appeal to as broad a mainstream audience as possible and to be seen, in the eyes of policymakers and advertisers, to be acting responsibly (although these interventions are often undertaken haphazardly). Even where social platforms seemingly deviate from commercial considerations—such as by adding warning labels to posts by prominent politicians (Culliford and Paul 2020)—the overall priorities of the platform generally remain primarily corporate and commercial.
Algorithmic censorship extends the already extensive surveillance of the internet by commercial entities motivated primarily by profit. Through this surveillance, social platforms can more actively and pre-emptively determine which speech should be permitted and which should be suppressed, often according to their own criteria, shaped by commercial considerations and incentives. I do not argue that algorithmic censorship is the only way by which social platforms exercise power over communications or in society more generally, nor do I claim that algorithmic censorship is the only means by which their power is increasing. But algorithmic censorship does augment the existing power of social platforms in a new and distinct way. Given the widespread use of these platforms throughout society for both public and private communications, the introduction of commercially driven algorithmic censorship into the structural conditions of online communication allows social platforms to insert commercial considerations deeper into communications and relationships of many kinds: social, familial, commercial, political, and others. As a result, the ability of those platforms to provide sites for open and inclusive discussion, discourse, communication, and connection is further undermined.
Two things in particular, in my view, are distinctive about ex ante algorithmic censorship that do not necessarily exist together in other forms of content moderation (in particular, that involving ex post reporting and human review). First: algorithmic censorship potentially brings all communications within reach of platforms’ censorship operations. Whereas moderation by humans can typically only consider a (small) proportion of all content, the automated surveillance and analysis inherent in algorithmic censorship would potentially allow for the assessment of all communications at upload, whether they were intended to be public or private. Second: algorithmic censorship allows a more active and interventionist form of moderation by platforms. With human moderators, content moderation is typically passive in nature, relying on user reporting rather than on actively seeking out prohibited communications. With algorithmic censorship, social platforms can, in theory, instead intervene to suppress any content their algorithms deem prohibited according to the platform’s criteria. The distinctive effect of these two features of algorithmic censorship when taken together, I argue, is to potentially give social platforms a power over private communications that has never previously been possessed by any commercial actor. As Keller says, “No communications medium in human history has ever worked this way” (Keller 2019).
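To make the contrast concrete, the following minimal sketch (in Python) juxtaposes an ex post, report-driven review queue with an ex ante check applied to every post at upload. All names and the toy classifier are hypothetical illustrations under stated assumptions, not a description of any platform’s actual pipeline.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Post:
    author: str
    text: str
    visible: bool = True


# --- Ex post, report-driven moderation --------------------------------------
# Only content that other users flag ever reaches a reviewer, so the platform
# assesses a small, reactive sample of all communications.
REPORT_QUEUE: List[Post] = []


def report(post: Post) -> None:
    REPORT_QUEUE.append(post)


def human_review(decide: Callable[[Post], bool]) -> None:
    while REPORT_QUEUE:
        post = REPORT_QUEUE.pop()
        post.visible = decide(post)  # a human applies the platform's rules


# --- Ex ante algorithmic censorship ------------------------------------------
# Every post is classified at upload, before anyone can see it; nothing
# depends on a user report, and private messages are reached just as easily.
def toy_classifier(text: str) -> bool:
    """Stand-in for a trained model: True means 'prohibited'."""
    return "prohibited phrase" in text.lower()


def upload(author: str, text: str) -> Post:
    post = Post(author, text)
    post.visible = not toy_classifier(text)  # suppressed before publication
    return post


if __name__ == "__main__":
    a = upload("alice", "hello world")
    b = upload("bob", "this contains a prohibited phrase")
    print(a.visible, b.visible)  # True False: b is never seen by anyone
```

The point of the sketch is structural rather than technical: in the first model the reviewer’s reach is bounded by what users report; in the second, the check sits in the upload path itself, so every communication, public or private, passes through it.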
My argument here progresses in three parts. First, I argue that algorithmic censorship is a surveillance-based governmentality that provides social platforms with a level of disciplinary control over communications that would not be possible to achieve with solely human moderators. Second, I argue that the nature of algorithmic censorship is such that it potentially allows platforms to exercise a more active, interventionist form of control than would be practically possible with review by humans. Third, I introduce Foucault’s concept of dispositif to argue that, as a result of these features, algorithmic censorship forms part of the structural conditions of online speech, allowing social platforms to further insert commercial priorities into those conditions and to enforce commercially driven limits on everyday communication, with potentially detrimental consequences for the ability of those platforms to provide open and inclusive spaces for communication and discourse.
Surveillance and Algorithmic Governmentality
I take as my starting point Foucault’s concept of governmentality. Governmentality theories are concerned with “the ensemble constituted by the institutions, procedures, analyses, and reflections, the calculations and tactics” (Jessop 2007) that underpin the exercise of power in pursuit of a desired goal. Governmentality—originating with Foucault (Foucault 1980; Foucault 1993) and subsequently developed into a more complete way of thinking about power by others (Rose and Miller 1992; Dean 1999; Rose 1999; Miller and Rose 2008)—allows power relations to be conceived of as involving two fundamental components. The first of these are rationalities, or “ways of rendering reality thinkable in such a way that it [is] amenable to calculation and programming” (Miller and Rose 2008: 15; Foucault 1991: 79; Rose 1999: 26). The second are technologies of power, or techniques and strategies “imbued with aspirations for the shaping of conduct in the hope of producing certain desired effects and averting certain undesired events” (Rose 1999: 52; Foucault 1993: 203). If the real world is rendered into thought by rationalities, then technologies of power translate thoughts and desires into reality (Rose and Miller 1992: 48; Rose 1999: 48; Jones 2007: 174). Together, these components make up governmentalities—forms of power relation in which one actor seeks to use particular strategies and techniques to effect changes of behaviour in others in pursuit of desired outcomes.
The governmentality of algorithmic censorship would depend upon widespread surveillance, forming part of a broader, more extensive surveillant assemblage (Haggerty and Ericson 2000). This would of course not be new to most social platforms; much has been written about their surveillance business models, for instance (Andrejevic 2011; Fuchs et al. 2012; Zuboff 2015). These business models typically involve the pervasive tracking and analysis of metadata describing the behaviour of users in relation to content, to other users, and to the platform itself in order to use knowledge of that behaviour to attempt to modify it in some desired way (Zuboff 2015). However, other areas of platform surveillance have been less discussed. These include surveillance for censorship, undertaken either in accordance with the law or to enforce the platforms’ own terms of service, community standards, and other policies. Surveillance-based algorithmic censorship would involve the analysis of content, rather than just metadata (although, for other purposes, metadata can be more useful than content).
Foucault argued that modern governmentalities pursue “the dream of a transparent society, visible and legible in each of its own parts”, eliminating the “zones of darkness” established by governmental or corporate power (Foucault 1980: 152). Through the internet, this has become somewhat inverted. While, of course, the internet has brought a degree of transparency over governmental and corporate affairs, it has also resulted in much of society becoming transparent to governments and corporations. As Bruno says, the drive towards participation online has meant that “cyberspace is marked by expansion of the edges of visibility of what we used to understand by intimacy” (Bruno 2012: 343). And, of course, participation in internet-enabled surveillance environments has itself been actively encouraged by social platforms—who frame themselves as inclusive and progressive and surveillance as beneficial and desirable—as they seek to cement their position and hold off regulation (Cohen 2016). In part, this expanded surveillance has been achieved through automation (particularly when involving machine learning), allowing communications and behaviours to be analysed in far greater quantities and at far greater speeds than can be reached by humans.
Algorithms—whether involving machine learning or not—are at the heart of this automation. But algorithms themselves are better thought of as just one part of “algorithmic systems”—“intricate, dynamic arrangements of people and code” (Seaver 2013). They operate not just as technical systems, but as sociotechnical systems. All algorithmic systems, including machine learning systems, are therefore non-neutral (Winner 1980; Gillespie 2014; Hill 2016; Beer 2016; Just and Latzer 2017); they are inherently normative and contextual in nature (Beer 2017; Kitchin 2017), in addition to whatever functional capacity they possess. They are designed, deployed, and used to give effect to the desires and goals of their designers, deployers, and users. The algorithmic systems that contribute to making everyday life increasingly visible are therefore elements of technologies of power, imbued with rationalities, forming component parts of governmentalities. Indeed, Rouvroy has written about the emergence of “algorithmic governmentality” (Rouvroy and Berns 2013; Rouvroy 2015) as a form of surveillance-based algorithmic power. Through the internet and algorithmic governmentality, the zones of darkness of private social life have become increasingly illuminated.
Surveillance-based governmentalities are often (rightly or wrongly) described as producing a “panopticon”, the metaphor for disciplinary power adopted by Foucault (Foucault 1991) and derived from Jeremy Bentham’s ideas for a liberal reforming prison designed to allow for easier control of prisoners by making them more visible to guards. Panopticism is much discussed in surveillance studies; the essential idea is that if the subjects of surveillance know that they can potentially be watched at any given point in time then they do not need to actually be watched all of the time. The panopticon’s effect, according to Foucault, is to “induce … a state of conscious and permanent visibility that assures the automatic functioning of power” (Foucault 1991: 201); that is to say, the uncertainty over whether at any given point in time an individual is being watched should, in theory, induce them to regulate their own behaviour. This reveals the fundamental nature of disciplinary power; subjects internalise its exercise, becoming self-disciplining.
I argue, however, that algorithmic censorship, while involving both surveillance and discipline, does not depend upon panopticism as its mechanism of power. In algorithmic censorship, visibility does still enable the exercise of power—the greater capacity to “see” the supposedly private behaviours and communications of individuals allows platforms to extend their reach into everyday life. But, like other forms of algorithmic governmentality, algorithmic censorship would not be panoptic. Rather, due to the greater capacity to see, it would represent a more encompassing, more total form of surveillance. Potentially, with ex ante censorship, all—or substantially all—communications on a platform could in fact be surveilled. There is no need to induce uncertainty in the subjects of surveillance when modern data communications, storage, and processing technologies allow machine learning systems to move those subjects from being permanently visible to being permanently watched.
According to Deleuze, a decline of panopticism in western societies has meant that they have increasingly moved away from the disciplinary forms of power described by Foucault, instead becoming what Deleuze calls “societies of control” (Deleuze 1992: 3–4). For Deleuze, the shift to a society of control meant fewer hard boundaries around what one could and could not do. For the most part, individuals would be free to live their lives. Rather than fixed structures (physical or otherwise) intended to provide discipline for individuals as primarily physical subjects, societies of control would adopt flexible, malleable, “free-floating” means to modulate behaviour (Deleuze 1992: 4). Through these more flexible structures, individuals are not subject to power as a unified whole, but are instead an abstracted “dividual” (Deleuze 1992: 5), broken down into component parts—data points describing interests, preferences, behaviours, and so on—that themselves become the locus of control. According to Williams, the dividual is “a physically embodied human subject that is endlessly divisible and reducible to data representations via the modern technologies of control, like computer-based systems” (Williams 2005). Indeed, computers, for Deleuze, are emblematic of a control society (Deleuze 1992: 6). While locks are binary in function—either locked or unlocked—computers, despite their binary architecture, can produce variable outputs to modulate behaviour according to what is involved and what is required.
I contend, however, that “control” and “discipline” are not necessarily so easily distinguished as Deleuze claims (see, for example, Kelly 2015). Certainly, he is correct that individuals in modern society are in some ways imbued with greater freedom (Miller and Rose 1990). But they are accordingly also tasked with taking greater responsibility for managing their own behaviour (Powell and Steel 2012: 2) and held accountable when they fail to do so acceptably (Beck and Beck-Gernsheim 2001; Harvey 2005). It is true that in some areas of society rigid structure has been replaced by a more flexible form, but, I would argue, the kind of individual freedom identified by Deleuze (that of freedom from enclosure (Deleuze 1992: 4)) does not necessarily mean that individuals are free from discipline. Indeed, enclosure was not, in Foucault’s view, required for discipline at all (Foucault 1991; Kelly 2015). In modern societies, as Rose and Miller observe, “power is not so much a matter of imposing constraints upon citizens as of ‘making up’ citizens capable of bearing a kind of regulated freedom”; autonomy is not, they say, the antithesis of power, “but a key term in its exercise, the more so because most individuals are not merely the subjects of power but play a part in its operations” (Rose and Miller 1992). Even in societies of control, individuals are disciplined according to internalised societal logics. The shift to a more flexible society of control does not mean the end of discipline, but its transformation into a form that allows it to extend more widely (Hardt and Negri 2000: 330–331).
Although Deleuze misdiagnosed the decline of disciplinary power, he did provide some useful observations on “control”—which I locate not as a replacement for discipline, but as a non-panoptic form of disciplinary governmentality—and its relation to new technologies. Computers do indeed allow for more flexible forms of power. Deleuze, writing in the early 1990s, described a scenario imagined by Guattari that today seems quite conceivable: “a city where one would be able to leave one’s apartment, one’s street, one’s neighborhood, thanks to one’s (dividual) electronic card that raises a given barrier; but the card could just as easily be rejected on a given day or between certain hours; what counts is not the barrier but the computer that tracks each person’s position–licit or illicit–and effects a universal modulation” (Deleuze 1992: 7). Disciplinary power in the governmentality of algorithmic censorship, I argue, takes this form of control; censorship algorithms do not provide hard barriers that prevent individuals from speaking entirely (although platforms may of course impose suspensions and bans on users for serious or repeat violations through their other moderation processes). What ultimately determines whether any given communication will be permitted or suppressed is the algorithm’s judgement of what is being said in that communication. Indeed, research suggests that more interventionist moderation policies lead users to self-censor to a greater degree, significantly affecting discussion (Gibson 2019). It is not unreasonable to imagine that, faced with the governmentality of algorithmic censorship, many users of social platforms might internalise the perceived boundaries of acceptability and begin to apply them to their own speech—becoming, in effect, self-disciplining, moulding their communications to the desires of corporations.
I argue that, as a result of its total surveillance, the governmentality of algorithmic censorship would bring private conversations—and, indeed, everyday life as played out online—further within the reach of corporations and others who seek this kind of control. Algorithmic censorship, particularly where applied to all posts, messages, and uploads, would potentially allow corporate control of communications—already considerable and growing—to be extended into every corner of society, positioning social platforms as mediators and moderators of even private (digital) conversations in a way that would not be possible with content moderation undertaken only by humans. Foucault wrote of power that is capillary, affecting “the grain of individuals, [touching] their bodies and [inserting] itself into their actions and attitudes, their discourses, learning processes and everyday lives” (Foucault 1980: 39). Through the governmentality of algorithmic censorship, the regulatory power of social platforms takes such a capillary form. With its surveillance-based technologies of capillary power, the governmentality of algorithmic censorship would therefore help extend the regulatory power of social platforms deeper into society and into the discourses and everyday lives of individuals.
Private Ordering and Control of Communications
The potential control over communications brought about through algorithmic censorship contributes to furthering the already extensive private ordering by platforms. Indeed, the existence online of “private speech regulation” (Li 2018) is not new; nor would the (re)emergence of private authority over greater areas of life be unique to algorithmic censorship. Governance theorists have long acknowledged that modern societies take the form of a “differentiated polity” (Rhodes 1997), involving a network of power relations between many economic and political actors (Burris et al. 2008). As Rose and Miller observe, throughout society, “power is exercised today through a profusion of shifting alliances between diverse authorities in projects to govern a multitude of facets of economic activity, social life and individual conduct” (Rose and Miller 1992). Private ordering has thus become a feature of modern societies more generally. And social platforms, due to their position of mediating between individuals and organisations of all kinds, exercise a significant amount of power through a variety of practices, as discussed above. But, with algorithmic censorship, both the extent of the potential influence over public and private communications and the concentration of this more extensive censoring power in the hands of relatively few corporations—each with control over its own platform—would be new developments. I argue here that private ordering in algorithmic censorship takes the form of a more active, interventionist mode of control than could be achieved solely by humans or through non-algorithmic systems.
Of course, private ordering operates as a function of the regulatory power of social platforms in multiple ways, whether that power is exercised directly by humans or by algorithms on their behalf. Terms of service, for example, have been recognised as having the normative power of law on some platforms (Belli and Venturini 2016). Platforms determine their terms of service, making whichever changes they deem necessary at any point in time (usually without the careful attention called for by Winner (Taplin 2017)). For instance, between November 2016 and September 2018, Facebook, Google, and Twitter each made numerous changes to their terms of service, attempting to curtail disinformation and electoral manipulation (Taylor et al. 2018; YouTube 2019; Bickert 2019; Zuckerberg 2018a; Zuckerberg 2018b). As well as terms of service, social platforms can alter their algorithms to exercise control over the dissemination and amplification of content through systems for personalisation, seeking to drive user engagement and build market share, with increasingly negative consequences for society (Tufekci 2015; Cobbe and Singh 2019). Indeed, Facebook alone made 28 announcements in the same time period about changes to its algorithms, including moderation algorithms, and 42 about enforcement (Taylor et al. 2018: 10). By establishing a form of private order on social platforms through these existing practices, corporations have gained significant influence over the public order of political debate and discourse and over the ability of individuals to act and speak freely.
Through the governmentality of algorithmic censorship, though, I argue, social platforms can engage in a more active, interventionist form of private ordering than would have otherwise been possible. Of course, not all moderation will be undertaken entirely of platforms’ own volition, but even where some censorship is mandated or encouraged by law or regulation (to suppress illegal activity such as hate speech, for example, or, in authoritarian states, for political reasons), it is likely to be the social platforms themselves who are responsible for implementing those requirements, for developing systems for the surveillance and identification of undesirable communications, and for conducting censorship according to the logics thereof. And platforms may also desire to go further than governments in censoring communications carried out over their platform. As discussed above, the capacity to do this comes from the capabilities of algorithmic systems—dynamic arrangements of people and code. And code, of course, has long been recognised as providing a form of regulatory power (one that can readily be located within Deleuze’s depiction of the societies of control, tightly bound as they are with the advent of computers). Lessig showed how code establishes norms and boundaries in a manner analogous to architecture (Lessig 2006: 81). Indeed, Lessig argues that, through its architectural effects, code acts effectively as the law in virtual spaces—offering what I would describe as a more passive form of control, facilitating some behaviours and providing no opportunity for others. Even with this more passive form of control, one would only rarely come up against hard barriers; instead, behaviour is shaped, influenced, and directed primarily through a platform’s design, features, and affordances.
Lessig believed that the web’s technical architecture was its greatest protector of free speech (Lessig 2006: 236). He argued that, in allowing for decentralisation and relative anonymity and in lacking systems to identify content, the web’s code could prevent censorship and provide a global “First Amendment”. He also felt that “the market”, alongside the web’s code, could protect freedom of expression online, with the low barrier to entry for blogs (in particular) and other online media giving anyone the ability to put forward their ideas (Lessig 2006: 236). To that extent, writing in 2006, Lessig remained loosely aligned with the utopian view of the internet that had been prominent since its early days. But he did acknowledge that things might change in the future (Lessig 2006: 237), and warned that the web was being reconstructed in such a way that it could become a “perfect tool of control” (Lessig 2006: 4). Indeed, even in the mid-2000s, the seeds of a very different future had already been sown. The web’s centralisation around a handful of companies over the subsequent decade—the enclosure of the open internet described by Tufekci—and the resulting decline of blogging and other forms of communication have fundamentally changed the conditions that Lessig described (although, of course, alternative means of communication do still exist nearer the margins). And the development by those companies of more sophisticated systems for actively identifying and suppressing content points to a more fundamental shift in the role of code in controlling behaviour.
We can, I think, distinguish between Lessig’s depiction of “code as law” in acting akin to architecture (on one hand) and the form of control offered by algorithmic censorship (on the other). In the former, in Lessig’s view, the web’s code—its “architecture”—could, in the right circumstances, passively prevent censorship and allow free expression to flourish, but it could also become a limiter of behaviour and communication. In the latter, I argue, code embedded in algorithmic systems offers the possibility of those systems being a more active enforcer of norms and behaviours than even Lessig recognised as being possible through code’s more passive architectural qualities; algorithmic systems can become effectively law and law enforcement in one. This can be seen to an extent in the use of recommender systems to algorithmically personalise content feeds, through which platforms exercise a more active power to shape the information environment presented to users and to promote or downrank certain kinds of content, but generally not to remove or otherwise restrict access to it (Cobbe and Singh 2019). But when embedded in processes for actively suppressing or permitting communications at upload, the degree of control over those communications provided by code takes platforms even further away from Lessig. This remains, though, a form of control in the sense described by Deleuze—only rarely would an individual be prevented from speaking entirely; they are instead prevented from saying certain things, perhaps only to certain people or in certain contexts, according to the judgement of the algorithm. In providing both a functional power (detecting and suppressing communications) and a normative power (enforcing the platform’s rules and standards), algorithms would thus play a significant role in active private ordering, establishing and maintaining the boundaries of acceptable speech in online spaces, made possible through governmentalities involving the total surveillance of communication and behaviour.
An example of the more active, interventionist form of private ordering enabled by algorithms can be seen in the development over time of different digital methods of protecting intellectual property. As Graber observes, the rise of digital rights management (“DRM”) had the effect of enforcing IP standards (Graber 2016), but in a relatively passive way. If, for instance, content did not come with the correct digital key or was incompatible with the DRM system used by the software or platform in question then it could not be played; there was neither analysis of nor active enforcement based on the content itself. The development of YouTube’s ContentID system, an algorithmic process which actively checks all uploaded videos for potentially IP-infringing material, provides a more active form of code-based intervention (Elkin-Koren and Perel 2019). To analogise with other areas of regulation, if DRM merely checks that the paperwork is in order, then ContentID opens the crate and examines the contents. ContentID has been criticised for enforcing rules that go beyond what is required by IP law (Elkin-Koren and Perel 2019); effectively, YouTube’s own IP standards become law on its platform, actively enforced by ContentID.
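The distinction can be sketched in code. The Python fragment below is a deliberately crude illustration of the contrast drawn above, not a description of any real DRM scheme or of ContentID itself: the DRM-style check inspects only the accompanying licence key, while the ContentID-style check examines the uploaded content against reference fingerprints (a plain hash stands in for the perceptual matching a real system would use). All names and data are hypothetical.

```python
import hashlib
from typing import Optional

# Licence keys that a DRM-style check would accept (hypothetical).
VALID_LICENCE_KEYS = {"ABC-123", "DEF-456"}

# Fingerprints of reference works registered by rights holders (hypothetical).
REFERENCE_FINGERPRINTS = {
    hashlib.sha256(b"copyrighted song audio bytes").hexdigest(),
}


def drm_check(licence_key: Optional[str]) -> bool:
    """Passive check: allow playback only if the accompanying key is valid.
    The content itself is never examined."""
    return licence_key in VALID_LICENCE_KEYS


def contentid_style_check(media_bytes: bytes) -> bool:
    """Active check: examine the uploaded content itself for a match
    against registered reference material. Returns True if allowed."""
    fingerprint = hashlib.sha256(media_bytes).hexdigest()
    return fingerprint not in REFERENCE_FINGERPRINTS


if __name__ == "__main__":
    print(drm_check("ABC-123"))                                    # True: paperwork in order
    print(drm_check(None))                                         # False: no key supplied
    print(contentid_style_check(b"original home video bytes"))     # True: no match found
    print(contentid_style_check(b"copyrighted song audio bytes"))  # False: crate opened, match found
```

The design difference is the locus of judgement: the passive check never looks inside the upload, whereas the active check makes a determination about the content itself, and that determination is where platform-defined standards are enforced.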
With algorithmic censorship, I argue, the code of social platforms not only constrains or facilitates certain behaviours through architectural effects, but can similarly analyse content and actively intervene to (ex ante) suppress it at upload. Algorithmic censorship, as an algorithmic governmentality, can therefore also be understood as a form of algorithmic regulation (Yeung 2018); specifically, a manifestation of what Hildebrandt terms “code-driven regulation”, in which the regulating system itself seeks to modify behaviour (rather than providing information or advice to a human who then intervenes) (Hildebrandt 2018). Through this, social platforms could more actively enforce their own standards for acceptable communication, which effectively operate as law on those platforms. The capacity to do this takes platforms beyond anything that they might reasonably be able to achieve with human reviewers. The governmentality of algorithmic censorship—through the capillary form of this power, extending into private, everyday conversations, and through the active intervention by platforms which I argue it would enable—thus potentially provides a kind of private regulatory power over discourse that has never before been possessed by a private actor.
Commercialising Communications and the Dispositif of Social Platforms
I want to introduce here Foucault’s concept of the dispositif (Foucault 1980: 194–195) (“apparatus” or “assemblage”). For Foucault, “dispositif” referred to the “heterogeneous ensemble consisting of discourses, institutions, architectural forms, regulatory decisions, laws, administrative measures, scientific statements, philosophical, moral and philanthropic propositions” that together form the system of power relations of a particular domain (Foucault 1980: 194–195). The dispositif in this context, then, represents the structural conditions and power relations within which online discussion, debate, communication, and interpersonal connection take place. The governmentality of algorithmic censorship would, I argue, become an important part of the dispositif of online speech, part of the structural conditions and power relations for discussion, debate, and communication, extending the influence of those platforms over various elements of that ensemble as it exists online.
Platforms of course already exercise some degree of influence over discourses; by amplifying certain communications through algorithmic personalisation of content feeds (Cobbe and Singh 2019), for example, and by permitting or suppressing some communications through ex post human moderation. And they exercise perhaps greater influence over other elements of that dispositif—architectural forms, regulatory decisions, laws, administrative measures, philosophical and moral propositions—through their corporate policies and mission statements, their design and affordances, their terms of service, and their accountability procedures. But, as I argue above, two features of algorithmic censorship—bringing all communications on a platform within reach and enabling a more active, interventionist form of moderation—together give social platforms a distinctive regulatory power over communications that goes beyond the power they derive from other sources. This would position those platforms further in the dispositif as arbiters of permissible speech. Through algorithmic censorship, then, social platforms could exercise a more active, interventionist form of control over discourses, in particular, and, in doing so, more actively influence various other elements of that ensemble. My argument here is that, as a result of algorithmic censorship governmentalities, a small number of private companies have potentially greater power to set the terms of speech regulation and of the dispositif more generally according to commercial incentives and imperatives and to therefore insert those commercial priorities further into public and private communications.
One aspect of the dispositif involves what Foucault called “regimes of truth” (Foucault 1980: 131); that is, the constructs around “the types of discourse which it accepts and makes function as true; the mechanisms and instances which enable one to distinguish true and false statements, the means by which each is sanctioned; the techniques and procedures accorded value in the acquisition of truth; the status of those who are charged with saying what counts as true” (Foucault 1980: 131). Much of Foucault’s work was concerned with “power-knowledge”, discussing the power generated by the determination of truth (Foucault 1980). This does not just mean what is factually accurate in the literal sense, but which interpretations of, narratives about, and desires for the world are “normalised”, or held to be acceptable and correct (Foucault 1991: 184). For Foucault, regimes of truth are ultimately produced under the dominant control of “a few great political and economic apparatuses” (Foucault 1980: 131–132). Every society, he says, produces such regimes of truth. In the past, dominant regimes of truth were commonly maintained by what might collectively (and, perhaps, pejoratively) be called the “establishment” or the “elite”: governments, politicians, the media, and members of the academy (Foucault 1980: 131–132). In the contemporary world, with the democratisation of communication through the internet, the status of such dominant regimes has been challenged. This is not just through “fake news” and “post-truth” discourses, but through the promulgation and reification of fundamentally different interpretations of the world and desires for society’s future.
In response, through terms of service, community standards, changes to algorithmic ranking, and content moderation—and often reluctantly—social platforms have attempted to define and impose their own regimes of truth. In some cases, this has been undertaken collaboratively with governments (Bloch-Wehba 2019: 42–50), seeking to re-establish the limits of what is acceptable and understood to be correct. These platforms increasingly structure reality algorithmically (Tufekci 2015; Just and Latzer 2017), mediating social interactions, deriving significant power from and exercising enormous influence over the flow of information and over collective awareness and understanding of current affairs (Tufekci 2015; Tufekci 2016; Gillespie 2014; Graber 2016; Cobbe and Singh 2019). Social platforms can leverage this power to take a more interventionist role by downranking certain content of their choosing (Facebook, for instance, tweaked its algorithm in 2018 to demote content that comes close to violating its community standards but does not actually do so (Zuckerberg 2018a)).
Algorithmic censorship would similarly allow social platforms to take a more active, interventionist role in the establishment and maintenance of regimes of truth and in shaping the discourses, architectural forms, regulatory decisions, and so on relating to online speech than would otherwise be possible with human moderators. But this power goes further than with simply downranking undesired content—while platforms’ use of recommender systems for personalisation allows them in theory to restrict the dissemination of certain material in content feeds and thereby shape the information environment presented to users, algorithmic censorship potentially allows them to intervene to suppress undesired communications entirely, even in private messaging services (which typically do not use algorithmic personalisation, instead presenting communications chronologically). As such, I argue, algorithmic censorship—in permitting a more active, interventionist form of control over public and private communications that extends further into newly visible areas of everyday life—sharpens the regulatory power of social platforms into a capillary form that permits them to exercise a greater degree of influence over the system of power relations constituting the dispositif on their platform, and, from there, in society more generally. As a result, algorithmic censorship potentially gives social platforms significantly greater influence over the structural conditions for online discourse, communication, and interpersonal relation.
Foucault argued that a dispositif has a dominant strategic function in responding to some identified need (Foucault 1980: 195); this makes a dispositif “a matter of a certain manipulation of relations of forces, either developing them in a particular direction, blocking them, stabilising them, utilising them, etc.” (Foucault 1980: 196). The early internet and its predecessor, the ARPANET, prohibited commercial activity (Stacy 1982), and even by the mid-1990s, it was not clear that the web would offer real commercial opportunities (Hoffman et al. 1995). As discussed above, however, the web more recently has largely been captured by a handful of companies who extract significant economic value from our behaviours, interactions, and communications and exert significant power over communications and in society more generally. Today, though they often explicitly present themselves as spaces for free discussion and promote the sharing and exchange of information, social platforms are heavily commercialised and surveilled. Although, as noted previously, other considerations do play a role, most platforms are constructed primarily to produce profit, prioritising commercial rationalities of engagement, revenue, and market position over any professed desire to facilitate free discussion (Cobbe and Singh 2019). The strategic function of the dispositif on social platforms should thus, I argue, be understood to be primarily aligned with those commercial priorities—revenue, market position, and profit above all. And, of course, these algorithmic systems are not neutral, impartial tools—they encode the priorities and goals of their designers, deployers, and users and effect changes in power relations and structural conditions accordingly. In extending control over the dispositif through algorithmic censorship, I argue, social platforms can thus to a greater extent shape those discourses and other aspects of the dispositif in line with commercial priorities.
Although social platforms are increasingly important sites for political discussion and debate, interpersonal connection and relation, and community and solidarity, this commercialising of public and private communications and everyday conversations by social platforms has potentially deleterious consequences for their ability to adequately fulfil that role. Because commercially operated social platforms inevitably prioritise commercial considerations over others, they have generally not in practice prioritised freedom of expression or paid due regard to the societal role that they now play in mediating public and private communications. Nor have they generally promoted consistency, fairness, or transparency in their policies and moderation practices (Kaye 2019; Gorwa et al. 2020). Instead, as a result of their commercial priorities of growth, market dominance, and profit, social platforms typically seek to appeal to a wide mainstream audience, to placate advertisers and policymakers, and to forestall potential (and potentially costly) regulation. They have, as a result, not necessarily left space for the marginalised or minoritised or for those with unorthodox views (Allen 2019). Indeed, journalistic investigations have revealed that the prioritisation of these commercial goals has resulted in some platforms effectively excluding sex workers and marginalising women and LGBT people by removing or restricting their communications (Allen 2019; Cook 2019).
Although censorship algorithms themselves are not the focus of my analysis, I do want to highlight here that the exclusionary nature of social platforms as a result of these commercial interests could be amplified or exacerbated by the limitations of those algorithmic systems. Tumblr’s system for identifying and removing adult content, introduced in December 2018, reportedly routinely misclassifies innocuous material, with content by LGBT users seemingly particularly penalised (Bright 2018; Matsakis 2018). Similar problems have been reported on YouTube (Allen 2017). Indeed, bias is a significant challenge for censorship algorithms more broadly (Binns et al. 2017; Dixon et al. 2018; Park et al. 2018). While hate speech itself is difficult to automatically identify and remove, groups likely to be victims of abuse and hate speech may themselves find their communications censored; one analysis of popular hate speech datasets and classifiers, for instance, found that tweets by African-American users were up to two times more likely to be labelled as offensive than tweets by others (Sap et al. 2019). These limitations and biases risk producing greater censorship of communications by members of marginalised groups. It is possible, I would suggest, that, combined with the exclusionary nature of platforms’ primarily commercially-driven policies, users’ perceptions of biases in censorship algorithms—a form of what Bucher calls the “algorithmic imaginary” (Bucher 2017)—may also contribute to a disparate disciplinary effect of algorithmic censorship in producing self-censorship (whether those perceptions are accurate or not). This could result in something akin to a “spiral of silence” (Noelle-Neumann 1974; Stoycheff 2016), whereby, based on the perception that marginalised views are unwelcome in these commercialised spaces, people from those groups self-censor to a greater extent, thereby reinforcing that effect.
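The kind of disparity reported in the studies cited above can be illustrated by comparing per-group flag rates. The data and the toy flag() function below are invented purely for illustration; they do not reproduce any cited dataset, classifier, or result, but show in miniature how a classifier that over-triggers on dialect features would censor one group’s benign posts more often than another’s.

```python
from collections import defaultdict


def flag(text: str) -> bool:
    """Toy stand-in for a trained classifier that over-triggers on a
    dialect marker rather than on genuinely abusive content."""
    return "ain't" in text.lower()


# Invented, benign example posts tagged with hypothetical group labels.
posts = [
    ("group_a", "that ain't right at all"),
    ("group_a", "good morning everyone"),
    ("group_b", "that is not right at all"),
    ("group_b", "good morning everyone"),
]

flagged = defaultdict(int)
totals = defaultdict(int)
for group, text in posts:
    totals[group] += 1
    flagged[group] += int(flag(text))

for group, total in totals.items():
    print(f"{group}: {flagged[group] / total:.0%} of benign posts flagged")
# group_a: 50%, group_b: 0% -- a gap in flag rates on comparable, innocuous
# posts is the disparate censorship effect described in the text.
```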
While platforms’ interest in appealing to as broad an audience as possible in pursuit of growth and profit can lead to them excluding non-mainstream groups and communities, commercial pressures can also drive ongoing changes in policies and practices. In some cases, platforms that once did provide a space for those outside the mainstream to build community—such as Tumblr—have shifted their positions as a result of commercial considerations. Tumblr’s move to impose automated restrictions on “adult content” came as a result of Apple’s decision to excise Tumblr’s app from iOS’s App Store (seemingly out of Apple’s own desire to sanitise the apps available to iPhone users) (Koebler and Cole 2018), and disproportionately impacted LGBT and sex-positive communities. Recent years have also seen advertisers withdrawing from some platforms in response to negative publicity around certain kinds of material (Grierson et al. 2017; Ram and Vandevelde 2017; Paul 2020a). In some cases, platforms have changed their policies in response to these commercial pressures (in other cases, I should note, some platforms have appeared less than willing to bow to the demands of advertisers (Paul 2020b)). YouTube, for instance, responded by changing its policies on advertising associated with controversial content (Pierson 2018) and issuing guidelines on “advertiser-friendly content” (YouTube n.d.); videos that are algorithmically determined to be unacceptable to advertisers are now automatically demonetised. Although this does not necessarily lead to content being removed (unless it violates YouTube’s community guidelines), the influence of commercial pressures on YouTube’s policies and practices is clear. More generally, social platforms have reluctantly become increasingly interventionist as they attempt to deflect the attention of governments and policymakers and forestall regulation that might impose upon them greater costs and compliance obligations (Kaye 2019). The risk of potentially costly regulation itself provides a commercial incentive to develop automated tools that intervene more actively in communications and so appear to policymakers and others to be acting more responsibly; it is thus a key driver behind the development of tools that can moderate content on a more comprehensive ex ante basis.
While changes in their policies may be brought about by the influence of advertisers or the threat of regulation, social platforms have repeatedly shown little interest in systematically considering users’ views on what should and should not be acceptable, with their moderation processes typically providing no formal mechanisms by which users can directly influence the boundaries of acceptable speech. Social platforms’ review mechanisms are often opaque and unaccountable even in relation to their human moderation processes, and they frequently refuse to provide clear and precise information about rules or their enforcement. It is true that, with enough of the right kind of pressure, some platforms have shifted to some extent the parameters of what they deem acceptable. A sustained campaign over several years to reverse Facebook’s policy prohibiting photos of breastfeeding, for example, eventually led to it being relaxed (although not abolished) (Dredge 2015). Controversy over Facebook’s deletion of a photo depicting victims of the Vietnam War led it to amend its community standards to allow portrayals of violence or nudity that are, in Facebook’s determination, “newsworthy, significant, or important to the public interest” (Ohlheiser 2016). And various platforms have repeatedly changed moderation decisions in the face of an overwhelmingly negative response (Kaye 2019). But these changes came as unilateral decisions made as a result of prolonged campaigns and widespread outrage. They were not the product of democratic, accountable processes for determining the boundaries of acceptable speech. Moreover, the fact remains that such policy changes are at the discretion of the platforms themselves, rather than the result of any transparent, accountable, or democratic process; and the primary duty of social platforms remains to their shareholders and the pursuit of profit, not to their users or to society more generally.
The strategic function of the dispositif of social platforms is therefore not, as those platforms may claim, the free exchange of ideas, the sharing of information, or the building of community, but generating revenue, market position, and profit for social platforms. This is not to say that social platforms are motivated only by commercial priorities and corporate interests (other considerations do play a role), but that these are greatly significant and often overriding in driving platform policies and practices. That the contemporary web is heavily commercialised is of course not a new observation (Fuchs 2011; Andrejevic 2012; Zuboff 2015; Srnicek 2016). But the governmentality of algorithmic censorship must, I argue, be understood in that context. While social platforms do have other mechanisms by which they can influence behaviours and communications, the capillary form that the regulatory power of social platforms takes through algorithmic censorship would allow them to insert commercial considerations and rationalities further into the dispositif of online speech and thus into the everyday conversations of billions of people in an unprecedented way. As previously discussed, the use of algorithmic systems allows platforms to process far greater quantities of information than would be possible with human moderators, potentially allowing for the automated and more interventionist moderation of all communications, public and private. Algorithmic censorship could therefore effectively establish a new mode of automated, surveillance-based, commercially driven, privatised speech regulation, prioritising commercial imperatives over others. Where algorithmic censorship is undertaken of the platform’s own volition, or goes beyond what the law requires, this form of authority would be answerable ultimately to the platform’s shareholders.
Through algorithmic censorship, social platforms thus unilaterally position themselves as both the mediators and the active, interventionist moderators of online communications in a way that would otherwise be impossible (even accounting for other processes deployed by platforms to shape the information environment or to moderate communications). This regulatory power could be leveraged to identify and disrupt the emergence of alternative discourses and regimes of truth that are thought by platforms to be commercially disadvantageous in some way, with platforms deciding to automatically suppress certain lawful but distasteful, undesirable, or non-mainstream communications in order to protect their revenue streams. The ability to partake in ordinary conversations as well as in societally important sites of discussion and debate would thus be increasingly subject to the vagaries of corporate priorities. The introduction of algorithmic systems for censorship by social platforms into the structural conditions for discussion, discourse, and interpersonal connection thus changes and further commercialises those conditions and potentially permits platforms to more effectively intervene to enforce commercially determined limits on acceptable speech. The result of algorithmic censorship, at its most effective, would be homogenised, sanitised social platforms mediating commercially acceptable communications while excluding alternative or non-mainstream communities and voices from participation. Though, as noted previously, social platforms often emphasise the benefits of communication, connection, and the sharing of experiences and ideas, the commercialisation of communications made possible by algorithmic censorship has the potential to significantly degrade the capacity of those platforms to effectively provide inclusive spaces for participatory democratic discourse and discussion as well as for private communication.