Having considered the undecidability of law/regulatory texts and their making, interpretation and enforcement, it is now important to interrogate the ungraspable undecidability that haunts technological regulation. For purposes of scope, in talking about technology, I focus on what I call online communication technologies, such as the Internet and its different social networking platforms.
With a few exceptions, technologies are conceptualised as property/tools that are exclusive to us as humans, property/tools for particular ends, almost like a ‘child’s toys’ (Johnson 1993, p. 105), obeying us, and having anthropomorphic qualities (Derrida 2002b) that are graspable and in our control. However, the complexity of artificial intelligence and computer science today complicates this understanding. Certainly, today, computers not only outperform human operators in mathematical operations and in proving complex mathematical theorems, but they also drive cars, translate between human languages, outthink grandmasters at chess, and play improvisational music differently—‘smart’—in a rhythm notated instantaneously, faster than ours (Virilio and Bertrand 2012), one finitely surpassing our programmability (Gunkel 2012).
Because online communication technologies are (aided by) computers, they are susceptible to evading our impositions of spatiality and calculable programmability, determinability and stability. Nevertheless, computers, owing to this very programmability, constantly have their code redesigned and rewritten (Joque 2018, p. 15). As such, they are inherently deconstructive machines or texts susceptible to resisting and disrupting regulation. To understand this claim, it is important to think of online communication technologies as modern prosthetic extensions of writing—‘the page remains a screen’ (Derrida 2005, p. 46). Online communication technologies thus belong to a ‘digital history’ of finger-operating devices and handheld devices, like ‘pen tools’ that process words or print words with voices and with words (Derrida 2005). Thus, as with the signatures of law and regulation discussed above, online communication technologies are always embedded within an iterable and disseminatory ecological process of writing and communication. They are ever in (and of) a process of languaging, i.e., of reproducing and being produced as copies and duplicates of texts interminably looped in a network of coded computers and their human and computer addressees (Joque 2018, p. 19; Hayles 2010, p. 15).
Further, due to the interfacing (human/machine) synchronic engagement intrinsic to online communications technologies, these technologies can be thought of as disseminatory organisms that produce a new kind of dual-authored writing, i.e., a ‘duplicitous’ double speech that ‘seems to originate not just with the persons who are individually identifiable in a genealogical sense, but also with a computer discourse that carries with itself its own textual protocol’ (Aycock 1993).
Because this writing occurs between human/machine or human/computer, it re-enacts a spectral play of différance. Accordingly, for us ‘the humans’, it occurs within an invisible techno-hallucinatory trickery or automatic spontaneity, ‘an internal demon’, i.e., an ‘other’ that can (or not) be withdrawn, in front of us; one that is faceless, from a different place, remote, secretly—behind the computer screen (Derrida 2005, p. 23). This spectral and phantasmic element of spontaneity and trickery is manifested in the manifold ways in which online communication technologies come up with new or unarticulated conjunctional combinations of solutions to divergent situations (as well as slippages, e.g. ‘glitches’, ‘crashes’ or ‘leaks’) that befuddle, surprise, ‘freeze!’ and outwit not only us, their users, but also their designers and programmers.
Moreover, it is worth noting that the kind of writing produced by online communication technologies is faster and has more mobility and fluidity than the kind of writing produced by humans in the real world. Because of this, writing done via online communication technologies accelerates all the traces of speech and writing that occur in the real world, hence blurring communicative contexts duplicitously in a more immediate, out-of-time register. To belabour this point, it is worth exploring the notion of context within communication.
Derrida has suggested that ‘context’, which is always determined by the presence of a receiver, is a notion based on a hermeneutic consensus. However, this consensus can never be absolutely ascertained because the predeterminability of meaning within which communication (i.e. texts or images or speech) is received is always at once absent (Derrida 1988). Hence, one is never sure of the destinations or arrival of speech.
In other words, the meaning of what a speaker or reader says or intends to say always loses its original form and rhythm and is susceptible to becoming lost or unreadable. This means, for example, that words which are intended to offend or cause harm can miss their intended target and produce an unintended and unforeseen effect on the readers or listeners (Butler 1997, p. 87); their context is always shifting, dislodged, drifting in a flux of rupture. The possibilities of this occurring are incalculable, particularly online, given the condensed cross-cultural landscape of the Internet.
Certainly, the re-citation, re-iteration, and re-contextualisation of writing is perhaps nowhere more evident than on the Internet where a number of Internet media signatures like memes, tweets (including retweets, subtweets) and videos allow for the citing, re-linking, recoding and reworking of content non-deterministically, multipliably and cross-jurisdictionally.
This is done using a number of online communication technological tools in processes of remixing (Lessig 2008) that involve the endless deferral, translation, invention and repetition of texts in and at differing times. To illustrate this, if we consider a re-mark like ‘blood is red’, a statement which at first may appear simple and graspable, it is highly likely that when disseminated and recited by various speakers online, it can acquire a different meaning (a spectrum of meanings) than was originally intended by its (absent) online speaker (Derrida 1988; Butler 1997). Other speakers and audiences could then (re)cite it and, through this recitation, create a non-deterministic, derivative, re-punctuated vocabulary—with each single word, a pictograph or ‘emoji’ (Footnote 12), or even a ‘Deepfake’ image (Footnote 13) (Quach 2018; Cole 2017)—that contests and challenges our normative understandings of fiction/reality; i.e., ‘isness’, ‘blood’ and even the very colour ‘red’, instituting a free play of meaning upon substitutable meaning—‘iterability alters, contaminating parasitically’ (Derrida 1988, p. 62).
Of course, the argument can be made here that online communications can be trapped and contained within certain limits (e.g. through filtering and blocking technologies), and that these very filtering and blocking technologies are used to limit the iterability of online communication technologies through censorship. Nonetheless, because of their irrevocable bind to an exterior (in other words, to that which they exclude), these very blocking and filtering technologies also paradoxically yield symbiotic possibilities of invention and improvisation—for improvisation is a subversion that always occurs within limits and frameworks (Murphy 2004). This claim finds support in the work of Levine (1994, p. 2), who argues that writers or speakers can be ‘spurred on’ by the impediments of censorship to innovate new styles of communication that anticipate and bypass the calculable limits imposed by censorship.
An example of such a phenomenon would be the re-appropriation and re-contextualisation of ordinary and seemingly innocuous words such as ‘milk’ by online right-wing and neo-Nazi extremists to iconise and connote white supremacy (Freeman 2017). For the regulator(s), such a change in terminology, a repetitive scattering of a sign (within a different context) would create an unanticipated graft of polysemic (ad infinitum) possibilities. It would thus subvert normative assumptions of what constitutes ‘hateful speech’ and would alter prevalent notions of certainty and clarity (i.e., through widening the lexicon of hate speech with derivative, imitated, faked and differentiated words) hence making the very regulation of such speech intractable.
Even in the most repressive regulatory regimes, with the most technologically advanced filtering systems in the world, ‘closed-off words’ can still give rise to a regeneration and invention of infinite textual possibilities based on those very closed-off words. Hiruncharoenvate (2017), for instance, has shown how digital activists employ non-deterministic homophones of censored keywords to avoid detection by keyword-matching algorithms on Chinese social media/online communication websites (see also Hiruncharoenvate et al. 2015). Zeng (2018) highlights a relevant practical example of such non-deterministic circumvention wherein Chinese women and feminist activists on social networking websites like Weibo use the hashtag #RiceBunny as a substitute for the #MeToo campaign. With #RiceBunny, users manipulate emojis (and pictographs and homophones) of rice bowls (pronounced ‘Mi’) in addition to emojis of bunny heads (pronounced ‘Tu’), hence creating (Mi + Tu = #MiTu/#MeToo) in order to avoid censorship and detection by the software and the authorities (Zeng 2018).
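The circumvention described above can be sketched in a few lines of Python. The banned-keyword list and the posts are invented for illustration; real censorship systems are of course far more elaborate, but the structural point survives: a naive match against pre-determined keywords cannot anticipate a homophonic substitute.

```python
# A minimal sketch of keyword-matching censorship and its homophonic
# circumvention, loosely modelled on the #RiceBunny/#MiTu example.
# The banned list and the posts are illustrative, not drawn from any
# real filtering system.

BANNED_KEYWORDS = {"#metoo"}

def is_blocked(post: str) -> bool:
    """Naive filter: block a post if it contains any banned keyword."""
    text = post.lower()
    return any(keyword in text for keyword in BANNED_KEYWORDS)

direct_post = "Sharing my story. #MeToo"
# Rice bowl (U+1F35A, 'Mi') + rabbit face (U+1F430, 'Tu') homophones:
homophone_post = "Sharing my story. \U0001F35A\U0001F430 #MiTu"

print(is_blocked(direct_post))     # True: exact keyword match
print(is_blocked(homophone_post))  # False: 'Mi' + 'Tu' evades the list
```

The filter can only exclude what its designers have already inscribed in the list; the substitute sign, never pre-determined, passes through untouched.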
Because these homophones and emojis were not pre-determined by the software (and its designers), they create new unprogrammable situations for censors. Such unforeseen homophones can stay up on the Internet undetected three times longer than their censored counterparts. Consequently, in a play upon play of meaning, the cancelled excluded other returns to the fore. It subverts the ‘logical systematicity’ (Spivak 1993, p. 180) of that which seeks to censor it by ‘determining its conditions of existence, fixing at least its limits, establishing its correlations with other statements that may be connected with it, and showing what other forms of statement it excludes’ (Foucault 1972, p. 30). Thus, online censorship (as a form of negative-writing or cancelled-out writing) from the very beginning creates the possibilities for a reverse-play of power or counter-power situations (by ascribing or inscribing différance). Such reversed speech acts and utterances are performed in irreducible guises that divert from pre-established and pre-determined linguistic speech norms (Butler 1997).
These irreducible heterogeneous guises are always already present, haunting the originarity of locutionary violence. In other words, the outside of such speech or writing is also from the outset in the inside of it. Consequently, speech ‘invaginates’ (Footnote 14) itself (Derrida 1980, p. 59) in a ‘hermeneutic circle’ structured by a double contrary motion (Moten 2003, p. 6).
It is worth observing that these invaginated irreducible guises or ‘others’ within a text can be spectral, i.e., psychically absent yet also present. Derrida (1978) demonstrates the presence of this other through the notion of différance, a neologism that means both to defer and to differ (Footnote 15). Derrida has proposed that the deferred-difference (i.e., différance) of writing reveals otherness, i.e., it reveals the representative subjectivities of the excluded outside and binds them into a continuous relation and interaction with closed foundational and hierarchal structures. Thus, the excluded outside of regulation, i.e., its prohibited outside, is (by virtue of différance) compelled to interact continuously with the very homo-hegemonic structures that seek to erase, exclude or overcome it in the first place. Indeed, in every erasure or exclusion, the unconscious is revealed (but also repressed) because différance itself engages in a free-play of the forces of the unconscious. Derrida and Mehlman (1972) and Derrida (1996), drawing from Freud’s use of writing as a psychic writing pad, demonstrate that in the unconscious process of inscription, of meaning, of essence or truth, writing can also contain an erasure, a repression of difference. Crucially, this repression (or regulation or censorship) never completely deletes (Kristeva 1982; Foucault 1978). It operates within an economy of return, an economy of différance that never radically cancels out the other. Thus, it acknowledges the other immemorially and psychically etches the absence of the other and the danger/desire for/of the other into a general collective consciousness.
The implication of this within the context of reading law is that what regulation/law proscribes (i.e., risks, crimes or harms) remains, interminably and profoundly attached and bound to regulation/law. It remains already, before, after, and in the moment, emphasising its exclusion. ‘What one tries to keep outside always inhabits the inside’ (Bennington 1993, p. 217).
Therefore, regulation/law creates an interminable irresolvable aporetic relationship with what it proscribes (whether it be crime or harm) and simultaneously deconstructs itself in a ‘chronic autoimmunitary logic’ (l’auto-immunitaire), through a quasi-suicidal process wherein it works to destroy its own protection, in order to immunise itself against attack from within (Borradori 2003, p. 94; Miller 2008). The result of this is that the singularity, essence and stability of regulation/law and its commands and rules are always put into question. They are always inadequate, always lacking, always terrified—chronically. In light of this, the very process of regulation and containability becomes contaminated, inescapably unpredictable, self-defeating and more complex than is dominantly imagined.
What this means in the context of speech and conversation generally is that closed-off or cancelled-out words return interminably, as they are always already (in a contrapuntal and polyphonic/polyrhythmic motion) appropriated by subjects to pivot and conjure up historical, present and futural meanings for which they were never intended (Butler 1997; Said 1993, pp. 59–67; Aptheker 1989, p. 28; Brown 1989). For the subordinated speaker, or the excluded speaker, the ability to re-appropriate and juxtapose meanings within language/speech becomes an instance of disruption and a re-centring, or renegotiation, of dominant homo-hegemonic linguistic imperial projects. Hence, speech and writing, as forms of language/speech and communication, become counter/reversible tools for agency and for validating subjectivity. It is this inherent illimitable power, this inescapable reverse power play within speech, writing and communication that perhaps makes it such a spectral concept and makes its regulation irrevocably difficult, especially online.
Having looked at how online communication technologies can compromise themselves and complicate regulation, it is important to explicate the ways in which this happens in more detail. My focus here is on textual filtering technologies, i.e., natural language processing (NLP) technologies. Seeing as textual filtering and software are inseparable, I also consider filtering software, and software more generally, in my discussion. My intention here is not to explain what these technologies do in detail but to interrogate the role of technological regulation vis-à-vis offensive online content (in the context of communication and writing) and the peripheries of this relation. In doing this, I hope to underscore some of the underlying undecidabilities of regulation that these technologies demonstrate.
In rather reductive terms, NLP techniques work by scrutinising the meanings of language generated within online communication technologies. Using algorithmic systems (Khurana et al. 2017), they scrutinise euphemisms, references, code words and colloquialisms online to predict their proximity to crime and its commission. NLP techniques associate and identify extracted words and sentiments with specific topics by using statistical extraction and retrieval algorithms. By treating each document as a ‘bag of words’, each word in each document is assigned a score, typically reflecting how often it occurs (Jain et al. 1999). The document is then allocated a vector whose coordinates correspond to the words it contains. A likeness of vectors indicates a likeness or similarity of documents. In order to identify this likeness across documents, a method of elimination known as hashing (which reduces each document to a DNA-like sequence that allows computers to search for, identify, segment and cluster duplicates) is applied. The archive of hashes—undiscerning of the fact that the archive is haunted by what it excludes (Derrida 1996)—is then used to exclude certain categories of communication that are usually regarded as offensive, hateful or simply inconvenient, as is the case with spam filters (Cohen 1996).
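The pipeline just described can be illustrated with a minimal, hypothetical sketch in Python: word-frequency scoring stands in for the more elaborate weighting schemes real systems use, cosine similarity measures the likeness of document vectors, and SHA-256 stands in for whatever hashing an actual filter applies. The example documents are invented.

```python
# A toy version of the bag-of-words pipeline: documents become word-count
# vectors, vector likeness signals document likeness, and hashes provide
# fixed 'DNA-like' fingerprints for clustering exact duplicates.

import hashlib
import math
from collections import Counter

def bag_of_words(doc: str) -> Counter:
    """Score each word by its frequency in the document."""
    return Counter(doc.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Likeness of two word-count vectors: close to 1.0 = near-identical."""
    dot = sum(a[w] * b[w] for w in a)  # Counter returns 0 for missing words
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def fingerprint(doc: str) -> str:
    """Hash a document into a fixed-length sequence for duplicate search."""
    return hashlib.sha256(doc.encode("utf-8")).hexdigest()

doc_a = "blood is red"
doc_b = "blood is red"
doc_c = "roses are red"

print(cosine_similarity(bag_of_words(doc_a), bag_of_words(doc_c)))  # ~0.33
print(fingerprint(doc_a) == fingerprint(doc_b))  # True: duplicates cluster
print(fingerprint(doc_a) == fingerprint(doc_c))  # False: distinct hashes
```

Note that the hash matches only verbatim duplicates; any iteration of the text, however slight, yields an entirely different fingerprint, which is precisely the gap that re-citation exploits.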
NLP technologies have been used in software such as Impero Education Pro, an Internet monitoring software used in over 40% of secondary schools in the UK. In this particular context, software like Impero has been developed in response to the Prevent strategy and the duty of care it places on schools under the Counter-Terrorism and Security Act 2015, which provides that:
Specified authorities will be expected to ensure children are safe from terrorist and extremist material when accessing the Internet in school, including by establishing appropriate levels of filtering (HM Government 2015, p. 12).
Of Iterable Keywords and Software
Impero comes with a radicalisation library (i.e. a list of over 1000 phrases, words and word combinations) that filters the Internet to indicate whether a student is proactively seeking extremist content (Impero 2015). The functional logic of an NLP programme like Impero is that it helps to forestall ‘harmful’ expressions by detecting and identifying ‘harmful’ cited keywords, as used in the context of other words. Nevertheless, its aims are somewhat undecidable, as I will attempt to unravel in what follows.
First, from a psychoanalytic lens, the inclusion of banned words into a glossary creates an incalculable absence/presence, an (unheimlich) uncanniness or impression that frustrates the regulatory and repressive structure of the singular archive, or the familiar/familial/filial whole that seeks to impose form, castrate, inscribe, cancel and put it out of memory. Therefore, in a kind of ineluctable catachresis, excluded or closed-off words inevitably inhabit an encrypted dystopic space of power, a space of incomplete powerlessness (encoded secretly already in the inside) that haunts the very process of their predetermined meaning, closure, spatiality and regulation.
Further, because NLP technologies and such software technologies work within a system of rule and word learning, they carry with them the trace of communication and writing. On this account, in the library of words with(in) which NLPs work, there is always a return to citational writing, i.e., there is always a referring to and a cross-referring to of signs and their significations. This is done through a process of word navigation, combination and translation that embodies an intertextuality of differing irresolvable representations and tensions. This is significant because NLPs function within a process of translation. They are always susceptible to an ‘infinity of loss’ (Derrida and Venuti 2001) with regard to the interpretational originality, legibility and stability of meaning. Put differently, with NLPs there is always an iterable process of experimentation that confuses and frays meaning. NLPs inevitably traverse a complex system of roots (Deleuze and Guattari 1988) and are enveloped in coils of ‘borrowed pieces’ (Derrida 1997, pp. 101–102) folded within limits/defects/inadequacies that cross a multitude of singular scenes of utterance, and further possible non-linear scenes of utterance. Thus, an acronym like ‘YODO’ (‘you only die once’), when detected by NLP software, for example, can complicate interpretation and translation cryptically because it undoes singularities of meaning and context. On the one hand, YODO can be used in communications involving health activism by organisations such as the Dying Matters Coalition during Dying Matters Awareness Week and, on the other hand, it can be appropriated by militants from Daesh to disseminate their propaganda (Religious leader 2015). The acronym hence drifts indeterminably, destabilising its own limits. It overlaps, and begins to acquire new meanings and functions, even those for which (we think) it was never intended (Butler 1997).
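A hypothetical sketch makes the point concrete: a keyword watchlist (invented here for illustration, not drawn from Impero or any real system) flags ‘YODO’ identically in both scenes of utterance, unable to register the contexts that separate them.

```python
# A toy illustration of how keyword detection collapses divergent contexts:
# the same acronym is flagged whether it appears in health activism or in
# propaganda. The watchlist and both messages are invented.

WATCHLIST = {"yodo"}

def flag(message: str) -> bool:
    """Flag a message if any of its words appears on the watchlist."""
    return any(term in message.lower().split() for term in WATCHLIST)

health_post = "YODO so plan your end-of-life care this Awareness Week"
propaganda_post = "YODO so make it count for the cause"

print(flag(health_post), flag(propaganda_post))  # True True
```

Both utterances are rendered equivalent by the filter; the decision about which scene of utterance was ‘really’ meant is deferred back to a human reader, or never made at all.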
Moreover, because NLP software and most filtering and algorithmic software are programmed to function through a predetermined (albeit ever-changing) predictive sequence of grammars and linguistic structures, they still ‘learn on the job’. Thus, they have to deal with word situations that never occurred in their initial programming or training (Jurafsky and Martin 2017, p. 45). As such, there is always an informational void, a slippage, a probability of ‘blindness’ (i.e., a delay or deferred belatedness) in their intention to grasp, estimate and encode meanings proximate, sparse and exterior to them, i.e., heterogeneous meanings within evolving polyphonic/polyrhythmic communicatory conventions and contexts. For this very reason, these software technologies are susceptible to filtering out content erratically (e.g. in the case of innocuous content), hence compromising and complicating their very computational/regulatory usefulness.
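This ‘informational void’ can be sketched with a toy vocabulary (invented for illustration): in a common NLP convention, any word absent from the vocabulary fixed at training time collapses into a generic placeholder, erasing precisely the heterogeneous meaning the filter would need in order to decide.

```python
# A minimal sketch of out-of-vocabulary 'blindness': words never seen in
# training are mapped to an <UNK> placeholder, a void where meaning was.
# The training vocabulary here is invented for illustration.

TRAINED_VOCAB = {"blood", "is", "red", "milk"}

def encode(sentence: str) -> list:
    """Replace words unseen in training with the <UNK> placeholder."""
    return [w if w in TRAINED_VOCAB else "<UNK>" for w in sentence.lower().split()]

print(encode("blood is red"))           # ['blood', 'is', 'red']
print(encode("yodo blood is crimson"))  # ['<UNK>', 'blood', 'is', '<UNK>']
```

Whatever the unseen word carried, whether innocuous or harmful, arrives at the software only as the same blank token, after the fact and too late.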
To illustrate this, let us consider the following examples of Facebook’s filtering moderation policy, which is based on a ‘combination of the processing power of computers’ (algorithmic software) with the ‘nuanced understanding provided by humans’ (Cruickshank 2017).
In September 2016, Shaun King—a writer for the New York Daily News, who frequently writes stories about police brutality and runs a community page with over 800,000 members—posted on his Facebook page a screenshot of an email that twice called him the N-word, saying: ‘FUCK YOU N*****!’ Within a few hours, Facebook’s software filters banned him temporarily, claiming that he had violated its ‘community standards’ (Breitenbach 2018). Crucially, the stability and legitimacy of the phrase ‘community standards’ is something elusive, divergent and in constant questioning of itself, especially in the heterogeneous context of online communication. Yet again, it presents us with all the ever-recurring problems, tetherings and tensions of writing, i.e., iterability, différance, destinerrance, I/other, presence/absence, inside/outside, etc.
Another example of a censorship incident ‘gone wrong’ is Facebook’s censoring of an image of the prehistoric Venus of Willendorf figurine, a fertility symbol and masterpiece of the Palaeolithic era (Breitenbach 2018). This incident, and the controversy surrounding it, began in December 2017 when Italian activist Laura Ghianda posted a ‘viral’ picture of the figurine on Facebook. Subsequently, Facebook censored the image based on the grounds that the depiction of the figurine implied nudity and violated its community standards. By doing so however, Facebook upset members of its very community. An outraged Christian Köberl, director of the Natural History Museum in Vienna where the figurine is displayed, for example, commented saying:
Let the Venus be naked! Since 29,500 years she shows herself as prehistoric fertility symbol without any clothes. Facebook censors it and upsets the community. (Breitenbach 2018)
Facebook apologised subsequently in reaction to the ensuing public outrage. The company’s spokesperson explained that Facebook’s policies did not allow depictions of nudity:
However, we (i.e., Facebook) make an exception for statues, which is why the post should have been approved. (Breitenbach 2018)
For another example of Facebook’s censorship regime and how it reproduces false positives that conflict with the views of its community, one should consider the case of Celeste Liddle (Graham 2016), an Aboriginal feminist activist in Australia, who had her account suspended (not for the first time) on the grounds of nudity after posting pictures of two older Aboriginal women performing an ancient ceremony whilst topless. Liddle later launched a petition, which gathered more than 15,000 signatures in less than two days, demanding that Facebook review its community standards.
At any rate, these incidents were mistakes or ‘false positives’ on the part of the detection software, or Facebook’s moderation policy, or both. From our point of view, it is impossible to tell with clarity how these false positives occurred because the whole process of moderation and algorithmic use remains invisible and not well accounted for (Diakopoulos 2015; Bucher 2017). In fact, seeing that there is always a temporal deferral and a human/machine or human/AI disjunction in any process of Internet content regulation and reactive/proactive filtering, I doubt that such processes can or could ever possibly be ‘well accounted for’ or ‘accurately’ investigated—but such a discussion is beyond the scope of this article.
What is clear, however, is that examples of self-defeating ‘mistakes’ or ‘false positives’, i.e., situations where seemingly innocuous content is wrongly censored, where technological tools and software virally mutate and ‘auto-destruct’ our impulse to censor and regulate in today’s age of technological and absolute warlike militaristic dominance—a ‘finitive [finitrice] technē’? (Nancy 2000, p. 132; Joque 2018)—are recurringly endemic.
This then raises the question: are automatic false positives really avoidable?
Perhaps, we should not blame these ‘tools’, technologies or software because as Heidegger (1977) suggests, they are only ‘revealing’ the inevitable realities (i.e., the limitations, iterations, absences, destinerrance, as well as the inherent openness to the viral and pathogenic contamination) of communication in nature, in the real world. Perhaps these technologies and software are simply deconstructing code, communication and linguistics in an ‘other’ incalculable uncanny register, in a language unfamiliar to us, in a spectral play upon play of différance, in a ‘speech coming from the other, a speech [or call] of the unconscious as well’? (Derrida 2005, p. 23).
Derrida once again elaborates:
I don’t know—how the internal demon of the apparatus operates. What rules it obeys. This secret with no mystery frequently marks our dependence in relation to many instruments of modern technology. We know how to use them and what they are for, without knowing what goes on with them, in them on their side and this may give us plenty to think about with regard to our relationship with technology today – to the historical newness of this experience. (Derrida 2005, p. 23)