Framing the Hopeful Monster

Although much has been written about cyborgs over the past 40 years, especially in light of Donna Haraway’s (1985/1991) Socialist Review piece, ‘A Manifesto for Cyborgs’, Aleksandra Lukaszewicz Alcaraz’s (2021) Are Cyborgs Persons? is arguably the first book to consider cyborgs understood as a literal reality from a broad philosophical standpoint. It is often forgotten that when Haraway (1985/1991) invoked ‘cyborgs’ in her famous manifesto, she meant it as an extended metaphor for her vision of feminists reappropriating science and technology for their own emancipatory purposes. Her rhetoric carried such power at the time because many feminists had seen science and technology as the ultimate enemy of their cause. Yet most of Haraway’s references to cyborgs were drawn from science fiction, not science fact. Moreover, as the years have passed, Haraway has increasingly distanced herself from any literal reading of cyborgs, which she believes could result in a stealth ‘re-masculinization’, as in the case of (so she says) contemporary transhumanism (Gane and Haraway 2006). Nowadays, one needs to turn to Katherine Hayles (1999) for sustained interpretations of the cyborg as an emergent cultural phenomenon. Lukaszewicz Alcaraz’s distinctive contribution is a systematic investigation into the ontological location of the cyborg. Her discussions of Charles Sanders Peirce’s semiotics and Joseph Margolis’ historicist understanding of personhood provide strong clues in that direction, which are greatly enhanced by something not normally found in a philosophical work, namely, interviews with several high-profile people who have become cyborgs in the name of art, science and politics. In what follows, I focus especially on Neil Harbisson as part of a general framing of how the cyborg might be seen as the ‘hopeful monster’ capable of prompting us to rethink what it means to be human altogether.

The Philosophy of Glorified Thermostats

It is now well known that ‘cyborg’ is a contraction of ‘cybernetic organism’. The vision behind the word was that of cybernetics founder, the mathematician Norbert Wiener, who believed that the classic metaphysical dichotomy of ‘natural’ versus ‘artificial’—or its modern version, ‘organism’ versus ‘machine’—could be regarded as alternative realizations of a common set of mathematical equations (Wiener 1948/1961). Those equations specify the conditions under which an entity remains in equilibrium with its environment—or ‘retains its autonomy’, metaphysically speaking. Here ‘equilibrium’ should be understood in dynamic terms, relative to how the entity adjusts the pursuit of its ends in light of a changing environment. Stated so simply, cybernetics was open to the reductio ad absurdum that organisms, including humans, are basically glorified thermostats. (Nevertheless, the late analytic philosopher Fred Dretske designed an entire theory of knowledge around that idea.) The rise of ‘second-order cybernetics’, popularly known as autopoiesis, was meant to respond to that objection. Here we are to imagine cybernetic systems that manage to immunize themselves from extinction by developing a capacity to shift their rules of engagement with the environment fundamentally without losing a sense of identity with their previous incarnations.
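To make the ‘glorified thermostat’ reductio concrete, a minimal sketch of the sort of feedback relation cybernetics generalizes (an illustration of the standard negative-feedback idea, not an equation taken from Wiener’s text) would be

\[
\frac{dx}{dt} = -k\,\bigl(x(t) - x^{*}\bigr), \qquad k > 0,
\]

where x(t) is the system’s current state (say, room temperature) and x* its goal state (the thermostat setting). Equilibrium here is dynamic: x(t) converges on x* and is restored after any environmental perturbation. Since this is all a thermostat does, the worry was that cybernetics made organisms look like nothing more than elaborate versions of the same loop.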

This last point is worth bearing in mind when thinking about whether cyborgs are simply enhanced humans or something other than human. In the balance hangs the difference between, respectively, the transhuman and the posthuman. How the matter is decided depends on whether the relevant ‘sense of identity’ is maintained as one undergoes some sort of ‘cyborganization’. But before proceeding further, it is worth recalling that this is not how Wiener’s original audience thought about the matter. They were more focused on his main example: the behavioural synchronisation of a Cold War military aircraft and its human pilot, courtesy of a properly designed interface, or ‘dashboard’, that enabled their integration into a single operating unit. In the late 1940s, this unified perspective on human and machine marked a radical shift from the image of, say, Charles Lindbergh, whose heroic solo transatlantic air crossing only 20 years earlier was widely portrayed as demonstrating mastery of both an uncertain technology and an uncertain nature. Wiener’s (1948/1961) neologism of ‘cybernetics’ makes sense in this context. It etymologically recalls the human pilot’s relationship to a ‘ship’—in this case, an air rather than a sea vessel. So, in what sense can a cyborg be understood as possessing ‘personhood’—perhaps even, as in the case of Wiener’s synchronised aircraft, inheriting the personhood of its human pilot?

At the outset, it is useless to think of cyborgs either as the next stage of human evolution or as a stage of evolution that goes beyond the human, which is how transhumanists and posthumanists, respectively, tend to understand them. There is a genuinely open question about what it will mean to be ‘human’ in the future, and the existence of cyborgs provides a genuinely open space for considering the matter. At a strictly biological level, neither Lamarckians nor Darwinians hold that Homo sapiens—or any other species—evolves. Only life itself evolves, and species are simply the conventional names we give to space–time slices of that evolutionary process. Even at the genetic level, there is no clear cut-off point between the ‘human’ and the ‘non-human’. In this respect, modern evolutionary theory is indifferent to the trans-/post-human interpretations of ‘cyborg’. Indeed, ‘human evolution’ is itself a social scientific turn of phrase that owes more to Enlightenment doctrines of progress than to anything specifically biological. However, it does not follow that the quest for the relevant ‘sense of identity’ is mysterious or even very futuristic. In fact, it is no more than a revamped statement of John Locke’s 300+-year-old definition of personal identity in chapter 27 of Book II of the Essay Concerning Human Understanding (Locke 1690).

Locke discussed personal identity in terms of the backward extension of consciousness. He is nowadays read as meaning an individual remembering his or her past, which in turn informs how s/he experiences the world now. Philosophers routinely call Locke’s approach ‘psychological’, but he was really talking about something much more abstract, which he himself called a ‘rational being’. Indeed, Locke went so far as to call personhood a ‘forensic’ quality, namely, one related to accountability, which has both a private and a public character. I can feel accountable by identifying with some past event or action, and others can hold me accountable by getting me to agree to such a self-identification. Personal identity is formally established when the two judgements coincide. This typically happens in a courtroom during a trial, say, when one is made to admit to a crime. But at a deeper level, it is related to the etymology of persona, the mask worn by actors in classical drama to indicate a character. In this respect, what happens on stage is simply much more prescribed than in a trial.

Here it is worth recalling that Plato’s main objection to drama was that the use of masks licensed an alternative version of reality, just as some people today object to the use of ‘avatars’ in the online world as absolving them of moral responsibility. Locke’s orthogonal view is that the mask—that is, the person—always needs to be constructed in situ: One does not begin by presuming a distinction between a ‘real’ and a ‘fake’ self. This provides the key to what Locke really meant when he described the mind as a tabula rasa (‘blank slate’). He was drawing attention to the ongoing situated nature of the mind’s constitution, even at its most fundamental level. In this context, his seemingly paradoxical appeal to ‘innate ideas’ is best understood in today’s terms as simply referring to the ‘platform’ on which many different programmes might be executed.

Locke’s original audience would have been struck by the novel idea of personhood as something that is not legally prescribed but rather is the outcome of a struggle over attribution between an embodied individual and a larger context of action in which the individual is embedded. Before Locke, personhood had been seen primarily as an attribute not of physical individuals as such, but of their role in, so to speak, society’s ‘meta-drama’. One’s legal identity had been largely defined by membership in a collective or corporate entity. You were a lord/peasant or citizen/priest, depending on whether you entered by birth or by examination, respectively. (Keep in mind that most ‘citizens’ in Locke’s day were residents of city-states, many of whom were what we would now call ‘highly qualified immigrants’.) Under those circumstances, it was still common in Locke’s day for the wrong individual to be convicted of a crime simply because the evidence pointed to someone of that person’s class as having committed it. The great modern innovation of the doctrine of habeas corpus was its requirement that a specific individual be tied to a specific crime before that individual is formally charged.

It is also worth recalling that ‘consciousness’ had been coined only a few years prior to Locke’s discussion by the Cambridge Platonist philosopher Ralph Cudworth, who defined it as the aspect of our humanity that interfaces with God, which is experienced as a capacity to observe oneself from the outside, an idea closely associated with ‘conscience’, still the French word for consciousness (Fuller 2015: 7–8). Consciousness is thus the ‘internal courtroom’ in which the struggle over personal identity is conducted. The paradigm case of this conception of consciousness was Martin Luther’s electrifying 1521 self-presentation at the Diet of Worms. Sociologists following George Herbert Mead would later speak in secular terms of justifying yourself in the context of seeing yourself as others see you. The difference between Cudworth and Mead is, so to speak, a 90-degree axial rotation of what it means to ‘stand outside’ yourself: instead of standing above (Cudworth), you stand alongside (Mead), as befits a secular democracy in which you are judged by your peers rather than by an overseeing deity. And of course, from the cybernetic standpoint, one might think of a pilot’s encounter with his or her airplane’s dashboard as the site of ‘consciousness’ in the cyborg world.

Substrate-Neutral Consciousness and Cyborg Identity

Locke’s redefinition of personal identity had many profound implications that over the next two centuries slowly reshaped our understanding of human psychology. Pierre Janet and Sigmund Freud could turn ‘false memory syndrome’ (what Ian Hacking (1995) provocatively re-styles as ‘false consciousness’) into a topic of investigation at the end of the nineteenth century because by that point Locke’s ‘liberal’ sense of personhood had been fully realized in social, economic and legal institutions. ‘False memory syndrome’ would not matter if the memories in question were ones that someone in the individual’s position would likely have had, regardless of whether the specific person on trial actually had them. To be sure, even today, under a variety of circumstances, including legal settings, people are allowed to justify themselves by appealing to what real or imagined others in their position would or would not have done. Indeed, the great sociologist Emile Durkheim’s nephew, Marcel Mauss, argued that a relative indifference to the truth or falsehood of memories has been the default norm across most cultures, providing the psychosocial basis of mythmaking.

Mauss’ observation reveals the truly revolutionary import of Locke’s proposal that one’s sense of identity is neither an inherited status nor even a prerogative of office, but a different sort of social achievement, one ascribed to a specific individual. The very idea of ‘cyborg persons’ revisits this proposal with a vengeance. Psychologists today easily grant that our sense of personal identity depends on the sort of story we are allowed to tell about ourselves, since we cannot ever be sure that a particular memory corresponds to something that really happened, let alone happened to us as unique individuals. In the end, it boils down to ‘owning’ the memory. But this Lockean insight immediately raises the question of the jurisdiction of ownership, or the ‘boundary of the self’, in metaphysical terms. The late Oxford philosopher Derek Parfit did the most in recent times to register this insight in a new key, especially in his book, Reasons and Persons (Parfit 1984). It inspired the seminal transhumanist theorist Max More’s (1995) doctoral thesis, The Diachronic Self, which opened the door to a generalized sense of ‘morphological freedom’ with regard to personal identity.

Common to Locke, Parfit and More is the intuition that consciousness is, at least in principle, ‘substrate-neutral’, which is to say, executable on a variety of platforms. Thus, the same consciousness could be embodied in multiple forms in temporal sequence, as in the reincarnation of an immortal soul. But equally, there could be simultaneous multiple embodiments, at least for a brief period—say, if one’s consciousness were downloaded into a machine, while the biological source of that consciousness remained intact through the translation process. Of course, the properties of the specific substrate supporting the consciousness—the mechanical and the organic—would be expected to bias the two versions of the same consciousness differently as their respective lives proceed, resulting in different identities over time. This point is relevant to the distinction between virtual reality and augmented reality, which is at the core of what might be called ‘cyborg consciousness’. I shall return to it shortly, in the context of Neil Harbisson, a performance artist and prominent cyborg rights activist, whom Lukaszewicz Alcaraz interviewed.

In a state of morphological freedom, personal identity turns on self-identification, understood as an extreme form of autopoiesis. We normally think of self-identification as involving a relatively stable, continuously existing physical body that is coextensive with the ‘self’. But the Lockean suspends this assumption. Indeed, More radicalises a tendency implicit in Parfit’s thought by basing self-identification simply on informational continuity, which in turn allows, say, a resurrected dead being to resume its former life if it agrees to identify with that life. However, if the resurrected entity somehow refuses to identify with its former life, then it becomes a different person. More’s subsequent work for Alcor, the iconic US cryonics firm, suggests that he takes this argument as mainly licensing the resumption of the same life even after the most radical biological disruption of all, death. But of course, the same argument can justify the resurrected entity deciding to become someone else by revaluing its past so as to effectively ‘disown’ its old self. Moreover, such ‘disownership’ need not rely on the amnesia-like effects that characterise a ‘dissociative’ personality of the sort that concerned Janet and Freud. Rather, it could simply involve a radical reinterpretation of one’s previous experience, otherwise perfectly recalled. Indeed, a striking feature of More’s thesis is a comprehensive critique of the philosophical sense of ‘memory’—that is, as a relatively passive repository of ideas, images and sensations that happen to come to mind whenever we engage in conscious thought—as the psychological basis for establishing personal identity.

In effect, More pushed Locke’s ‘forensic’ conception of personhood to the limit, rendering self-identification literally a matter of ‘selective memory’ of what one wants to associate with oneself. This sounds very Nietzschean—and it is. But it is also familiar from the earliest days of Christian theology. Origen of Alexandria made a point of linking the Biblical episodes of Jesus’ Transfiguration (when he first becomes aware of his divinity) and Resurrection (when he returns in his divinity as ‘Christ’). The hermeneutical point of this connection was to demonstrate that Jesus’ Crucifixion was not some colossal human error on the part of those who convicted and sentenced him, but an essential feature of how Jesus became Christ. It was the ultimate ‘conversion experience’, a physical rupture so severe as to enable Jesus to reorganize his life, resulting in a Gestalt shift in how he subsequently went forward in the world. To be sure, there was ‘informational continuity’, in More’s sense, between Jesus’ Transfiguration and Resurrection. But the significance of that continuity had been radically altered—‘transvalued’, Nietzsche would say—by the seriousness of the alteration made to the platform conveying it, namely, the Body of Christ.

Unsurprisingly perhaps, Origen remains a controversial figure in Christian theology. He has been understood (perhaps rightly) as suggesting that Jesus simply in his human form was capable of becoming Christ, with God doing little more than allowing it to happen, which in turn sets an example for all humans to follow. The main legacy of this line of thought was the Arian heresy, a red thread that runs from Europe’s early modern Scientific Revolution to contemporary transhumanism. It has played out in popular culture as the story of Faust, which has been given several different endings, Goethe’s classic version being among the more hopeful. Presupposed in this entire trajectory has been the human need for self-enhancement in order to recover the divinity lost as a result of Original Sin, which following St Augustine was taken to be a spiritual defect that is genetically transmitted across all human generations, starting with the fallen Adam and Eve. The late US historian David Noble (1997) took the measure of this development in a book shrewdly entitled The Religion of Technology, the scope of which extended from the mechanical and disciplinary regimes of medieval monasticism to the pharmaceutical promises of today’s genomic medicine. An intuitive interpretation of Noble’s thesis, one that transcends the sacred/secular divide, is that, no matter how healthy and successful we may appear, we see ourselves as ‘always already’ disabled. It is this self-understanding of humanity that makes the cyborg an attractive figure.

The Biotech Convergence: Harbisson’s Antenna

Neil Harbisson is a pivotal figure in this context. He was born unable to distinguish colours and became a cyborg once an antenna was surgically implanted in his skull to enable him to translate light into sound. He nowadays earns a living by performing a distinctive form of music that reflects his acquired synaesthetic capacity. It would be fair to describe Harbisson’s access to reality as both virtual and augmented. It is ‘virtual’ in the sense that the antenna allows him to do what others normally do with their eyes, namely distinguish colours. But his access is also ‘augmented’ in the sense that he is now able to distinguish ‘more colours’, so to speak, which has fed into his musical compositions, generating new visual and auditory experiences for his audiences. Thus, Harbisson’s antenna straddles the line between simulating a missing sense to restore its natural function and extending his sensory capacities into realms of experience not normally had by others. Moreover, it would be difficult to separate the therapeutic and enhancement functions of the antenna unless he were legally forbidden from producing music—that is, prevented from converting the virtual into the augmented, and thereby deprived of both his self-expression and a unique livelihood. This point confirms Harbisson’s genuine cyborg status.

The duality of the virtual and the augmented in Harbisson’s cyborg existence epitomizes a schism that has always existed in the histories of both biology and technology. A convenient way to characterize the transition from Aristotle to Darwin in the history of biology, which of course did not happen straightforwardly, is in terms of a shift from thinking about organisms as purpose-made to thinking about them as works in progress. In the Aristotelian worldview, one observes how organisms normally function, which in turn sets the normative standard by which they are judged. This also suited a specific—not necessarily correct but nonetheless very influential—Christian understanding of divine creation in which God designed each type of creature with its own function as part of maintaining some common world order. Linnaeus’ coinage of Homo sapiens in the mid-eighteenth century still reflected this sensibility. On this view, a cyborg is understood purely in therapeutic terms. Thus, Harbisson’s antenna is regarded as remedying a disability, with the music he produces seen as ‘unnatural’ or perhaps even not really music.

Those familiar with natural law theory will recognize this attitude as influencing judgements about such other ‘artificial’ interventions in the life process as contraception, abortion, stem cell research and euthanasia. In contrast, Darwin effectively treated organisms as raw material with which nature experiments (or ‘selects’) through a process of elimination (aka death). Nothing is ever truly complete or adequate from the standpoint of nature. Rather, the adaptiveness of organisms is always being tested under new conditions. Although Darwin knew no more about genetics than Aristotle did, his worldview better suited biology’s ‘Newtonian’ moment, which came when the Augustinian monk Gregor Mendel proposed that organisms were composed of combinable factors, suggesting a more experimental, even lottery-like view of how biological inheritance relates to survival.
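The ‘lottery-like’ character of combinable factors is clear from the standard monohybrid cross, offered here only as a worked illustration: crossing two hybrids that each carry one dominant and one recessive factor, Aa × Aa, yields offspring combinations in the ratio

\[
AA : Aa : aa = 1 : 2 : 1,
\]

so the dominant trait appears in roughly three out of four offspring, while which combination any particular offspring receives is a matter of chance.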

To see the connection between biology and technology, consider the role of ‘mutation’ in this lottery-like conception of life. Once life is understood as occasionally throwing up highly improbable organic forms, there is a question as to whether they die, adapt or set a new standard to which others then either adapt or else die. Geneticist Richard Goldschmidt (1940) coined the term ‘hopeful monster’ to characterise a mutation that manages to ‘set a new standard’ by occupying an ecological niche previously inhabited by another species and refashioning it for its own purposes and those of its future generations. The organism’s distinctive ‘mutant’ features, which at first put it at a disadvantage, turn out to be its long-term strength, given the vicissitudes of natural selection. To be sure, Goldschmidt’s work has been largely marginalised by modern evolutionary biology, yet it has provided inspiration for thinkers ranging from Karl Popper to Donna Haraway.

However, a problem with Goldschmidt’s reception, especially after Hiroshima, was that the very idea of mutation—now understood as the product of nuclear radiation—became the subject of a protracted political debate. Science fiction movies of the period routinely dramatized the stakes. On the one hand, US Cold War strategist Herman Kahn (Stanley Kubrick’s model for Dr Strangelove) claimed optimistically that the range of mutations of Homo sapiens capable of surviving a nuclear war might finally serve to eliminate various persistent forms of social discrimination. On the other hand, most evolutionary biologists cautioned against any easy celebration of mutation, with some following US ecologist-activist Barry Commoner in calling for a ban on nuclear weapons altogether. We live with a more subdued version of this debate today, as Kahn and Commoner anticipated the terms of reference of today’s trans-/post-human debate—at least in terms of their respective attitudes toward risky innovations: Kahn the more ‘proactionary’ (i.e. transhumanist) and Commoner the more ‘precautionary’ (i.e. posthumanist). Arguably, the nuclear ‘doomsday’ scenario that originally concerned Kahn and Commoner, and dominated the international relations imaginary from the late 1940s to the late 1980s, marked the first time that the potential for universal cyborganization was countenanced—albeit as the radiation fallout of a nuclear confrontation.

Recall that until the mid-nineteenth century, ‘innovation’ was normally taken to mean ‘monstrosity’ in the pejorative sense of, say, Mary Shelley’s Frankenstein. It was only once industrial innovations started to boost the Europeanized world to unprecedented levels of productivity and wealth that the relevant value reversal took place. Indeed, what Marx and Schumpeter called the ‘creative destruction’ of markets was no longer an unwelcome disturbance to traditional ways of life. On the contrary, it became the driving force of the capitalist economy. Characteristic of these now positively valued ‘innovations’ was their novel combination and repurposing of already existing technologies. For example, Henry Ford managed to replace the horse as the primary form of personal transport in the early twentieth century by attaching a gasoline engine to a four-wheeled bicycle. The legacy of that profound shift in the physical constitution of the vehicle has been our increased reliance on petroleum, our increased carbon emissions and our increased segregation from the environment. For better or worse, Ford perhaps did the most to show in concrete terms how humans might acquire independence from and mastery over nature. Such an achievement certainly qualifies as a ‘Faustian Bargain’.

We might think of the combinatorial mentality of Ford—and indeed that of his first employer Thomas Edison—as akin to Mendel’s original pea experiments, which involved presuming that an organism’s traits are the outcome of a trial-and-error process; the search for the underlying algorithms set the research agenda of the science we call ‘genetics’. The word ‘bricoleur’ is not out of place in this context: Indeed, in a famous Science article, ‘Evolution and Tinkering’, the great French molecular biologist François Jacob (1977) himself adapted the term from the anthropologist Claude Lévi-Strauss to describe the workings of natural selection. Thus, it is not unreasonable to regard the cyborg as literally a ‘hopeful monster’ in both the biological and technological senses: it is at once a mutation and an innovation. The best way to think about the implications of this point is through a question that is familiar from Lukaszewicz Alcaraz’s home discipline of aesthetics: Does form follow function, or function follow form? We can think of the history of Western art as the movement from presuming the former answer to presuming the latter—from ‘classical’ to ‘modern’, as exemplified by Aristotle’s Poetics and Lessing’s Laokoön, respectively. In this respect, the cyborg epitomizes the modern attitude—very self-consciously, in the case of Neil Harbisson.

A ‘Strong Attractor’ or the ‘Enemy Within’?

The classical approach to art assumes that one grasps the purpose of something in nature and that the point of art is to reproduce the original, perhaps with improvements but within the design of the original. On this view, art is not in the business of converting the natural into the ‘unnatural’. Indeed, artifice is generally regarded as something best left hidden (e.g. not seeing a painting’s brushstrokes or an actor’s makeup), lest it be deemed ‘monstrous’. For the classicist, ‘mastering’ an art amounts to exploiting a medium—but only within its ‘natural’ expressive limits. Unsurprisingly, classicists regard aesthetic perception as a refined sense of pleasure, a domestication of natural sensory experience. The preoccupation with ‘taste’ in the eighteenth-century writings of Shaftesbury and Hume was probably classicism’s last and most sophisticated stand—at least until its recent revival by evolutionary psychology. (But that is a story for another day.) In any case, that was the attitude interrogated by Lessing, who concluded that it was silly to think that some media are inherently better suited than others for expressing certain themes. For him, the point of art is to realize potentially any theme in the medium’s terms, with the extent of the artist’s actual achievement resting on the outcome of that struggle.

Modernists following Lessing increasingly demonized the ‘classical’ attitude as ‘representationalism’ for failing to take advantage of a medium’s full expressive capacity, which in the hands of a genuinely ‘creative’ and ‘original’ artist could result in works that prompt experiences that would never happen naturally. Such experiences may be described as ‘counter-intuitive’, ‘shocking’ or even ‘sublime’. (Think Picasso, Joyce, Schoenberg.) If the artist is trying to reproduce anything, it is the sense of awe associated with one’s experience of God. This was the context in which the artist as ‘genius’ came to the fore in early nineteenth-century Romanticism. Put crudely, when function started to follow form in art, artists became more concerned with how their works looked to them (as proxies for their audiences) than with how the works looked in relation to their putative objects. Thus, the viewer as a potential creator of the artwork became constitutive of the work’s interpretation and value. This shift in perspective gave unprecedented scope to ‘critics’ who grant the prima facie validity of the artist’s chosen theme and medium, while possibly concluding that the work fails ‘on its own terms’. No reference to ‘nature’ is needed to reach either a positive or a negative critical verdict on the artwork. On the contrary, artists are judged by the power of their artifice—that is, whether they exploited their chosen medium to maximum effect (Fuller 2020b).

From this standpoint, Harbisson is the ultimate modernist artist, the full-bodied exemplification of Locke’s forensic conception of the self. His cyborg existence is literally a collaborative project, the platform for which happens to be biological. The nature of the struggle for ‘cyborg rights’ that he has spearheaded with Moon Ribas is itself a work in progress. If cyborgs come to be legally entitled to rights, will those rights be granted to them as disabled or as enhanced humans? Of course, cyborgs might also achieve rights as a distinct class of non-human entities, which would be very much in a ‘posthumanist’ spirit. But that then invites the prospect of ‘fully abled’ humans voluntarily migrating to cyborg status. After all, if someone admires the art that Harbisson has achieved with his antenna, why should others not try to emulate it, perhaps by disabling/enhancing other parts of their bodies in the name of ‘new media’? Under such fluid ontological conditions, will the cyborg be a ‘strong attractor’ that sets a trend for others to follow, or will it be seen as the ‘enemy within’ that needs to be persecuted and avoided? This question poses a much more realistic challenge to humanity’s future normative order than the simplistic ‘them vs. us’ currently portrayed in dystopic fantasies of superintelligent artificial intelligences. In the end, we should take the idea of ‘cyborg persons’ seriously because ‘we’ are becoming ‘them’ (Fuller 2020a: chap. 2).