Introduction

Derived from stem cells, organoids are laboratory-generated structures that display various features of particular organs. Although a wide variety of organoids have been created, it is human neural organoids that are of most interest to bioethicists and those who study consciousness. Variously described as ‘mini-brains’ [1,2,3,4], ‘lab-grown brains’ [5], or ‘brains in a dish’ [6, 7], neural organoids exhibit significant structural and functional commonalities with the developing human brain. In particular, neural organoids have been found to recapitulate neurogenesis and the formation of rudimentary cytoarchitecture [8, 9], exhibit synchronised spiking activity in organoid-derived neural populations [10], and, perhaps most strikingly, produce complex oscillatory patterns that resemble those found in developing, preterm brains [11].

The commonalities between neural organoids and the developing brain are robust enough to raise questions about the potential for organoid consciousness [5, 12,13,14,15,16,17,18,19,20]. As Andrea Lavazza [21] has put it, “The fact that in vitro brain organoids can recreate the cytoarchitectonic development typical of a human brain and manifest a complex functional activity, including coordinated electrical activity, suggests that human cerebral organoids, if further developed for medical research and patient care … may also develop some form of rudimentary consciousness.”

The question of organoid consciousness is important for a number of reasons. For one thing, knowing whether (certain types of) neural organoids are conscious would provide a useful data point for theories of consciousness. In the same way that a complete theory of consciousness should explain why consciousness emerges in human development when it does and why it is distributed across the animal kingdom in the ways that it is, so too it ought to explain why certain types of organoids do—or, as the case may be, do not—possess the capacity for consciousness. Identifying organoid types with the capacity for consciousness would also provide us with a novel range of model systems on which to intervene and manipulate.

However, it’s the perceived ethical consequences that have done most to drive interest in the question of organoid consciousness. The capacity for consciousness is widely taken to have normative significance, bestowing on entities that possess it a basic kind of moral status [22,23,24]. Although ascribing consciousness to organoids wouldn’t necessarily generate moral or legal prohibitions on their production or use in research—after all, we already allow other entities that are widely regarded as conscious (such as rats) to be used for scientific research—it would introduce into debates about organoids a range of considerations that would not be appropriate absent appeals to consciousness. With this in mind, Niikawa et al. [25] have argued that we should adopt a precautionary approach to the creation and treatment of neural organoids, proceeding (pro tem) on the assumption that current neural organoids are conscious and can undergo a range of experiences. A number of other authors have suggested that although current neural organoids are not likely to be plausible candidates for consciousness, near-future advancements in organoid size, maturity, and complexity would motivate the adoption of a precautionary attitude [12, 23, 26]. Evaluating such claims is clearly both an urgent and important matter.

What is required for the justification of a precautionary approach towards some class of entities? We certainly don’t require evidence beyond reasonable doubt. After all, precautionary approaches are widely accepted in a number of other domains in which questions about the presence of consciousness are highly contested, such as those that involve infants, patients suffering from severe brain damage, and various species of non-human animals [27]. At the same time, the mere fact that we can’t rule out the possibility that a certain class of entities might enjoy the capacity for consciousness is insufficient to motivate a precautionary approach to their creation or treatment. If it were, the scope of the precautionary approach would need to be extended far beyond what’s plausible, for there are accounts of consciousness which take seriously the possibility that plants, large language models and even expander graphs might be conscious. Instead, what’s needed is ‘reasonable suspicion’ that the relevant system has the capacity for consciousness. We take a precautionary approach to be appropriate only if the ascription of consciousness to the members of the relevant class is a live possibility given assumptions that command broad assent from within the community of experts. This condition is met by neonates, certain types of severely brain-damaged patients, and a wide variety of non-human animals; thus, a precautionary approach to their treatment is appropriate. The question here is whether this condition is also met by neural organoids.

The paper proceeds as follows. We begin with a more detailed look at the variety of neural organoids, focusing on disembodied neural organoids (DNOs) as our central object of investigation. We then consider and reject ‘theory-first’ approaches to the question of consciousness in DNOs before developing two (relatively atheoretical) arguments against the possibility of consciousness in DNOs. The first argument appeals to the importance of embodiment and environmental embeddedness for consciousness; the second focuses on the role that representations play in underpinning consciousness.

The Varieties of Neural Organoids

Crucial to any discussion of consciousness in organoids is a recognition of the diversity of organoid types. Following Pașca et al. [28], we distinguish between regionalised neural organoids, unguided neural organoids, and assembloids.

A regionalised neural organoid is a lab-grown neural structure that recapitulates only a determinate brain-like region. To achieve this, specific signalling cues are externally introduced to the neural stem cells to guide differentiation towards particular cell fates. Since optic-cup organoids were first created in 2011 [29], regionalised organoids have been produced that approximate rudimentary features of the pituitary gland [30], forebrain [31], hippocampus [32], cerebellum [33], ganglionic eminence [34], midbrain and hypothalamus [35], thalamus [36], and the choroid plexus [37]. The main advantage of regionalised neural organoids is their relative replicability, which is superior to that displayed by neural organoids formed by unguided protocols. However, the very features that contribute towards their replicability also prevent them from sharing the complexity of their in vivo counterparts, making them less-than-comprehensive models of the brain structures that they are targeting [1, 2].

An unguided neural organoid is a multi-region neural structure that is promoted to grow as a whole self-organising unit [8, 9, 38]. In contrast to the guided protocols of regionalised neural organoids, unguided organoids are governed by undirected protocols, in which neural stem cells are provided with a culturing medium that encourages them to direct their own differentiation. Sometimes referred to as ‘whole-brain organoids’, unguided organoids more closely recapitulate the complexity of the developing in vivo brain than regionalised organoids do. For example, unguided organoids develop internal divisions that are akin to those found between distinct brain regions. At the same time, unguided organoids develop idiosyncratically. For instance: cell types appear in random locations and proportions; the same type of region may develop more than once; and certain regions or cells may be completely missing [39, 40]. The heterogeneity of unguided organoids undermines their utility in many research contexts, which is why many labs focus on regionalised organoids.

Neural assembloids (hereafter: “assembloids”) are created by combining separately generated regionalised organoids with each other or with various kinds of specialised cell types [41, 42]. For example, a dorsal forebrain-like organoid can be fused with a separately generated ventral forebrain-like organoid to create a more complete forebrain-like assembloid [34]. Assembloid protocols offer almost unlimited potential for variation and augmentation, allowing neural organoids to be integrated with non-neural organoids—such as spinal organoids [43], vascular organoids [44], and retinal organoids [45]—thus equipping neural organoids with sensorimotor capacities. The potential of assembloid protocols extends to the possibility of integrating neural organoids with robotic or computer interfaces, thus generating cyborg assembloids of various kinds.

Although questions about the possibility of consciousness in organoid-involving systems with sensorimotor capacities are important, the issues that they raise are fundamentally different from those raised by the possibility of consciousness in regionalised and unguided neural organoids, and we will set them to one side in order to focus on the possibility of consciousness in disembodied neural organoids (DNOs). Consciousness in a DNO would be utterly unlike any kind of consciousness with which we are currently familiar. It would constitute an ‘island of awareness’—a stream of consciousness “whose contents are not shaped by sensory input from either the external world or the body and which cannot be expressed via motor output” [46]. Are islands of awareness possible? Indeed, could we ever have grounds for thinking that an organoid supported an island of awareness?

How should we approach the question of DNO consciousness?

Although the question of organoid consciousness is in some sense as new as organoids themselves, it is a version of a very old problem, the problem of other minds: how can we determine which kinds of systems have an experiential point of view and which kinds of systems don’t? Ascriptions of consciousness to other human beings are typically based on their intentional behaviour—especially (but not exclusively) their verbal behaviour. We rely on people’s actions to figure out not just what they are conscious of but whether they are conscious at all. However, appeals to intentional behaviour are of little help when it comes to the question of DNO consciousness, for DNOs lack the capacity for behavioural output. Given the DNO predicament, how might we figure out if certain types of DNOs have the capacity for consciousness?

One response to this question would be to take a theory-first approach. According to Giulio Tononi and Christof Koch, in order to address questions about the distribution of consciousness “we need not only more data but also a theory of consciousness—one that says what experience is and what type of physical systems can have it” ([47]: 1). Peter Carruthers makes a similar claim in the context of debates about animal consciousness: “one needs to know what consciousness is before one can ask about its distribution across the animal kingdom” [48].

It might be argued that the theory-first approach puts the cart before the horse, and that we first need to identify the distribution of consciousness before we can develop an adequate theory of it. Applied to DNOs, the worry here is that unless we already know whether DNOs are conscious, we won’t be able to develop a comprehensive theory of consciousness—or at least a theory that applies not just to human beings but also to DNOs. Instead of starting with a theory of consciousness, we need to settle the question of DNO consciousness on independent grounds and then use that answer to constrain our account of consciousness. Call this the ‘cart/horse objection’.

In response to the cart/horse objection, one might argue that it is possible to justify a theory of consciousness without making assumptions about the distribution of consciousness. The account of consciousness defended by Tononi and Koch—the Integrated Information Theory (IIT) [49]—adopts precisely that approach. Tononi and Koch argue for IIT on the basis of what they call ‘phenomenological axioms’—claims about the structure of experience that are introspectively accessible and self-evidently true of all possible forms of consciousness. As one of us has previously argued [50], this approach is unlikely to succeed, for the alleged examples of phenomenological axioms to which IIT appeals are not plausibly treated as axiomatic. More generally, it seems unlikely that an account of consciousness could be justified along axiomatic lines, for that’s simply not how accounts of biological or psychological phenomena are justified.

In our view, the cart/horse objection goes wrong not because it is possible to develop a comprehensive theory of consciousness independently of claims about the distribution of consciousness, but because it fails to recognise that developing a theory of consciousness and addressing the distribution question go hand-in-glove: our account of how consciousness is distributed ought to be informed by our best account of its nature, and our best account of the nature of consciousness ought, in turn, to be informed by our best account of its distribution. There is a circularity here, but it is virtuous rather than vicious. In Hasok Chang’s [51] useful phrase, what’s required here is a process of ‘epistemic iteration’. We begin with a tentative account of the distribution of consciousness (‘these entities are conscious’, ‘these entities are not conscious’) and revise that account as we gain a better understanding of the nature of consciousness (and thus of its markers or indicators). Revising our conception of the distribution of consciousness leads, in turn, to revising our account of its nature, and so on [52].

A further problem with the theory-first approach is that no single theory—or even theoretical framework—commands general assent. A recent review identified 22 neurobiological theories of consciousness [53, 54]. Although some of these theories have significant overlap and might have the same implications with respect to the distribution of consciousness, other theories have distinct implications for which kinds of organoids (if any) would have the capacity for consciousness. The embarrassment of theoretical riches that we face here might not be so problematic if the trend in consciousness science was towards integration, cohesion and convergence. Unfortunately, that seems not to be the case; indeed, if anything, the field is witnessing a proliferation in theories of consciousness [55].

Of course, there is nothing to prevent individual researchers from appealing to their favoured theory of consciousness in order to motivate an account of DNO consciousness (see [16, 20]), but because no theory of consciousness commands widespread endorsement, appeals to theory won’t provide the kind of well-grounded suspicion that is needed to trigger the application of the precautionary principle.

Another approach to figuring out whether DNOs fall within the distribution of consciousness focuses on the (putative) non-behavioural markers of consciousness that have been developed in recent years, and which have been applied to questions of consciousness in infants, brain-damaged individuals, and non-human animals. Unfortunately, few of these markers can be applied to DNOs. For example, the capacity to produce visual imagery [56] or allocate attention [57] on command has been employed as a marker of consciousness in brain-damaged individuals, but DNOs lack the capacity to comprehend commands. Similarly, the presence of a reliable ERP response (the so-called ‘P300’ or ‘P3b’ response) to second-order auditory oddballs has been employed as a marker of consciousness in severely brain-damaged patients [58] and infants/foetuses [59, 60], but again, this test cannot be applied to DNOs, which (by definition) lack the machinery required for processing sensory input.
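
To make the logic of this marker concrete, here is a toy Python simulation of an oddball ERP analysis. The numbers, the single-channel setup, and the analysis window are our own illustrative simplifications, not the actual protocols reported in [58,59,60]: epochs time-locked to frequent ‘standard’ stimuli and rare ‘deviant’ stimuli are averaged, and a reliably larger late positivity in the P3b window (roughly 300–600 ms post-stimulus) is what the marker looks for.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                        # sampling rate in Hz (illustrative)
t = np.arange(0, 0.8, 1 / fs)   # 800 ms post-stimulus epoch

def simulate_epochs(n_trials: int, p3b_amplitude: float) -> np.ndarray:
    """Toy single-channel epochs: Gaussian noise plus a late positive bump."""
    p3b = p3b_amplitude * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return rng.normal(0.0, 2.0, size=(n_trials, t.size)) + p3b

standards = simulate_epochs(400, p3b_amplitude=0.2)  # frequent stimuli
deviants = simulate_epochs(100, p3b_amplitude=1.5)   # rare oddballs

# Average across trials, then compare mean amplitude in the P3b window.
window = (t >= 0.3) & (t <= 0.6)
print(f"standard: {standards.mean(axis=0)[window].mean():+.2f} a.u.")
print(f"deviant:  {deviants.mean(axis=0)[window].mean():+.2f} a.u.")
# A reliably larger deviant response is what the marker looks for; note
# that the paradigm presupposes a subject that can register the stimuli,
# which is precisely what DNOs lack.
```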

There are, however, avenues for probing consciousness in DNOs that are potentially more promising. Perhaps the most prominent of these is the perturbational complexity index (PCI), developed by Marcello Massimini and his colleagues [61]. Here, the cortex is directly perturbed with transcranial magnetic stimulation (TMS), and the complexity of the neural response to that perturbation is measured with electroencephalography (EEG) and summarised as a single index value. Results to date indicate that PCI responses correlate well with pre-theoretical judgments regarding the distribution of consciousness across a wide range of conditions [62, 63]. Because the PCI requires neither sensory input to nor behavioural output from the system being tested, it could, in principle, be applied to DNOs [20, 46].
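
To give a concrete sense of what such an index measures, here is a deliberately simplified Python sketch of a PCI-style quantity: binarise a (channels × time) response matrix and compute its normalised Lempel-Ziv complexity. The binarisation rule, the LZ78-style parse, and the normalisation below are our own illustrative stand-ins; the published PCI pipeline [61] works on statistically thresholded, TMS-evoked cortical source estimates.

```python
import numpy as np

def lz_phrase_count(seq: str) -> int:
    """Count phrases in a dictionary-based (LZ78-style) parse of a
    binary string; a simple proxy for Lempel-Ziv complexity."""
    phrases, current = set(), ""
    for symbol in seq:
        current += symbol
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def pci_like_index(response: np.ndarray) -> float:
    """Normalised LZ complexity of a binarised (channels x time) response.
    High values require activity that is both widespread and non-redundant."""
    # Crude stand-in for the significance thresholding used in the real
    # PCI: binarise each channel against its own mean.
    binary = (response > response.mean(axis=1, keepdims=True)).astype(int)
    seq = "".join(binary.flatten().astype(str))
    n, p = len(seq), binary.mean()
    if p in (0.0, 1.0):
        return 0.0  # a flat matrix has no spatiotemporal structure
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    # For long sequences c(n) * log2(n) / n approaches the entropy rate,
    # so this ratio is near 1 for unstructured noise and near 0 for
    # highly stereotyped (e.g. slow-wave-like) responses.
    return lz_phrase_count(seq) * np.log2(n) / (n * entropy)

rng = np.random.default_rng(0)
print(pci_like_index(rng.normal(size=(64, 300))))                  # rich response
print(pci_like_index(np.tile(np.sin(np.arange(300.0)), (64, 1))))  # stereotyped
```

As the interpretive worry in the next paragraph makes plain, computing such a number for a DNO is the easy part; knowing what value would count as evidence of consciousness is not.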

That said, employing PCI to address the question of DNO consciousness is far from trivial. For one thing, new methods will need to be developed in order to apply PCI to DNOs (although see [64]). More significantly, it is unclear how the perturbational complexity that DNOs might exhibit ought to be interpreted. In the case of humans, perturbational complexity can be benchmarked by relying on our pre-theoretical measures of consciousness (such as the capacity for verbal report and intentional behaviour), but given the wide-ranging and radical differences between humans and DNOs, it is unclear whether we can rely on those same benchmarking studies when it comes to identifying the level of perturbational complexity that might be indicative of consciousness in DNOs. Here, it is important to recognise that it is not just false negatives that should be avoided—we also need to avoid false positives when it comes to the ascription of consciousness. And, to avoid false positives, we would need to know what level of perturbational complexity a system might require in order to be a plausible candidate for the ascription of consciousness. To date, we do not have a well-grounded answer to that question.

DNOs and two constraints on consciousness

Rather than viewing the problem of DNO consciousness through either a theory-based lens or a marker-based lens, we suggest that at present it would be more appropriate to view it through a constraint-based lens. The idea, in other words, is to consider what broadly-accepted constraints on consciousness might suggest with respect to the prospects of consciousness in DNOs.

The remainder of this paper explores two such constraints: the first focuses on embodiment and the role of the environment in enabling consciousness; the second concerns the centrality of representations in grounding consciousness. We argue that both constraints are entailed by a wide range of theories of consciousness, and that each provides significant reason to be sceptical of the idea that DNOs could qualify as members of the ‘consciousness club’.

The embodiment constraint

One of the central debates in the study of consciousness over the last three or so decades has focused on the role played by the body and the wider environment in grounding or explaining consciousness. We can group theorists into two broad categories: internalists and externalists. Roughly speaking, internalists about consciousness downplay the relevance of the body and the wider environment in understanding consciousness and hold that the factors to which a theory of consciousness ought to appeal are purely intracranial (indeed: typically intracortical). Externalists, by contrast, emphasise the importance of the body and the wider environment, holding that consciousness is constitutively related to factors that are ‘outside the head’. The internalist/externalist debate clearly bears on the question of DNO consciousness, but what’s less clear is the precise nature of that bearing. We begin by identifying certain influential forms of externalism that don’t speak directly to the question of organoid consciousness before turning to a form of externalism—‘externalism-lite’, if you will—that does.

One strand in the externalist literature focuses on the idea that the material basis of consciousness isn’t restricted to the brain but can loop out into the body and the perceptual environment. The advocates of this view deny that the brain (or skin) is a ‘magical membrane’ (in Susan Hurley’s memorable phrase) and hold instead that ‘in principle what explains phenomenal qualities can be distributed within the brain, among brain and body, or among brain, body, and embedding environment, depending on the explanatory dynamics’ ([65]: 116; see also [66, 67]).

However, the relevance of this version of externalism to the DNO question is, at best, indirect, for in order to put pressure on the idea that DNO consciousness is a live possibility, we need reason to think not just that consciousness can extend beyond the brain, but that it must do so. No such argument is made by those who defend this form of phenomenal externalism.

A very different form of externalism focuses on the idea that consciousness involves a relation between an organism and its environment. Consider the following passage from Thompson and Cosmelli ([68]: 176):

“…if perceptual consciousness is a certain kind of interactive relationship between an organism and its environment, then a disembodied brain going through the same sequence of internal states as an embodied brain is like a disembodied stomach going through the same sequence of internal states as an embodied one. The disembodied stomach isn’t digesting and the disembodied brain isn’t experiencing, because the necessary contexts of the body and the environment are missing.”

This form of externalism is directly relevant to the organoid question. If true, it would appear to undermine the very possibility of consciousness in DNOs, for DNOs have no interactive relationship with their environments.

The question, of course, is whether it is true. It is one thing to grant that perception itself is relational and that one cannot perceive an object without being suitably related to it. It is, however, quite another thing to grant that perceptual experience—understood to include not only perception proper but also hallucination and illusion—is relational. Relational accounts of perceptual experience certainly have their advocates, but they are very far from being the only games in town (see, e.g., [69, 70]). To take just one influential alternative, many hold that perceptual experience is a kind of representational state, in which the organism represents the world as being thus-and-so (e.g., [71,72,73]). If representationalism is true, then perceptual experience isn’t much like digestion.

Moreover, even if perceptual consciousness is best understood as a relation between an organism and its environment, there are other forms of consciousness to consider, such as sensory imagery, conscious thought, and dream experience. Although relational accounts of these phenomena are sometimes offered, they lack even the prima facie plausibility that characterises relational accounts of perceptual experience. Thus, even if relational considerations were able to show that organoids aren’t perceptually conscious, they would leave open the possibility that DNOs might enjoy non-perceptual forms of consciousness.

So neither of the two versions of externalism that we’ve considered thus far engages with the question of DNO consciousness in a satisfactory manner: the former leaves the question of DNO consciousness unanswered; the latter rules out the possibility of DNO consciousness, but only by appealing to a tendentious theory of perceptual experience. What’s needed, then, is a thesis which speaks directly to the question of organoid consciousness, but does so in a way that is appropriately non-partisan and would—or at least should—be acceptable to the vast majority of theorists.

The following constraint, we suggest, fits the bill:

Embodiment Constraint (EC): Only a brain with a history of embodiment and sensorimotor interaction with the world has a genuine chance of supporting consciousness.

EC speaks directly to the question of DNO consciousness, but is it also acceptable to the vast majority of theorists? It should certainly be embraced by externalists of all stripes, but we suggest that it should also be embraced by many versions of internalism. In fact, although EC is a kind of externalism, one might think of it as ‘externalism-lite’, for it is consistent with many internalist approaches to consciousness.

To see why, consider what is arguably the classical form of internalism: the mind-brain identity theory [74,75,76]. First advanced in the late 1950s (e.g., [77,78,79]), the identity theory draws inspiration from familiar a posteriori identities in the special sciences, identifying conscious states with brain states. For example, an identity theorist might identify pain with particular patterns of activation in the so-called ‘pain matrix’, a network involving the primary and secondary somatosensory cortices, the insula, and the anterior cingulate cortex that is typically active when people experience transient pain. On the face of things, the identity theory might seem to suggest that DNOs ought to be plausible candidates for consciousness. After all (one might think), if conscious states are brain states, and if (as seems plausible) DNOs have brain states, then they could surely have conscious states too.

The problem with this line of thought is that it avoids the key question of whether DNOs have the right sort of brain states. Any remotely plausible version of the identity theory should accept that many (perhaps most?) brain states don’t support mental states of any kind, let alone conscious states. The kinds of brain states seen in the context of epileptic seizures aren’t generally regarded as consciousness-supporting, nor are those which characterise dreamless sleep, deep sedation, or certain forms of severe brain damage, such as the vegetative state (also known as the ‘unresponsive wakefulness syndrome’). But given that only some brain states are consciousness-supporting, we need a reason for thinking that DNOs could have brain states of the right kinds before thinking that they might be viable candidates for consciousness.

And here, we would argue, the identity theorist faces a difficult task, for any account of which brain states are consciousness-supporting and which aren’t must begin with the brains of the only systems that we know to be conscious—that is, organisms with a history of sensorimotor interaction with their environment. (To the best of our knowledge, no one has been able to elicit any kind of behaviour that might be indicative of consciousness from a disembodied brain.) This doesn’t entail that EC is false, but it does shift the burden of proof onto those who reject EC. An analogy with reading might be useful. Although it is conceivable (and perhaps even nomologically possible) that a creature who has no history of interacting with text could acquire the ability to read, as a matter of fact, the ability to read is found only in organisms with an extended history of interacting with words (or proto-words). Similarly, although it is conceivable (and perhaps even nomologically possible) that a disembodied brain might support consciousness, we suggest that, as a matter of fact, the capacity for consciousness is likely to arise only in the context of embodiment and sensorimotor interaction. Organisms that have lost the capacity for sensorimotor engagement with the world might continue to support an ‘island of awareness’ [46] for a period of time, but—we suggest—without feedback from the body and wider perceptual environment the neural organisation required for consciousness would be extremely unlikely to occur.

The general point here is that the brain and the body develop together as parts of an interdependent, life-sustaining system. With the notable exception of organoids, brains are always components of organisms. Although we don’t know at what stage (if at all) the human foetus acquires the capacity for consciousness [80, 81], we do know that the foetal brain is shaped by the body and its uterine environment in fundamental ways. Cellular and regional specialisation occurs in concert with, and in response to, connections and feedback from developing organs and from emerging sensory and motor capacities [82, 83]. The genetically determined neural connectivity that occurs in embryonic development requires pruning and strengthening, and those processes occur as the brain engages in bodily, sensory, and motor interaction [84]. It is well-documented that deficiencies in sensorimotor stimulation lead to developmental deficits [85]. This point can be illustrated with reference to cataracts. Patients who have congenital binocular cataracts removed after the age of 10 experience permanent deficits in visual acuity and have difficulties perceiving shape and form. However, when cataracts develop in adulthood and are removed decades after they form, normal vision returns immediately [84, 86].

The development of a DNO, of course, is not shaped by its environment in the ways in which the development of an ordinary human brain is. The trajectory of its structure is not informed by the demands of regulating a body or by a history of sensorimotor exploration. The thoroughgoing nature of this isolation suggests not merely that DNOs are unlikely to share the kinds of conscious states that we have, but that they are unlikely to acquire the capacity for consciousness at all.

Although we’ve developed the case for EC with reference to the identity theory, the point generalises beyond the identity theory to many other versions of internalism. It applies to those versions of functionalism that identify conscious states with ‘short-arm’ (that is, internally-individuated) functional roles, and to those versions of internalism which refuse to identify conscious states with internal states, but hold only that conscious states are constituted by or supervenient on internal states of some kind.

In response to EC, a critic might point to dreaming, sedation or hallucination as evidence for the claim that neither embodiment nor environmental engagement is required for consciousness. But this line of objection is far from compelling. It is certainly true that the brain can continue to support consciousness even when largely (and perhaps wholly) insulated from input from the body or wider environment. But, to the best of our knowledge, the brains that have such capacities are brains that have also had a long history of embodiment and environmental interaction. The fact that ordinary (that is, embodied and environmentally embedded) brains might have the capacity to generate experiences that are endogenously triggered doesn’t provide any reason to think that extraordinary “brains” (that is, brains that are neither embodied nor environmentally embedded) might also have that capacity.

Similar points hold with respect to the claim that EC might be false because direct stimulation of cortical areas can elicit experiences of various kinds. Activity in the fusiform face area (FFA) of an ordinary human being may be “sufficient” for a visual experience of faces [87,88,89], but as far as we know, FFA activity underpins visual experiences of faces only when suitably integrated with a wide range of cortical, sub-cortical and perhaps even bodily processes. We certainly have no evidence that activating a section of FFA that had been excised from a brain and placed in a petri dish would generate any kind of experience, let alone an experience of faces.

What, then, of the fact that organoids have been observed to produce neural activity reminiscent of the trace discontinu seen in human preterm infants (regular periods of inactivity punctuated by synchronised, high-amplitude oscillations) [11]? Is this not evidence of consciousness in the relevant class of organoids?

It is certainly a striking finding, but we would argue that it provides little evidence in favour of organoid consciousness. First, we lack reason to think that the trace discontinu is directly implicated in consciousness, as opposed to being a more general kind of neural activity that is a precondition for cognitive development. Second, it’s important to recognise that the organoids employed in the Trujillo study were cortical spheroids, a kind of highly simplified, regionalised neural organoid. Had such findings been obtained in organoids with higher degrees of structural complexity, then it might be more reasonable to treat them as providing evidence of consciousness, but in the absence of such complexity it’s unclear what the role of high-amplitude oscillations might be.

The more pressing question raised by the Trujillo study is whether this kind of finding could, in principle, provide evidence of consciousness in DNOs. We allow that it could—after all, it is difficult to say with any certainty what the study of consciousness might reveal. But we can grant that the science of consciousness could, in principle, show that DNOs have neurofunctional properties that suffice for consciousness without granting that we currently have grounds to take seriously the possibility of DNO consciousness.

We conclude this section by drawing attention to an assumption that we have implicitly granted—namely, that DNOs are brains, and that the kinds of internal states they have are legitimately treated as brain states. This assumption should not be uncritically accepted. Consider the once-influential suggestion that visual experience involves oscillations of around 40 Hz in the visual cortex [90]. Clearly, only a system with a visual cortex is capable of instantiating that property. Could a DNO have a visual cortex? That is far from clear. A DNO could, of course, contain a part that is visual-cortex-like, but to be visual-cortex-like is not itself to be a visual cortex. Arguably, only that which has developed as part of an embodied and environmentally embedded brain could be a genuine visual cortex.

Let’s take stock. Although it is evident that the brain is intimately involved in consciousness, and that some parts of the brain are more intimately involved than others, there are good reasons to hold that only neural states that have had to accommodate themselves to the task of regulating a body or tracking the contingencies of its environment have the capacity to support consciousness. By definition, DNOs face neither of these demands, and thus, there are good grounds to deny that they might be conscious. Importantly, this conclusion goes through even if we assume that consciousness supervenes solely on what’s ‘in the head’.

The representationalist constraint

Although there is much disagreement between contemporary theories of consciousness, a surprisingly wide range of theories share a commitment to what we will call a ‘representationalist’ conception of consciousness. Although representationalism in the study of consciousness is typically understood as the thesis that conscious mental states have no conscious properties other than those that are fixed by their representational properties [91, 92], that is not how we are using that term here. Instead, we use ‘representationalism’ to refer to the claim that conscious states (experiences) are representations, and (thus) that consciousness is restricted to representational systems. This notion of representationalism is weaker than the standard notion of representationalism, for one could (and indeed many do) hold that conscious states are representations but deny that their conscious properties are fixed by their representational properties.

The widespread endorsement of representationalism (in our sense of the term) is partly obscured by the fact that many of the most high-profile debates concern the question of how representations are implicated in consciousness, and the participants in these disputes have a shared (albeit rarely articulated) commitment to the idea that consciousness is fundamentally representational. The main divide here is between first-order representationalists, who hold that conscious states are representations of a certain kind, and higher-order representationalists, who hold that a mental state is conscious in virtue of being represented (in a certain kind of way). Some versions of first-order representationalism focus on the ways in which representations are available to guide cognition and behaviour (e.g., Dretske [93], Tye [94], Baars [95], and Dehaene & Changeux [96]). Others focus on the ways in which representations are integrated with each other. For example, Lamme’s [97] local recurrency theory holds that visual experiences are generated when feature-specific representations are bound together to form representations of integrated visual objects by recurrent processing within perceptual cortices, while Merker’s [98] account appeals to the role of mid-brain structures in supporting the integration of representations that capture both an organism’s physiological state and motivations with representations of its perceptual environment. Another version of first-order representationalism is the Attended Intermediate Representation (AIR) theory of consciousness. First advanced by Jackendoff [99] and defended at length by Prinz [100], the AIR theory holds that consciousness arises when intermediate-level representations are modulated by attention. Again, the view treats representations as crucial to consciousness.

Higher-order representationalism also comes in a variety of forms. Some higher-order representationalists hold that consciousness arises when mental states are monitored by a quasi-perceptual process [101, 102], others argue that it arises when mental states are monitored by a thought-like process [103], and still others argue that consciousness involves information-processing forms of meta-representation that are arguably neither perception-like nor thought-like [104, 105]. Orthogonal to the debate about the nature of the monitoring process required for consciousness is a debate about how the monitoring state and the monitored state can (or must) be related. Although most versions of higher-order representationalism require that these states be distinct, self-representationalist versions of the view hold that they must be identical, and that consciousness arises when a mental state takes itself as its own intentional object [106,107,108].

Although first-order and higher-order representationalists disagree about the kinds of representational structure that are required for consciousness, they are all implicitly committed to the following:

Representationalist Constraint (RC): Only representational systems are candidates for being conscious.

We suggest that RC grounds a powerful argument for denying consciousness to DNOs, for it is implausible to think that DNOs could qualify as representational systems.

There are two ways of arguing that DNOs are unlikely to qualify as representational systems. One line of argument focuses on the conditions required for representations to be contentful. According to content externalists, the contents of a representation are fixed by a certain subset of an organism’s relations to its environment (e.g., [93, 109, 110]). Given the extremely coarse-grained nature of the relations that obtain between a DNO and its environment, content externalism would seem to imply that whatever representations a DNO might have could not have any content. But surely contentless representations (if indeed such states are possible) could not support consciousness by the lights of any version of representationalism.

While we are sympathetic to the line of argument just sketched, it won’t convince those who harbour doubts about the tenability of content externalism. A more powerful line of argument, we suggest, focuses not on what fixes a representation’s content but on what makes it the case that a particular feature of the physical world qualifies as a representation in the first place.

The key point here is that physical states qualify as representational only insofar as they (are poised to) play a representational role. The point is perhaps most evident with respect to public representations. An arrangement of seaweed on the beach might spell out the word “dragon”, but it won’t be an instance of that word unless it is incorporated into a system that treats it as an instance of that word. A pattern of sounds produced by the wind knocking one branch against another might share the acoustic properties of a particular Morse code message, but those sounds won’t be representational in the way that the message in Morse code is unless they are incorporated into a representational system. In effect, something qualifies as a representation only if there is something that treats it as a representation. Representations must not only be produced; they must also be consumed.

Although we know of no reason to rule out the possibility that DNOs could acquire the capacity to consume representations, we think it exceedingly unlikely that current generation DNOs have such capacities. Further, we doubt that future generations of DNOs could acquire such capacities in the absence of the kind of input from the body or the environment that, by definition, DNOs lack. Representationalists should be more concerned about the possibility of consciousness in systems that are clearly representational—such as laptops or self-driving cars—than they are about the possibility of consciousness in DNOs.

In response, it might be argued that DNOs have representational capacities not in virtue of their direct relational features (as it were), but in virtue of the fact that homologous structures are representational when instantiated in us (or other species). Consider the kinds of somatotopic maps found in the primary somatosensory and motor cortices. Here, one might argue that neural activity in the ‘primary somatosensory and motor cortices’ of a DNO is representational in virtue of the fact that activity in the homologous structures of an ordinary (embodied, embedded) brain is representational. (Activity in the somatotopic map of a DNO might, of course, be misrepresentational, but there is nothing in RC which requires that the kinds of representations that underwrite consciousness are veridical.) The idea, in other words, is that the representational properties of DNOs are inherited from the representational properties of the systems from which they derive—that is, human beings.

This proposal is an intriguing one, but we don’t find it convincing. For one thing, we doubt that a DNO could develop neural structures that are truly isomorphic to somatotopic maps. While it is true that somatotopic “protomaps” are initially directed by genetic mechanisms, these protomaps require connections and functional activity from the body in order to develop appropriately and form mature functional networks [111]. Moreover, given the complete absence of somatic stimulation in the case of DNOs, we would expect synaptic and axonal pruning to disrupt any prepotent drive there might be towards the appropriate maturation of somatotopic maps. More fundamentally, the proposal doesn’t engage with the requirement that genuine representations must be usable as such; it merely sidesteps that constraint. A model of the human primary somatosensory and motor cortices produced by a 3D printer would share the structure of a somatotopic map, but it wouldn’t support representations of any kind—let alone representations of body parts—unless it was used in an appropriate manner. We conclude that those sympathetic to representationalist conceptions of consciousness have good reasons for scepticism about the prospects of DNO consciousness.

There are, of course, non-representationalist conceptions of consciousness. We have already mentioned one such view: the mind-brain identity theory. Other non-representational approaches to consciousness include the Integrated Information Theory [49], quantum accounts [112, 113], and various forms of dualism [114] and Russellian Monism [115, 116]. As best we can tell, each of these approaches leaves open the possibility that certain kinds of DNOs might be conscious. Indeed, Christof Koch—one of IIT’s leading advocates—has suggested that certain DNOs would be likely to have a measure of informational complexity (or ‘phi’) that is indicative of consciousness [117].

However, the implications of this for precautionary approaches to consciousness are far from straightforward, even from the perspective of these approaches to consciousness. Crucially, these approaches also leave open the possibility that consciousness might characterise all manner of systems. For example, IIT is explicitly committed to the possibility of consciousness in thermostats and expander graphs [47], while Russellian Monists ascribe consciousness—or ‘proto-consciousness’, as they sometimes put it—to the fundamental components of the physical world. If the proponents of these accounts are prepared to adopt a precautionary approach to DNOs, then—we would argue—they should also adopt a precautionary approach to a wide range of other entities, such as thermostats, expander graphs and quarks.

At this point, we are clearly a long way from the domains in which precautionary approaches are most compelling, such as those involving the treatment of non-human animals, human infants, and individuals who have suffered from severe brain damage. The mere fact that serious theories of consciousness allow for the possibility of DNO consciousness isn’t enough to motivate the adoption of a precautionary approach to them, for if it were, then we would have to take a precautionary approach to a bewildering array of non-living entities. One person’s modus ponens is, of course, another’s modus tollens, but in this case the tollens strikes us as singularly unattractive.

Conclusion

The main aim of this paper has been to pour some cold water on speculations regarding consciousness in DNOs. Although we don’t claim to have ruled out the possibility of DNO consciousness, we do think that the considerations outlined here provide strong reasons for treating such suggestions with a great deal of suspicion. Not only is there little reason to ascribe consciousness to the current generations of DNOs, there are also grounds for thinking that DNOs as such are not viable candidates for consciousness.

It is important to recognise that these conclusions apply only to disembodied neural organoids, and not to other types of organoid-involving systems, such as animals into which organoids have been implanted or cyborg assembloids that integrate neural organoids with some form of robotic or computer interface. Systems of these kinds need not flout either the embodiment or representationalist constraints, and there may be a strong case for adopting a precautionary approach to their creation and use. (Of course, determining whether such systems actually possess the capacity for consciousness would be a further question.) The development of neural organoids may well lead to the creation of the first synthetically conscious systems, but the systems in question are unlikely to be islands of awareness.