Introduction

Forty years ago, Hilary Putnam [1] presented the now-famous “brain in a vat” thought experiment:

A human being [...] has been subjected to an operation by an evil scientist. The person's brain [...] has been removed from the body and placed in a vat of nutrients which keeps the brain alive. The nerve endings have been connected to a super-scientific computer which causes the person whose brain it is to have the illusion that everything is perfectly normal. [...] The computer is so clever that if the person tries to raise his hand, the feedback from the computer will cause him to 'see' and 'feel' the hand being raised. ([1], pp. 5–6)

In the 1980s, this was science fiction: no technology then existed that could keep a disembodied brain alive in a vat. However, recent developments in bioscience have made “brains in a vat” possible, at least in a certain sense. This is not in the sense that one’s brain can be safely removed from one’s body and kept alive in a vat for some substantial amount of time; that scenario still belongs to science fiction.Footnote 1 Rather, it is in the sense that three-dimensional cortical neural tissues—so-called “human brain organoids” (HBOs)—have successfully been artificially created and cultured in vitro. It is, moreover, an ongoing project to develop and use such HBOs for scientific and medical purposes [2–4].

There are many ethical questions about HBOs. Should we give moral consideration to HBOs? Is it morally permissible to treat HBOs as a mere means to our benefit? Is it even morally permissible to create HBOs in the first place? How far are we permitted to develop HBOs? Having noticed the urgency of these ethical issues, many scholars have begun to conduct ethical research on HBOs [5–9].

In addressing such ethical issues concerning HBOs, it matters whether the HBOs can “feel” or “think”; that is, whether they can have conscious experiences,Footnote 2 since it is widely accepted that consciousness is morally significant in some way [10–12]. Thus, much attention has recently been devoted to the possibility that HBOs have consciousness and to the ethical implications which follow from this possibility [13–18].

However, the question of whether HBOs have, or can have, consciousness is difficult to answer. This is in part because, while we can observe the shape, size, and structure of an HBO, we cannot directly observe the presence or absence of consciousness in it. There is no consensus regarding an objective standard for detecting the presence of consciousness in creatures [19], and this lack of consensus extends to HBOs [16, 18, 20]. In addition, there are many competing theories of consciousness, which yield different predictions as to whether HBOs (can) have consciousness (see Sect. 3). We thus find ourselves in an epistemological predicament: we do not know how to determine whether HBOs (can) have consciousness. Even if we ought to give moral consideration to HBOs on the condition that they are conscious, we cannot establish an ethical protocol to regulate the creation and use of HBOs in bioscience, because we do not know whether that condition holds.

Against this background, this paper proposes a schema for engaging in a productive discussion of ethical issues regarding HBOs. There are three significant characteristics of this schema:

1. It adopts a precautionary principle about consciousness, which requires us to assume that HBOs have consciousness.

2. It shifts focus from the original question of whether HBOs have consciousness to the question of what kinds of conscious experiences an HBO can have, where the latter question is much more tractable than the former question.

3. It explores how we should treat HBOs on the basis of what kinds of conscious experiences they may have.

The biggest advantage of this schema is, as this paper will argue, that it enables us to discuss the creation and use of HBOs in bioscience in a relatively simple ethical framework, which can potentially be applied to other cases, such as the experimental use of human embryos and fetuses.

This paper shall proceed as follows: In Sect. 2, we briefly explain what HBOs are and how far they can plausibly be developed in the near future. In Sect. 3, we review the recent debate over how we can know whether an HBO is conscious and how this leads to an epistemological predicament. Section 4 presents the precautionary principle about consciousness as a way to escape the predicament, suggesting a new question to ask about the consciousness of HBOs: what kinds of conscious experiences do they have? In Sect. 5, we directly examine this question. Although the discussion in Sect. 5 is tentative and speculative, it serves as a useful demonstration of how to determine the kinds of conscious experiences which HBOs may have. Finally, Sect. 6 provides several positive suggestions—in light of our discussion in the preceding sections—concerning how we should restrict the creation and use of HBOs from an ethical perspective. In this section, we also discuss our suggestion’s implications for the experimental use of embryos and of other animals such as flies, mice, and macaques.

Human Brain Organoids

In 2008, a Japanese research group became the first to create three-dimensional cortical neural tissues by inducing them from human pluripotent stem cells [2, 3]. Such cortical neural tissues have been named “cerebral organoids” [4], though we shall call them HBOs. HBOs have been produced in such a way that they mimic various parts of the brain, including the cerebrum, midbrain, hypothalamus, pituitary gland, and hippocampus. Since these region-specific brain organoids can reproduce neurogenesis in vitro, they are expected to be used not only in basic research to elucidate the process of neurogenesis but also in applied research on neuro-related diseases and in clinical applications such as drug discovery [21, 22].

Despite these benefits, however, there are still many limitations to using HBOs. Current HBOs differ from the normal brain in size, number of neurons, and maturity, and they also lack sensory input and behavioral output. In particular, although electrophysiological activities occur in HBOs during their development, those activities are not initiated by sensory input and do not produce any behavioral output. In terms of size, a typical HBO is much smaller than the brain of a mouse; it is, at best, about the size of a pea or of a honeybee's brain. In terms of maturity, current HBOs do not mimic normal neurogenesis, even when cultured in vitro for long periods of time, because they lack blood vessels. Moreover, current HBOs differ in structure from whole human brains, though some region-specific brain organoids are structurally similar to the corresponding regions of human brains, and recently induced brain organoids have multi-layered neural structures like those seen in human brains [23].

Recent studies on HBOs have been directed toward overcoming these limitations. Some studies have shown that HBOs can vascularize when transplanted into the brains of mice and non-human primates and can develop further inside these animals [24–26], though there is no evidence that they can develop into normal, mature human brain parts. Another study reported that HBOs involving direct synaptic connections between cerebral neurons and photoreceptor cells responded to external light stimuli [27]. In the future, region-specific brain organoids will likely be further refined and connected with other brain organoids, as well as with both living and non-living systems, to create more complexly structured brain organoids [28].

The Epistemological Predicament

Let us now turn to the question of whether HBOs have consciousness. One straightforward way to judge whether someone is conscious is to ask them: if they say yes, they are conscious; if they do not respond, they are not. Of course, this method applies only to healthy human beings with sufficient understanding of the language and concepts involved in the question. Another natural approach is to examine whether a creature can perform intentional actions. If it can, this is at least prima facie evidence that it has consciousness [29]; otherwise, we may contend that it lacks consciousness. HBOs fail both standards, for they cannot produce verbal or behavioral outputs.

There are, however, several theories of consciousness which allow that HBOs can have consciousness. For instance, Integrated Information Theory (IIT) states that consciousness is grounded in a causal informational structure that can internally generate integrated information [30, 31]. Given that even simple systems, such as a photodiode, can have such a causal informational structure, IIT allows that even such simple systems can have consciousness [32]. Even current HBOs exhibit electrophysiological neural activity with certain activation patterns [33, 34]. Since this suggests that current HBOs have a primitive form of causal informational structure, IIT would predict that they have consciousness.Footnote 3

More radically, panpsychism states that microphysical entities are conscious [35, 36]. Many philosophers of consciousness have recently taken panpsychism to be a serious theoretical option, on the basis of reasonable arguments which suggest that it can successfully integrate consciousness into our current scientific picture of the world ([37], chap. 4; for discussions of panpsychism, see [38]). Panpsychism, moreover, does not typically assert that every macrophysical object has a distinctive kind of consciousness; instead, it claims that whether such objects have distinctive kinds of consciousness depends on the arrangements of the microphysical entities, which are themselves conscious. Thus, panpsychists do not typically claim that chairs and socks have distinct kinds of consciousness; rather, they hold that the consciousness belonging to such objects is a mere sum of the primitive kinds of consciousness instantiated by microphysical entities. Nevertheless, many panpsychists seem to accept that biological entities, including flies, insects, plants, bacteria, and amoebae, can have distinctive kinds of consciousness ([37], chap. 4). Given this, panpsychism plausibly predicts that HBOs would also have some distinctive kind of consciousness.Footnote 4

Furthermore, HBOs are not mere artificial biological entities; they share human genes and mimic the developmental processes of human brains to some extent. Biological naturalists about consciousness may regard this fact as pointing favorably to the potential presence of consciousness, since they emphasize the importance of evolutionary and developmental processes for consciousness [39].

In contrast, some theories of consciousness do not allow that HBOs can have consciousness. For instance, the enactive theory of consciousness (ET) states that having a body through which one can skillfully interact with the surrounding environment is necessary for having conscious experiences [40]. Given that in vitro HBOs have no bodies with which to interact with the surrounding environment, ET denies the in-principle possibility that in vitro HBOs have consciousness, regardless of how structurally developed they might be.Footnote 5

Further, according to representational theories of consciousness, the answer to the question of whether an HBO has consciousness depends on the degree of its sophistication. A first-order representational theory states that, if a system can represent what happens outside of it in such a way that the representation is ready for further cognitive processing, the system has consciousness [41–43]. A higher-order representational theory states that, if a system can conceptually represent its first-order representational states (which represent what happens outside of it), the system has consciousness [44, 45]. Our brain seems to develop in such a way that it first gains the capacity to represent what happens outside of it (the first-order [FO] developmental stage) and then gains the capacity to further represent its first-order representational states (the higher-order [HO] developmental stage). If an HBO is developed to the extent that its neural structure and activity patterns match those of a human brain at the FO developmental stage but not at the HO developmental stage ([15], pp. 761–762), then it is plausibly considered to have the capacity to represent what happens in the surrounding environment, but not the capacity to represent the first-order representations themselves. According to the first-order representational theory, then, the HBO in question can be conscious, whereas, according to the higher-order representational theory, it cannot be. If the HBO is further developed so as to acquire the capacity to represent its first-order representations, the higher-order representational theory would also admit that it could be conscious.

Note that there may be borderline cases for which the representational theories of consciousness provide no determinate answer. For instance, the developmental stage of an HBO may be such that it is unclear whether it has the first-order representational capacity (a related issue is raised by Carruthers [46]). In such cases, the first-order representational theory cannot provide a determinate answer as to whether the HBO has consciousness. There are analogous borderline cases for the higher-order representational theory as well.

The preceding discussion has shown that the answer to the question of whether HBOs can have consciousness depends on what standard or theory of consciousness one adopts. Liberal theories of consciousness, such as IIT and panpsychism, hold that even currently existing HBOs can have consciousness, while conservative theories of consciousness, such as ET, deny this (we owe the labels “liberal” and “conservative” to Murray [47]). Further, intermediate theories of consciousness, such as the representationalist theories, state that whether HBOs can have consciousness depends on their developmental level (where the higher-order theory of consciousness is more conservative than the first-order one) (see Fig. 1).

The obvious next question is to ask what standard or theory of consciousness we should adopt. It is here that we arrive at an epistemological stalemate. There is no consensus as to what standard or theory of consciousness is the most promising and it does not seem that the debate over the correct indicators of consciousness will be resolved in the near future [48]. Thus, we do not know how to determine whether HBOs have consciousness.

Although whether HBOs have consciousness matters for how we ought to treat them, there appears to be no good way to resolve that question. We are, then, left in the unenviable position of being uncertain as to how to proceed with an ethical discussion of HBOs.

The Precautionary Principle about Consciousness

Our proposal for escaping this predicament is to adopt a precautionary principle about consciousness (PPC), according to which, if there is theoretical disagreement over whether X has consciousness, and treating X as not having consciousness would cause more harm to X than benefit to X, then we ought to err on the side of liberality in attributing consciousness and assume that X does have consciousness ([6, 49, 50]; [15], p. 762). The antecedent of this conditional is clearly satisfied for HBOs. As we have seen, some liberal theories of consciousness state that even current HBOs have consciousness, conservative theories of consciousness deny this, and we have no principled way to settle the disagreement.Footnote 6 Moreover, if HBOs actually possess consciousness, treating them as not having consciousness would harm rather than benefit them; if, in contrast, HBOs actually lack consciousness, treating them as having consciousness would not harm them. As a whole, then, treating HBOs as not having consciousness would bring about more harm to HBOs than benefit to them. Thus, we can legitimately apply the PPC to HBOs and should, then, assume that HBOs are conscious. This assumption enables us to proceed with the ethical discussion concerning HBOs.

This section makes several clarificatory remarks on the PPC and its application to HBOs. The first remark concerns the scope of the subjects of potential harms and benefits. Our formulation of the PPC takes only X—the entity whose possession or lack of consciousness is in question—as the subject of potential harms and benefits. Because of this, in determining whether we should apply the PPC to X, we do not have to consider the possibility that non-X entities are harmed by X being treated as having consciousness.

However, non-X entities can be harmed by X being treated as having consciousness [51, 52]. For instance, if the use of X in scientific experiments is prohibited on the assumption that X has consciousness, and the prohibition significantly disadvantages pharmaceutical research, treating X as having consciousness may cause great harm to patients who would otherwise benefit from pharmaceutical research using X.

How should we address these sorts of potential disadvantages for non-X entities? Our proposal is that such disadvantages should be considered in discussing the cancellation conditions of the PPC rather than its application conditions. Whether a precautionary principle is acceptable or permissible depends in part on the proportionality between its positive and negative effects [53]. If the negative effects of applying the PPC to HBOs are disproportionately larger than its positive ones, then we may be forced to call the application off. For instance, if it follows from the PPC that we ought not to create HBOs for any purpose, the negative impact on our well-being, in particular on the well-being of those who have or will have diseases that might be effectively treated by using HBOs, seems too large. In this case, we may need to cancel our application of the PPC to HBOs.

Our position is, thus, as follows: we can determine whether we should apply the PPC to HBOs based only on considerations of harm and benefit to HBOs themselves. However, we may need to cancel the application of the PPC if it turns out that applying it causes more harm to other creatures than the benefit that it brings to HBOs. We may be forced to cancel it after further consideration and observation of its consequences. In this sense, the application of the PPC to HBOs is tentative.

Note, however, that the more likely a creature is to have consciousness—in the sense that more theories of consciousness predict that it has consciousness—the less acceptable it is to cancel the PPC and treat this creature as not having consciousness, even if experimental uses of the creature that are harmful to it would produce large benefits for other creatures, such as human beings. Whether we can legitimately cancel the PPC thus depends not only on how much harm its application causes to other creatures but also on how likely the creature in question is to have consciousness (for the implications of this point, see Sect. 6).
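The decision structure just described can be summarized in a minimal, purely illustrative sketch. This is not a formalization we endorse as precise: the fields of Entity, the numerical weights, and the 0.5 thresholds are hypothetical placeholders, and each such judgment would, in practice, require substantive ethical and empirical argument.

```python
# An illustrative sketch of the PPC's application and cancellation conditions.
# All quantities are hypothetical placeholders, not measurable magnitudes.

from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    theories_for: int                  # credible theories predicting consciousness
    theories_total: int                # credible theories consulted
    harm_to_x_if_denied: float         # expected harm to X if treated as non-conscious
    benefit_to_x_if_denied: float      # expected benefit to X of that treatment
    harm_to_others_if_applied: float   # expected cost to non-X entities of applying the PPC

def ppc_applies(x: Entity) -> bool:
    """Application condition: theoretical disagreement about X's consciousness,
    and denying X consciousness would harm X more than benefit X."""
    disagreement = 0 < x.theories_for < x.theories_total
    return disagreement and x.harm_to_x_if_denied > x.benefit_to_x_if_denied

def ppc_cancellable(x: Entity, impact_threshold: float = 0.5) -> bool:
    """Tentative cancellation condition: a large negative impact on others,
    AND only a minority of theories predicting that X is conscious."""
    large_impact = x.harm_to_others_if_applied > impact_threshold
    minority = x.theories_for / x.theories_total < 0.5
    return large_impact and minority

hbo = Entity("HBO", theories_for=3, theories_total=6,
             harm_to_x_if_denied=1.0, benefit_to_x_if_denied=0.0,
             harm_to_others_if_applied=0.2)
print(ppc_applies(hbo), ppc_cancellable(hbo))  # True False: assume consciousness
```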

Our second clarificatory remark is on the relation between the PPC and ethical treatment. The PPC does not, in itself, determine how we should treat HBOs. The possession of consciousness does not necessarily imply the possession of moral status, which would require us to give moral consideration to X for X’s own sake. One sufficient condition for having moral status is having valenced experiences, such as pleasant and painful experiences [54]. That is, if X can have painful experiences, then we ought to treat X in such a way that X does not have painful experiences (or, at least, has fewer painful experiences). Note, however, that the possession of consciousness does not necessarily entail the capacity to have sensory valenced experiences, since there may be a primitive form of consciousness that involves no valence. The possession of such a primitive form of consciousness may not be sufficient for having moral status [55].Footnote 7 Whether we should give moral consideration to X is mainly determined by what kinds of conscious experiences X can have, rather than by the mere fact that X has consciousness ([12, 18]; see also Sect. 6). In other words, the possession of consciousness is a “gatekeeper” that brings us to the question of what kinds of conscious experiences X has, a question which does have direct moral significance.Footnote 8

Moreover, even if it turns out that we should give moral consideration to X, it remains open how much moral consideration we should give. This, too, is determined in part by what kinds of conscious experiences X can have (see Sect. 6). Accordingly, without examining what kinds of conscious experiences an HBO can have, we cannot specify whether, and how much, we should give moral consideration to it. Thus, the PPC does not directly provide any determinate answer as to how we ought to treat HBOs.

Here one may suspect that the PPC does not resolve our predicament at all but, rather, merely pushes it back one level. The line of thought is as follows: Our predicament is caused by the intractability of the question of whether an HBO has consciousness (the whether-question). The question of what kinds of conscious experiences an HBO has (the what-kind question) seems no less intractable than the whether-question. Thus, it seems that we would face the exact same predicament in attempting to address the what-kind question as we did in attempting to address the whether-question.

Our response to this worry is that the what-kind question is more tractable than the whether-question. The intractability of the whether-question lies in the hard problem of consciousness—namely, the deep mystery of what “switches” consciousness on and off. We cannot answer the whether-question without knowing what needs to be present in order to switch consciousness on. In contrast, when we address the what-kind question with the PPC, it is presupposed that the consciousness switch is on. Hence, in addressing the what-kind question, we can sidestep the hard problem of consciousness.

It is important to note here that experimental studies of consciousness have productively explored the correlations between conscious experiences and (1) physiological states, (2) sensory stimuli, (3) brain areas, and (4) cognitive functions [41, 56–58] and that, through these studies, we have gained much knowledge about these relations. Assuming that HBOs have consciousness, we can employ these accumulated scientific findings to address the what-kind question. That is to say, we can consider what kinds of conscious experiences an HBO may be able to have in terms of (1) what types of physiological states it can be in, (2) what types of sensory stimuli it is sensitive to, (3) what types of neural network structure it has, and (4) what types of cognitive functions it has. For instance, if an HBO is sensitive to optical stimuli [27], it might plausibly be counted as potentially having primitive visual experiences. This is why the what-kind question is the more tractable of the two (see Sect. 5 for more detailed discussion). The PPC shifts our focus from the whether-question to the what-kind question and thereby opens up a space in which to productively discuss how we should treat HBOs.

The third clarificatory remark is that the application of the PPC to HBOs is primarily for problem-solving rather than problem-finding. The principle was introduced to address ethical worries that have already arisen concerning HBOs, not to discover novel ethical issues. We are obliged to consider whether we should apply the PPC to X only when there are preexisting ethical issues regarding X. Although we might uncover new ethical issues by applying the PPC to various entities, doing so is neither the intended purpose of the principle nor something we are obliged to pursue.

In light of these remarks, let us consider a worry about the unlimited application of the PPC. If we take seriously the most liberal theory of consciousness—namely, panpsychism—the PPC would apply to every living being and even to microphysical entities. However, it seems practically impossible, and prima facie implausible, to hold that we should give moral consideration to every living being and microphysical entity. Hence, it appears that we should somehow limit the application of the PPC. Yet we cannot appeal to any specific standard or theory of consciousness for this purpose, for the epistemological problem rears its head again: we do not know what standard or theory of consciousness to adopt. Thus, it seems entirely unclear how we could limit the application of the PPC.

Our response to this worry is threefold. The first is to appeal to our third remark above: the PPC is primarily for problem-solving rather than problem-finding. There is no serious ethical issue as to how we ought to treat, for example, microphysical entities or primitive biological entities, such as amoebae, for their own sakes. Even though panpsychism implies that they are conscious, there is no obligation to apply the PPC to them and thereby open up a new ethical discourse. Our second response is that, even if we apply the PPC to primitive biological creatures and microphysical entities, it does not follow that we should give them moral consideration. Whether we should depends on what kinds of conscious experiences they can have, and no panpsychist explicitly endorses the view that microphysical entities are sentient (though it is unclear what panpsychists would say about primitive biological entities). The third response is that, if it followed from the PPC that we need to give significant moral consideration to primitive biological entities and microphysical entities, this would be so burdensome to us that it would be reasonable to cancel the application of the PPC to them.

Let us here summarize the main line of discussion in this section. If an HBO has consciousness but is regarded as not having consciousness and treated accordingly, it may be harmed by this treatment. We should adopt the PPC to prevent such potential harms. Adopting the PPC enables us to address what kinds of conscious experiences HBOs may have. By addressing the what-kind question, we can examine whether, and how much, we need to give moral consideration to HBOs. Depending on the results (in particular, the size of the negative impact on human beings), we may need to cancel the application of the PPC to HBOs.

What Is It Like to Be an HBO?

The PPC enables us to focus on the what-kind question—that is, the question of what kinds of conscious experiences HBOs have—while leaving aside the whether-question concerning the presence of consciousness itself. This section, accordingly, addresses the what-kind question. It does not aim to provide decisive and comprehensive answers; rather, it aims to demonstrate how we can infer the kinds of conscious experiences that HBOs are likely to have. For this methodological demonstration, we present some inferences from observable features of HBOs to the kinds of conscious experiences they may have. Although these inferences are partly grounded in experimental findings in consciousness studies, they also depend on assumptions whose reliability remains to be examined. In this sense, the inferences presented in this section are highly speculative.

It is important to note that there is no widely shared protocol specifying what kinds of experiences a creature can have when that creature cannot provide introspective reports in response to verbal questions. For such beings, our ability to specify which kinds of experiences they may have typically depends largely on their similarity (or lack thereof) to standard human beings. The more dissimilar a creature is to human beings, the more difficult it is for us to infer what kinds of experiences it can have. HBOs stand in an interesting position in this respect. When one focuses on their appearance and behavior, HBOs are far more dissimilar to human beings than non-human animals and perhaps even AI robots are. Yet HBOs share some aspects of gene expression with developing human brains. Moreover, some HBOs are designed to mimic the developmental processes of specific regions of the human brain, such as the cerebral cortex [2, 3], the hippocampus [59], and the midbrain and hypothalamus [60]. Because of this genetic and developmental similarity to human brains, we can take advantage of the accumulated knowledge from experimental studies of consciousness in human beings to reason about what kinds of experiences HBOs may have. That is to say, we can infer what kinds of conscious experiences an HBO may have from (1) what types of physiological states it can be in, (2) what types of stimuli it is sensitive to, (3) what neural network structures it has, and (4) what types of cognitive functions it has (as discussed above in Sect. 4).
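To display this inference pattern compactly, consider the following illustrative sketch. The feature labels and the mapping from features to candidate experiences are hypothetical placeholders anticipating the examples discussed in the remainder of this section; they are not established findings.

```python
# An illustrative sketch (not a validated procedure) of the inference pattern
# described above: given the precautionary assumption that an HBO is conscious,
# map its observable features to candidate kinds of conscious experience.

CANDIDATE_EXPERIENCES = {
    "variable_glucose_metabolism":  "variation along a level-like phenomenological dimension",
    "synchronous_firing":           "unified (rather than fragmented) experience",
    "visual_cortex_like_structure": "primitive visual experience",
    "feedback_connections":         "primitive experience of predicting",
    "limbic_like_structure":        "primitive valenced (emotional) experience",
}

def infer_experience_kinds(observed_features: set[str], assume_conscious: bool) -> list[str]:
    # Without the precautionary assumption, these features are no evidence of
    # what it is like to be the HBO (see the ice cream analogy below).
    if not assume_conscious:
        return []
    return [kind for feature, kind in CANDIDATE_EXPERIENCES.items()
            if feature in observed_features]

print(infer_experience_kinds({"synchronous_firing", "feedback_connections"},
                             assume_conscious=True))
```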

Let us first focus on the glucose metabolic conditions of HBOs. HBOs can be in different glucose metabolic conditions, depending in part on how they are cultured. It has been suggested that the state or level of human consciousness varies with the glucose metabolic level of the brain [61, 62]. Assuming that differences in the level of consciousness correlate with differences along some phenomenological dimension (perhaps feeling lively or drowsy), we might infer, from the fact that HBOs can differ in glucose metabolic level, that their consciousness can differ along that phenomenological dimension.

There are two remarks to make about this inference. First, it depends on the assumption that levels of consciousness correlate with some phenomenological difference. This assumption can be doubly questioned: (1) the notion of “levels of consciousness” has been challenged on both conceptual and theoretical grounds [63] and (2) even if the notion makes good sense, it is unclear whether differences in level correspond to some phenomenological difference. Further conceptual, theoretical, and phenomenological research on levels of consciousness is needed to address these issues. The second remark is that the inference in question becomes acceptable only on the precautionary assumption that HBOs are conscious. Given that there can be unconscious organisms that metabolize glucose, the fact that X can be in states which differ in glucose metabolism does not in itself imply that X is conscious.Footnote 9 Thus, it is only when we can plausibly assume that X is conscious on distinct grounds (which, in our case, the PPC provides) that differences in glucose metabolism suggest that there would be some phenomenological difference.

To see the second point more clearly, let us use an analogy. On the presupposition that a cup of ice cream is in a box, the fact that I smell vanilla from the box is good evidence that there is vanilla ice cream in the box. Without that presupposition, however, the fact that I smell vanilla from the box is not good evidence that vanilla ice cream is in the box, for there may equally be vanilla cookies, vanilla cakes, or some kind of perfume in the box instead. Likewise, on the presupposition that HBOs have consciousness, the fact that they can be in different glucose metabolic states counts as prima facie evidence that their consciousness can differ in levels (and, accordingly, along some phenomenological dimension). Without this presupposition, the fact that HBOs can be in different glucose metabolic states does not suggest that their consciousness can differ in levels, since we would have no principled reason to believe that they have consciousness at all. The same applies to all of the inferences presented in this section: all of them rely wholly on the precautionary assumption that HBOs are conscious.

With this said, let us now turn to the neural activity patterns of HBOs. It has been found that synchronous firing occurs in two-dimensional neuronal structures made by dissociating an HBO into small cells and scattering them on a plate so as to produce two-dimensional neuronal connections [33]. This is indirect evidence that synchronous firing also occurs in intact HBOs (though it is technically difficult to examine this prediction given the current limitations of microscopes). Assuming that the presence of synchronous neural activity is a good indicator of the unity of consciousness [64], we might infer from the likelihood of synchronous neuronal firing within HBOs that they have unified conscious experiences rather than multiple streams of consciousness in “one great blooming, buzzing confusion” ([65], p. 462). Note, however, that this inference would be defeated if it turns out that conditions other than the presence of synchronous neural activity must also be satisfied for neuronally-based conscious experiences to be unified and that HBOs do not satisfy them. Likewise, if it turns out that synchronous firing does not occur in HBOs, the inference does not hold. In this sense, the inference is tentative and defeasible.

Let us then focus on the neural network structures of HBOs. Although no HBO has yet been built that structurally resembles any specific region of the mature human brain, we can imagine HBOs with such structural similarity. If an HBO were cultured so as to structurally resemble a part of the human brain that is known to contribute to a particular kind of sensory information processing, then we might infer that the HBO can have primitive sensory experiences of that kind. For instance, if an HBO structurally resembled the part of the human brain responsible for visual processing—namely, the visual cortex—then we might reason that the HBO can have primitive visual experiences, such as an experience of a flash of light followed by an experience of darkness.Footnote 10

By focusing on the neural network structure of HBOs, we can also consider what non-sensory experiences they can have. For instance, if an HBO is structured in such a way that it contains neural feedback connections, which seem related to prediction in a broad sense [66], we might reason that it has a primitive capacity for prediction and that it can, therefore, have a primitive cognitive experience of predicting. Furthermore, it is known that the cerebral limbic system plays a key role in emotional experiences with positive and negative valence [67]. Given this, we might reason that an HBO designed to structurally resemble the cerebral limbic system can have primitive emotions.

In general, we can reason that, if an HBO acquires more complex structures, it is likely to have more kinds of conscious experiences. For instance, the capacity to think about oneself verbally seems to be grounded in various brain areas, including the anterior cortical midline, the posterior cortical midline, the lateral inferior parietal lobe, the medial temporal lobe—including the insula, amygdala, and hippocampus—and the anterior and middle lateral temporal cortex [68]. If HBOs were designed to resemble the large neuronal networks in those areas, they might be regarded as potentially having cognitive experiences involving self-representation.

In summary: this section has demonstrated how we can address the what-kind question on the basis of the PPC. We can infer what kinds of conscious experiences current and future HBOs may have from their glucose metabolic conditions, their neural activity patterns, their neural network structures, and so on. Although the inferences presented in this section are highly speculative and non-comprehensive, they can be further refined in light of future neuroscientific findings about consciousness.

An Ethical Framework to Guide HBO Research

Koplin and Savulescu [15] propose an ethical framework to limit the creation and use of HBOs on the basis of a similar framework in animal ethics (e.g., [69]). Their position distinguishes three kinds of HBOs: (a) non-conscious brain organoids, (b) conscious or potentially conscious brain organoids, and (c) brain organoids with the potential to develop advanced cognitive capacities. For non-conscious brain organoids, they claim that “research should be regulated according to existing frameworks for stem cell and human biospecimen research” (p. 765). For conscious or potentially conscious brain organoids, they propose six additional restrictions (p. 765):

1. The expected benefits of the research must be sufficiently great to justify the moral costs, including potential harms to brain organoids.

2. Conscious brain organoids should be used only if the goals of the research cannot be met using non-sentient material.

3. The minimum possible number of brain organoids should be used, compatible with achieving the goals of the research.

4. Conscious brain organoids should not have greater potential for suffering than is necessary to achieve the goals of the research.

5. Conscious brain organoids must not experience greater harm than is necessary to achieve the goals of the research.

6. Brain organoids should not be made to experience severe long-term harm unless necessary to achieve some critically important purpose.

For brain organoids with the potential to develop advanced cognitive capacities, they further add (p. 765):

7. Brain organoids should be screened for advanced cognitive capacities they could plausibly develop. In general, such assessments should err on the side of overestimating rather than under-estimating cognitive capacities.

8. Cognitive capacities should not be more sophisticated than is necessary to achieve the goals of the research.

9. Welfare needs associated with advanced cognitive capacities should be met unless failure to do so is necessary to achieve the goals of the research.

10. The expected benefits of the research must be sufficiently great to justify the expected or potential harms. This calculation should take into account the implications of advanced cognitive abilities for brain organoids’ welfare and moral status.

Although we basically agree with this ethical framework, we suggest several modifications. First, there should be no option for category (a)—that of “non-conscious brain organoids”. Koplin and Savulescu ([15], p. 762) assume that “a brain organoid lacks even a rudimentary form of consciousness until it resembles the brain of a fetus at 20 weeks’ development”. As we have discussed, however, the plausibility of this assumption depends on what standard or theory of consciousness one adopts (see also [70]). Some liberal theories of consciousness, such as IIT and panpsychism, allow that even such underdeveloped brain organoids can have a primitive form of consciousness.

Additionally, we want to make a remark regarding category (b)—that of conscious or potentially conscious brain organoids. As Shepherd [12] argues, the morally significant kinds of conscious experiences are valenced (that is, affective) experiences. This is because beings that can have valenced experiences have experiential interests: positively valenced experiences are intrinsically good for them and negatively valenced experiences are intrinsically bad for them. However, there is no a priori reason to think that every conscious entity can have valenced experiences. At the very least, we can imagine a primitive form of consciousness that does not involve any such valence. Given this, (b) should be divided into two sub-categories: (b-1) conscious brain organoids that cannot have valenced experiences and (b-2) conscious brain organoids that can have valenced experiences. The first six conditions hold only for (b-2).

Nevertheless, it is not easy to draw the line between (b-1) and (b-2). Currently existing brain organoids are unlikely to have typical affective experiences, such as bodily pain or pleasure. However, valenced experiences are not necessarily sensory, so it is possible that currently existing brain organoids have primitive valenced experiences. Further research on valenced experiences is needed to distinguish HBOs belonging to (b-1) from those belonging to (b-2).

There should also be a modification regarding the distinction between categories (b) and (c). We agree that brain organoids with advanced cognitive capacities—those belonging to (c)—should be given more moral consideration than brain organoids without these capacities, since they can have more complex valenced experiences and, therefore, more sophisticated experiential interests. However, there does not seem to be a clear-cut line between (b) and (c); rather, the distinction is a matter of degree. For example, if relatively simple HBOs contain recurrent neural connections, they may be understood as having the cognitive capacity of prediction, though one much more primitive than our own. Given such borderline cases, it seems better to abolish the distinction and, instead, to provide a wide and flexible framework that can accommodate the gradation between (b) and (c).

Given these modifications, we propose a revision of the ethical framework presented by Koplin and Savulescu. For HBOs that can have valenced experiences, we suggest the following:

1. The expected benefits of the research must be sufficiently great to justify the moral costs, including the potential harms to these brain organoids. The assessments of such potential harms should err on the side of overestimating rather than under-estimating.

2. HBOs should be used only if the goals of the research cannot be met using materials that cannot have valenced experiences.

3. The minimum possible number of brain organoids should be used, compatible with achieving the goals of the research.

4. HBOs must not experience greater harm than is necessary to achieve the goals of the research.

5. HBOs should not be constructed in such a way as to have greater potential for suffering—namely, they should not be given the capacities to have more sophisticated kinds of valenced experiences—than is necessary for achieving the goals of the research.

6. Brain organoids should not be made to experience severe long-term harm unless this is necessary to achieve some critically important purpose.

7. Brain organoids should be ranked according to how sophisticated the valenced experiences which they can have are.Footnote 11 We should use lower-ranked conscious brain organoids if the goals of the research can be met without using higher-ranked ones.

This revised ethical framework is simpler than the one put forward by Koplin and Savulescu in that it frames moral limitations solely in terms of valenced experiences and their degree of sophistication. Nevertheless, it still covers all relevant types of HBOs and offers practical guidance concerning how to treat different types of HBOs. We propose that HBO research should be conducted within this unitary ethical framework.Footnote 12

Let us consider, for illustration, two possible applications of this framework. First, suppose that existing HBOs have negatively valenced experiences when their level of consciousness is too low and that levels of consciousness can be reliably inferred from glucose metabolic conditions. In this case, we can apply the fourth and sixth conditions of our framework and claim that we ought to keep the glucose metabolic levels of HBOs higher, since lower levels are tied to negatively valenced experiences. Second, if it turns out that HBOs have negatively valenced experiences when physically damaged, and that such experiences become greater as their neural structures become more sophisticated, we can apply the seventh condition of our framework and claim that, if the use of less sophisticated HBOs is sufficient for a given experimental purpose that involves physically damaging the HBOs, we ought not to use more sophisticated HBOs for that purpose.
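The seventh condition, in particular, has the shape of a simple selection rule, which the following illustrative sketch makes explicit. The valence_rank field and the sufficiency test are hypothetical placeholders; assigning real ranks would require the kind of empirical work on valenced experiences described in Sect. 5.

```python
# A toy sketch of the seventh condition: choose the lowest-ranked HBO (by
# sophistication of valenced experience) that still suffices for the research
# goal. Ranks and the sufficiency test are invented for illustration.

from typing import Callable, Optional

def select_organoid(candidates: list[dict],
                    sufficient_for_goal: Callable[[dict], bool]) -> Optional[dict]:
    # A higher valence_rank means more sophisticated valenced experiences.
    for hbo in sorted(candidates, key=lambda h: h["valence_rank"]):
        if sufficient_for_goal(hbo):
            return hbo  # lowest-ranked organoid that can still meet the goal
    return None  # no candidate suffices; the research design must change

candidates = [{"id": "HBO-A", "valence_rank": 3}, {"id": "HBO-B", "valence_rank": 1}]
print(select_organoid(candidates, lambda h: h["valence_rank"] >= 1))  # selects HBO-B
```

On this rule, a more sophisticated organoid is used only when no less sophisticated one can meet the goals of the research.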

We want to emphasize that this framework can also potentially be applied to the creation and use of human embryos and fetuses, non-human animals, and even robots for scientific purposes. To examine whether (or to what extent) such entities deserve moral consideration, we should first examine whether some theories of consciousness predict that they have consciousness. If there are such theories, we can apply the PPC to these entities. This, in turn, enables us to consider what kinds of conscious experiences they may have and to rank them according to how sophisticated their possible valenced experiences are. On this basis, we can consider how we ought to treat these entities within the ethical framework developed above.

As we have argued in Sect. 4, we are allowed to cancel the application of the PPC to a kind of entity just in case (1) the application produces a large negative impact on our well-being and (2) the majority of theories of consciousness predict that these entities do not possess consciousness (that is, only fairly liberal theories of consciousness predict that they possess consciousness). Given this, if it turns out, for example, that macaques have conscious experiences as sophisticated as those of human beings and that they deserve as much moral consideration as we do, then the experimental use of macaques without informed consent ought to be prohibited. Even if this prohibition produces a large negative impact on the advancement of neuroscience and, thereby, indirectly on our well-being, we are not legitimately allowed to cancel the application of the PPC, because the majority of theories of consciousness predict that macaques possess consciousness.

In contrast, imagine that a high-tech company produces a new type of caregiving robot so sophisticated in its functionality that care receivers sometimes mistake it for a human caregiver. Suppose further that we work these caregiving robots to exhaustion, and to the point of bodily damage, for the sake of care receivers. If we assume that such robots have consciousness, we might infer that they have fairly sophisticated valenced conscious experiences and conclude that it is morally unacceptable to exploit them for the sake of care receivers. However, if the majority of theories of consciousness are committed to biological naturalism, which holds that only evolutionarily-selected living beings have the potential for consciousness, then we are allowed to cancel the application of the PPC to the caregiving robots. This is because applying the principle would produce a large negative impact on care receivers, and the majority of theories of consciousness would predict that these robots do not possess consciousness.
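These two verdicts follow from the conjunction of the two cancellation conditions stated above. The following standalone snippet, with invented figures, makes the contrast explicit; it re-expresses the sketch given in Sect. 4.

```python
# A standalone illustration of the two cancellation conditions. The threshold
# and theory counts are invented placeholders, not measured quantities.

def may_cancel_ppc(harm_to_others: float, theories_for: int, theories_total: int,
                   impact_threshold: float = 0.5) -> bool:
    """Cancel only if (1) applying the PPC has a large negative impact on others
    AND (2) only a minority of theories predict consciousness."""
    return harm_to_others > impact_threshold and theories_for / theories_total < 0.5

# Macaques: prohibition is very costly, but most theories predict consciousness,
# so cancellation is not legitimate.
print(may_cancel_ppc(harm_to_others=0.9, theories_for=5, theories_total=6))  # False

# Caregiving robots, if biological naturalism dominates: few theories predict
# consciousness and the impact on care receivers is large, so we may cancel.
print(may_cancel_ppc(harm_to_others=0.9, theories_for=1, theories_total=6))  # True
```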

Let us close this section by acknowledging two limitations of this paper. First, the effectiveness of the ethical framework proposed here depends on the further development of consciousness studies—particularly those concerning the correlations between valenced experiences and the physiological, informational, neural, and functional conditions of our bodies, including our brains. We need much more knowledge about such correlations if we are to reliably address the what-kind question for non-human beings that cannot provide introspective reports. Second, we do not claim that our ethical framework captures every ethical issue regarding the use or creation of living beings. For instance, the use of fetuses for scientific purposes may involve distinctive ethical issues that the use of HBOs does not face and that are not directly related to the potential for conscious experience [71]. Our framework is designed to capture consciousness-based moral status, not other types of moral status, and thus aims at presenting a new perspective on the use and creation of various kinds of possibly conscious entities.

Conclusion

This paper has proposed a new theoretical schema for addressing ethical issues concerning HBOs. This schema has two key features. The first is to adopt the PPC to shift focus from the question of whether HBOs have consciousness to the question of what kinds of conscious experiences an HBO can have. The second is to rank how much moral consideration HBOs deserve based on the kinds of valenced experiences that we infer they are likely to have. This theoretical schema enables us to offer a concrete ethical framework to limit the creation and use of HBOs in bioscience—a revised and unified version of the framework which Koplin and Savulescu [15] have proposed. This new ethical framework provides effective guidance regarding how to proceed in, and further develop, HBO research.

Fig. 1 Classification of the Various Approaches to Consciousness of Human Brain Organoids