1 Rejecting exclusivity

Morality is at least partly about promoting welfare: increasing the sum total of happiness, interest-satisfaction, or other measures of wellbeing, while decreasing the sum total of suffering, interest-frustration, or other measures of illbeing. Utilitarians think that all of morality is entirely about making more subjects better off and fewer subjects worse off in these ways, but even non-utilitarians tend to think that morality is at least partly about this project.

This endeavor partly involves identifying mental states related to welfare and evaluating them based on factors like intensity and duration. And generally speaking, we assume that each mental state belongs to exactly one subject: different pleasant or unpleasant experiences can vary in intensity, duration, and other such ways, but cannot vary in how many subjects feel them. I feel my pain, you feel yours, and that is it. Call this assumption ‘Exclusivity’.

What if this assumption is wrong? What if, for instance, there could be two or more subjects feeling the very same pain? A variety of arguments independently suggest that Exclusivity might cease to be true in the future, might already admit of exceptions, or might even fail to be true as a norm.Footnote 1 None of these arguments are decisive, and metaphysical defences of Exclusivity merit consideration,Footnote 2 but our interest here is in what would follow for welfare estimates if Exclusivity is false.Footnote 3 In particular, if two subjects feel the same token pain, then is that twice as bad as when one subject feels this kind of pain, all else equal, or is it just as bad as when one subject feels this kind of pain, all else equal? Call these the ‘subject-counting’ and ‘state-counting’ approaches,Footnote 4 and call this puzzle the ‘Value Counting’ question.
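To make the contrast concrete, it can be put schematically, using notation that is ours and introduced purely for illustration: let $E$ be the set of token pains in a situation, let $v(e)$ be the disvalue of token pain $e$, and let $S(e)$ be the set of subjects who feel $e$. The two approaches then compute total disvalue as follows:

```latex
% Notation ours, for illustration only:
% E = token pains, v(e) = disvalue of e, S(e) = subjects who feel e.
\text{state counting:}\qquad
  D_{\mathrm{state}} = \sum_{e \in E} v(e)
\qquad\qquad
\text{subject counting:}\qquad
  D_{\mathrm{subject}} = \sum_{e \in E} \lvert S(e)\rvert \, v(e)
```

For a single token pain $e$ felt by two subjects, subject counting yields $2v(e)$ where state counting yields $v(e)$; the two approaches agree whenever Exclusivity holds, since then $\lvert S(e)\rvert = 1$ for every $e$.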

The answer to the Value Counting question might bear on the best way to resolve conflicts, allocate rights, or distribute scarce resources in situations where some minds share token mental states and others do not. It might also bear on which kinds of minds we should create or not create in the future. Advances in AI or neurotechnology might make it possible to engineer minds with structures and relationships very different from those typical of humans and nonhuman animals, and determining how to estimate welfare impacts when mental states are shared may become crucial to deciding how to treat differently-structured minds.

In Sect. 2, we review five contexts where welfare-relevant mental states might be shared. Some cases prompt subject-counting intuitions while others prompt state-counting intuitions. In Sects. 3 and 4 we argue that the best way to reconcile these intuitions is to endorse the state-counting approach but explain the subject-counting intuitions by appeal to the holistic nature of many welfare-relevant mental states. Since much or all of the value or disvalue of these states depends on how they contribute to broader mental life, we can explain why subject counting seems to make sense in cases involving relatively little overlap.

Before we begin, we can note that while the Value Counting question is our primary focus, it potentially interacts with at least three other questions that will become more important as AI and neurotechnology advance. First, consider what we call the Hedonic Emergence question: Does hedonic valence in a complex experience entail hedonic valence in parts of that experience?Footnote 5 Or could valence be ‘emergent’, in the sense of belonging to composite experiences no component of which is itself valenced? While we do not address the Hedonic Emergence question in detail, the holistic position that we defend in Sect. 4 suggests a negative answer: the valence of a composite experience depends on its overall structure, and so need not derive from the valenced status of its parts.

Second, consider what we call the Hedonic Alignment question: If one valenced experience is made of other valenced experiences, must their valence be aligned? For example, can a collection of unhappy states collectively constitute a happy state (or vice versa), or can they collectively constitute only an unhappy state? Some of the examples that we discuss will depend essentially on this issue. While we do not address the Hedonic Alignment question in detail either, the holistic position that we defend suggests a nuanced answer: alignment is guaranteed only in proportion to the amount of overlap relative to the larger mind, so a composite mind with many very small mental parts might diverge from them sharply in welfare, while one with only a few large mental parts will be more closely aligned.

Finally, the choice between subject-counting and state-counting approaches evokes the debate between person-affecting and impersonal views of welfare.Footnote 6 Can a situation be better even if no identifiable person is better off in it? On impersonal views, the answer is yes: the situation can be better because it contains more happiness, satisfaction, or some other kind of impersonal positive value or less suffering, frustration, or some other kind of impersonal disvalue. On a person-affecting view, the answer is no: the comparative value of situations must be tied to their comparative value for specific individuals. We will suggest that the subject-counting approach fits naturally with the person-affecting view and that the state-counting approach fits naturally with the impersonal view, though this is not a strict entailment.

2 Candidate cases of state sharing

In this section, we further explain and motivate the Value Counting question by describing in more detail the kinds of mental connection that might be possible and why these kinds of mental connection might matter. First, in Sects. 2.1 and 2.2, we describe two general ways for minds to be connected: what we call ‘neural mindmelding’ (where two mostly distinct subjects have partial overlap in their mental states, in virtue of an anatomical or technological connection) and what we call ‘hive minds’ (where one mind is composed of other minds). Then, in Sects. 2.3, 2.4 and 2.5, we describe three philosophical debates that turn in part on the possibility of shared mental states: the Problem of the Many, the implications of constitutive panpsychism, and the problem of fission and fusion.

2.1 Mindmelding

First, neural mindmelding occurs if two brains are connected in ways that mimic the connections within a single brain (Danaher & Nyholm, 2021; Hirstein, 2008, 2012; Lyreskog et al., 2023). This kind of connection might result from technology known as a brain-to-brain interface, albeit likely from a version more advanced than anything currently on the horizon. In this process, linked implants would allow different brains to share information in a way that functionally resembles how nerve tracts allow different parts of a single brain to share information.

If we suppose that the boundaries of the mind are primarily fixed by information-processing patterns rather than by the anatomical boundaries of particular tissue types,Footnote 7 then neural mindmelding might produce a situation where the physical supervenience base of one mind extends into the physical supervenience base of another mind. Alternatively, it might be that the connections themselves become part of the supervenience base for both minds. Of course, this does not necessarily mean that two subjects have fully merged or come to function as a single subject.Footnote 8 Instead, it simply means that a subset of mental states now belongs to both of these subjects at the same time, in virtue of the connections between them.

There may be real cases where atypical brain conditions already create a kind of neural mindmelding. The split-brain condition is often described as creating two distinct subjects of consciousness. If this description is accurate, then it seems likely that the extensive neural connection between these two hemisphere-subjects (in the brainstem, thalamus, and other subcortical structures) might produce some shared token states.Footnote 9 However, the split-brain may not actually be an example of two subjects of consciousness. If two subjects do exist in such a case, then they seem to be unaware of each other, and they also seem to cooperate effectively, each taking themselves to be the only subject present.Footnote 10

A clearer candidate case might be craniopagus conjoined twins, who are distinct individuals joined at the head, potentially with linked or overlapping brains. Philosophers have discussed Canadian twins Krista and Tatiana Hogan as a possible case of state sharing because of the ‘thalamic bridge’ that seems to allow sensory information to pass between their brains.Footnote 11 In comparison with split-brain cases, this kind of conjoined-brain case appears to allow each individual to be aware of the other as a separate individual, while also allowing these individuals to share some mental states.

An intriguing example involves shared taste experience: “Krista likes ketchup, and Tatiana does not, something the family discovered when Tatiana tried to scrape the condiment off her own tongue, even when she was not eating it” (Dominus, 2011). In this kind of case, a single token experience (the taste of ketchup) appears to be produced by Krista but experienced by both Krista and Tatiana. Moreover, the experience appears to be part of an overall pleasant experience for Krista and part of an overall unpleasant experience for Tatiana (a possibility that will be important in Sect. 4).

Importantly, neural mindmelding requires more than simple information sharing that causes two subjects to have distinct token states of the same type. If Krista eating ketchup produced two matching token taste-states—one for her and one for Tatiana—then that would not be mindmelding, but would instead be a weaker connection that we might call ‘neural telepathy.’ It might be that many, if not all, cases that appear to involve mindmelding actually involve telepathy. Nevertheless, it is worth asking how mindmelding, if and when it comes about, should affect moral theorizing.

2.2 Hive minds

Second, a hive mind occurs if multiple individuals are suitably networked with each other such that (i) the whole and the parts have minds and (ii) some or all of the whole’s mental states are ‘inherited from’ the parts, and to that extent shared with them.

Some presently existing organisms have at times been proposed as (admittedly speculative) candidates for hive minds. For an example concerning parts and wholes, consider octopuses. Octopuses are sometimes said to have ‘nine brains’: one central brain and another, smaller brain for each arm. They exhibit some integrated behavior, where it appears that the organism is acting as a whole, and some fragmented behavior, where it appears that parts are acting independently of each other.Footnote 12 Assuming, as seems plausible, that consciousness exists in octopuses at all, is it possible that consciousness exists in each brain as well as in the octopus as a whole, and if so, how similar are the conscious experiences at each level?

Similarly, for an example concerning individuals and groups, consider colonies of eusocial insects like ants and termites. Members of these colonies can and do act independently of each other, but the colonies also exhibit a remarkable degree of functional unity. At present, there is more uncertainty about consciousness in insects than about consciousness in octopuses, and the possibility of consciousness in groups is less widely accepted than the possibility of consciousness in parts.Footnote 13 Still, assuming that consciousness exists in insects at all, we can once again ask: Is it possible that consciousness exists in each insect as well as in the colony as a whole, and if so, how similar are the conscious experiences at each level?

Indeed, some humans could even turn out to be hive minds. Specifically, if parts of human brains, such as individual cerebral hemispheres or other, smaller subsystems, can have conscious experiences, then is it possible that parts of human brains and whole human brains can be conscious at the same time? This would be a very surprising result, but some philosophers have defended this possibility (see, for instance, Blackmon, 2016, 2021; cf. Roelofs, 2022a), and it raises interesting questions about the moral status of our parts and their moral relationships with each other and with the person as a whole.

Other examples might arise in the future. Return to the kind of neural mindmelding described in the previous subsection. There we focused on states being shared between two (or more) human brains, but what if the system constituted by these brains could be conscious as well? Then we would have a consciousness in each member of the group as well as in the group as a whole, in much the same way that we might currently have with some invertebrates (cf. Danaher & Nyholm, 2021; Danaher & Petersen, 2021). What might it be like to be a conscious group of humans, or to be a member of such a group?

A hive mind structure could appear in artificial beings as well. If and when AI systems become conscious, they might be linked in the same kinds of ways that invertebrate minds are, but to a much greater extent. In particular, a vast number and wide range of artificial minds might be linked with each other via the internet. In this case, depending on how these minds are structured, it might even be possible for hive minds to be layered, with individual minds making up collective minds that, themselves, make up even larger collective minds. If so, what would the moral status of these layered hive minds be like?

Indeed, if parts, wholes, and groups can all be conscious at the same time, then we might find that layered hive minds can proliferate, particularly as new technologies come online. For instance, we might find that parts of human brains are conscious, that whole human brains are conscious, and that groups of human brains, facilitated by brain-to-brain interfaces, are conscious all at the same time. Even if we take this possibility to be unlikely, we can still ask how, if it were to occur, it would disrupt our conception of our identities, our relationships, our abilities, and our responsibilities to ourselves and each other.

2.3 Puzzles of multiplication

According to some views, there can also be shared mental states between overlapping parts of one human being. Consider that if your severed head existed by itself (with appropriate life-support) it could be conscious. The same is true for your brain, your top half, and other parts of you. So these entities are, considered intrinsically, capable of consciousness. And why would their current situation (connected to the rest of you) suppress or inhibit their consciousness? So it might seem to follow that they are conscious right now, and since your parts are apparently distinct from one another (and from you, the whole human being), it might also seem that there are multiple conscious subjects in you right here and now.

More generally, for any material object that could be identified with ‘you’, the conscious subject, there are many overlapping material objects that seem like a comparably good candidate for being a conscious subject. You would still be conscious without any particular cell, so it might be that for each of your cells, ‘you with this cell’ is conscious and ‘you without this cell’ is conscious. Unlike two people linked by mindmelding, these different entities are often not capable of independent existence, but this would not clearly stop them from being conscious. This has been called the problem of ‘Too Many Minds’.Footnote 14

Different philosophers respond to puzzles like this in different ways. Some try to avoid this implication. For instance, to avoid saying that both you and your brain are conscious, someone might deny that the brain itself is conscious, even though it would be if it existed by itself.Footnote 15 Alternatively, someone might deny that the larger whole containing your brain is conscious. But both of these options are somewhat counter-intuitive, and in general philosophers who are inclined to think that some physical systems can have minds must grapple with the likelihood that such systems may often overlap with each other.

Other philosophers, sometimes called ‘manyists’, embrace the implication that many different physical systems have minds, maintaining that this is not a problematic result.Footnote 16 According to this view, because these systems overlap, they share the same token mental states, and we can count these mental states as one and the same for most practical purposes. However, evaluating the viability of manyism requires evaluating the metaphysical and moral implications of shared mental states, and critics of manyism have repeatedly suggested that accepting overlapping minds yields unacceptable revisions to everyday morality.Footnote 17

2.4 Constitutive panpsychism

According to other views, state sharing might be incredibly widespread, even ubiquitous. For instance, constitutive panpsychism maintains that the fundamental physical constituents of the universe have a very simple kind of consciousness, and that our complex consciousness is combined out of this very simple kind of consciousness somehow.Footnote 18

On some accounts of how this combination works (though not all; see, for instance, Goff & Roelofs, 2020), it involves token states being shared. Every token experience of yours is composed of simpler states, themselves composed of simpler states, down to the simplest phenomenal components, each of which is shared between you and some microscopic part of your brain. If constitutive panpsychism is true, then there is a sense in which we may all be deeply layered hive minds. Indeed, some opponents of this view have raised the supposed impossibility of such state sharing as an objection, part of the “combination problem.”Footnote 19

Our interest here is not in whether this kind of state sharing is possible, but rather in what follows for morality if the answer is yes. If your pleasures and displeasures are constituted by simpler experiences, which are themselves constituted by simpler experiences, which are themselves constituted by simpler experiences, and so on all the way down to the simplest parts of you, then what, if anything, should we infer about the pleasures and displeasures of these simpler parts, and about the total amount of welfare contained within you?

The answer to this question turns partly on what in Sect. 1 we called the Hedonic Emergence and Hedonic Alignment questions. If you have valenced experiences, does that mean that your parts do too? If they do, does that mean that the valences are aligned? And if they are, then should we count these valenced experiences many times (such that each of us contains much more welfare than we might have expected) or only once? In these respects, determining the moral implications of panpsychism partly requires answering the Value Counting question.Footnote 20

2.5 Fission, fusion, and other diachronic cases

The above examples all turn on synchronic features of a subject, that is, features that obtain at a single point in time. But in some cases philosophers have posited overlapping minds in virtue of diachronic features of a subject, that is, features that obtain over time.

Some philosophers appeal to degrees of psychological connectedness to make this point. For instance, we might posit the existence of co-located subjects whose persistence requires different degrees of psychological connectedness across time: a relatively internally inconsistent ‘person as a whole’, whose persistence requires a relatively low degree of connectedness, and relatively internally consistent selves (say, a ‘manic self’ and a ‘depressed self’), whose persistence requires a relatively high degree of connectedness across time.Footnote 21 A proponent of this view might hold that the mental states of each ‘smaller’ self also belong to the ‘larger’ self that contains them both (see also Johnston, 2017, 2021).

Other philosophers appeal to cases where one person undergoes ‘fission’ (branching into two people, both sufficiently psychologically continuous with the original that they would clearly qualify as that very person, were it not for the other), or where two people undergo ‘fusion’ (merging somehow into one person who is psychologically continuous enough with each that they would clearly qualify as that very person, were it not for the other). At least in principle, these cases can also involve one person who turns into many people, many people who turn into one person, or various combinations of these possibilities.

One analysis of such cases, starting with Lewis (1976), holds that although there is only one ‘person-stage’ in existence before fission or after fusion, that stage is ‘shared’ by multiple persons because it occupies a place in the life-history of multiple persons. Since existing at a time just means having a person-stage at that time, all of the persons who emerge from fission or who enter into fusion exist at one and the same time, even though they perfectly overlap and thus are, at that moment, indistinguishable from a single person.

Briggs and Nolan (2015) note that this ‘multiple occupancy’ view leads to the implausible result that if someone will become two persons in the future, then their present welfare matters twice as much as it otherwise would (because two presently overlapping persons share it), though they also note that we can avoid this result if we adopt a state-counting approach (cf. Schwitzgebel, 2020). However, Javier-Castellanos (2023) criticizes this solution from a subject-counting perspective, noting that when we attend to other cases, like the Hogan twins, the state-counting approach seems to get the wrong answers—that is, Javier-Castellanos identifies precisely the tension that we address in this paper.

3 The tension among intuitions

We think intuitive reactions to these different cases are likely to point in opposite directions. On the one hand, when the proportion of shared states is small, subject counting seems like the correct approach. On the other hand, when the proportion of shared states is large, state counting seems like the correct approach. Meanwhile, some cases—including diachronic cases involving high degrees of connectedness at some points and low degrees of connectedness at other points—might prompt mixed intuitions.

To see what we mean, consider a case involving a relatively low degree of overlap that, intuitively, supports subject counting. In particular, suppose that you and your friend shared the sensations associated with your left hand, but nothing else. In this case, the stimulation of your left hand produces a single token experience that both of you can access, because it occurs in a neural region that is integrated equally into both of your brains. Now suppose someone hits that hand with a hammer. The resulting experience—again, a single token state—makes it the case that you and your friend both experience severe pain, which, of course, is very bad for both of you.

Intuitively, this situation is about as bad, hedonically speaking, as a situation where two discrete people both have their hands hit with hammers. The fact that, by hypothesis, the imagined situation involves a single shared experiential state rather than two distinct experiential states seems beside the point; what matters is that two people are both in severe pain. To count the pain only once would seem unfair to you and your friend; it might mean, for instance, that scarce pain relief is preferentially allocated to two unconnected people each in moderate pain, rather than to the two of you, both in severe pain.

Now consider a case involving a relatively high degree of overlap that, intuitively, supports state counting. Suppose that manyism is correct, and so each person is a ‘cloud’ of conscious subjects. Now suppose that someone hits two people—a larger person and a smaller person—in the hand with a hammer, causing a single token experiential state that each conscious subject in the ‘cloud’ can access. In this case, since the larger person contains many more physical subsets than the smaller person, the larger person also contains many more conscious subjects. As a result, hitting the larger person with the hammer causes many more conscious subjects to experience pain.

Intuitively, however, both strikes of the hammer are roughly equally bad, all else being equal. The mere fact that the larger person contains many more physical subsets—and thus, by hypothesis, many more conscious subjects—does not change the badness of the pain caused in each case. Indeed, critics of manyism have raised precisely this worry; Simon (2017) calls it ‘the hard problem of the many’, and offers it as a reductio of manyism and by extension of materialism itself.Footnote 22 However, Roelofs (2022b) responds that if mental states can be shared, then these results need not follow (cf. Sutton, 2014).

Meanwhile, many other cases might prompt mixed intuitions. This is particularly true of some diachronic cases. Suppose that two people each need to undergo an extremely painful procedure and that we can temporarily fuse their minds so that they share a single, pain-filled stream of consciousness for the duration of the procedure.Footnote 23 Is this a better state of affairs, since it reduces two experiences to one? Or is it equally bad, since two subjects can still access this experience? Since this kind of case is particularly confusing, and since the general approach we offer below to reconciling these competing intuitions appeals to a synchronic form of psychological holism rather than to anything diachronic, we will not try to resolve this issue here. Still, our general solution might bear on this particular issue in ways that we will not be able to explore here (see footnote 29).

How, in general, should we resolve this apparent tension between cases that prompt state-counting intuitions and cases that prompt subject-counting intuitions? First, we might seek to resolve this tension by denying that welfare-relevant states are shareable in the first place, either in general or in particular cases. For instance, if we accept substance dualism and stipulate that souls are both perfectly indivisible (ruling out puzzles of multiplication and manyism) and incapable of direct interaction (ruling out mindmelding and hive minds), then questions about the ethics of connected minds might not arise at all.Footnote 24

Second, if we deny that mindmelding and hive minds can produce shared mental states, then we can accommodate our intuitions without accepting subject counting. This approach might be especially plausible if we accepted an idea like the exclusion postulate of Integrated Information Theory (see e.g. Oizumi et al., 2014; Mørch, 2019; Albantakis et al., 2022), according to which as soon as two minds become connected enough to share an experience, they immediately cease to exist as two minds and instead simply exist as one.

Third, if we deny that humans and other individuals generally overlap with huge numbers of other conscious subjects, then we can accommodate our intuitions without accepting state counting. This approach might be especially plausible if we accept that we can solve puzzles of multiplication in a non-manyist way, by showing that while each individual appears to correspond to many entities that can qualify as a conscious subject, they in fact correspond to only one such entity (such as the brain as a whole, or the body as a whole).

We will not argue against these and other solutions that attempt to restrict state sharing to greater or lesser degrees. Instead, we will explore a solution which (i) embraces the possibility of state sharing across a wide range of cases and (ii) attempts to reconcile the disparate intuitions this possibility generates. We have a few reasons for developing this approach. One is that we personally feel skeptical that the benefits of these other solutions outweigh their costs. Another is that we want to know what would follow if state sharing were possible. And a third is that we think that our solution not only resolves the tension discussed here but also clarifies its relation to a range of other metaphysical and normative issues, as we will see in Sect. 5.

One might argue that everyday morality involves a commitment to ‘the separateness of persons’ (Gauthier, 1962, p. 126; Rawls, 1971, p. 27; Nozick, 1974, p. 33; cf. Hinton, 2009), and that this commitment rules out the possibility of shared minds.Footnote 25 But that inference would be a mistake, for at least three reasons. First, it might be that two persons can share mental states without thereby overlapping as persons. Second, it might be that two persons can overlap metaphysically and/or at the level of theory without thereby overlapping ethically and/or at the level of practice, or vice versa. Finally, it might be that two persons can overlap in all these ways, despite assumptions or intuitions to the contrary. Everyday morality is, after all, not the final word on what we should believe, value, or do—in theory or in practice.Footnote 26

4 Resolving the tension through holism about welfare-relevant states

For someone who thinks welfare-relevant mental states can be shared, the ideal view would support subject counting in cases of only slight mental overlap, since these are the cases in which our intuitions most support subject counting, while supporting state counting in cases of near-total overlap, since these are the cases in which our intuitions support state counting. We think such a view is available: we can accept state counting while stressing the holistic character of welfare-relevant mental states. That is, when two mostly distinct subjects have only slight mental overlap, we can hold that the intrinsic value of their shared welfare-relevant mental states should be state counted, while also holding that most or all of the value of these states is not intrinsic to them, but rather consists in their contributions to larger fields of mental states that are unshared between these subjects.

Recall the ketchup case noted above, where a shared taste experience was pleasant to one of the Hogan twins but unpleasant to the other. Here there is no temptation to say that the ketchup taste experience is intrinsically good or bad. Instead, the value of this taste experience comes from the unshared valenced states to which it contributes. On our holistic proposal, this example provides a model for how to assess the value of shared mental states in general. For example, in the case where a hammer hitting your hand causes pain both to you and to your technologically-linked friend, we can likewise say that the shared sensory experience of the hammer hitting your hand is not intrinsically bad. Instead, the value of this sensory experience comes from the unshared valenced states to which it contributes.Footnote 27

We should emphasize that we are proposing holism about valenced experience—pleasantness or unpleasantness—not about pain per se. For all we say here, pain as a distinctive sensory experience may not be holistic, and may even be subserved by a dedicated module, as argued by Casser and Clarke (2023). But this possibility tells us nothing about what determines the unpleasantness of pain. Indeed, Casser and Clarke defend the modularity of pain in part by stressing that this view allows for “instances in which cognitive states [change] the attitudes we take towards them and/or our emotional states in relation to these, rendering these pains more or less bearable” (2023, p. 835). Our suggestion here is simply that experienced unpleasantness is at least partly a matter of such non-modular attitudinal and emotional factors. Neurological discoveries might turn out to support the modularity of affective valence itself; if so, that would undermine the sort of holism we are suggesting.

We might also envision different versions of holism for different kinds of welfare-relevant states. A strong version of holism might deny that any component of your sensory experience has valence in and of itself. On this view, what we have been describing as pain is not negatively valenced in and of itself, but is rather a kind of sensation that tends to make any total experience to which it contributes negatively valenced. We might accept such a view because we hold that valence requires cognitive mechanisms that extend beyond mere sensory experience, such as higher-order processes that turn aversive sensations into conscious suffering.Footnote 28 But whatever the explanation, the upshot of this view is that while your suffering and your friend’s suffering are both based partly on a shared sensory experience, they are each unshared, and they should count separately in evaluations of the badness of the situation.

In contrast, a weak version of holism might allow that some component of your sensory experience has valence in and of itself, while adding that some, perhaps even most, of its valence lies in its contribution to your total experience. For instance, we might say that the shared experience of being hit with a hammer has a negative valence, while noting that it also contributes to further, unshared mental states that have a negative valence as well; for example, it frustrates your desire not to be hit, it creates a traumatic memory of being hit, it creates a fear of being hit again, it creates anxiety about implications for your projects and relationships, and so on. We might then add that the negative valence of these further, unshared states outweighs the negative valence of the sensory experience that led to them.

However exactly we spell out the details of holism, it implies that welfare evaluations in situations involving only slight mental overlap can be dominated by what is not shared rather than by what is shared. Thus, the state-counting view combined with holism can vindicate the intuition that causing two subjects intense pain via a shared sensory experience can be about as bad as causing two subjects intense pain via separate sensory experiences. This kind of analysis applies especially well in mindmelding cases, since in many such cases, what is unshared matters more than what is shared, and so it makes sense that we would treat these cases as involving separate subjects and welfare states for many practical purposes. Conversely, this kind of analysis applies less well to puzzles of multiplication, since in many such cases, what is shared matters more than what is unshared, and so it makes sense that we would treat these cases as involving a single subject and welfare state for many practical purposes.

The vindication of subject-counting intuitions may be more or less complete, depending on exactly how strong the holistic claim is. Strong holism implies that subject counting is a very good proxy for state counting in cases involving only slight mental overlap, whereas weak holism implies that subject counting is only a somewhat good proxy for state counting in such cases. And, of course, middle-ground views have middle-ground implications. Granted, the weaker holism becomes, the less complete its vindication of subject-counting intuitions. But it is unclear how fine-grained our moral intuitions are in such cases. Given how difficult it can be to quantify and compare phenomenal characters, we might be happy with a view that serves as a relatively reliable heuristic in many cases in practice.Footnote 29
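To make the proxy claim concrete, consider a toy calculation. (The additive model and the symbols below are our illustrative assumptions; nothing in the holist's position requires that values combine by simple addition.) Suppose two subjects share a pain whose intrinsic disvalue is $s$, and that this pain contributes to unshared negatively valenced states with disvalues $u_1$ and $u_2$ respectively.

```latex
% Toy additive model (illustrative assumption only):
% state counting counts the shared pain once,
% subject counting counts it once per subject.
\begin{align*}
  V_{\text{state}}   &= s + u_1 + u_2 \\
  V_{\text{subject}} &= (s + u_1) + (s + u_2) = 2s + u_1 + u_2
\end{align*}
```

Strong holism sets $s = 0$, so the two totals coincide exactly; weak holism makes $s$ small relative to $u_1 + u_2$, so in slight-overlap cases subject counting only modestly overstates the state-counting total. This mirrors the claim above that subject counting is a very good proxy under strong holism and a somewhat good proxy under weak holism.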

Another advantage of holism about welfare-relevant mental states is that it helps the state-counting approach respond to Javier-Castellanos’s claim that treating pain states as mattering in themselves, rather than as mattering for the subjects who undergo them, amounts to being ‘fetishistic’ about pain. As Javier-Castellanos articulates this worry (drawing on arguments in Chappell, 2015), “pains are only bad because they are bad for the beings who experience them… [so] placing the emphasis on the pains themselves… gets things exactly backwards” (Javier-Castellanos 2023, p. 1786). In cases of slight overlap, holists can accommodate the thought that it would be fetishistic to focus on the intrinsic badness of a particular pain by stressing that either some, most, or all of its badness derives from its contribution to a complex mind, which renders that mind’s total experience unpleasant.Footnote 30

Is holism about welfare-relevant mental states, however understood, plausible? For non-hedonic mental states that might be relevant to welfare (desires, preferences, intentions, and so on), it is arguably very plausible. For instance, desires have a clear connection to beliefs, intentions, and other such states. And determining which course of action is best for an individual all things considered clearly requires holistically assessing how their beliefs, desires, intentions, and other such states relate to each other. Thus, if two individuals share a token desire but also have a wide range of unshared beliefs, desires, and intentions, then the mere fact that this desire is satisfied or frustrated might not tell us much about whether this situation is good or bad for these individuals all things considered.

For hedonistic states that might be relevant to welfare—namely, pleasant and unpleasant experiences—holism is plausible in many cases as well. On one hand, a proponent of holism might point to what Bradford (2020, p. 239 ff.) calls “hurts so good” experiences. These are cases where an ordinarily unpleasant experience seems to contribute to a pleasant experience. Standard examples include engagement with horror, tragedy, or other kinds of painful art, as well as cases where suffering during an activity makes the agent feel more alive and makes the achievement feel more worthwhile.Footnote 31 As Bloom says, “under the right circumstances and in the right doses, physical pain and emotional pain, difficulty and failure and loss, are exactly what we are looking for” (Bloom, 2021; cf. Rozin et al., 2013).Footnote 32

Granted, an opponent of holism might point to cases where an unpleasant experience seems to trace entirely to the intrinsic badness of a particular pain. However, a holist can reply that in such cases we are simply unable to prevent a particular pain from causing our total experience to have a negative valence. The holist might add that if the overall mental context were sufficiently different (for instance, if we went through intense meditative training or neurotechnological enhancement), then it might be possible to feel that very pain and yet be indifferent to it. In any case, we will not pursue this line of reasoning further. Instead, we will stick with the conditional: to the extent that hedonic valence is holistic in its determinants, the tension between subject-counting and state-counting intuitions can be resolved.

If our approach to reconciliation succeeds, it suggests that state-counting approaches can accommodate divergent intuitions under a range of metaphysical views. In contrast, subject-counting views appear committed to either rejecting some intuitions (say, about puzzles of multiplication or constitutive panpsychism) or rejecting particular metaphysical views that imply state sharing in these cases (say, manyism or constitutive panpsychism). Of course, this is not a decisive objection to subject counting: many people might accept those constraints on their metaphysics very happily; indeed, writers like Simon seek to use an implicit subject-counting assumption to motivate those constraints. Still, the greater flexibility of the state-counting approach is a virtue on balance.

5 Implications for other questions

We can close by briefly returning to several questions that relate to the Value Counting question, starting with the Hedonic Emergence question: Must a valenced composite experience be made of valenced parts, or vice versa? Holism suggests, without establishing, a negative answer: a valenced state could be composed entirely of nonvalenced parts. Holism implies that the valence of a total experience depends on more than the valence of that experience’s parts. However, it does not establish a negative answer, because it allows for the possibility that valence in the parts is a necessary ingredient for valence in the whole, even if not the sole ingredient. Depending on how we resolve these issues, we might find that holism moderates the moral implications of panpsychism by undermining any direct inference from the valence of complex experiences to the valence of their simpler parts.

Second, consider the Hedonic Alignment question: if a composite experience and its parts both have valence, must their valences match? Holism suggests, without establishing, a negative answer to this question as well. Suppose that valence for each mind depends on the structure of its experience, and that the structure of each part contributes to, without determining, the structure of the whole. In this case, with large parts we might expect more alignment (for instance, it might be hard for the whole to be pleased if both halves are displeased), but with small parts we might expect less (for instance, it might be easy for the whole to be pleased even if each of a million parts is displeased).Footnote 33 These reflections suggest that we should not be blithely optimistic about the value alignment of future hive minds composed of human minds or their descendants: the whole is not guaranteed to feel the same way as its parts.

Finally, what about person-affecting and impersonal theories of welfare? It is important to note that, while state-counting/impersonal theories and subject-counting/person-affecting theories seem like natural pairs, these pairs can also come apart. For instance, we might combine a person-affecting view with state counting as follows: all value comes from what is better or worse for identifiable people, but when those people share states, the value associated with these states combines non-summatively (cf. Sutton, 2014). Likewise, we might combine an impersonal view with subject counting as follows: it is impersonally good for as many subjects as possible to undergo as much pleasure and as little displeasure as possible, regardless of whether they share that pleasure/displeasure with other subjects. Nevertheless, to the extent that these views are natural pairs, our argument for state counting may lend support to an impersonal view.

6 Conclusions

In this paper, we introduced the Value Counting question: If welfare-relevant mental states can be shared between subjects, should we make welfare assessments by counting welfare states or by counting their subjects? We then outlined how this question connects to other questions in ethics and metaphysics, including questions about the nature of welfare and about the composition of consciousness. We also discussed examples that intuitively support the subject-counting approach as well as examples that intuitively support the state-counting approach, and we canvassed some potential solutions.

Finally, we developed what we take to be a promising answer to the Value Counting question: the state-counting approach plus at least moderate holism about welfare-relevant mental states. On this view, what matters is the value of mental states, but many mental states are good or bad partly, mostly, or fully because of the contribution that they make to other, more complex mental states that include them. This view denies that we should give extra weight to shared states simply in virtue of their being shared, but it allows that many states can cause more positive or negative experience overall in virtue of their being shared.

Of course, more research is needed to investigate this answer as well as other possible answers. And even if all relevant philosophical questions are resolved, many other empirical and practical questions will remain. For instance, can we produce more happiness, or more value, by creating a web of overlapping minds or a set of separate minds, all else equal? Given the importance these questions may take on as technology expands the range of potential mind-structures we can create or become, they will deserve careful attention moving forward.