Abstract
It may soon be possible for neurotechnology to connect two subjects' brains such that they share a single token mental state, such as a feeling of pleasure or displeasure. How will our moral frameworks have to adapt to accommodate this prospect? And if this sort of mental-state-sharing might already obtain in some cases, how should this possibility impact our moral thinking? This question turns out to be extremely challenging, because different examples generate different intuitions: If two subjects share very few mental states, then it seems that we should count the value of those states twice, but if they share very many mental states, then it seems that we should count the value of those states once. We suggest that these conflicting intuitions can be reconciled if the mental states that matter for welfare have a holistic character, in a way that is independently plausible. We close by drawing tentative conclusions about how we ought to think about the moral significance of shared mental states.
1 Rejecting exclusivity
Morality is at least partly about promoting welfare: increasing the sum total of happiness, interest-satisfaction, or other measures of wellbeing, while decreasing the sum total of suffering, interest-frustration, or other measures of illbeing. Utilitarians think that all of morality is entirely about making more subjects better off and fewer subjects worse off in these ways, but even non-utilitarians tend to think that morality is at least partly about this project.
This endeavor partly involves identifying mental states related to welfare and evaluating them based on factors like intensity and duration. And generally speaking, we assume that each mental state belongs to exactly one subject: different pleasant or unpleasant experiences can vary in intensity, duration, and other such ways, but cannot vary in how many subjects feel them. I feel my pain, you feel yours, and that is it. Call this assumption ‘Exclusivity’.
What if this assumption is wrong? What if, for instance, there could be two or more subjects feeling the very same pain? A variety of arguments independently suggest that Exclusivity might cease to be true in the future, might already admit of exceptions, or might even fail to be true as a norm.Footnote 1 None of these arguments are decisive, and metaphysical defences of Exclusivity merit consideration,Footnote 2 but our interest here is in what would follow for welfare estimates if Exclusivity is false.Footnote 3 In particular, if two subjects feel the same token pain, then is that twice as bad as when one subject feels this kind of pain, all else equal, or is it equally bad as when one subject feels this kind of pain, all else equal? Call these the ‘subject-counting’ and ‘state-counting’ approaches,Footnote 4 and call this puzzle the ‘Value Counting’ question.
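To make the two approaches concrete, here is a toy numerical sketch. It is our illustration only, with made-up values, not a formal model from the literature: it simply tallies the disvalue of a single token pain felt by two subjects under each counting rule.

```python
# Toy model of the Value Counting question. All names and numbers here are
# illustrative assumptions, not drawn from the paper or any formal theory.

def subject_counting(states):
    """Count each state's value once per subject who feels it."""
    return sum(s["value"] * len(s["subjects"]) for s in states)

def state_counting(states):
    """Count each token state's value exactly once, however many subjects feel it."""
    return sum(s["value"] for s in states)

# One token pain with intrinsic disvalue -10, shared by subjects A and B.
shared_pain = [{"value": -10, "subjects": {"A", "B"}}]

print(subject_counting(shared_pain))  # -20: twice as bad as the one-subject case
print(state_counting(shared_pain))    # -10: equally bad as the one-subject case
```

On these stipulated numbers, the two rules diverge exactly as the text describes: subject counting doubles the disvalue of the shared pain, while state counting leaves it unchanged.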
The answer to the Value Counting question might bear on the best way to resolve conflicts, allocate rights, or distribute scarce resources in situations where some minds share token mental states and others do not. It might also bear on which kinds of minds we should create or not create in the future. Advances in AI or neurotechnology might make it possible to engineer minds with structures and relationships very different from those typical of humans and nonhuman animals, and determining how to estimate welfare impacts when mental states are shared may become crucial to deciding how to treat differently-structured minds.
In Sect. 2, we review five contexts where welfare-relevant mental states might be shared. Some cases prompt subject-counting intuitions while others prompt state-counting intuitions. In Sects. 3 and 4 we argue that the best way to reconcile these intuitions is to endorse the state-counting approach but explain the subject-counting intuitions by appeal to the holistic nature of many welfare-relevant mental states. Since much or all of the value or disvalue of these states depends on how they contribute to broader mental life, we can explain why subject counting seems to make sense in cases involving relatively little overlap.
Before we begin, we can note that while the Value Counting question is our primary focus, it potentially interacts with at least three other questions that will become more important as AI and neurotechnology advance. First, consider what we call the Hedonic Emergence question: Does hedonic valence in a complex experience entail hedonic valence in parts of that experience?Footnote 5 Or could valence be ‘emergent’, in the sense of belonging to composite experiences no component of which is itself valenced? While we do not address the Hedonic Emergence question in detail, the holistic position that we defend in Sect. 4 suggests a negative answer: the valence of a composite experience depends on its overall structure, and so need not derive from the valenced status of its parts.
Second, consider what we call the Hedonic Alignment question: If one valenced experience is made of other valenced experiences, must their valence be aligned? For example, can a collection of unhappy states collectively constitute a happy state (or vice versa), or can they collectively constitute only an unhappy state? Some of the examples that we discuss will depend essentially on this issue. While we do not address the Hedonic Alignment question in detail either, the holistic position that we defend suggests a nuanced answer: alignment is guaranteed only in proportion to the amount of overlap relative to the larger mind, so a composite mind with many very small mental parts might diverge from them sharply in welfare, while one with only a few large mental parts will be more closely aligned.
Finally, the choice between subject-counting and state-counting approaches evokes the debate between person-affecting and impersonal views of welfare.Footnote 6 Can a situation be better even if no identifiable person is better off in it? On impersonal views, the answer is yes: the situation can be better because it contains more happiness, satisfaction, or some other kind of impersonal positive value or less suffering, frustration, or some other kind of impersonal disvalue. On a person-affecting view, the answer is no: the comparative value of situations must be tied to their comparative value for specific individuals. We will suggest that the subject-counting approach fits naturally with the person-affecting view and that the state-counting approach fits naturally with the impersonal view, though this is not a strict entailment.
2 Candidate cases of state sharing
We begin in this section by further explaining and motivating the Value Counting question by describing in more detail the kinds of mental connection that might be possible and why these kinds of mental connection might matter. First, in Sects. 2.1 and 2.2, we describe two general ways for minds to be connected: what we call ‘neural mindmelding’ (where two mostly distinct subjects have partial overlap in their mental states, in virtue of an anatomical or technological connection) and what we call ‘hive minds’ (where one mind is composed of other minds). Then, in Sects. 2.3, 2.4 and 2.5, we describe three philosophical debates that turn in part on the possibility of shared mental states: the Problem of the Many, the implications of constitutive panpsychism, and the problem of fission and fusion.
2.1 Mindmelding
First, neural mindmelding occurs if two brains are connected in ways that mimic the connections within a single brain (Danaher & Nyholm, 2021; Hirstein, 2008, 2012; Lyreskog et al., 2023). This kind of connection might result from technology known as a brain-to-brain interface, albeit likely from a version more advanced than anything currently on the horizon. In this process, linked implants would allow different brains to share information in a way that functionally resembles how nerve tracts allow different parts of a single brain to share information.
If we suppose that the boundaries of the mind are primarily fixed by information-processing patterns rather than by the anatomical boundaries of particular tissue types,Footnote 7 then neural mindmelding might produce a situation where the physical supervenience base of one mind extends into the physical supervenience base of another mind. Alternatively, it might be that the connections themselves become part of the supervenience base for both minds. Of course, this does not necessarily mean that two subjects have fully merged or come to function as a single subject.Footnote 8 Instead, it simply means that a subset of mental states now belongs to both of these subjects at the same time, in virtue of the connections between them.
There may be real cases where atypical brain conditions already create a kind of neural mindmelding. The split-brain condition is often described as creating two distinct subjects of consciousness. If this description is accurate, then it seems likely that the extensive neural connection between these two hemisphere-subjects (in the brainstem, thalamus, and other subcortical structures) might produce some shared token states.Footnote 9 However, the split-brain may not actually be an example of two subjects of consciousness. If two subjects do exist in such a case, then they seem to be unaware of each other, and they also seem to cooperate effectively, each taking themselves to be the only subject present.Footnote 10
A clearer candidate case might be craniopagus conjoined twins, who are distinct individuals joined at the head, potentially with linked or overlapping brains. Philosophers have discussed Canadian twins Krista and Tatiana Hogan as a possible case of state sharing because of the ‘thalamic bridge’ that seems to allow sensory information to pass between their brains.Footnote 11 In comparison with split-brain cases, this kind of conjoined-brain case appears to allow each individual to be aware of the other as a separate individual, while also allowing these individuals to share some mental states.
An intriguing example involves shared taste experience: “Krista likes ketchup, and Tatiana does not, something the family discovered when Tatiana tried to scrape the condiment off her own tongue, even when she was not eating it” (Dominus, 2011). In this kind of case, a single token experience (the taste of ketchup) appears to be produced by Krista but experienced by both Krista and Tatiana. Moreover, the experience appears to be part of an overall pleasant experience for Krista and part of an overall unpleasant experience for Tatiana (a possibility that will be important in Sect. 4).
Importantly, neural mindmelding requires more than simple information sharing that causes two subjects to have distinct token states of the same type. If Krista eating ketchup produced two matching token taste-states—one for her and one for Tatiana—then that would not be mindmelding, but would instead be a weaker connection that we might call ‘neural telepathy.’ It might be that many, if not all, cases that appear to involve mindmelding actually involve telepathy. Nevertheless, it is worth asking how mindmelding, if and when it comes about, should affect moral theorizing.
2.2 Hive minds
Second, a hive mind occurs if multiple individuals are suitably networked with each other such that (i) the whole and the parts have minds and (ii) some or all of the whole’s mental states are ‘inherited from’ the parts, and to that extent shared with them.
Some presently existing organisms have at times been considered as (admittedly speculative) candidates for being hive minds. For an example concerning parts and wholes, consider octopuses. Octopuses are sometimes said to have ‘nine brains’, one central brain and another, smaller brain for each arm. They exhibit some integrated behavior, where it appears that the organism is acting as a whole, and some fragmented behavior, where it appears that parts are acting independently of each other.Footnote 12 Assuming, as seems plausible, that consciousness exists in octopuses at all, is it possible that consciousness exists in each brain as well as in the octopus as a whole, and if so, how similar are the conscious experiences at each level?
Similarly, for an example concerning individuals and groups, consider colonies of eusocial insects like ants and termites. Members of these colonies can and do act independently of each other, but the colonies also exhibit a remarkable degree of functional unity. At present, there is more uncertainty about consciousness in insects than about consciousness in octopuses, and the possibility of consciousness in groups is less widely accepted than the possibility of consciousness in parts.Footnote 13 Still, assuming that consciousness exists in insects at all, we can once again ask: Is it possible that consciousness exists in each insect as well as in the colony as a whole, and if so, how similar are the conscious experiences at each level?
Indeed, some humans could even turn out to be hive minds. Specifically, if parts of human brains, such as individual cerebral hemispheres or other, smaller subsystems, can have conscious experiences, then is it possible that parts of human brains and whole human brains can be conscious at the same time? This would be a very surprising result, but some philosophers have defended this possibility (see, for instance, Blackmon, 2016, 2021; cf. Roelofs, 2022a), and this possibility raises interesting questions about the moral status of our parts and their moral relationships with each other and with the person as a whole.
Other examples might arise in the future. Return to the kind of neural mindmelding described in the previous subsection. There we focused on states being shared between two (or more) human brains, but what if the system constituted by these brains could be conscious as well? Then we would have a consciousness in each member of the group as well as in the group as a whole, in much the same way that we might currently have with some invertebrates (cf. Danaher & Nyholm, 2021; Danaher & Petersen, 2021). What might it be like to be a conscious group of humans, or to be a member of such a group?
A hive mind structure could appear in artificial beings as well. If and when AI systems become conscious, they might be linked in the same kinds of ways that invertebrate minds are, but to a much greater extent. In particular, a vast number and wide range of artificial minds might be linked with each other via the internet. In this case, depending on how these minds are structured, it might even be possible for hive minds to be layered, with individual minds making up collective minds that, themselves, make up even larger collective minds. If so, what would the moral status of these layered hive minds be like?
Indeed, if parts, wholes, and groups can all be conscious at the same time, then we might find that layered hive minds can proliferate, particularly as new technologies come online. For instance, we might find that parts of human brains are conscious, that whole human brains are conscious, and that groups of human brains, facilitated by brain-to-brain interfaces, are conscious all at the same time. Even if we take this possibility to be unlikely, we can still ask how, if it were to occur, it would disrupt our conception of our identities, our relationships, our abilities, and our responsibilities to ourselves and each other.
2.3 Puzzles of multiplication
According to some views, there can also be shared mental states between overlapping parts of one human being. Consider that if your severed head existed by itself (with appropriate life-support) it could be conscious. The same is true for your brain, your top half, and other parts of you. So these entities are, considered intrinsically, capable of consciousness. And why would their current situation (connected to the rest of you) suppress or inhibit their consciousness? So it might seem to follow that they are conscious right now, and since your parts are apparently distinct from one another (and from you, the whole human being), it might also seem that there are multiple conscious subjects in you right here and now.
More generally, for any material object that could be identified with ‘you’, the conscious subject, there are many overlapping material objects that seem like a comparably good candidate for being a conscious subject. You would still be conscious without any particular cell, so it might be that for each of your cells, ‘you with this cell’ is conscious and ‘you without this cell’ is conscious. Unlike two people linked by mindmelding, these different entities are often not capable of independent existence, but this would not clearly stop them from being conscious. This has been called the problem of ‘Too Many Minds’.Footnote 14
Different philosophers respond to puzzles like this in different ways. Some try to avoid this implication. For instance, to avoid saying that both you and your brain are conscious, someone might deny that the brain itself is conscious, even though it would be if it existed by itself.Footnote 15 Alternatively, someone might deny that the larger whole containing your brain is conscious. But both of these options are somewhat counter-intuitive, and in general philosophers who are inclined to think that some physical systems can have minds must grapple with the likelihood that such systems may often overlap with each other.
Other philosophers, sometimes called ‘manyists’, embrace the implication that many different physical systems have minds, maintaining that this is not a problematic result.Footnote 16 According to this view, because these systems overlap, they share the same token mental states, and we can count these mental states as one and the same for most practical purposes. However, evaluating the viability of manyism requires evaluating the metaphysical and moral implications of shared mental states, and critics of manyism have repeatedly suggested that accepting overlapping minds yields unacceptable revisions to everyday morality.Footnote 17
2.4 Constitutive panpsychism
According to other views, state sharing might be incredibly widespread, even ubiquitous. For instance, constitutive panpsychism maintains that the fundamental physical constituents of the universe have a very simple kind of consciousness, and that our complex consciousness is somehow combined out of this very simple kind of consciousness.Footnote 18
On some accounts of how this combination works (though not all; see, for instance, Goff and Roelofs, 2020), it involves token states being shared. Every token experience of yours is composed of simpler states, themselves composed of simpler states, down to the simplest phenomenal components, each of which is shared between you and some microscopic part of your brain. If constitutive panpsychism is true, then there is a sense in which we may all be very layered hive minds. Indeed, some opponents of this view have raised the supposed impossibility of such state sharing as an objection, part of the “combination problem.”Footnote 19
Our interest here is not in whether this kind of state sharing is possible, but rather in what follows for morality if the answer is yes. If your pleasures and displeasures are constituted by simpler experiences, which are themselves constituted by simpler experiences, which are themselves constituted by simpler experiences, and so on all the way down to the simplest parts of you, then what, if anything, should we infer about the pleasures and displeasures of these simpler parts, and about the total amount of welfare contained within you?
The answer to this question turns partly on what in Sect. 1 we called the Hedonic Emergence and Hedonic Alignment questions. If you have valenced experiences, does that mean that your parts do too? If they do, does that mean that the valences are aligned? And if they are, then should we count these valenced experiences many times (such that each of us contains much more welfare than we might have expected) or only once? In these respects, determining the moral implications of panpsychism partly requires answering the Value Counting question.Footnote 20
2.5 Fission, fusion, and other diachronic cases
The above examples all turn on synchronic features of a subject, that is, features that obtain at a single point in time. But in some cases philosophers have posited overlapping minds in virtue of diachronic features of a subject, that is, features that obtain over time.
Some philosophers appeal to degrees of psychological connectedness to make this point. For instance, we might posit the existence of co-located subjects whose persistence requires different degrees of psychological connectedness across time, such as a relatively internally inconsistent ‘person as a whole’ whose persistence requires a relatively low degree of connectedness and relatively internally consistent selves—say, a ‘manic self’ and a ‘depressed self’—whose persistence requires a relatively high degree of connectedness across time.Footnote 21 A proponent of this view might hold that the mental states of each ‘smaller’ self also belong to the ‘larger’ self that contains them both (see also Johnston, 2017, 2021).
Other philosophers appeal to cases where one person undergoes ‘fission’ (branching into two people, both sufficiently psychologically continuous with the original that they would clearly qualify as that very person, were it not for the other), or where two people undergo ‘fusion’ (merging somehow into one person who is psychologically continuous enough with each that they would clearly qualify as that very person, were it not for the other). At least in principle, these cases can also involve one person who turns into many people, many people who turn into one person, or various combinations of these possibilities.
One analysis of such cases, starting with Lewis (1976), holds that although there is only one ‘person-stage’ in existence before fission or after fusion, that stage is ‘shared’ by multiple persons because it occupies a place in the life-history of multiple persons. Since existing at a time just means having a person-stage at that time, all of the persons who emerge from fission or who enter into fusion exist at one and the same time, even though they perfectly overlap and thus are, at that moment, indistinguishable from a single person.
Briggs and Nolan (2015) note that this ‘multiple occupancy’ view leads to the implausible result that if someone will become two persons in the future, then their present welfare matters twice as much as it otherwise would (because two presently overlapping persons share it), though they also note that we can avoid this result if we adopt a state-counting approach (cf. Schwitzgebel, 2020). However, Javier-Castellanos (2023) criticizes this solution from a subject-counting perspective, noting that when we attend to other cases, like the Hogan twins, the state-counting approach seems to get the wrong answers—that is, Javier-Castellanos identifies precisely the tension that we address in this paper.
3 The tension among intuitions
We think intuitive reactions to these different cases are likely to point in opposite directions. On the one hand, when the proportion of shared states is small, subject counting seems like the correct approach. On the other hand, when the proportion of shared states is large, state counting seems like the correct approach. Meanwhile, some cases—including diachronic cases involving high degrees of connectedness at some points and low degrees of connectedness at other points—might prompt mixed intuitions.
To see what we mean, consider a case involving a relatively low degree of overlap that, intuitively, supports subject counting. In particular, suppose that you and your friend shared the sensations associated with your left hand, but nothing else. In this case, the stimulation of your left hand produces a single token experience that both of you can access, because it occurs in a neural region that is integrated equally into both of your brains. Now suppose someone hits that hand with a hammer. The resulting experience—again, a single token state—makes it the case that you and your friend both experience severe pain, which, of course, is very bad for both of you.
Intuitively, this situation is about as bad, hedonically speaking, as a situation where two discrete people both have their hands hit with hammers. The fact that, by hypothesis, the imagined situation involves a single shared experiential state rather than two distinct experiential states seems beside the point; what matters is that two people are both in severe pain. To count the pain only once would seem unfair to you and your friend; it might mean, for instance, that scarce pain relief is preferentially allocated to two unconnected people each in moderate pain, rather than to the two of you, both in severe pain.
Now consider a case involving a relatively high degree of overlap that, intuitively, supports state counting. Suppose that manyism is correct, and so each person is a ‘cloud’ of conscious subjects. Now suppose that someone hits two people—a larger person and a smaller person—in the hand with a hammer, causing a single token experiential state that each conscious subject in the ‘cloud’ can access. In this case, since the larger person contains many more physical subsets than the smaller person, the larger person also contains many more conscious subjects. As a result, hitting the larger person with the hammer causes many more conscious subjects to experience pain.
Intuitively, however, both strikes of the hammer are roughly equally bad, all else being equal. The mere fact that the larger person contains many more physical subsets—and thus, by hypothesis, many more conscious subjects—does not change the badness of the pain caused in each case. Indeed, critics of manyism have raised precisely this worry; Simon (2017) calls it ‘the hard problem of the many’, and offers it as a reductio of manyism and by extension of materialism itself.Footnote 22 However, Roelofs (2022b) responds that if mental states can be shared, then these results need not follow (cf. Sutton, 2014).
Meanwhile, many other cases might prompt mixed intuitions. This is particularly true of some diachronic cases. Suppose that two people each need to undergo an extremely painful procedure and that we can temporarily fuse their minds so that they share a single, pain-filled stream of consciousness for the duration of the procedure.Footnote 23 Is this a better state of affairs, since it reduces two experiences to one? Or is it equally bad, since two subjects can still access this experience? Since this kind of case is particularly confusing, and since the general approach we offer below to reconciling these competing intuitions appeals to a synchronic form of psychological holism rather than to anything diachronic, we will not try to resolve this issue here. Still, our general solution might bear on this particular issue in ways that we will not be able to explore here (see footnote 29).
How, in general, should we resolve this apparent tension between cases that prompt state-counting intuitions and cases that prompt subject-counting intuitions? First, we might seek to resolve this tension by denying that welfare-relevant states are shareable in the first place, either in general or in particular cases. For instance, if we accept substance dualism and stipulate that souls are both perfectly indivisible (ruling out puzzles of multiplication and manyism) and incapable of direct interaction (ruling out mindmelding and hive minds), then questions about the ethics of connected minds might not arise at all.Footnote 24
Second, if we deny that mindmelding and hive minds can produce shared mental states, then we can accommodate our intuitions without accepting subject counting. This approach might be especially plausible if we accepted an idea like the exclusion postulate of Integrated Information Theory (see, e.g., Oizumi et al., 2014; Mørch, 2019; Albantakis et al., 2022), according to which as soon as two minds become connected enough to share an experience, they immediately cease to exist as two minds and instead simply exist as one.
Third, if we deny that humans and other individuals generally overlap with huge numbers of other conscious subjects, then we can accommodate our intuitions without accepting state counting. This approach might be especially plausible if we accept that we can solve puzzles of multiplication in a non-manyist way, by showing that while each individual appears to correspond to many entities that can qualify as a conscious subject, they in fact correspond to only one such entity (such as the brain as a whole, or the body as a whole).
We will not argue against these and other solutions that attempt to restrict state sharing to greater or lesser degrees. Instead, we will explore a solution which (i) embraces the possibility of state sharing across a wide range of cases and (ii) attempts to reconcile the disparate intuitions this possibility generates. We have a few reasons for developing this approach. One is that we personally feel skeptical that the benefits of these other solutions outweigh their costs. Another is that we want to know what would follow if state sharing were possible. And a third is that we think that our solution not only resolves the tension discussed here but also clarifies its relation to a range of other metaphysical and normative issues, as we will see in Sect. 5.
One might argue that everyday morality involves a commitment to ‘the separateness of persons’ (Gauthier, 1962, p. 126; Rawls, 1971, p. 27; Nozick, 1974, p. 33; cf. Hinton, 2009), and that this commitment rules out the possibility of shared minds.Footnote 25 But that inference would be a mistake, for at least three reasons. First, it might be that two persons can share mental states without thereby overlapping as persons. Second, it might be that two persons can overlap metaphysically and/or at the level of theory without thereby overlapping ethically and/or at the level of practice, or vice versa. Finally, it might be that two persons can overlap in all these ways, despite assumptions or intuitions to the contrary. Everyday morality is, after all, not the final word on what we should believe, value, or do—in theory or in practice.Footnote 26
4 Resolving the tension through holism about welfare-relevant states
For someone who thinks welfare-relevant mental states can be shared, the ideal view would support subject counting in cases of only slight mental overlap, since these are the kinds of cases in which our intuitions most support subject counting, while also supporting state counting in cases of near-total overlap, since these are the cases in which our intuitions support state counting. We think such a view is available: we can accept state counting while stressing the holistic character of welfare-relevant mental states. That is, when two mostly distinct subjects have only slight mental overlap, we can hold that the intrinsic value of their shared welfare-relevant mental states should be state counted, while also holding that most or all of the value of these states is not intrinsic to them, but rather consists in their contributions to larger fields of mental states that are unshared between these subjects.
Recall the ketchup case noted above, where a shared taste experience was pleasant to one of the Hogan twins but unpleasant to the other. Here there is no temptation to say that the ketchup taste experience is intrinsically good or bad. Instead, the value of this taste experience comes from the unshared valenced states to which it contributes. On our holistic proposal, this example provides a model for how to assess the value of shared mental states in general. For example, in the case where a hammer hitting your hand causes pain both to you and to your technologically-linked friend, we can likewise say that the shared sensory experience of the hammer hitting your hand is not intrinsically bad. Instead, the value of this sensory experience comes from the unshared valenced states to which it contributes.Footnote 27
We should emphasize that we are proposing holism about valenced experience—pleasantness or unpleasantness—not about pain per se. For all we say here, pain as a distinctive sensory experience may not be holistic, and may even be subserved by a dedicated module, as argued by Casser and Clarke (2023). But this possibility tells us nothing about what determines the unpleasantness of pain. Indeed, Casser and Clarke defend the modularity of pain in part by stressing that this view allows for “instances in which cognitive states [change] the attitudes we take towards them and/or our emotional states in relation to them, rendering these pains more or less bearable” (2023, p. 835). Our suggestion here is simply that experienced unpleasantness is at least partly a matter of such non-modular attitudinal and emotional factors. Neurological discoveries might turn out to support the modularity of affective valence itself; if so, that would undermine the sort of holism we are suggesting.
We might also envision different versions of holism for different kinds of welfare-relevant states. A strong version of holism might deny that any component of your sensory experience has valence in and of itself. On this view, what we have been describing as pain is not negatively valenced in and of itself, but is rather a kind of sensation that tends to make any total experience to which it contributes negatively valenced. We might accept such a view because we hold that valence requires cognitive mechanisms that extend beyond mere sensory experience, such as higher-order processes that turn aversive sensations into conscious suffering.Footnote 28 But whatever the explanation, the upshot of this view is that while your suffering and your friend’s suffering are both based partly on a shared sensory experience, they are each unshared, and they should count separately in evaluations of the badness of the situation.
In contrast, a weak version of holism might allow that some component of your sensory experience has valence in and of itself, while adding that some, perhaps even most, of its valence lies in its contribution to your total experience. For instance, we might say that the shared experience of being hit with a hammer has a negative valence, while noting that it also contributes to further, unshared mental states that have a negative valence as well; for example, it frustrates your desire not to be hit, it creates a traumatic memory of being hit, it creates a fear of being hit again, it creates anxiety about implications for your projects and relationships, and so on. We might then add that the negative valence of these further, unshared states outweighs the negative valence of the sensory experience that led to them.
However exactly we spell out the details of holism, it implies that welfare evaluations in situations involving only slight mental overlap can be dominated by what is not shared rather than by what is shared. Thus, the state-counting view combined with holism can vindicate the intuition that causing two subjects intense pain via a shared sensory experience can be about as bad as causing two subjects intense pain via separate sensory experiences. This kind of analysis applies especially well in mindmelding cases, since in many such cases, what is unshared matters more than what is shared, and so it makes sense that we would treat these cases as involving separate subjects and welfare states for many practical purposes. Conversely, this kind of analysis applies less well to puzzles of multiplication, since in many such cases, what is shared matters more than what is unshared, and so it makes sense that we would treat these cases as involving a single subject and welfare state for many practical purposes.
The vindication of subject-counting intuitions may be more or less complete, depending on exactly how strong the holistic claim is. Strong holism implies that subject counting is a very good proxy for state counting in cases involving only slight mental overlap, whereas weak holism implies that subject counting is a somewhat good proxy for state counting in such cases. And, of course, middle-ground views have middle-ground implications. Granted, the weaker holism becomes, the less complete its vindication of subject-counting intuitions becomes. But of course, it is unclear how fine-grained our moral intuitions are in such cases. Given how difficult it can be to quantify and compare phenomenal characters, we might be happy with a view that serves as a relatively reliable heuristic in many cases in practice.Footnote 29
Another advantage of holism about welfare-relevant mental states is that it helps the state-counting approach respond to Javier-Castellanos’s claim that treating pain states as mattering in themselves, rather than as mattering for the subjects who undergo them, amounts to being ‘fetishistic’ about pain. As Javier-Castellanos articulates this worry (drawing on arguments in Chappell, 2015), “pains are only bad because they are bad for the beings who experience them… [so] placing the emphasis on the pains themselves… gets things exactly backwards” (Javier-Castellanos 2023, p. 1786). In cases of slight overlap, holists can accommodate the thought that it would be fetishistic to focus on the intrinsic badness of a particular pain by stressing that either some, most, or all of its badness derives from its contribution to a complex mind, which renders that mind’s total experience unpleasant.Footnote 30
Is holism about welfare-relevant mental states, however understood, plausible? For non-hedonic mental states that might be relevant to welfare (desires, preferences, intentions, and so on), it is arguably very plausible. For instance, desires have a clear connection to beliefs, intentions, and other such states. And determining which course of action is best for an individual all things considered clearly requires holistically assessing how their beliefs, desires, intentions, and other such states relate to each other. Thus, if two individuals share a token desire but also have a wide range of unshared beliefs, desires, and intentions, then the mere fact that this desire is satisfied or frustrated might not tell us much about whether this situation is good or bad for these individuals all things considered.
For hedonistic states that might be relevant to welfare—namely, pleasant and unpleasant experiences—holism is plausible in many cases as well. On one hand, a proponent of holism might point to cases of what Bradford (2020, p. 239 ff.) calls “hurts so good” experiences. These are cases where an ordinarily-unpleasant experience seems to contribute to a pleasant experience. Standard examples include engagement with horror, tragedy, or other kinds of painful art, as well as cases where suffering during an activity makes the agent feel more alive and makes the achievement feel more worthwhile.Footnote 31 As Bloom says, “under the right circumstances and in the right doses, physical pain and emotional pain, difficulty and failure and loss, are exactly what we are looking for” (Bloom, 2021; cf. Rozin et al., 2013).Footnote 32
Granted, an opponent of holism might point to cases where an unpleasant experience seems to trace entirely to the intrinsic badness of a particular pain. However, a holist can reply that in such cases we are simply unable to prevent a particular pain from causing our total experience to have a negative valence. The holist might add that if the overall mental context were sufficiently different (for instance, if we went through intense meditative training or neurotechnological enhancement), then it might be possible to feel that very pain and yet be indifferent to it. In any case, we will not pursue this line of reasoning further. Instead, we will stick with the conditional: to the extent that hedonic valence is holistic in its determinants, the tension between subject-counting and state-counting intuitions can be resolved.
If our approach to reconciliation succeeds, it suggests that state-counting approaches are able to accommodate divergent intuitions under a range of metaphysical views. In contrast, subject-counting views appear committed to either rejecting some intuitions (say, about puzzles of multiplication or constitutive panpsychism) or rejecting particular metaphysical views that imply state sharing in these cases (say, rejecting manyism or constitutive panpsychism). Of course, this is not a decisive objection to subject counting: many people might accept those constraints on their metaphysics very happily; indeed, writers like Simon seek to use an implicit subject-counting assumption to motivate those constraints. Still, the greater flexibility of the state-counting approach is a virtue on balance.
5 Implications for other questions
We can close by briefly returning to several questions that relate to the Value Counting question, starting with the Hedonic Emergence question: Must a valenced composite experience be made of valenced parts, or vice versa? Holism suggests, without establishing, a negative answer: a valenced state could be composed entirely of nonvalenced parts. Holism implies that the valence of a total experience depends on more than the valence of that experience’s parts. However, it does not establish a negative answer, because it allows for the possibility that valence in the parts is a necessary ingredient for valence in the whole, even if not the sole ingredient. Depending on how we resolve these issues, we might find that holism moderates the moral implications of panpsychism by undermining any direct inference from the valence of complex experiences to the valence of their simpler parts.
Second, consider the Hedonic Alignment question: if a composite experience and its parts both have valence, must their valences match? Holism suggests, without establishing, a negative answer to this question as well. Suppose that valence for each mind depends on the structure of its experience, and that the structure of each part contributes to, without determining, the structure of the whole. In this case, with large parts we might expect more alignment (for instance, it might be hard for the whole to be pleased if both halves are displeased), but with small parts we might expect less (for instance, it might be easy for the whole to be pleased even if each of a million parts is displeased).Footnote 33 These reflections suggest that we should not be blithely optimistic about the value alignment of future hive minds composed of human minds or their descendants: the whole is not guaranteed to feel the same way as its parts.
Finally, what about person-affecting and impersonal theories of welfare? It is important to note that, while state-counting/impersonal theories and subject-counting/person-affecting theories seem like natural pairs, these pairs can also come apart. For instance, we might combine a person-affecting view with state counting as follows: all value comes from what is better or worse for identifiable people, but when those people share states, the value associated with these states combines non-summatively (cf. Sutton, 2014). Likewise, we might combine an impersonal view with subject counting as follows: it is impersonally good for as many subjects as possible to undergo as much pleasure and as little displeasure as possible, regardless of whether they share that pleasure/displeasure with other subjects. Nevertheless, to the extent that these views are natural pairs, our argument for state counting may lend support to an impersonal view.
6 Conclusions
In this paper, we introduced the Value Counting question: If welfare-relevant mental states can be shared between subjects, should we make welfare assessments by counting welfare states or by counting their subjects? We then outlined how this question connects to other questions in ethics and metaphysics, including questions about the nature of welfare and about the composition of consciousness. We also discussed examples that intuitively support the subject-counting approach as well as examples that intuitively support the state-counting approach, and we canvassed some potential solutions.
Finally, we developed what we take to be a promising answer to the Value Counting question: the state-counting approach plus at least moderate holism about welfare-relevant mental states. On this view, what matters is the value of mental states, but many mental states are good or bad partly, mostly, or fully because of the contribution that they make to other, more complex mental states that include them. This view denies that we should give extra weight to shared states simply in virtue of their being shared, but it allows that many states can cause more positive or negative experience overall in virtue of their being shared.
Of course, more research is needed to investigate this answer as well as other possible answers. And even if all relevant philosophical questions are resolved, many other empirical and practical questions will remain. For instance, can we produce more happiness, or more value, by creating a web of overlapping minds or a set of separate minds, all else equal? Given the importance these questions may take on as technology expands the range of potential mind-structures we can create or become, they will deserve careful attention moving forward.
Data availability
There is no empirical data associated with this paper.
Notes
Thus we are picking up on the thread left dangling by Sutton when, after making a metaphysical case for the failure of Exclusivity, she remarks that “how to figure a shared pain into a utilitarian calculus is an interesting but separate question that I will leave to the ethicists” (Sutton 2014, p. 622).
These are generalisations of the views that Javier-Castellanos calls “Quantity of Pain” and “Number of Experiencers” (2023, p. 1771). Note that Sutton, right after setting aside the ethical question, remarks that her “first‐pass response is to count by quantity of pains rather than number of beings sharing that pain,” i.e. to endorse the state-counting approach (2014, p. 622).
In principle, one could give different answers to these two ‘directions’ of the hedonic emergence question (part-to-whole and whole-to-part), but for simplicity we will bundle them together as a single question. Thanks to Andreas Mogensen for raising this possibility.
This assumption obviously raises questions connected to debates over the Extended Mind hypothesis, though we stress that it is compatible with denying that any currently-existing technology does or could extend the mind, if the functional criteria that would need to be met are sufficiently demanding. Cf. (Clark and Chalmers 1998; Clark 2008; Vold 2015; Chalmers 2019).
Though this sort of technology might also be used to bring about that result, cf. (Roelofs 2019, p. 270 ff).
See esp. (Schechter 2018, pp. 156–180).
Our thanks to Derek Shiller and an anonymous reviewer for independently suggesting a thought-experiment along these lines.
Compare Kriegel 2017, who links “Phenomenal Inviolability” (p. 131; equivalent to our principle of Exclusivity) with the “normative inviolability” of persons (p. 133).
This move is in some ways parallel to Parfit’s argument (1984, pp. 329–346) that since persons are not actually separate in any deep or fundamental sense (assuming the falsity of substance dualism), utilitarianism is more defensible than it might otherwise be.
Though we cannot here explore all its consequences, a parallel holistic claim about the diachronic composition of lifetime welfare—that, e.g., the same moments might add up to a better life if they occurred in one order than in another—is defended by Slote (1982) and Velleman (1991); Briggs and Nolan discuss how this might complicate various approaches to the puzzles of diachronic multiplication (2015, p. 404).
We thank Andreas Mogensen for pushing us to clarify this point.
The split-brain case stands out as intermediate between only slight overlap (supporting the subject-counting intuitions) and complete overlap (supporting the state-counting intuitions). There are important connections not just among specific sensations but major psychological structures, such as agency and self-consciousness. If there are in fact two distinct subjects here, it is very unclear what a holistic view of the value of states would imply, since much of the overall shape of their total states seems to be either shared or at least kept synchronized somehow. A strong holist might still say that these two subjects should be treated as separate and additional subjects in our moral calculations, while weaker versions are much harder to apply. We suggest that the moral upshot of holism here is deeply confusing and unclear, and that this is not the wrong result to get; the split-brain case does indeed generate deeply confusing and unclear reactions, insofar as it seems ambiguously intermediate between one and two subjects.
It might be objected that pains are still being treated as mattering independently of subjects in cases of complete overlap, like that between you and your head. But this is misguided: subjects with all the same welfare-relevant mental states overlap as subjects, sharing their reflections, their agency, their intentions and values or the holistic pattern of their feelings and desires. They share what makes them matter as individuals, and so it is not fetishistic to think of them together for moral purposes: indeed, it might be fetishistic to insist that we count them separately just because they are not technically identical, by strict metaphysical criteria.
For discussions of the ‘paradox of painful art’ see esp. (Feagin 1983; Ridley 2003; Smuts 2009; Bantinaki 2012). Not all analyses will support all forms of holism: for instance, on some analyses, our responses to painful art simply involve enough pleasure to outweigh the unpleasant components, though the latter are still in themselves bad, while on other analyses the displeasure is actually “converted into pleasure,” or “at least tincture them so strongly as to totally alter their nature” (Hume 1987, p. 220); our aim here is not to settle how best to analyze this phenomenon, but merely to point out how it might be useful for elaborating the kind of holism that a supporter of state counting needs.
Holistic claims about hedonic valence and about preferences could be mutually supporting, insofar as hedonic valence is entangled with or even reducible to conative states. It might just be that how good or bad something makes us feel is often heavily inflected by whether we’re getting what we wanted, or having our wishes and efforts frustrated. Or it might be that states only have valence because and insofar as they involve strong desires—that pleasure is good because it’s a state we can’t help but want to continue, and suffering is bad because it’s a state that we can’t help but want to end. If so—and if our wantings are inherently tied to a relatively holistic network of psychological attitudes—that would support seeing the value or disvalue of a particular pleasure or displeasure as importantly extrinsic to the experience itself.
Ned Block’s Nation-Brain thought experiment (1978), where a billion human citizens collectively implement the functional architecture of a single human brain, offers an extreme example of the disconnect between attitudes, at least, at the levels of whole and part. Indeed, the psychological independence of the levels is sufficient that it is not a clear case for state sharing at all (though see Roelofs 2019, pp. 190–198).
References
Albantakis, L., Barbosa, L., Findlay, G., Grasso, M., Haun, A., Marshall, W., Mayner, W., Zaeemzadeh, A., Boly, M., Juel, B., Sasai, S., Fujii, K., David, I., Hendren, J., Lang, J., & Tononi, G. (2022). Integrated information theory (IIT) 4.0: Formulating the properties of phenomenal existence in physical terms. Quantitative Biology. arXiv:2212.14787v1
Arrhenius, G. (2003). The person-affecting restriction, comparativism, and the moral status of potential people. Ethical Perspectives, 10(3–4), 185–195.
Bader, R. (2022). Person-affecting utilitarianism. In G. Arrhenius (Ed.), The Oxford handbook of population ethics (pp. 251–270). Oxford University Press.
Bantinaki, K. (2012). The paradox of horror: Fear as a positive emotion. The Journal of Aesthetics and Art Criticism, 70(4), 383–392.
Blackmon, J. (2016). Hemispherectomies and independently conscious brain regions. Journal of Cognition and Neuroethics, 3(4), 1–26.
Blackmon, J. (2021). Integrated information theory, intrinsicality, and overlapping conscious systems. Journal of Consciousness Studies, 28(11–12), 31–53.
Block, N. (1978). Troubles with functionalism. In C. W. Savage (Ed.), Perception and cognition: Issues in the foundations of psychology. University of Minnesota Press.
Bloom, P. (2021). The sweet spot: The pleasures of suffering and the search for meaning. Harper Collins.
Bradford, G. (2020). The badness of pain. Utilitas, 32, 236–252.
Briggs, R. A., & Nolan, D. (2015). Utility monsters for the fission age. Pacific Philosophical Quarterly, 96(2), 392–407.
Buchanan, J., & Roelofs, L. (2019). Panpsychism, intuitions, and the great chain of being. Philosophical Studies, 176(11), 2991–3017.
Burke, M. (1994). Dion and theon: An essentialist solution to an ancient puzzle. Journal of Philosophy, 91(3), 129–139.
Carls-Diamante, S. (2017). The octopus and the unity of consciousness. Biology & Philosophy, 32(6), 1269–1287.
Carls-Diamante, S. (2022). Where is it like to be an octopus? Frontiers in Systems Neuroscience. https://doi.org/10.3389/fnsys.2022.840022
Casser, L., & Clarke, S. (2023). Is pain modular? Mind and Language, 38(3), 828–846.
Chalmers, D. (2015). Panpsychism and panprotopsychism. In Y. Nagasawa & T. Alter (Eds.), Consciousness in the physical world: Essays on Russellian monism. Oxford University Press.
Chalmers, D. (2019). Extended cognition and extended consciousness. In M. Colombo, E. Irvine, & M. Stapleton (Eds.), Andy Clark and his critics (pp. 9–20). Wiley-Blackwell.
Chappell, R. Y. (2015). Value receptacles. Noûs, 49(2), 322–332.
Clark, A. (2008). Supersizing the Mind: Embodiment, action, and cognitive extension. Oxford University Press.
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19.
Cochrane, T. (2021). A case of shared consciousness. Synthese, 199(1–2), 1019–1037.
Dainton, B. (2011). Review of consciousness and its place in nature. Philosophy and Phenomenological Research, 83(1), 238–261.
Danaher, J., & Nyholm, S. (2021). Should we use technology to merge minds? Cambridge Quarterly of Healthcare Ethics, 30(4), 585–603.
Danaher, J., & Petersen, S. (2021). In defence of the hivemind society. Neuroethics, 14(2), 253–267.
Dominus, S. (2011). Could conjoined twins share a mind? New York Times Magazine. Retrieved September 2022, from https://perma.cc/D6E5-4W3T
Feagin, S. (1983). The pleasures of tragedy. American Philosophical Quarterly, 20(1), 95–104.
Friedman, D. A., & Søvik, E. (2021). The ant colony as a test for scientific theories of consciousness. Synthese, 198, 1457–1480.
Gauthier, D. (1962). Practical reasoning. Clarendon Press.
Gazzaniga, M., Bogen, J., & Sperry, R. (1962). Some functional effects of sectioning the cerebral commissures in man. Proceedings of the National Academy of Sciences, 48(10), 1765–1769.
Gibbons, M., Crump, A., Barrett, M., Sarlak, S., Birch, J., & Chittka, L. (2022). Can insects feel pain? A review of the neural and behavioural evidence. Advances in Insect Physiology, 63, 155–229.
Goff, P., & Roelofs, L. (2020). In defence of phenomenal sharing. In J. Bugnon, M. Nida-Rümelin, & D. O’Conaill (Eds.), The phenomenology of self-awareness and conscious subjects. Routledge.
Hershenov, D. B. (2013). Who doesn’t have a problem of too many thinkers? American Philosophical Quarterly, 50(2), 203–208.
Hinton, T. (2009). Rights, duties and the separateness of persons. Philosophical Papers, 38(1), 73–91.
Hirstein, W. (2008). Mindmelding: Connected brains and the problem of consciousness. Mens Sana Monographs, 6(1), 110–130.
Hirstein, W. (2012). Mindmelding: Consciousness, neuroscience, and the mind’s privacy. Oxford University Press.
Huebner, B. (2014). Macrocognition: A theory of distributed minds and collective intentionality. Oxford University Press.
Hume, D. (1987). Of tragedy. Essays: Moral political and literary (pp. 216–225). Liberty Classics.
Javier-Castellanos, A. (2023). Should the number of overlapping experiencers count? Erkenntnis, 88, 1767–1789. https://doi.org/10.1007/s10670-021-00427-4
Johnston, M. (2017). The personite problem: Should practical reason be tabled? Noûs, 51(3), 617–644.
Johnston, M. (2021). The subject and its apparatus: Are they ontological trash? Philosophical Studies, 178(8), 2731–2744.
Kang, S.-P. (2022). Shared consciousness and asymmetry. Synthese, 200, 413.
Klein, C., & Barron, A. (2016). Insects have the capacity for subjective experience. Animal Sentience, 9(1), 1113. https://doi.org/10.51291/2377-7478.1113
Kriegel, U. (2017). Dignity and the phenomenology of recognition-respect. In J. Drummond & S. Rinofner-Kreidl (Eds.), Emotional experiences: Ethical and social significance (pp. 121–136). Rowman & Littlefield.
Langland-Hassan, P. (2015). Introspective misidentification. Philosophical Studies, 172(7), 1737–1758.
Lewis, D. (1976). Survival and identity. In A. O. Rorty (Ed.), The identities of persons (pp. 17–40). University of California Press.
Lewis, D. (1993). Many, but almost one. In J. Bacon, K. Campbell, & L. Reinhardt (Eds.), Ontology, causality, and mind: Essays in honour of D.M. Armstrong (pp. 23–45). Cambridge University Press.
Lyreskog, D., Zohny, H., Savulescu, J., & Singh, I. (2023). Merging minds: The conceptual and ethical impacts of emerging technologies for collective minds. Neuroethics, 16, 12. https://doi.org/10.1007/s12152-023-09516-3
Mathews, F. (2003). For love of matter: A contemporary panpsychism. State University of New York Press.
Merricks, T. (1998). Against the doctrine of microphysical supervenience. Mind, 107, 59–71.
Miller, G. (2017). Can subjects be proper parts of subjects? The de-combination problem. Ratio, 30(2), 1–18.
Mørch, H. H. (2019). Is the integrated information theory of consciousness compatible with Russellian panpsychism? Erkenntnis, 84(5), 1065–1085.
Noonan, H. (2010). The thinking animal problem and personal pronoun revisionism. Analysis, 70(1), 93–98.
Nozick, R. (1974). Anarchy, state, and utopia. Basil Blackwell.
Oizumi, M., Albantakis, L., & Tononi, G. (2014). From the phenomenology to the mechanisms of consciousness: Integrated information theory 3.0. PLOS Computational Biology, 10(5), e1003588.
Olson, E. (2003). An argument for animalism. In R. Martin & J. Barresi (Eds.), Personal identity (pp. 318–335). Blackwell.
Parfit, D. (1984). Reasons and persons. Oxford University Press.
Pinto, Y., Neville, D., Otten, M., Corballis, P., Lamme, V., de Haan, E., Foschi, N., & Fabri, M. (2017). Split brain: Divided perception but undivided consciousness. Brain: A Journal of Neurology, 140(5), 1231–1237.
Rawls, J. (1971). A theory of justice. Harvard University Press.
Ridley, A. (2003). Tragedy. In J. Levinson (Ed.), The Oxford handbook of aesthetics (pp. 408–421). Oxford University Press.
Roelofs, L. (2016). The unity of consciousness, within and between subjects. Philosophical Studies, 173(12), 3199–3221.
Roelofs, L. (2019). Combining minds: How to think about composite subjectivity. Oxford University Press.
Roelofs, L. (2020). Can we sum subjects? Evaluating panpsychism’s hard problem. In W. Seager (Ed.), The Routledge handbook of panpsychism (pp. 245–258). Routledge.
Roelofs, L. (2022a). Dennettian panpsychism: Multiple drafts, all of them conscious. Acta Analytica, 37, 323–340.
Roelofs, L. (2022b). No such thing as too many minds. Australasian Journal of Philosophy, 102(1), 131–146.
Rozin, P., Guillot, L., Fincher, K., Rozin, A., & Tsukayama, E. (2013). Glad to be sad, and other examples of benign masochism. Judgment and Decision Making, 8(4), 439–447.
Schechter, E. (2015). The subject in neuropsychology: Individuating minds in the split-brain case. Mind and Language, 30(5), 501–525.
Schechter, E. (2018). Self-consciousness and “split” brains: The minds’ I. Oxford University Press.
Schwitzgebel, E. (2020). How robots and monsters might destroy human moral systems (pp. 101–105). MIT Press.
Seager, W. (1995). Consciousness, information and panpsychism. Journal of Consciousness Studies, 2(3), 272–288.
Seager, W. (2010). Panpsychism, aggregation and combinatorial infusion. Mind & Matter, 8(2), 167–184.
Sebo, J. (2015). The just soul. Journal of Value Inquiry, 49(1–2), 131–143.
Sebo, J. (2015b). Multiplicity, self-narrative, and akrasia. Philosophical Psychology, 28(4), 589–605.
Sider, T. (2003). Maximality and microphysical supervenience. Philosophy and Phenomenological Research, 66, 139–149.
Simon, J. (2017). The hard problem of the many. Philosophical Perspectives, 31(1), 449–468.
Slote, M. (1982). Goods and lives. Pacific Philosophical Quarterly, 63, 311–326.
Smuts, A. (2009). Art and negative affect. Philosophy Compass, 4(1), 39–55.
Sutton, C. (2014). The supervenience solution to the too-many-thinkers problem. Philosophical Quarterly, 64(257), 619–639.
Temkin, L. (2014). Rethinking the good–a small taste. Law, Ethics and Philosophy, 2, 77–78.
Unger, P. (1980). The problem of the many. Midwest Studies in Philosophy, 5(1), 411–468.
Unger, P. (1990). Identity, consciousness, and value. Oxford University Press.
Unger, P. (2004). Mental problems of the many. In D. W. Zimmerman (Ed.), Oxford studies in metaphysics (Vol. 1, pp. 195–222). Clarendon Press.
Velleman, J. D. (1991). Well-being and time. Pacific Philosophical Quarterly, 72(1), 48–77.
Vetlesen, A. (2019). Cosmologies of the anthropocene: Panpsychism, animism, and the limits of posthumanism. Taylor & Francis.
Vold, K. (2015). The parity argument for extended consciousness. Journal of Consciousness Studies, 22, 16–33.
Zimmerman, D. (2010). From property dualism to substance dualism. Aristotelian Society Supplementary Volume, 84(1), 119–150.
Acknowledgements
Thanks to Toni Sims for research and editorial assistance, to the organizers and attendees of the 10th Oxford Workshop on Global Priorities Research for helpful discussion, and to the reviewers and editors at Philosophical Studies for helpful feedback.
Funding
This paper was not the product of any funding source beyond the author’s regular employment at New York University and the University of Texas at Arlington.
Author information
Contributions
Both authors contributed equally to all parts of this paper.
Ethics declarations
Competing interests
The authors are not affiliated or supported by any organisation that stands to gain or lose by the publication of this paper, and have no other competing interests.
Ethical approval
This paper does not reflect research that requires ethics approval.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Roelofs, L., Sebo, J. Overlapping minds and the hedonic calculus. Philos Stud (2024). https://doi.org/10.1007/s11098-024-02167-x