
Commonsense concepts of phenomenal consciousness: Does anyone care about functional zombies?

Phenomenology and the Cognitive Sciences

Abstract

It would be a mistake to deny commonsense intuitions a role in developing a theory of consciousness. However, philosophers have traditionally failed to probe commonsense in a way that allows these intuitions to make a robust contribution to a theory of consciousness. In this paper, I report the results of two experiments on ascriptions of purportedly phenomenal states, and I argue that many disputes over the philosophical notion of ‘phenomenal consciousness’ are misguided—they fail to capture the interesting connection between commonsense ascriptions of pain and emotion. With these data in hand, I argue that our capacity to distinguish between ‘mere things’ and ‘subjects of moral concern’ rests, to a significant extent, on the sorts of mental states that we take a system to have.

Notes

  1. The experimental philosophy of consciousness is currently blossoming (see, for example, Arico 2007; Arico et al. submitted; Gray et al. 2007; Haslam et al. 2008; Huebner 2009; Knobe and Prinz 2008; Robbins and Jack 2006; Sytsma and Machery, submitted, 2009).

  2. Searle (1992) argues that this view is closest to commonsense.

  3. When I presented a version of this paper at the Society for Philosophy and Psychology, it quickly became clear that many philosophers of mind object to the claim that their views rely on mere intuition. I found this surprising, since a quick look through the most important papers in the philosophy of mind reveals a pervasive appeal to intuition. Lewis (1980) motivates his analytic functionalism with the claim that he has a deep intuition that both mad pain and Martian pain are possible; similarly, in developing his homuncular functionalism, Lycan (1987) claims that the ascription of mental states to a tinfoil man controlled by MIT scientists is “absurd, because ‘he’ is a mere mock-up, largely empty inside”. Searle’s (1980) neuronal view is motivated by the claim that the Chinese room could not understand Chinese. Analogously, Block (1978) claims that there is a prima facie doubt that there is anything that it is like to be the nation of China, so a functionalist theory of the mind that allows for such a system must be mistaken. Chalmers (1996) argues that a philosophical theory that takes consciousness seriously must accommodate the conceivability of zombies; and Frank Jackson (1982) claims that it is “inescapable” that the color-deprived Mary had incomplete knowledge before she left her black-and-white room. Such appeals pervade the philosophy of mind, and this should be enough to convince skeptical readers that many philosophers of mind rely on intuition to motivate their positions.

  4. Chalmers (1996) claims that developing an adequate account of phenomenal consciousness is the most pressing problem in the philosophy of mind. However, he offers little more than a list of paradigmatic experiences that includes sensory experience, pain, emotion, and mental imagery.

  5. http://www.wjh.harvard.edu/~wegner/pdfs/Gray_et_al._(2007)_supportive_online_material.pdf

  6. Sytsma and Machery (unpublished data) found that non-philosophers tend to say that a simple robot can ‘see red’, but philosophers find such ascriptions odd. However, the responses of non-philosophers are not “bimodal as would be expected if they found ‘seeing red’ to be ambiguous between distinctive behavioral and phenomenal readings” (Sytsma, personal communication). So, my worry may be misplaced. Whether there are important differences between ascriptions of sensory states and ascriptions of pain and happiness remains a question for further empirical investigation.

  7. A 4 (entity) × 2 (state) mixed-model ANOVA revealed that participants’ judgments differed significantly as a result of the type of entity being considered, F(3, 91) = 7.24, p < .001, ηp² = .193. There was also a significant difference between the acceptability of ascribing beliefs and pains, F(1, 91) = 11.03, p = .001, ηp² = .108, but no significant interaction between entity and mental state, F(3, 91) = 1.93, p = .130, ηp² = .060. Planned comparisons using univariate ANOVAs failed to reveal any significant difference between the ascriptions of beliefs to the various entities, F(3, 94) = 2.30, p = .083; in fact, Bonferroni-corrected post-hoc tests revealed only one marginally significant difference (Human-Brain vs. Robot-CPU, p = .086), with all remaining comparisons failing to reach significance, p > .40. However, the analogous ANOVA revealed a significant difference between the various entities insofar as pain was concerned, F(3, 94) = 8.755, p < .001; and Bonferroni-corrected post-hoc tests revealed significant differences between the Human with a Human Brain and every other entity (Human-CPU, p = .014; Robot-Brain, p = .027; Robot-CPU, p < .001), with all remaining comparisons failing to reach significance, p > .15. (A worked check of these effect sizes appears after these notes.)

  8. Despite long-standing worries about null hypothesis significance testing (Cohen 1994; Dixon 2003; Gigerenzer 2002, 2004), psychologists and experimental philosophers tend to focus almost exclusively on whether their p-value reaches the critical value of p = .05. Moreover, it is typically just assumed that Likert scales should be treated as interval data and analyzed using an ANOVA. Such statistical methodologies often mask important differences in the data, suggesting to readers and experimenters alike that the lack of a significant difference implies the lack of an important difference. Although a mean response of 3 on a 5-point scale could be the result of every participant being ambivalent about their response, it could also be the result of some participants offering 5s and others offering 1s. Since my concern in this paper is with the strategies we adopt in ascribing mental states, these data call for a further analysis that is both more intuitive and that makes the structure of the data more transparent. I therefore examined the relative frequency of affirmative responses (either ‘agree’ or ‘strongly agree’), as compared to negative (‘disagree’ or ‘strongly disagree’) and ambivalent responses (‘neither agree nor disagree’), to the questions that I presented. This collapses the data from a 5-point scale to a 3-point scale for ease of presentation, but it preserves the relevant distinctions for answering questions such as “How frequently did participants agree with the sentence ‘David believes that 2 + 2 = 4’?” (A sketch of this collapsing procedure appears after these notes.)

  9. Robot-CPU, χ²(2, N = 25) = 15.44, p < .001; Robot-Brain, χ²(2, N = 23) = 10.78, p < .01; Human-CPU, χ²(2, N = 25) = 10.64, p < .01; Human-Brain, χ²(2, N = 22) = 17.82, p < .001.

  10. Robot-CPU, χ²(2, N = 25) = 5.84, p = .059; Robot-Brain, χ²(2, N = 23) = .35, p = .84; Human-CPU, χ²(2, N = 25) = 1.52, p = .47; Human-Brain, χ²(2, N = 22) = 24.36, p < .001.

  11. Sytsma (personal communication) asked participants to justify their claim that a simple robot could not experience pain. His participants tended to appeal to the physical features of the entity: being “metal, mechanical, non-biological, and so on”.

  12. Note the frequency of responses for these two cases: Human-CPU (Agree = 11; Neither agree nor disagree = 6; Disagree = 8); Robot-Brain (Agree = 9; Neither agree nor disagree = 7; Disagree = 7).

  13. I assumed that ‘feels’ would disambiguate phenomenal and non-phenomenal states because Knobe and Prinz (2008) found that people were willing to ascribe emotions-sans-‘feels’ to a corporation, but unwilling to ascribe the same states when the ‘feels’ locution was included. Sytsma and Machery (submitted), however, found no significant difference in judgments about the acceptability of the sentences ‘Microsoft is feeling depressed’ and ‘Microsoft is depressed’. Sytsma and Machery (unpublished data) report a similar finding for ‘anger’. However, as the inclusion of ‘feels’ was merely a safeguard, this in no way impugns my results.

  14. A 4 (entity) × 2 (state) mixed-model ANOVA revealed that participants’ judgments differed significantly as a result of the type of entity being considered, F(3, 105) = 3.05, p = .032, ηp² = .080, though the size of this effect was quite small. This analysis also revealed a significant difference between the acceptability of ascribing beliefs and emotions to the various entities, F(1, 105) = 19.82, p < .001, ηp² = .159; however, there was no significant interaction between entity and mental state, F(3, 105) = 1.76, p = .159, ηp² = .048. Planned comparisons using univariate ANOVAs failed to reveal any significant difference between the ascriptions of beliefs to the various entities, F(3, 105) = .423, p = .737; in fact, Bonferroni post-hoc tests revealed no significant difference for any comparison, all p = 1.00. However, the analogous ANOVA revealed a statistically significant difference between the ascription of emotions to the various entities, F(3, 105) = 4.51, p = .005; Bonferroni post-hoc tests revealed significant differences between the Human with a Human Brain and both the Human with a CPU (p = .041) and the Robot with a CPU (p = .004), but all other comparisons failed to reach significance, p > .299. (A sketch of this kind of mixed-model analysis appears after these notes.)

  15. Robot-CPU, χ²(2, N = 28) = 22.36, p < .0001; Robot-Brain, χ²(2, N = 26) = 19, p < .0001; Human-CPU, χ²(2, N = 28) = 18.29, p < .0001; Human-Brain, χ²(2, N = 27) = 28.22, p < .0001.

  16. Robot-CPU, χ²(2, N = 28) = 1.79, p = .409; Robot-Brain, χ²(2, N = 26) = 7.46, p = .024; Human-CPU, χ²(2, N = 28) = 1.36, p = .507; Human-Brain, χ²(2, N = 29) = 26.14, p < .001.

  17. An anonymous referee has argued that this claim outstrips the data reported in this paper. Demonstrating that commonsense psychology is functionalist regarding belief would require that ascriptions of belief were acceptable wherever an entity was computationally organized in the right way. Fortunately, a number of recent studies (Arico 2007; Arico et al. submitted; Gray et al. 2007; Haslam et al. 2008; Huebner et al. 2009; Knobe and Prinz 2008; Sytsma and Machery, submitted) demonstrate that beliefs are ascribed to a wide variety of entities, including humans, non-human animals, robots, supernatural entities, and groups. On the basis of these data, I feel warranted in the stronger claim—though future data could show that there are contexts in which people do place biological constraints on cognition.

  18. This is not to claim, in an ontological tone of voice, that there are no biological constraints on cognition; these data cannot warrant such claims. I merely wish to note that reliance on thought experiments to establish this conclusion may fail to make parallels in functional organization sufficiently clear. This is clearly the case with Searle’s aggregations of beer cans, networks of water pipes, and Chinese rooms. My suggestion is merely that the intuitive pull of such thought experiments reflects little more than a failure of imagination (Dennett 1988), a point to which I return below.

  19. Adam Arico and his colleagues (unpublished data) asked participants to categorize clearly figurative sentences (e.g., “Einstein was an egghead”) and clearly literal sentences (e.g., “Carpenters build houses”), as well as sentences attributing different mental states to individuals (e.g., “Some millionaires want tax cuts”) and groups (e.g., “Some corporations want tax cuts”). These sentences were rated on a 7-point scale (1 = ‘Figuratively True’, 7 = ‘Literally True’), and judgments regarding different types of mental states were compared. Arico et al. found that participants tended to treat the ascription of non-phenomenal mental states to collectivities as ‘literally true’.

  20. I leave the defense of experimental philosophy as heterophenomenology for another paper.

  21. However, as Griffin and Baron-Cohen (2002) note: “While the vast majority of six-year-olds cannot take the intentional stance on the Republican party or the Roman Catholic church, they do pretty well with people, so we can expect the core mechanisms to be in place, with success on these other entities dependent largely on experience.” There is much to say about the development of this capacity for ascribing beliefs; however, in this paper I can only note that the mature understanding of beliefs is highly promiscuous.

  22. In collaboration with Jesse Prinz (unpublished data), I examined the relationship between judgments about subjective experience and moral concern. Participants read a brief scenario in which a scientist turned off an android and erased his program without asking permission, and they were asked about the moral status of the action (1 = completely permissible; 7 = morally forbidden). In one condition, the android was described as having the capacity to feel pain; in the other, he was described as having the capacity for emotional experiences of various sorts. In both conditions, the android was described as having the appearance and behavior of an ordinary human. In the pain condition, the android was described as feeling pain in response to various sorts of physical damage; in the emotion condition, the android was described as having hopes, fears, and other emotional feelings. Surprisingly, there was no statistically significant difference in participants’ judgments about the permissibility of shutting down either android (Emotion: M = 3.21, SD = 1.96; Pain: M = 3.10, SD = 1.72), t(87) = .30, p = .7654, d = .06, r² = .001—just one-tenth of one percent of the variance can be explained by the difference between pain and emotion. Perhaps this offers further confirmation of the claim that the affective component of pain and emotion is important to determining whether an entity should be treated as a subject of moral concern. (The arithmetic behind these effect-size estimates is spelled out after these notes.)

  23. The evidence comes from comparative judgments offered in response to the question, “If you were forced to harm one of these characters, which one would it be more painful for you to harm?” Gray and her colleagues found that these responses were highly correlated with their experience dimension (r = 0.85), but only weakly correlated with agency (r = 0.26).

  24. An anonymous reviewer worries that the term ‘personhood’ is ill suited for the work I put it to. After all, some of these mental states may be shared, at least to some degree, by non-human animals. I retain the term because of its prominence in moral philosophy, and because of the distinction between the intentional stance and the personal stance (Dennett 1978b). However, I concede that ‘personhood’ is less than ideal.

  25. Thanks to Robert Briscoe for this intriguing analogy.

  26. Replicants are not easily distinguished from ordinary humans. So, the Blade Runners use a machine that measures low-level physiological responses to various scenarios to determine whether an entity is human or a mere machine.

  27. This is clearest in a conversation between the Blade Runner, Rick Deckard, and a replicant named Leon, whom he is trying to retire (Mulhall 1994): Leon: ‘My birthday is April 10th, 2017. How long do I live?’; Deckard: ‘Four years.’; Leon: ‘More than you. Painful to live in fear, isn’t it? Nothing is worse than having an itch you can’t scratch.’

  28. The closing scene in the Final Cut version of the film (released in 2007) provides evidence that Deckard is a replicant, complete with fabricated memories and implanted daydreams about unicorns. If Deckard is a replicant, he must be retired. But for a viewer who has gone through the emotional highs and lows of this film and felt genuine compassion for Deckard, the thought that he too could be ‘retired’ seems beyond the pale of moral acceptability.
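
A worked check on the effect sizes reported in notes 7 and 14 (my arithmetic, not a calculation reported in the paper): for any single effect in an ANOVA, partial eta-squared can be recovered from the F statistic and its degrees of freedom, and this identity reproduces the reported values.

    \eta^2_p = \frac{F \cdot df_{\mathrm{effect}}}{F \cdot df_{\mathrm{effect}} + df_{\mathrm{error}}}
    \qquad\Longrightarrow\qquad
    \frac{7.24 \times 3}{7.24 \times 3 + 91} \approx .193, \quad
    \frac{11.03 \times 1}{11.03 \times 1 + 91} \approx .108, \quad
    \frac{3.05 \times 3}{3.05 \times 3 + 105} \approx .080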
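
The 4 × 2 mixed-model ANOVAs described in notes 7 and 14 could, in outline, be run as follows. This is a minimal sketch rather than the author’s analysis script: the data frame, its column names, and the use of the pingouin package’s mixed_anova function are assumptions made for illustration.

    import pandas as pd
    import pingouin as pg  # assumed to be installed; provides mixed_anova

    # Hypothetical long-format data: each participant (between-subjects factor:
    # entity) rates both a belief and an emotion ascription (within-subjects
    # factor: state) on a 1-5 Likert scale.
    df = pd.DataFrame({
        "subject": [s for s in range(1, 13) for _ in range(2)],
        "entity": [e for e in ["Human-Brain", "Human-CPU", "Robot-Brain", "Robot-CPU"]
                   for _ in range(6)],
        "state": ["belief", "emotion"] * 12,
        "rating": [5, 5, 4, 4, 5, 5,    # Human-Brain
                   4, 2, 5, 3, 4, 2,    # Human-CPU
                   4, 3, 5, 2, 3, 3,    # Robot-Brain
                   4, 2, 3, 1, 4, 2],   # Robot-CPU
    })

    # One between-subjects factor (entity) and one within-subjects factor (state);
    # pingouin reports F, uncorrected p, and partial eta-squared (np2) per effect.
    aov = pg.mixed_anova(data=df, dv="rating", within="state",
                         subject="subject", between="entity")
    print(aov.round(3))

Planned comparisons and Bonferroni-corrected post-hoc tests of the kind reported in the notes would then be run on the belief and emotion ratings separately.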
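
Notes 9–10 and 15–16 report chi-square statistics over the collapsed response categories described in note 8. The sketch below illustrates that collapsing step; it assumes (the paper does not say) that the reported values come from a goodness-of-fit test against equal expected frequencies, and the response vector is invented for illustration.

    from collections import Counter
    from scipy.stats import chisquare

    # Hypothetical 5-point Likert responses to one entity/state question
    # (1 = strongly disagree ... 5 = strongly agree).
    responses = [5, 4, 4, 5, 3, 1, 4, 5, 2, 4, 5, 4, 3,
                 5, 4, 4, 2, 5, 4, 5, 4, 3, 5, 4, 1]

    def collapse(rating):
        """Bin a 5-point rating into agree / neither / disagree."""
        if rating >= 4:
            return "agree"
        if rating == 3:
            return "neither"
        return "disagree"

    counts = Counter(collapse(r) for r in responses)
    observed = [counts["agree"], counts["neither"], counts["disagree"]]

    # Goodness-of-fit test against equal expected frequencies over the three
    # categories (df = 3 - 1 = 2).
    chi2, p = chisquare(observed)
    print(f"N = {sum(observed)}, observed = {observed}, "
          f"chi2(2) = {chi2:.2f}, p = {p:.4f}")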
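
The effect-size figures in note 22 follow directly from the reported t statistic and its degrees of freedom. Spelling out the arithmetic (my own, using the standard conversion formulas) confirms that the pain/emotion manipulation accounts for roughly one-tenth of one percent of the variance.

    r^2 = \frac{t^2}{t^2 + df} = \frac{0.30^2}{0.30^2 + 87} \approx .001,
    \qquad
    d = \frac{2t}{\sqrt{df}} = \frac{2 \times 0.30}{\sqrt{87}} \approx .06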

References

  • Arico, A. (2007). Should corporate consciousness be regulated? (Poster). Toronto, ON: Society for Philosophy and Psychology.

  • Arico, A., Fiala, B., Goldberg, R., & Nichols, S. (submitted). The folk psychology of consciousness.

  • Aydede, M. (2001). Naturalism, introspection, and direct realism about pain. Consciousness and Emotion, 2(1), 29–73.

  • Block, N. (1978). Troubles with functionalism. Minnesota Studies in the Philosophy of Science, 9, 261–325.

  • Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18, 227–246.

  • Bloom, P. (2005). Descartes’ baby. New York: Basic Books.

  • Chalmers, D. (1996). The conscious mind. Oxford: Oxford University Press.

  • Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997–1003.

  • Cummins, R. (1983). The nature of psychological explanation. Cambridge, MA: MIT Press.

  • Dennett, D. (1978a). Brainstorms. Cambridge, MA: Bradford Books.

  • Dennett, D. (1978b). Mechanism and responsibility. In Brainstorms. Cambridge, MA: MIT Press.

  • Dennett, D. (1987). The intentional stance. Cambridge, MA: Bradford Books.

  • Dennett, D. (1988). The unimagined preposterousness of zombies. In Brainchildren (pp. 171–179). Cambridge, MA: MIT Press.

  • Dennett, D. (1992). Consciousness explained. New York: Penguin.

  • Dixon, P. (2003). The p-value fallacy and how to avoid it. Canadian Journal of Experimental Psychology, 57, 189–202.

  • Fodor, J. (1981). RePresentations. Cambridge, MA: MIT Press.

  • Gergely, G., & Csibra, G. (2003). Teleological reasoning in infancy: the naive theory of rational action. Trends in Cognitive Sciences, 7, 287–292.

  • Gigerenzer, G. (2002). Adaptive thinking. Oxford: Oxford University Press.

  • Gigerenzer, G. (2004). Mindless statistics. Journal of Socio-Economics, 33, 587–606.

  • Goodman, N. (1955). Fact, fiction, and forecast. Cambridge, MA: Harvard University Press.

  • Goodman, N. (1972). Problems and projects. New York: Bobbs-Merrill.

  • Gray, H., Gray, K., & Wegner, D. (2007). Dimensions of mind perception. Science, 315, 619.

  • Griffin, R., & Baron-Cohen, S. (2002). The intentional stance: developmental and neurocognitive perspectives. In A. Brook & D. Ross (Eds.), Daniel Dennett (pp. 83–116). Cambridge: Cambridge University Press.

  • Haslam, N. (2006). Dehumanization: an integrative review. Personality and Social Psychology Review, 10(3), 252–264.

  • Haslam, N., Kashima, Y., Loughnan, S., Shi, J., & Suitner, C. (2008). Subhuman, inhuman, and superhuman: contrasting humans and nonhumans in three cultures. Social Cognition, 26, 248–258.

  • Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. American Journal of Psychology, 57, 243–259.

  • Huebner, B. (2009). Commonsense concepts of phenomenal consciousness: does anyone care about functional zombies? European Review of Philosophy.

  • Huebner, B., Bruno, M., & Sarkissian, H. (2009). What does the nation of China think of phenomenal states? European Review of Philosophy, XX(YY), ZZ.

  • Jackson, F. (1982). Epiphenomenal qualia. Philosophical Quarterly, 32, 127–136.

  • Johnson, S., Slaughter, V., & Carey, S. (1998). Whose gaze will infants follow? The elicitation of gaze following in 12-month-olds. Developmental Science, 1, 233–238.

  • Kauppinen, A. (2007). The rise and fall of experimental philosophy. Philosophical Explorations, 10, 119–122.

  • Knobe, J., & Prinz, J. (2008). Intuitions about consciousness: experimental studies. Phenomenology and the Cognitive Sciences, 7, 67–85.

  • Levy, D. (2007). Love and sex with robots. New York: Harper Collins.

  • Lewis, D. (1980). Mad pain and Martian pain. In N. Block (Ed.), Readings in the philosophy of psychology (Vol. 1, pp. 216–222). Cambridge, MA: Harvard University Press.

  • Lycan, W. (1987). Consciousness. Cambridge, MA: Bradford Books.

  • Mulhall, S. (1994). Picturing the human (body and soul). Film and Philosophy, 1, 87–100.

  • Rey, G. (1980). Functionalism and the emotions. In A. Rorty (Ed.), Explaining emotions (pp. 163–195). Berkeley: University of California Press.

  • Robbins, P. (2008). Consciousness and the social mind. Cognitive Systems Research, 9, 15–23.

  • Robbins, P., & Jack, A. (2006). The phenomenal stance. Philosophical Studies, 127, 59–85.

  • Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–457.

  • Searle, J. (1992). The rediscovery of the mind. Cambridge, MA: MIT Press.

  • Shimizu, Y., & Johnson, S. (2004). Infants’ attribution of a goal to a morphologically novel agent. Developmental Science, 7, 425–430.

  • Sytsma, J., & Machery, E. (submitted). Two conceptions of subjective experience.

  • Sytsma, J., & Machery, E. (2009). How to study folk intuitions about phenomenal consciousness. Philosophical Psychology.

  • Turkle, S. (2006). Relational artifacts with children and elders: the complexities of cybercompanionship. Connection Science, 18, 347–361.

  • Turkle, S., Breazeal, C., Dasté, O., & Scassellati, B. (2006a). First encounters with Kismet and Cog: children respond to relational artifacts. In P. Messaris & L. Humphreys (Eds.), Digital media: Transformations in human communication. New York: Peter Lang.

  • Turkle, S., Taggart, W., Kidd, C., & Dasté, O. (2006b). Relational artifacts with children and elders: the complexities of cybercompanionship. Connection Science, 18.

Acknowledgements

This paper has benefited greatly from conversations with Robert Briscoe, Jacek Brozozwski, Joshua Knobe, Jesse Prinz, Dylan Sabo, and Daniel Star. I am also grateful for helpful suggestions from Mike Bruno, Dan Dennett, Justin Junge, Bill Lycan, Eric Mandelbaum, Susanne Sreedhar, Daniel Stoljar, Justin Sytsma, Neil Van Leeuwen, and my audiences at the Boston University Colloquium in the Philosophy of Science and the Society for Philosophy and Psychology (2008).

Author information

Correspondence to Bryce Huebner.

Cite this article

Huebner, B. Commonsense concepts of phenomenal consciousness: Does anyone care about functional zombies? Phenom Cogn Sci 9, 133–155 (2010). https://doi.org/10.1007/s11097-009-9126-6
