According to the discontinuity view, people recognize a deep discontinuity between phenomenal and intentional states, such that they refrain from attributing feelings and experiences to entities that do not have the right kind of body, though they may attribute thoughts to entities that lack a biological body, like corporations, robots, and disembodied souls. We examine some of the research that has been used to motivate the discontinuity view. Specifically, we focus on experiments that examine people's aptness judgments for various mental state ascriptions to groups. These studies seem to reveal that people are more inclined to think of groups as having intentionality than as having phenomenology. Combined with the fact that groups obviously lack a single biological body, this has been taken as evidence that people operate according to the relevant discontinuity. However, these studies support discontinuity only on the assumption that the experimental participants are interpreting the relevant group mental state ascriptions in a specific way. We present evidence that people are not interpreting these ascriptions in a way that supports discontinuity. Instead, we argue that people generally interpret group mental state ascriptions distributively, as attributions of mental states to various group members.
Of course, it is controversial whether all intentional states are nonphenomenal. The thesis under consideration must maintain only that at least a core set of intentional states is nonphenomenal.
The discontinuity thesis implies a second claim about folk psychology. In claiming that cognition relies on different sets of physical features for the distinct kinds of mental state attributions, the discontinuity thesis implies the additional claim that ordinary people distinguish (perhaps tacitly) phenomenal from (nonphenomenal) intentional states. This second claim has been contested, both empirically and theoretically, elsewhere (Arico 2010; Sytsma and Machery 2009, 2010). However, in this paper we remain neutral on this issue.
One example of a continuity view can be found in Arico et al. (2011), which defends a cognitive model of mental state attribution dubbed ‘the AGENCY Model’. On that model, our everyday, intuitive attributions of intentional states and phenomenal states are both consequences of categorizing a thing as an ‘AGENT’.
We use the terms “ascription” and “attribution” in importantly different ways. For our purposes, an ascription is a sentence that assigns a state to an entity; an attribution is a mental act of assigning a state to an entity. The distinction thus separates assigning states to an entity at the linguistic level from doing so at the psychological level.
Both Arico (2010) and Sytsma and Machery (2009) challenge Knobe and Prinz’s conclusions on the grounds that their stimuli fail to use minimal pairs. Both also present data suggesting that the difference between ratings for intentional and phenomenal attributions observed by Knobe and Prinz essentially vanishes once the stimuli are balanced to include matching amounts of information.
In one notable exception, Knobe and Prinz support their claim using results from the Google search engine. Knobe and Prinz entered intentional attributions to Microsoft (e.g., “Microsoft intends” and “Microsoft believes”) and phenomenal attributions to Microsoft (e.g., “Microsoft feels depressed” and “Microsoft experiences joy”) into Google. Google returned significantly more results for intentional attributions than for phenomenal attributions, which Knobe and Prinz cite as evidence of a cognitive resistance to attributing phenomenal states to groups. However, in an unpublished manuscript, Arico argues that these results are evidence, not of a cognitive resistance to group phenomenal states, but of a general imbalance of online language use. He reports an identical asymmetry in Google results for intentional and phenomenal attributions to individuals (e.g., President Bush and Bill Gates), as well as an asymmetry in results for intentional and phenomenal language in general (i.e., without any subject).
Here, we adopt (what Bach 2001 calls) Grice’s Syntactic Correlation Constraint and equate what is said with the lexical-compositional meaning of the pronounced words. Perhaps what the sentence, “Acme Corp. believes that its profit margin will soon increase,” says is not the realist proposition that Acme Corp., over and above its members, believes that its profit margin will soon increase. Ultimately, the semantic content of this sentence (and related sentences) is to be determined by the semanticist. Nonetheless, we grant this assumption to the discontinuity theorist, with some support from the semantics literature. Link (1984) and Landman (1989a, b) discuss in detail why we should introduce groups, understood as “plural individuals that are distinct from sums of singular individuals” (Landman 1989a, p. 572), into our semantics. For one simple argument, suppose that Acme Corporation consists entirely of Biff, Max, and Sal, all of whom are masters of finance. Then we could say, “Biff, Max, and Sal are masters of finance.” And if group names referred only to sums of singular individuals, we should also be able to say, “Acme Corporation are masters of finance.” But (as Landman notes) constructions such as this seem awkward and perhaps ungrammatical. We want, instead, to say, “Acme Corporation consists of masters of finance.” But we can do that only if we assume that “Acme Corp.” refers to “an individual that does not stand in the part of relation to the sum of the…[masters of finance]…but in a different, consists of, relation” (Landman 1989a, p. 572).
These are the two possibilities endorsed in the literature. For instance, some philosophers offer ordinary language arguments that take everyday sentences as literally attributing mental states to groups. Margaret Gilbert (1996, p. 268), for instance, has argued that our everyday concept of social groups takes them to be “a special kind of thing, a ‘synthesis sui generis’,” capable of their own subjective states, of holding beliefs that none of the members hold themselves. Deborah Tollefsen (2002) defends a similar view. Others, however, argue that attributions of collective intentionality ought to be analyzed in terms of the mental states of the groups’ individual members. John Searle (1990, 1995), for example, denies that our metaphysics allows for any consciousness beyond individual consciousness, and so all talk about intentionality, including collective intentionality, must be based in an analysis of individuals as the bearers of intentionality. Michael Bratman (1993, 1999) argues that shared intentions are complexes of individual intentions and plans, all interrelated to each other.
Versions of all test materials are available as Electronic supplementary material.
Results for pairwise comparisons were subjected to t tests. For intentional vs. non-mental: t(40) = 2.36, p = 0.023 (two-tailed), SD (intentional) = 1.43, SD (non-mental) = 1.23, Cohen’s d = 0.75.
t(40) = 3.7, p < 0.001 (two-tailed), SD (phenomenal) = 0.967, Cohen’s d = 1.17.
t(40) = 0.777, p = 0.422 (two-tailed), Cohen’s d = 0.24.
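The paired-comparison statistics in these notes (a paired-sample t and Cohen’s d) can be illustrated with a short sketch. This is not the authors’ analysis script: the rating vectors below are invented placeholders, and d is computed here from the standard deviation of the difference scores, which may differ from the convention used in the paper.

```python
import math

def paired_t_and_d(xs, ys):
    """Return (t, Cohen's d) for a paired comparison of two rating lists."""
    n = len(xs)
    diffs = [x - y for x, y in zip(xs, ys)]
    mean_d = sum(diffs) / n
    # Sample standard deviation of the difference scores (n - 1 denominator).
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    t = mean_d / (sd_d / math.sqrt(n))   # paired-sample t statistic
    d = mean_d / sd_d                    # effect size from difference scores
    return t, d

# Invented example ratings on a 1-5 scale for two ascription types.
intentional = [4, 5, 3, 4, 5, 4, 3, 5]
non_mental = [3, 4, 3, 3, 4, 3, 2, 4]
t, d = paired_t_and_d(intentional, non_mental)
```

The p value would then be looked up from the t distribution with n − 1 degrees of freedom.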
An anonymous reviewer raises an important worry for our stimuli: “the distributive question is…potentially affected by whether the group is simply a collective or a more complexly organized unit.” For example, Boeing is widely known to be a multinational corporation, with complex operations, and a variety of employees with vastly different positions ranging from CEO to salesperson to engineer to janitor. The secret task force, on the other hand, is presumably a simple group consisting of only a few members relatively equal to one another in power and influence. It is unreasonable to simply assume that these vastly different kinds of groups will all be thought of in the same way. Perhaps the less complexly organized groups are driving the distributivism effect. To respond to this worry, we ran a series of paired sample t tests comparing responses for each group to every other group across each of our three conditions. Three significant effects emerged. Distributivism ratings for Phi Lambda (M = 0.75) were significantly higher than for Boeing (M = 0.5, p = 0.021) in the intentional state condition. Distributivism ratings for Phi Lambda (M = 0.85) were also significantly higher than for Boeing (M = 0.55, p = 0.03) in the phenomenal state condition. However, distributivism ratings for Phi Lambda were significantly higher than for the Secret Task Force (M = 0.55, p = 0.03) in the phenomenal state condition. The difference between MADD (M = 0.8) and Boeing was also approaching significance (p = 0.056) in the phenomenal state condition. These results do not clearly support the contention that mental states are more realistically ascribed to larger, more complex groups than to smaller ones. After all, Phi Lambda is a national fraternity, and even a local chapter is presumably much more complex and hierarchical than a task force. MADD is a national organization, if not as complex as Boeing. 
Nonetheless, the relatively low distributivism ratings for Boeing do point to the need for careful future work on this question.
The median score here is 2.5 because there were an even number of subjects, and the two scores in the middle of the range were 2 and 3.
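For an even number of scores, the median is the mean of the two middle values, which is why it can fall between two observed scores. A trivial illustration (the scores are invented):

```python
def median(scores):
    """Middle value of a sorted list; mean of the two middle values if even."""
    s = sorted(scores)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# An even number of subjects whose two middle scores are 2 and 3
# yields a median of 2.5, as in the text.
print(median([1, 2, 2, 3, 4, 4]))  # → 2.5
```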
t(53) = 6.743, p < 0.0001, SD (phenomenal) = 1.198, SD (non-mental) = 0.92, Cohen’s d = 1.84.
t(51) = 4.706, p < 0.0001, SD (intentional) = 1.44, Cohen’s d = 1.3.
t(54) = 1.116, p = 0.269, Cohen’s d = 0.29.
There are a variety of ways a state may come to be saliently associated with a particular role: for example, being in the state may be morally normative for the role, statistically normative for it, or part of its stereotype. Furthermore, salient associations are relative to individual beliefs (though we expect high levels of commonality in such beliefs, as with prototypicality judgments) and context sensitive.
In point of fact, this is an oversimplification, since salient association clearly comes in degrees.
Means and standard deviations for non-mental: Not saliently associated (M = 2.94, SD = 1.13), saliently associated (M = 3.55, SD = 1.13). Means and standard deviations for intentional: Not saliently associated (M = 3.26, SD = 1.24), saliently associated (M = 3.96, SD = 0.8). Means and standard deviations for phenomenal: Not saliently associated (M = 3.63, SD = 1.18), saliently associated (M = 4.24, SD = 0.87).
F(1, 212) = 19.535, p < 0.001.
F(2, 212) = 7.51, p = 0.001.
F(2, 212) = 0.046, p = 0.955.
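The F ratios above come from a 2 (salient association) × 3 (state type) between-subjects design. As a point of reference, here is a self-contained sketch of how such F ratios are computed in the balanced case; the cell data are invented, the study’s own cells need not be balanced, and standard statistical software would normally handle the unbalanced case.

```python
def twoway_anova(cells):
    """F ratios for a balanced two-way ANOVA.

    cells[i][j] holds the scores for level i of factor A and level j of
    factor B; every cell must contain the same number of scores.
    Returns (F_A, F_B, F_interaction).
    """
    a, b = len(cells), len(cells[0])
    n = len(cells[0][0])
    grand = sum(x for row in cells for cell in row for x in cell) / (a * b * n)
    cell_m = [[sum(cell) / n for cell in row] for row in cells]
    a_m = [sum(row) / b for row in cell_m]  # factor A marginal means
    b_m = [sum(cell_m[i][j] for i in range(a)) / a for j in range(b)]

    ss_a = n * b * sum((m - grand) ** 2 for m in a_m)
    ss_b = n * a * sum((m - grand) ** 2 for m in b_m)
    ss_ab = n * sum((cell_m[i][j] - a_m[i] - b_m[j] + grand) ** 2
                    for i in range(a) for j in range(b))
    ss_w = sum((x - cell_m[i][j]) ** 2
               for i in range(a) for j in range(b) for x in cells[i][j])

    ms_w = ss_w / (a * b * (n - 1))  # error mean square
    return (ss_a / (a - 1) / ms_w,
            ss_b / (b - 1) / ms_w,
            ss_ab / ((a - 1) * (b - 1)) / ms_w)

# Invented 2 x 3 design, two scores per cell, with purely additive effects
# (so the interaction F is zero by construction).
cells = [[[4, 6], [6, 8], [8, 10]],
         [[10, 12], [12, 14], [14, 16]]]
f_a, f_b, f_ab = twoway_anova(cells)
```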
See Electronic supplementary material for detailed statistics.
Obviously, we also need an account of how many and what members of a group must be thought to possess a state for it to be appropriate to ascribe the state to the group. We set such considerations aside for future work.
Participants were first screened for their ability to distinguish literal and figurative uses of terms by having them assess obviously figurative and literal sentences such as “Einstein was an egghead” and “George W. Bush is President of the United States” (which was true at the time of the study). Participants were then presented with a series of sentences attributing different mental states to groups, such as “Some corporations want tax cuts,” and asked to rate these on a scale of literalness.
As in Section 1 above (see footnote 8), here, we concessively assume that the relevant sentences actually ascribe mental states to groups over and above their members. Again, this is a matter to be settled by the semanticist.
Of course, it may be that “collapse” encodes a distinct lexical use that encompasses what happened to the stock market. But in that case, the point still stands: where there is semantic ambiguity, we have to be careful about which lexical use is in play when drawing conclusions about what is thought from what is said.
Although we have characterized group mental state ascriptions as instances of loose use in the body of this paper, officially, we want to remain noncommittal as to whether such talk constitutes loose use or merely contributes to implicature calculation.
Arico, A. (2010). Folk psychology, consciousness, and context effects. Review of Philosophy and Psychology, 1(3), 371–393.
Arico, A. (2012). Breaking out of moral typecasting. Review of Philosophy and Psychology, in press.
Arico, A., Fiala, B., Goldberg, R., & Nichols, S. (2011). Folk psychology of consciousness. Mind and Language, 26(3), 327–352.
Bach, K. (2001). You don't say? Synthese, 128, 15–44.
Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227–287.
Bloom, P., & Veres, C. (1999). The perceived intentionality of groups. Cognition, 71, B1–B9.
Bratman, M. (1993). Shared intention. Ethics, 104, 97–113.
Bratman, M. (1999). Faces of intention. Cambridge: Cambridge University Press.
Cullen, S. (2009). Survey-driven romanticism. Review of Philosophy and Psychology, 1, 275–296.
Fiala, B., Arico, A., & Nichols, S. (2011). On the psychological origins of dualism: dual-process cognition and the explanatory gap. In E. Slingerland & M. Collard (Eds.), Creating consilience: issues and case studies in the integration of the sciences and humanities (pp. 88–109). Oxford: Oxford University Press.
Gilbert, M. (1996). On social facts. Princeton: Princeton University Press.
Gray, K., & Wegner, D. M. (2009). Moral typecasting: divergent perceptions of moral agents and moral patients. Journal of Personality and Social Psychology, 96, 505–520.
Gray, H., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315, 619.
Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23(2), 101–124.
Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. American Journal of Psychology, 57, 243–259.
Horgan, T., & Tienson, J. (2002). The intentionality of phenomenology and the phenomenology of intentionality. In D. Chalmers (Ed.), Philosophy of mind: classical and contemporary readings (pp. 520–533). Oxford: Oxford University Press.
Huebner, B. (2008). Distributing cognition: A defense of collective mentality. Ph.D. dissertation. Chapel Hill: University of North Carolina.
Huebner, B., Bruno, M., & Sarkissian, H. (2010). What does the nation of China think about phenomenal states? Review of Philosophy and Psychology, 1, 225–243.
Knobe, J., & Prinz, J. (2008). Intuitions about consciousness: experimental studies. Phenomenology and the Cognitive Sciences, 7, 67–83.
Kriegel, U. (2003). Is intentionality dependent upon consciousness? Philosophical Studies, 116(3), 271–307.
Landman, F. (1989a). Groups, I. Linguistics and Philosophy, 12(5), 559–605.
Landman, F. (1989b). Groups, II. Linguistics and Philosophy, 12(6), 723–744.
Link, G. (1984). Hydras. On the logic of relative clause constructions with multiple heads. In F. Landman & F. Veltmann (Eds.), Varieties of formal semantics. Dordrecht: Foris.
Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83(4), 435–450.
Pettit, P. (2003). Groups with minds of their own. In F. Schmitt (Ed.), Socializing metaphysics (pp. 167–193). New York: Rowman and Littlefield.
Phelan, M. (2010). The inadequacy of paraphrase is the dogma of metaphor. Pacific Philosophical Quarterly, 91, 481–506.
Robbins, P., & Jack, A. (2006). The phenomenal stance. Philosophical Studies, 127(1), 59–85.
Rosenthal, D. (1997). A theory of consciousness. In N. Block, O. Flanagan, & G. Güzeldere (Eds.), The nature of consciousness (pp. 729–753). Cambridge: MIT Press.
Schwarz, N. (1996). Cognition and communication: judgmental biases, research methods, and the logic of conversation. Mahwah: Erlbaum.
Searle, J. (1990). Collective intentions and actions. In P. Cohen, J. Morgan, & M. E. Pollack (Eds.), Intentions in communication. Cambridge: MIT Press.
Searle, J. (1995). The construction of social reality. New York: Free Press.
Strawson, G. (2005). Real intentionality v. 2: why intentionality entails consciousness. Synthesis Philosophica, 2(40), 279–297.
Sytsma, J., & Machery, E. (2009). How to study folk intuitions about consciousness. Philosophical Psychology, 22(1), 21–35.
Sytsma, J., & Machery, E. (2010). Two conceptions of subjective experience. Philosophical Studies, 151(2), 299–327.
Tollefsen, D. (2002). Organizations as true believers. Journal of Social Philosophy, 33, 395–410.
Velleman, D. (1997). How to share an intention. Philosophy and Phenomenological Research, 57, 29–50.
Portions of this paper were presented at Brown University’s Social Cognitive Science Research Center, the European Workshop on Experimental Philosophy at Eindhoven, Netherlands, the London School of Economics’ Philosophy Department, the Metro Experimental Research Group, the Southern Society for Philosophy and Psychology, Yale University’s Experimental Philosophy Lab, Yale University’s Mind and Development Lab, and the University of Arizona’s Experimental Philosophy Lab. Audience comments helped improve this paper. The authors are also grateful to Michael Bruno, Ben Chan, Georg Kjøll, Joshua Knobe, Eric Mandelbaum, Justin Sytsma, Zoltán Gendler Szabó, Jonathan Weinberg, and three blind referees for this journal who gave valuable comments on earlier drafts.
Phelan, M., Arico, A., & Nichols, S. (2013). Thinking things and feeling things: on an alleged discontinuity in folk metaphysics of mind. Phenomenology and the Cognitive Sciences, 12, 703–725. https://doi.org/10.1007/s11097-012-9278-7
- Phenomenal consciousness
- Collective intentionality
- Linguistic pragmatics
- Experimental philosophy
- Philosophy of sociology
- Group minds