
Content, Consciousness, and Cambridge Change

Acta Analytica

Abstract

Representationalism is widely thought to grease the skids of ontological reduction. If phenomenal character is just a certain sort of intentional content, representationalists argue, the hard problem of accommodating consciousness within a broadly naturalistic view of the world reduces to the much easier problem of accommodating intentionality. I argue, however, that there’s a fatal flaw in this reasoning, for if phenomenal character really is just a certain sort of intentional content, it’s not anything like the sort of intentional content described by our best naturalistic theories. These theories make intentional content a mere Cambridge property of intentional states, a property that can be gained or lost through changes to distinct and causally disconnected objects. But consciousness is manifestly not like this; consciousness cannot suffer a mere Cambridge change. Thus, whatever ground is gained by explaining the phenomenal in terms of the intentional is lost again by undermining our best attempts to explain the intentional in terms of the natural. A Pyrrhic victory at best.


Notes

  1. See Block (2003) for a compelling defense of this view.

  2. Cf. Maudlin (1989, p. 409): “If an active physical system supports a phenomenal state, how could the presence or absence of a causally disconnected object affect that state? How could the object enhance or impede or alter or destroy the phenomenal state except via some causal interaction?” Maudlin objects to computational accounts of consciousness on the grounds that they allow phenomenal states to undergo what I’m calling a mere Cambridge change. Antony (1994) and Bartlett (2014) make similar objections to functionalism. While I agree with the intuition behind these objections, I disagree with their conclusions, for reasons I’ll give below.

  3. I adopt Lewis and Langton’s (1998) account of intrinsicality, but alternative definitions shouldn’t affect any of the arguments made in this paper.

  4. A terminological note: extrinsic properties can be genuine because the distinction between genuine and mere Cambridge change is one of contraries while the distinction between genuine and mere Cambridge properties is one of contradictories. The former is true by definition—changes that occur through changes to distinct but causally connected objects are neither genuine nor mere Cambridge. The latter is true by stipulation—if a property can undergo a mere Cambridge change it is a mere Cambridge property; otherwise it is genuine. Perhaps this use of the honorific “genuine” is too generous, but nothing hinges on this. What matters is simply that phenomenal properties are not mere Cambridge properties. Moreover, by adopting the weaker usage, we are adopting a weaker version of the genuineness thesis, which only strengthens our argument.

  5. A further reason for framing our argument in terms of genuine properties is that the distinction between intrinsic and extrinsic properties is often conflated with the completely orthogonal distinction between categorical and dispositional properties, as O’Sullivan (2012) observes. Thus, a number of authors argue that what makes phenomenal properties so resistant to functional reduction is that they’re intrinsic. But this can’t be true, for, as will be discussed below, functional properties needn’t be extrinsic.

  6. See McKitrick (2003) for a discussion and defense of extrinsic dispositions. I agree with Shoemaker (1980, p. 221) that extrinsic dispositions (or “powers”, as he prefers) are not genuine dispositions, but this is irrelevant to the argument of this paper. Whether or not they count as genuine dispositions, it’s clear that a great many extrinsic dispositions are mere Cambridge properties.

  7. This example is from Robert Boyle by way of Shoemaker (1980, p. 221).

  8. There is some controversy concerning whether dispositions actually entail their associated counterfactuals, but this will prove irrelevant to what follows. The psychosemantic accounts considered below do entail counterfactuals, and that, rather than some more general claim about dispositions, is all I will require for my objections.

  9. See O’Sullivan (2012), where this point is made clearly. To be fair, this understanding of functionalism is not standard. But, as O’Sullivan argues, the standard interpretation is due at least in part to a failure to distinguish extrinsic from dispositional properties. See Shoemaker (2012) for an example of a functionalist who explicitly rejects the assumption that functional properties must be extrinsic.

  10. Or to certain intrinsic properties of whatever state plays the pain role, which is to say that intrinsic dispositions are available to role functionalists as well as realizer functionalists. For the former, to be in pain would be to be in some state that plays such-and-such a functional role in virtue of its intrinsic properties, whatever they may be. Note as well that intrinsic dispositions can be multiply realizable.

  11. Nor does the rejection of mere Cambridge dispositions rule out computational accounts of consciousness, pace Maudlin (1989). Maudlin simply assumes that the computational dispositions at issue must be extrinsic, and, as Klein (2008) argues, it’s possible to provide an intrinsic dispositional account of computational states, which (together with what Klein calls an “episodic” account of computational implementation) prohibits computational systems from suffering mere Cambridge changes. Again, it’s extrinsic dispositions–and, in particular, mere Cambridge dispositions–rather than dispositions per se that are the problem.

  12. This claim may seem to conflict with the claim that some dispositional properties are extrinsic, but it doesn’t. Events often come about through the joint action of different causes. Neither tranquilizers nor alcohol by themselves are fatal, for example, but when taken together they often are. When we say that a state’s causal powers supervene on its intrinsic properties, we mean to hold constant the intrinsic properties of these other causes. Thus, alcohol’s causal powers supervene on its intrinsic properties because if we were to hold constant the patient’s medical condition, the level of tranquilizers in his blood, and so on, and merely change the extrinsic properties of the alcohol—where it was purchased, for example—this would have no effect on the outcome. When describing an object or a state’s extrinsic dispositions, however, we presuppose no such context. Thus, alcohol’s propensity to induce fatality in such and such a context is among its intrinsic properties; its propensity to induce fatality tout court is extrinsic, for it depends on the presence of the appropriate context.

  13. Mere Cambridge change might affect a state’s joint causal powers—namely, those powers it has in conjunction with whatever states suffer a genuine change. If a suffers a genuine change, this may alter b’s capacity to bring about c in conjunction with a. But this one exception does little to blunt the force of Bartlett’s argument.

  14. See Bartlett (2014) for a more careful elaboration of this argument.

  15. My presentation of naturalistic theories of content follows Papineau’s (2006) overview of the literature. See also Rey (1997, Ch. 9).

  16. I’m using words in small caps to refer to concepts and words in italics to refer to the content of these concepts. Thus, the small-caps form of a word picks out the concept, and the italicized form its content.

  17. It’s possible–though completely ad hoc–to hold that all of a concept’s inferential effects are activated along with the concept. Perhaps, per implausibile, when we think that x is a dog, we infer that x is a canine, a mammal, an animal, a living thing–and so on, for each of its inferential effects. But it’s not even possible to hold this with respect to a concept’s inferential causes. How could a concept be caused by all of its possible causes?

  18. Here, of course, we have to read “mammal” and “dog” as referring to the states considered independently of their content–as referring to the syntactic rather than semantic properties of the states, as it were.

  19. But suppose that definientia are simply structural descriptions of definienda—that, e.g., vixen has the concepts female and fox as proper parts. And suppose that the content-constitutive inferences are simply those that obtain between a concept and its constituents. Then it really would be impossible to possess vixen without possessing fox or to possess vixen and fox without the two being inferentially related, wouldn’t it? Perhaps, but this can’t be the whole story, for some concepts must be simple, on pain of infinite regress, and simple concepts have no constituents to which they can be inferentially related. Thus, if one is to tell an inferentialist story about primitive concepts, the inferential relations will have to be relations between wholly distinct relata, and content will have to be extrinsic.

  20. If one is worried that adding a connection to a state will in some way affect the state itself, simply imagine removing a dormant state or connection instead, as described above. I focus on addition here and in the argument against informational semantics only for illustrative purposes. Subtraction changes content just as surely.

  21. This is similar to the “qua problem” described by Devitt and Sterelny (1999, pp. 90-93).

  22. See Fodor (1995) and Margolis (1998) for a discussion of sustaining mechanisms. Note that the sustaining mechanisms themselves don’t enter into the content of the concept—even when they are psychological. What determines content is simply that the right sort of dependency exists between concepts and objects—however this dependency is achieved. Mediation by means of circuitous and even bizarre links won’t affect content. As Fodor (1990a, p. 56) says, mediation doesn’t matter.

  23. One might feel that the states realizing our experience of red and blue are too intimately connected for changes to one to be independent of changes to the other. (I would argue that this feeling is without justification, since there’s no reason to believe this is so on the currently popular opponent process theory of color vision. Moreover, it’s irrelevant to an assessment of IS, since the theory itself makes no mention of the contingent features of our biology.) One can then simply change the example. Imagine all of the dormant connections to our dormant auditory, olfactory, and gustatory states being rerouted to red. Surely that would change the state’s informational content!

  24. Here I’m mainly following Rey’s (1997, pp. 243-249) account of the disjunction problem and the various proposed resolutions. Rey doesn’t explicitly mention what I’m calling additional constraint theories, but I believe these accounts are sufficiently distinctive to deserve special mention.

  25. Dretske also allows that there is a kind of ontogenetic selection involved in learning. But we can safely ignore this in what follows. It’s phylogenetic selection that gives rise to the sort of sensory content (experiences, sensations, feelings, etc.) typically associated with phenomenal experience. See Dretske (1995, Ch. 1) for discussion.

  26. See Mills and Beatty (1979) for an extended defense of this view.

  27. Of course, if this were to happen, we may not wish to say that S′ variants were being selected for, for this seems to imply some measure of success. No matter, for it’s clear at least that the S variants are not being selected for.

References

  • Antony, M. V. (1994). Against functionalist theories of consciousness. Mind and Language, 9(2), 105–123.

  • Bartlett, G. (2014). Against the necessity of functional roles for conscious experience: Reviving and revising a neglected argument. Journal of Consciousness Studies, 21, 33–53.

  • Block, N. (2003). Mental paint. In M. H. Hahn & B. Ramberg (Eds.), Reflections and replies: Essays on the philosophy of Tyler Burge (pp. 165–200). Cambridge, MA: MIT Press.

  • Devitt, M., & Sterelny, K. (1999). Language and reality. Cambridge, MA: MIT Press.

  • Dretske, F. (1981). Knowledge and the flow of information. Cambridge, MA: MIT Press.

  • Dretske, F. (1988). Explaining behavior. Cambridge, MA: MIT Press.

  • Dretske, F. (1995). Naturalizing the mind. Cambridge, MA: MIT Press.

  • Dretske, F. (2003). Experience as representation. Philosophical Issues, 13(1), 67–82.

  • Fodor, J. (1990a). A theory of content, II. In A theory of content and other essays (pp. 89–136). Cambridge, MA: MIT Press.

  • Fodor, J. (1990b). Psychosemantics, or, where do truth conditions come from? In W. G. Lycan (Ed.), Mind and cognition (pp. 312–337). Oxford: Blackwell.

  • Fodor, J. (1995). The elm and the expert. Cambridge, MA: MIT Press.

  • Fodor, J. (2008). LOT 2: The language of thought revisited. New York: Oxford University Press.

  • Geach, P. T. (1969). God and the soul. London: Routledge and Kegan Paul.

  • Kim, J. (1974). Noncausal connections. Noûs, 8(1), 41–52.

  • Klein, C. (2008). Dispositional implementation solves the superfluous structure problem. Synthese, 165(1), 141–153.

  • Lewis, D., & Langton, R. (1998). Defining ‘intrinsic’. Philosophy and Phenomenological Research, 58(2), 333–345.

  • Lycan, W. G. (1996). Consciousness and experience. Cambridge, MA: MIT Press.

  • Lycan, W. G. (2001). The case for phenomenal externalism. Philosophical Perspectives, 15, 17–35.

  • Margolis, E. (1998). How to acquire a concept. Mind and Language, 13(3), 347–369.

  • Maudlin, T. (1989). Computation and consciousness. Journal of Philosophy, 86, 407–432.

  • McKitrick, J. (2003). A case for extrinsic dispositions. Australasian Journal of Philosophy, 81(2), 155–174.

  • Mills, S. K., & Beatty, J. H. (1979). The propensity interpretation of fitness. Philosophy of Science, 46, 263–286.

  • Neander, K. (1991). The teleological notion of ‘function’. Australasian Journal of Philosophy, 69(4), 454–468.

  • O’Sullivan, B. (2012). Absent qualia and categorical properties. Erkenntnis, 76(3), 353–371.

  • Papineau, D. (2006). Naturalist theories of meaning. In E. Lepore & B. Smith (Eds.), The Oxford handbook of philosophy of language (pp. 175–188). New York: Oxford University Press.

  • Prinz, J. (2002). Furnishing the mind. Cambridge, MA: MIT Press.

  • Putnam, H. (1981). Brains in a vat. In Reason, truth, and history (pp. 1–21). New York: Cambridge University Press.

  • Rey, G. (1997). Contemporary philosophy of mind: A contentiously classical approach. Cambridge, MA: Blackwell.

  • Shoemaker, S. (1980). Causality and properties. In P. van Inwagen (Ed.), Time and cause (pp. 109–135). Dordrecht: Reidel.

  • Shoemaker, S. (2012). Physical realization. New York: Oxford University Press.

  • Tye, M. (1995). Ten problems of consciousness. Cambridge, MA: MIT Press.

  • Tye, M. (2000). Consciousness, color, and content. Cambridge, MA: MIT Press.


Acknowledgments

I’d like to thank the audience at the 88th Joint Session of the Aristotelian Society and the Mind Association at the University of Cambridge and the philosophy department at Seattle Pacific University for helpful comments on previous drafts of this paper. I’d also like to thank the anonymous referee for this journal who provided a number of helpful suggestions.

Author information

Correspondence to Matthew Rellihan.

Additional information

Tom Nagel once wrote that consciousness is what makes the philosophy of mind so hard. That’s almost right. In fact, it’s intentionality that makes the philosophy of mind so hard; consciousness is what makes it impossible (Fodor 2008, p. 22).

About this article


Cite this article

Rellihan, M. Content, Consciousness, and Cambridge Change. Acta Anal 30, 325–345 (2015). https://doi.org/10.1007/s12136-015-0256-x
