Abstract
I have argued elsewhere that non-sentential representations that are the close kin of scale models can be, and often are, realized by computational processes. I will attempt here to weaken any resistance to this claim that happens to issue from those who favor an across-the-board computational theory of cognitive activity. I will argue that embracing the idea that certain computers harbor nonsentential models gives proponents of the computational theory of cognition the means to resolve the conspicuous disconnect between the sentential character of the data structures they posit and the nonsentential qualitative character of our perceptual experiences of corporeal (i.e., spatial, kinematic, and dynamic) properties. Along the way, I will question the viability of some externalist remedies for this disconnect, and I will explain why the computational theory put forward here falls quite clearly beyond the useful bounds of the Chinese-Room argument.
Notes
Because of the need for infinite memory, biologically questionable components or principles (e.g., an infinite tape, a capacity to add units ad infinitum, or infinitesimally variable activation values) are required in order to achieve full universality, but this has more to do with the ideal nature of the enterprise than any inherent limitations of neural networks (Franklin and Garzon 1996). Also, please bear in mind that the question of neural realizability has to do with what neurons can compute (computability theory), not what they can learn to compute (learning theory).
One might also opt for disjunctivism with regard to content externalism (see Sect. 3.2), but it fares no better in this regard.
Horgan and Tienson (2002, p. 520) claim that, in and of itself, phenomenal character always conspires to assert, or specify, that the world is thus and such, which (if we grant that this character is narrow) suggests that it constitutively determines a kind of narrow content. What I ultimately hope to establish here is that, in the corporeal case at least, the structure of our internal representational vehicles itself constitutively determines phenomenal character. Thus, though my own focus is on the structure of representations rather than their content, the two positions might be considered complementary. To the extent that I do consider the relationship between phenomenal character and representational content here, it is only to deny that wide contents are capable of explaining phenomenal character.
One might claim on this basis that veridical experiences are constituted by tangible happenings in an organism’s immediate environment, for the possible world in question is the actual world. However, Lycan seems to be denying that perceptual content is exhausted by just one possible world. Thus, even if happenings in the actual environment constitute part of a veridical content, they are only the tip of the semantic iceberg, the rest of which lies in the deep dark depths of the merely possible.
To be fair, Lycan’s commitment to CE has been more exploratory than steadfast. Perhaps concerns of the sort voiced here are part of the reason why.
Tye might also fall back upon his evolutionary naturalization of content, claiming that this is where materialists will find the reference to external happenings that they seek. On this view, happenings in an organism’s evolutionary past determine what its perceptual states ‘track’ in the here and now. But while this is all well and good if what we are searching for is a diachronic account of qualia, it does us no good in our current quest for a synchronic one (see Tye 2000, p. 32).
It seems plausible to me that the entire reason why contents tend to co-vary with experiences is that it has proven useful that we classify the latter in terms of the former—rather like it has proven quite useful that we classify bumps on the skin in terms of their etiology (see Davidson 2001)—not because the latter are identical to the former. On this view, it is just fine to claim that we classify qualitative characteristics in terms of their relations to the world, but it is a mistake to be an externalist about what is being classified (see Dretske 1996, p. 116; Waskan 2006, pp. 82–83).
While similar to the idea of ‘mental paint’ (Block 1996), the view that follows is less metaphorical and attributes corporeal qualia to the representations themselves rather than to the medium through which they are constructed.
Figure 1a has been reprinted, with permission, from http://avl.ncsa.uiuc.edu/AtmosphericSciences.html (last accessed 9/20/08). Figure 1b has been reprinted, with permission, from Veysey and Goldenfeld (2008).
Philosophers also talk of representations being intrinsic in a different sense. This one has to do with their possessing representational content or intentionality on their own, so to speak, as opposed to having it only because of the way humans view or use them. Thus, a scale model of an SUV may well be intrinsic in the sense that concerns me, but it is surely not intrinsic in this other sense, for it is only a model of anything because of how we view or use it.
There are, of course, also important differences between InCoMs and scale models (Waskan 2005).
http://globetrotter.berkeley.edu/people/Searle/searle-con4.html (last accessed 9/25/08).
Searle bristles at this terminology, but I feel that it is perfectly apt in light of the foregoing exposition.
To overcome the limitations imposed by human memory and, worse still, by the fairly rigid constraints governing perception and thought, this project will surely require the construction of vast computational models of neural systems (see Waskan 2006, Chap. 9). Admittedly, to the extent that these models do explain qualia, it will sometimes (viz., as concerns creatures with radically different perceptual organs) be akin to the way in which computational models of black holes or the big bang explain these occurrences. Apart from inspiring a wealth of useful metaphors, in the end they may not provide us with the exact kinds of insight and understanding that Mary and Nagel seek. Still, unlike their topic-neutral predecessors, these models and metaphors would, in their own way, take us up to the level of, and thence inside the belly of the beast, as it were. This should be cause enough for celebration among us cosmic sea slugs.
References
Aristotle. (4th-century B.C./1987). On the soul. In J. L. Ackrill (Ed.), A new Aristotle reader (pp. 161–205). Princeton, NJ: Princeton University Press.
Block, N. (1981). Introduction: What is the issue? In N. Block (Ed.), Imagery (pp. 1–18). Cambridge, MA: The MIT Press.
Block, N. (1990). Mental pictures and cognitive science. In W. G. Lycan (Ed.), Mind and cognition (pp. 577–606). Cambridge, MA: Blackwell.
Block, N. (1996). Mental paint and mental latex. Philosophical Issues, 7, 19–49.
Block, N. (2002). Searle’s arguments and cognitive science. In J. Preston & M. Bishop (Eds.), Views into the Chinese room: New essays on Searle and artificial intelligence (pp. 70–79). New York: Oxford University Press.
Boden, M. (2006). Mind as machine. New York: Oxford University Press.
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58, 10–23.
Cole, D. (1991). Artificial intelligence and personal identity. Synthese, 88, 399–417.
Copeland, B. (2002). The Chinese room from a logical point of view. In J. Preston & M. Bishop (Eds.), Views into the Chinese room: New essays on Searle and artificial intelligence (pp. 109–122). Oxford: Oxford University Press.
Davidson, D. (2001). Subjective, intersubjective, objective. Oxford: Clarendon Press.
Dretske, F. (1996). Phenomenal externalism or if meanings ain’t in the head, where are qualia? Philosophical Issues, 7, 143–158.
Flanagan, O. (1991). The science of the mind. Cambridge, MA: The MIT Press.
Franklin, S., & Garzon, M. (1996). Computation by discrete neural nets. In P. Smolensky, M. Mozer, & D. Rumelhart (Eds.), Mathematical perspectives on neural networks (pp. 41–84). Mahwah, NJ: Lawrence Erlbaum.
Haugeland, J. (1987). An overview of the frame problem. In Z. W. Pylyshyn (Ed.), Robot’s dilemma (pp. 77–93). Norwood, NJ: Ablex Publishing Corp.
Haugeland, J. (2002). Syntax, semantics, and physics. In J. Preston & M. Bishop (Eds.), Views into the Chinese room: New essays on Searle and artificial intelligence (pp. 379–392). New York: Oxford University Press.
Hauser, L. (2002). Nixin’ goes to China. In J. Preston & M. Bishop (Eds.), Views into the Chinese room: New essays on Searle and artificial intelligence (pp. 123–143). New York: Oxford University Press.
Horgan, T., & Tienson, J. (2002). The intentionality of phenomenology and the phenomenology of intentionality. In D. Chalmers (Ed.), Philosophy of mind: Classical and contemporary problems (pp. 520–533). New York: Oxford University Press.
Kosslyn, S. M. (1994). Image and brain: The resolution of the imagery debate. Cambridge, MA: The MIT Press.
Leibniz, G. W. (1714/1968). The monadology (trans: Latta, R.). Oxford: Clarendon Press.
Lycan, W. (2001). The case for phenomenal externalism. Philosophical Perspectives, 15, 17–35.
Lycan, W. (2006). Enactive intentionality. Psyche, 12(3), 1–12.
Marr, D. (1982). Vision. New York: Henry Holt and Co.
McCarthy, J., & Hayes, P. J. (1969). Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer & D. Michie (Eds.), Machine intelligence (pp. 463–502). Edinburgh, UK: Edinburgh University Press.
Nickel, B. (2007). Against intentionalism. Philosophical Studies, 136, 279–304.
Noë, A. (2006). Experience without the head. In T. Gendler & J. Hawthorne (Eds.), Perceptual experience (pp. 411–433). Oxford: Oxford University Press.
O’Brien, G., & Opie, J. (2001). Sins of omission and commission. Behavioral and Brain Sciences, 24(5), 997–998.
O’Regan, J. K., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(5), 939–1031.
Palmer, S. (1978). Fundamental aspects of cognitive representation. In E. Rosch & B. Lloyd (Eds.), Cognition and categorization (pp. 259–303). Hillsdale, NJ: Lawrence Erlbaum Associates.
Penfield, W. (1958). Some mechanisms of consciousness discovered during electrical stimulation of the brain. Proceedings of the National Academy of Sciences, 44(2), 51–66.
Preston, J., & Bishop, M. (Eds.). (2002). Views into the Chinese room: New essays on Searle and artificial intelligence. Oxford: Oxford University Press.
Pylyshyn, Z. (1981). The imagery debate: Analog media versus tacit knowledge. In N. Block (Ed.), Imagery (pp. 151–206). Cambridge, MA: The MIT Press.
Pylyshyn, Z. W. (1984). Computation and cognition: Toward a foundation for cognitive science. Cambridge, MA: The MIT Press.
Pylyshyn, Z. W. (2001). Seeing, acting, and knowing. Behavioral and Brain Sciences, 24(5), 999.
Pylyshyn, Z. W. (2002). Mental imagery: In search of a theory. Behavioral and Brain Sciences, 25(2), 157–182.
Pylyshyn, Z. W. (2003). Return of the mental image: Are there really pictures in the brain? Trends in Cognitive Sciences, 7(3), 113–118.
Ramachandran, V. S., & Hirstein, W. (1998). The perception of phantom limbs: The D.O. Hebb lecture. Brain, 121(9), 1603–1630.
Revonsuo, A. (2001). Dreaming and the place of consciousness in nature. Behavioral and Brain Sciences, 24(5), 939–1031.
Rey, G. (2002). Searle’s misunderstandings of functionalism and strong AI. In J. Preston & M. Bishop (Eds.), Views into the Chinese room: New essays on Searle and artificial intelligence (pp. 201–225). Oxford: Oxford University Press.
Rosenthal, D. (1986). Two concepts of consciousness. Philosophical Studies, 49, 329–359.
Schank, R., & Abelson, R. (1977). Scripts, plans, goals, and understanding. Hillsdale, NJ: Lawrence Erlbaum Associates.
Schlottman, A. (1999). Seeing it happen and knowing how it works: How children understand the relation between perceptual causality and underlying mechanism. Developmental Psychology, 35(5), 303–317.
Scholl, B., & Tremoulet, P. (2000). Perceptual causality and animacy. Trends in Cognitive Sciences, 4(8), 299–309.
Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–457.
Searle, J. (1990). Presidential address. Proceedings and Addresses of the American Philosophical Association, 64, 21–37.
Searle, J. (1992). The rediscovery of the mind. Cambridge, MA: The MIT Press.
Searle, J. (2002). Twenty-one years in the Chinese room. In J. Preston & M. Bishop (Eds.), Views into the Chinese room: New essays on Searle and artificial intelligence (pp. 51–69). New York: Oxford University Press.
Searle, J. (2004). Mind. New York: Oxford University Press.
Simons, D., & Levin, D. (1997). Change blindness. Trends in Cognitive Sciences, 1(7), 261–267.
Smart, J. J. C. (1962). Sensations and brain processes. Reprinted in D. Rosenthal (Ed.), The nature of mind (pp. 169–176). New York: Oxford University Press, 1991.
Sterelny, K. (1990). The imagery debate. In W. G. Lycan (Ed.), Mind and cognition (pp. 607–626). Cambridge, MA: Blackwell.
Tye, M. (1999). Phenomenal consciousness: The explanatory gap as a cognitive illusion. Mind, 108(432), 705–725.
Tye, M. (2000). Consciousness, color, and content. Cambridge, MA: The MIT Press.
Tye, M., & Byrne, A. (2006). Qualia ain’t in the head. Nous, 40(2), 241–255.
Veysey, J., & Goldenfeld, N. (2008). Watching rocks grow. Nature Physics, 4(4), 310–313.
Waskan, J. (2003). Intrinsic cognitive models. Cognitive Science, 27, 259–283.
Waskan, J. (2005). Review of J. Preston and M. Bishop (Eds.), Views into the Chinese room: New essays on Searle and artificial intelligence. Philosophical Review, 114, 277–282.
Waskan, J. (2006). Models and cognition. Cambridge, MA: The MIT Press.
Waskan, J. (2008). Knowledge of counterfactual interventions through cognitive models of mechanisms. International Studies in Philosophy of Science, 22(3), 259–275.
Wheeler, M. (2002). Change in the rules: Computers, dynamical systems, and Searle. In J. Preston & M. Bishop (Eds.), Views into the Chinese room: New essays on Searle and artificial intelligence (pp. 338–359). New York: Oxford University Press.
Wolff, P. (2007). Representing causation. Journal of Experimental Psychology: General, 136(1), 82–111.
Acknowledgments
For their helpful comments and guidance regarding the ideas presented in this essay, thanks go to participants of the following conferences: 2006 British Society for the Philosophy of Science, 2008 North American Computing and Philosophy, Illinois Philosophical Association 2006 Meeting. Thanks also go to Daniel Korman, John Searle, and especially to an anonymous reviewer at Philosophical Studies.
Cite this article
Waskan, J. A vehicular theory of corporeal qualia (a gift to computationalists). Philos Stud 152, 103–125 (2011). https://doi.org/10.1007/s11098-009-9463-x