Abstract
How can abductive reasoning be physical, feasible, and reliable? This is Fodor’s riddle of abduction, and its apparent intractability is the cause of Fodor’s recent pessimism regarding the prospects for cognitive science. I argue that this riddle can be solved if we augment the computational theory of mind to allow for non-computational mental processes, such as those posited by classical associationists and contemporary connectionists. The resulting hybrid theory appeals to computational mechanisms to explain the semantic coherence of inference and associative mechanisms to explain the efficient retrieval of relevant information from memory. The interaction of these mechanisms explains how abduction can be physical, feasible, and reliable.
Notes
See Fodor (2001, p. 99). See also Fodor (1983, pp. 104–119, 126–129; 1987, p. 140). Fodor’s riddle is not to be confused with what Carruthers (2003b) calls ‘Fodor’s problem’—“the problem of how to build distinctively-human cognition out of modular components” (p. 502). Fodor’s problem is an attempt to solve Fodor’s riddle without jettisoning massive modularity theory. As we’ll see, this is one strategy for solving the riddle, but it is neither Fodor’s strategy nor the strategy that I propose.
Throughout this paper, italicized words in quotations are italicized in the original unless otherwise noted.
I am glossing over the differences between localist and distributed networks. In the latter, representations are identified with patterns of nodal activation rather than with individual nodes. (See Harnish 2002, pp. 318–325). The differences between these two classes of models will be immaterial in what follows.
See Cosmides and Tooby (1992a) for a massive modularity thesis manifesto. They motivate modularity theory by mentioning, inter alia, the problem of feasible inference and suggest that only encapsulated mechanisms can solve this problem. See, for example, pp. 100–107. See also Cosmides and Tooby (1995).
See also Fodor’s treatment of centrality and the way in which it refers to the history of success and failure in previous trials:
[Q]uite a lot of the data that are described … as the effects of a subject’s “cognitive strategies” in categorization tasks might just as well be thought of as illustrating the effects of S’s changing estimates of which facts about the stimuli so far encountered are central; which properties of the new stimulus become “salient” depends, inter alia, on S’s pattern of success and failure in previous trials (2001, p. 110; my italics).
Perhaps a more direct route to the conclusion that abductive reasoning is isotropic comes from causal theories of explanation (see, for example, Lewis 1986; Lipton 1992). If H explains D in virtue of being among D’s causes, and if causal relations are knowable only a posteriori, then it follows that the space of relevant hypotheses cannot be delimited a priori.
For similar arguments for the isotropy of folk-psychological theorizing, see Currie and Sterelny (2000).
I am indebted to an anonymous reviewer for suggesting the objection that heuristic strategies pose to Fodor’s riddle, as well as for pointing out that my solution to Fodor’s riddle can be interpreted as a species of the heuristic approach.
I am following Fodor’s convention of using expressions in small caps to refer to mental representations.
Contrast Churchland (1989, pp. 155–156), who suggests that CTM has difficulty explaining the retrieval of relevant information because “from the point of view of fast and relevant retrieval, a long list of sentences is an appallingly inefficient way to store information” (p. 156). I agree that CTM’s method of retrieving information is ‘appallingly inefficient’, but there is no reason to blame CTM’s commitment to a language of thought. If the right thought is recalled at the right time, it makes little difference whether or not the thought is syntactically structured.
Fodor should have seen this himself. Consider his discussion of experiments showing that response latencies in word/non-word decision tasks are faster in contexts in which the word is contextually appropriate than in contexts in which it isn’t—e.g., recognition of ‘bugs’ or ‘microphones’ will be faster when preceded by ‘the spy searched the room for __’ than when preceded by ‘the doctor searched the room for __’. Fodor argues that this phenomenon is much more plausibly explained in terms of association than in terms of computation and inference. ‘Spy’ primes ‘insect’ almost as strongly as it primes ‘bug’, which is easy to explain in terms of associative relations given that (i) ‘spy’ is associated with ‘bug’, (ii) ‘bug’ is associated with ‘insect’, and (iii) the associative relation is (weakly) transitive. Thus, as Fodor says, contextual facilitation of word recognition “looks a lot less like the intelligent use of contextual/background information” and more like “some sort of associative relation among lexical forms … pitched at a level of representation sufficiently superficial to be insensitive to the semantic content of the items involved” (1983, p. 79). In other words, dumb, associative connections between lexical items established through their frequent co-occurrence in experience can be made to mimic intelligent, semantic connections. The parallels between this case and the case of hypothesis retrieval should be obvious.
See Harnish (2002, Chap. 11) for a review of the limitations of Hebbian algorithms.
See, for example, Churchland’s (1989, pp. 163–171) discussion of networks trained to distinguish the sonar signals of rocks from those of mines despite the fact that these signals differ in no obvious, systematic way.
See Fodor’s discussion of horizontal and vertical faculty psychology in (1983, pp. 10–23).
I am thankful to an anonymous reviewer for suggesting this objection.
Strictly speaking, thinking of e will only cause S to think of the constituents of H. An additional, computational mechanism is needed to explain why S thinks ‘the cat is on the mat’ rather than ‘the mat is on the cat’ upon thinking of cats, mats, and the relation of x being on y. This presents us with no special obstacle, however, for even if S is forced to think all of the possible ways of concatenating activated constituents, the result is nevertheless manageable. Entertaining every possible concatenation of a, b, and c is something that can be done in real time; entertaining every possible hypothesis is not.
Fodor himself is among the many opponents of connectionism who nevertheless find implementationalist connectionism palatable. See Fodor and Pylyshyn (1988).
See, for example, Minsky’s (1990) discussion of the relative strengths and weaknesses of symbolic and connectionist architectures and his proposals for a hybrid architecture. Pinker (2005) appeals to such hybrid architectures in response to Fodor’s challenge, focusing specifically on symbolic constraint satisfaction networks of the sort proposed by Hummel and Holyoak (1997). Pinker, however, shows only that these architectures are able to solve the problem of Quineian holism: “Constraint networks … are designed to do what Fodor … claims cannot be done: maintain a system of beliefs that satisfies some global property (such as consistency or simplicity) through strictly local computations” (p. 13). As we’ve seen, it is isotropy that is the real problem.
See Newborn (1997, p. 12).
If chess ability is partially innately determined, this can be explained by way of the fact that some people are innately better at forming the corresponding associations.
See Holding (1985, Chap. 4).
This article benefited greatly from the thoughtful comments of a number of readers, including Wayne Davis, Nathaniel Goldberg, Chauncey Maher, Iris Oved, Steve Kuhn, and Linda Wetzel.
References
Baron-Cohen, S. (1995). Mindblindness. Cambridge, MA: MIT Press.
Campbell, M., Nowatzyk, A., Hsu, F., & Anantharaman, T. (1990). A grandmaster chess machine. Scientific American, 263(4), 44–50.
Carroll, L. (1895). What the tortoise said to Achilles. Mind, 4(14), 278–280.
Carruthers, P. (2003a). Moderately massive modularity. In A. O’ Hear (Ed.), Mind and persons (pp. 67–91). New York: Cambridge University Press.
Carruthers, P. (2003b). On Fodor’s problem. Mind and Language, 18, 502–523.
Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Chomsky, N. (1986). Knowledge of language. New York: Praeger.
Churchland, P. M. (1989). On the nature of theories. In A neurocomputational perspective (pp. 153–196). Cambridge, MA: MIT Press.
Clark, A. (1994). Associative engines. Cambridge, MA: MIT Press.
Conan Doyle, A. (1930). The adventure of the speckled band. In The complete Sherlock Holmes (Vol. I, pp. 257–273). Garden City, NY: Doubleday.
Cosmides, L., & Tooby, J. (1992a). The psychological foundations of culture. In J. H. Barkow, L. Cosmides, & J. Tooby (Eds.), The adapted mind: Evolutionary psychology and the generation of culture (pp. 19–136). New York: Oxford University Press.
Cosmides, L., & Tooby, J. (1992b). Cognitive adaptations for social exchange. In J. H. Barkow, L. Cosmides, & J. Tooby (Eds.), The adapted mind: Evolutionary psychology and the generation of culture (pp. 163–228). New York: Oxford University Press.
Cosmides, L., & Tooby, J. (1995). Foreword. In S. Baron-Cohen, Mindblindness (pp. xi–xviii). Cambridge, MA: MIT Press.
Currie, G., & Sterelny, K. (2000). How to think about the modularity of mind-reading. The Philosophical Quarterly, 50, 145–160.
Davidson, D. (1973). Radical interpretation. Dialectica, 27, 313–328.
Davis, W. A. (2003). Meaning, expression, and thought. New York: Cambridge University Press.
de Groot, A. D. (1978). Thought and choice in chess. The Hague: Mouton Publishers.
de Groot, A. D., & Gobet, F. (1996). Perception and memory in chess. Assen: Van Gorcum.
Dennett, D. C. (1987). Cognitive wheels: The frame problem in artificial intelligence. In Z. W. Pylyshyn (Ed.), The robot’s dilemma: The frame problem in artificial intelligence (pp. 41–64). Norwood, NJ: Ablex.
Fodor, J. A. (1975). The language of thought. Cambridge, MA: Harvard University Press.
Fodor, J. A. (1983). The modularity of mind. Cambridge, MA: MIT Press.
Fodor, J. A. (1985). Précis of the modularity of mind. Behavioral and Brain Sciences, 8, 1–5.
Fodor, J. A. (1987). Modules, frames, fridgeons, sleeping dogs, and the music of the spheres. In Z. W. Pylyshyn (Ed.), The robot’s dilemma: The frame problem in artificial intelligence (pp. 139–149). Norwood, NJ: Ablex.
Fodor, J. A. (1990). Fodor’s guide to mental representation. In A theory of content and other essays (pp. 3–31). Cambridge, MA: MIT Press.
Fodor, J. A. (1994). Fodor, Jerry A. In S. Guttenplan (Ed.), A companion to the philosophy of mind (pp. 292–300). Oxford: Blackwell.
Fodor, J. A. (2001). The mind doesn’t work that way. Cambridge, MA: MIT Press.
Fodor, J. A. (2003). Hume variations. New York: Oxford University Press.
Fodor, J. A. (unpublished manuscript). How things look from here.
Fodor, J. A., & Lepore, E. (1992). Holism: A shopper’s guide. Cambridge: Blackwell.
Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. In S. Pinker & J. Mehler (Eds.), Connections and symbols (pp. 3–73). Cambridge, MA: MIT Press.
Gobet, F. (1999). Chess, psychology of. In R. Wilson & F. Keil (Eds.), The MIT encyclopedia of the cognitive sciences (pp. 113–115). Cambridge, MA: MIT Press.
Goldstein, D. G., & Gigerenzer, G. (2002). Models of ecological rationality: The recognition heuristic. Psychological Review, 109, 75–90.
Harnish, R. M. (2002). Minds, brains, computers. Malden, MA: Blackwell.
Hebb, D. O. (1949). The organization of behavior. New York: Wiley.
Hobbes, T. (1651/1996). Leviathan. New York: Cambridge University Press.
Holding, D. H. (1985). The psychology of chess skill. Hillsdale, NJ: Lawrence Erlbaum.
Hummel, J., & Holyoak, K. (1997). Distributed representations of structure: A theory of analogical access and mapping. Psychological Review, 104, 427–466.
Lewis, D. (1986). Causal explanation. In Philosophical papers (Vol. 2, pp. 214–240). New York: Oxford University Press.
Lipton, P. (1992). Inference to the best explanation. New York: Routledge.
Minsky, M. (1990). Logical vs. analogical or symbolic vs. connectionist or neat vs. scruffy. In P. Winston (Ed.), Artificial intelligence at MIT, Vol. I: Expanding frontiers (pp. 218–243). Cambridge, MA: MIT Press. (Reprinted from AI Magazine.)
Newborn, M. (1997). Kasparov versus Deep Blue: Computer chess comes of age. New York: Springer-Verlag.
Pinker, S. (2005). So how does the mind work? Mind and Language, 20, 1–24.
Rey, G. (1997). Contemporary philosophy of mind. Cambridge, MA: Blackwell.
Cite this article
Rellihan, M.J. Fodor’s riddle of abduction. Philos Stud 144, 313–338 (2009). https://doi.org/10.1007/s11098-008-9212-6