Representations gone mental

Abstract

Many philosophers and psychologists have attempted to elucidate the nature of mental representation by appealing to notions like isomorphism or abstract structural resemblance. The ‘structural representations’ that these theorists champion are said to count as representations by virtue of functioning as internal models of distal systems. In his 2007 book, Representation Reconsidered, William Ramsey endorses the structural conception of mental representation, but uses it to develop a novel argument against representationalism, the widespread view that cognition essentially involves the manipulation of mental representations. Ramsey argues that although theories within the ‘classical’ tradition of cognitive science once posited structural representations, these theories are being superseded by newer theories, within the tradition of connectionism and cognitive neuroscience, which rarely if ever appeal to structural representations. Instead, these theories seem to be explaining cognition by invoking so-called ‘receptor representations’, which, Ramsey claims, aren’t genuine representations at all—despite being called representations, these mechanisms function more as triggers or causal relays than as genuine stand-ins for distal systems. I argue that when the notions of structural and receptor representation are properly explicated, there turns out to be no distinction between them. There only appears to be a distinction between receptor and structural representations because the latter are tacitly conflated with the ‘mental models’ ostensibly involved in offline cognitive processes such as episodic memory and mental imagery. While structural representations might count as genuine representations, they aren’t distinctively mental representations, for they can be found in all sorts of non-intentional systems such as plants. Thus to explain the kinds of offline cognitive capacities that have motivated talk of mental models, we must develop richer conceptions of mental representation than those provided by the notions of structural and receptor representation.

Fig. 1

Notes

  1.

    For a canonical statement of the first sort of worry, see Clark and Toribio (1994). For a canonical statement of the second sort of worry, see Bechtel (1998).

  2.

    The term ‘connectionism’ is often used narrowly, to refer to a specific research program that emerged in the 1980s, which used highly idealized neural network models—typically, feedforward multilayer perceptrons trained by backpropagation—to simulate various psychological capacities. In this paper I’ll use ‘connectionism’ more broadly, to encompass any psychological theory that appeals to signal processing within networks of nodes whose connections are shaped by experience-dependent plasticity mechanisms. Connectionism in this sense includes the PDP models of the ‘80s, as well as more biologically realistic models of specific neural circuits such as those found in contemporary cognitive and computational neuroscience.

  3.

    See Garzón and Rodriguez (2009), Grush (2008), Shagrir (2012), Sprevak (2011).

  4.

    I should note that Ramsey discusses two other conceptions of cognitive representation that I will not discuss in this paper: ‘input–output’ (I–O) representations and ‘tacit’ representations. As with structural and receptor representations respectively, Ramsey holds that I–O representations are proprietary to classical explanations and play a genuinely representational explanatory role, whereas tacit ‘representations’ are proprietary to connectionist explanations and are not really representations at all. I do not discuss I–O or tacit representations at length in this paper because Ramsey himself places far more emphasis on the contrast between structural and receptor representations, and because the notion of structural representation is arguably the most important and widely discussed conception of representation in cognitive science. Moreover, I think that Ramsey’s arguments about the representational status of I–O and tacit representations are far less convincing than his arguments about structural and receptor representations, though space limitations prevent me from giving anything more than a rough sketch of my reasons. First consider I–O representations. Ramsey holds that explanations in classical cognitive science proceed by first characterizing the cognitive capacity to be explained in terms of a mapping from inputs to outputs, which are characterized in terms of some external problem domain, and then decomposing the cognitive capacity into simpler sub-capacities, which are explained by appealing to computational sub-processes that implement ‘internal’ input–output mappings defined over the same domain as the overall capacity to be explained. Ramsey holds that these sub-processes therefore manipulate representations of entities within that domain. However, Ramsey’s characterization of this explanatory strategy, which he identifies with the homuncular functionalism of Dennett (1981), strikes me as mistaken.
The whole point of homuncular functionalism is that sub-processes do not manipulate representations of entities that are in the domain of the cognitive capacity to be explained—that’s how decomposition is supposed to expunge the homunculus. Now consider tacit representations. Ramsey points out that connectionist explanations often invoke states that are somehow implicitly embodied throughout the functional architecture of a network, and holds that these ‘tacit’ states are characterized as representations merely because they dispose the network to settle into a certain pattern of activity. However, Ramsey argues, this kind of role isn’t distinctively representational, for all sorts of physical states ground dispositions without us having any inclination to think of them as representations. The central reason this argument fails, I think, is that it rests upon a beguiling yet defective conception of explicitness—what Kirsh (1990, p. 350) has called the “bewitching image of a word printed on a page”. Even a symbolic structure within a classical system, a paragon of computational explicitness, might be stored in essentially the same manner as the ‘tacit’ states of a connectionist network, in the sense that it might be arbitrarily distributed throughout memory, and only have a determinate identity by virtue of the way it is read by the processor—i.e. by virtue of the dispositions it grounds within the functional architecture of the system. Much more can and should be said about these issues, but unfortunately that will have to wait for another occasion.

  5.

    The main players in this project include Dretske (1988), Fodor (1990), and Millikan (1984).

  6.

    Others have made essentially the same point about the profligacy of Millikan’s view. For example, Allen and Hauser (1993) complain that Millikan’s view entails that “some interactions between trees can have content attributed to them” (p. 88), and Sterelny (1995) expresses concern that on Millikan’s view, “it will turn out that saliva represents food” (p. 256).

  7.

    Note that I am using ‘isomorphism’ loosely here, to refer to the kind of resemblance relations that structural representations purportedly participate in, since that term is so familiar in this context. However, I will go on to argue that the resemblance relations at issue here are probably best understood as homomorphisms rather than isomorphisms.

  8.

    I’m here using ‘connectionism’ in the broad sense outlined in note 2.

  9.

    This reply, and my response to it, echo many of the points about so-called ‘tacit’ representations that I address in note 4.

  10.

    See Eliasmith (2005) for a review.

  11.

    See Miall and Wolpert (1996) for a review.

  12.

    See also Grush (2008), Sprevak (2011).

  13.

    Note that the boundary between these two kinds of replies to Ramsey is fuzzy. Some of those who have argued that connectionists invoke structural representations can be understood to be arguing, implicitly, that states that intuitively seem to be receptors in fact also count as structural representations. However, to my knowledge nobody has explicitly developed an argument to the effect that all and only structural representations are receptors.

  14.

    See, e.g., Bartels (2006), who develops this idea in the context of debates about scientific representation, a context in which many of the present issues about structural representation are recapitulated.

  15.

    An isomorphism is a bijective (i.e. one-one and onto) function from one set-theoretic structure to another, which preserves the relations defined over the elements of each structure. More precisely, an isomorphism between structures \(\mathcal{A}\) and \(\mathcal{B}\), with object sets \(A = \{a_1, \ldots , a_n\}\) and \(B = \{b_1, \ldots , b_n\}\), is a bijective mapping \(\phi :{A \rightarrow B}\) such that for any relation \(R\) of \(\mathcal{A}\), if \(R\) obtains for a subset of the objects in \(A\), \(A' = \{a_i, \ldots , a_j\}\), there is a corresponding relation \(S\) of \(\mathcal{B}\) that obtains for the subset of the objects in \(B\), \(B' = \{\phi (a_i), \ldots , \phi (a_j)\}\). A homomorphism, like an isomorphism, is a structure-preserving mapping from one set-theoretic structure to another, but unlike isomorphisms, homomorphisms needn’t be bijective. Thus, a homomorphic mapping can be many-one, and needn’t map onto all of the elements in the represented structure.
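
    The contrast can be made concrete with a simple worked example (my own illustration, chosen only to exhibit the definitions above):

```latex
% A homomorphism that is not an isomorphism (illustrative example).
% Represented structure: the integers under addition.
% Representing structure: parities under addition mod 2.
\[
\mathcal{A} = \langle \mathbb{Z}, + \rangle, \qquad
\mathcal{B} = \langle \{0,1\}, \oplus \rangle, \qquad
\phi(n) = n \bmod 2.
\]
% \phi preserves structure, since \phi(m + n) = \phi(m) \oplus \phi(n),
% but it is many-one (e.g. \phi(2) = \phi(4) = 0), hence not bijective.
```

    Here \(\phi\) is structure-preserving but not bijective, so it satisfies the conditions for a homomorphism while failing those for an isomorphism.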

  16.

    Ramsey (2007, pp. 98–99) does respond to a kind of observer-dependency worry in this context, but not the specific worry that I’m raising here. He addresses the concern that the structural representations posited by classical cognitive scientists are merely useful fictions. Like Ramsey, I don’t think we should lose any sleep over that concern. The worry that I’m raising here is different: it’s that Ramsey’s explication of the kind of structural representations posited by classical cognitive scientists entails that the content of a structural representation is radically observer-dependent. One might be troubled by that worry without having any specific views about the scientific realism debate.

  17.

    See, for example, Godfrey-Smith (2006), Grush (2004), and Millikan (1984).

  18.

    The most detailed presentation of his theory of representation appears in his most recent book, Gallistel and King (2010).

  19.

    Note that I don’t take myself to have provided a strong defense of a Gallistelian view of structural representation against the traditional objections to resemblance-based theories. For such a defense, see Isaac (2012). My claim here is simply the conditional one that if any account of structural representation is viable, Gallistel’s is the most plausible candidate.

  20.

    The distinction between an indicator’s being selected over the course of phylogeny, and its being selected over the course of ontogeny, plays an important theoretical role for Dretske. However, the distinction is largely orthogonal to present debates, so, like Ramsey, I’ll pass over it in what follows.

  21.

    Sprevak (2011) makes essentially the same point when he writes that “what satisfies the receptor notion, by itself, may not fulfill the job description of a representation, but the wider explanatory role that it plays in explaining successful behaviour may justify its labelling as a representation.”

  22.

    As I’m using the term here, ‘analog’ is not synonymous with ‘continuous’. Rather, I’m using ‘analog’ in roughly Lewis’s (1971) sense, according to which magnitudes in the representing system are directly proportional to magnitudes in the represented system. However, when I go on to distinguish between ‘analog’ and ‘binary’ receptors, the distinction I wish to mark is not Lewis’s distinction between analog and digital representation. Binary receptors are arguably still cases of analog representation in Lewis’s sense; it’s just that they’re degenerate cases, able to occupy only one of two states.

  23.

    Gallistel (1990) writes that “representation should have the same meaning in psychology as it has in mathematics” (p. 1), and that “those familiar with the theory of measurement\(\ldots \)will recognize the parallel between this use of representation and its use in measurement theory” (p. 2).

  24.

    The idea, of course, is not that a feature detector fires when and only when the feature that it is tuned to is in its receptive field; neurons are noisy critters, and are constantly firing even in the absence of external stimulation. The idea is that although a feature detector might fire across a range of frequencies, it’s only firing above a certain threshold frequency that is functionally relevant to its role as a detector.

  25.

    A seminal application of this idea to ‘edge’ and ‘line’ detectors is Rao and Ballard (1999).

  26.

    This might be a bit quick. Some, such as van Fraassen (2008), argue that there’s simply no sense to be made of the idea that a homomorphism might hold from one concrete, physical system to another, since the technical notion of a homomorphism is only well-defined in the domain of abstract, mathematical systems. This issue deserves further discussion, but it’s orthogonal to my present concerns, since it doesn’t provide any grist for the mill of someone who wants to claim that there’s a substantive theoretical distinction between what I’m calling ‘analog’ and ‘binary’ receptors, or between receptors and structural representations. Insofar as the notion of homomorphism applies to any of these mechanisms, it applies to all of them.

  27.

    The word I replaced here is ‘isomorphism’. In earlier work, Gallistel tended to express his view in terms of isomorphisms, but in more recent work he expresses it in terms of homomorphisms, presumably due to a recognition of the problems with an isomorphism-based view of representation of the kind we discussed earlier.

  28.

    To relate the point here to my earlier characterization of functioning homomorphisms in terms of measurement theory, measurement needn’t involve the assignment of numerals to a given system, which must be interpreted by an intelligent agent; measurement procedures can be automated within a system—think, for example, of how the measurement of temperature is automated within a thermostat. An automated measurement procedure just is a procedure that mediates a functioning homomorphism in Gallistel’s sense.

  29.

    This line of thought seems to underlie many discussions of structural representation, which employ terms like ‘standing-in’ and ‘surrogative reasoning’. However, it’s often unclear whether these terms are being used in Clark’s sense or Ramsey’s. My impression is that there’s a tendency in the literature to assume that for a representation to be used as a surrogate just is for it to be manipulated offline, but as our discussion of Ramsean surrogates has shown, many surrogates are causally coupled to the systems they’re surrogates for. Indeed, this is arguably the normal function of surrogates—think of a map being used in conjunction with landmark recognition to navigate through an environment.

  30.

    To underscore the point made in the previous section, note also that circadian clocks in plants satisfy the conditions for being Dretskean receptors, for essentially the same reasons.

  31.

    Of course, if one could show that a cognitive theory doesn’t posit representations in any sense, one would have shown that it doesn’t posit mental representations. But as I pointed out at the beginning of Sect. 3, Ramsey’s argument doesn’t show that; even granting all the premises, the most the argument could show is that connectionist mechanisms aren’t representations by virtue of functioning as receptors, not that such mechanisms fail to be representations simpliciter. Indeed, the naturalistic methodology that Ramsey adopts precludes him from developing the kind of argument at issue; recall that Ramsey rightly holds that it is forlorn to seek a general analysis of representation that will encompass all and only representations.

  32.

    I should note that I don’t advocate an anthropocentric view of the domain of organisms that possess mental representations. I think that many animals, and probably even insects, have mental representations. For example, Clayton et al. (2001) have provided compelling evidence that scrub jays have episodic memory. But the ingenuity of this experiment, and the controversy surrounding its interpretation, highlight the fact that the capacities that mental representations mediate are highly non-trivial to demonstrate, and are substantially different from the ‘cognitive’ capacities of plants.

  33.

    Incidentally, this strategy for showing that structural representations can function as representations in the absence of a homunculus—what Ramsey calls the ‘mindless strategy’ of showing that structural representations can play a role within non-mentalistic systems, and hence aren’t distinctively mental—seems to conflate two distinct questions. It’s one thing to ask whether a type of representation is distinctively mental, and it’s quite another to ask whether a type of representation can function as a representation within a purely mechanistic system. While the mindless strategy might be sufficient for expunging the homunculus, it’s surely not necessary; to suppose otherwise would seem to assume a kind of dualism according to which minds cannot be explained mechanistically.

  34.

    See Danker and Anderson (2010), Kent and Lamberts (2008), and Schacter et al. (2008) for reviews.

References

  1. Allen, C., & Hauser, M. (1993). Communication and cognition: Is information the connection? Philosophy of Science, 2(8), 81–91.

  2. Bartels, A. (2006). Defending the structural concept of representation. Theoria, 55, 7–19.

  3. Bechtel, W. (1998). Representations and cognitive explanations: Assessing the dynamicist’s challenge in cognitive science. Cognitive Science, 22(3), 295–318.

  4. Cavallari, N., Frigato, E., Vallone, D., Fröhlich, N., Lopez-Olmeda, J., Foà, A., et al. (2011). A blind circadian clock in cavefish reveals that opsins mediate peripheral clock photoreception. PLoS Biology, 9(9), e1001142.

  5. Churchland, P., & Churchland, P. (2002). Neural worlds and real worlds. Nature Reviews Neuroscience, 3, 903–907.

  6. Clark, A. (2001). Reasons, robots and the extended mind. Mind & Language, 16(2), 121–145.

  7. Clark, A., & Toribio, J. (1994). Doing without representing? Synthese, 101(3), 401–431.

  8. Clayton, N., Yu, K., & Dickinson, A. (2001). Scrub jays (Aphelocoma coerulescens) form integrated memories of the multiple features of caching episodes. Journal of Experimental Psychology: Animal Behavior Processes, 27(1), 17–29.

  9. Craik, K. (1943). The nature of explanation. Cambridge: Cambridge University Press.

  10. Cummins, R. (1989). Meaning and mental representation. Cambridge, MA: MIT Press.

  11. Cummins, R. (1994). Interpretational semantics. In S. Stich & T. Warfield (Eds.), Mental representation: A reader (pp. 297–298). Cambridge, MA: Blackwell.

  12. Danker, J., & Anderson, J. (2010). The ghosts of brain states past: Remembering reactivates the brain regions engaged during encoding. Psychological Bulletin, 136(1), 87.

  13. Dennett, D. (1981). Brainstorms: Philosophical essays on mind and psychology. Cambridge, MA: MIT Press.

  14. Desimone, R. (1991). Face-selective cells in the temporal cortex of monkeys. Journal of Cognitive Neuroscience, 3(1), 1–8.

  15. Dretske, F. (1988). Explaining behavior: Reasons in a world of causes. Cambridge, MA: MIT Press.

  16. Eliasmith, C. (2005). A unified approach to building and controlling spiking attractor networks. Neural Computation, 17(6), 1276–1314.

  17. Fodor, J. (1985). Fodor’s guide to mental representation: The intelligent auntie’s vade-mecum. Mind, 94(373), 76–100.

  18. Fodor, J. (1990). A theory of content and other essays. Cambridge, MA: MIT Press.

  19. Gallistel, C. (1990). Representations in animal cognition: An introduction. Cognition, 37(1–2), 1–22.

  20. Gallistel, C. (1998). Symbolic processes in the brain: The case of insect navigation. In D. Osherson, D. Scarborough, L. Gleitman, & D. Sternberg (Eds.), An invitation to cognitive science: Methods, models, and conceptual issues (2nd edn., Vol. 4, pp. 1–52). Cambridge, MA: The MIT Press.

  21. Gallistel, C., & King, A. (2010). Memory and the computational brain. Oxford: Wiley-Blackwell.

  22. Garzón, F., & Keijzer, F. (2011). Plants: Adaptive behavior, root-brains, and minimal cognition. Adaptive Behavior, 19(3), 155–171.

  23. Garzón, F., & Rodriguez, A. (2009). Where is cognitive science heading? Minds and Machines, 19(3), 301–318.

  24. Godfrey-Smith, P. (2006). Mental representation, naturalism, and teleosemantics. In D. Papineau (Ed.), Teleosemantics: New philosophical essays (pp. 42–68). Oxford: Oxford University Press.

  25. Goodman, N. (1968). Languages of art: An approach to a theory of symbols. Indianapolis, IN: Bobbs-Merrill.

  26. Goodspeed, D., Chehab, E., Min-Venditti, A., Braam, J., & Covington, M. (2012). Arabidopsis synchronizes jasmonate-mediated defense with insect circadian behavior. Proceedings of the National Academy of Sciences of the United States of America, 109(12), 4674–4677.

  27. Grush, R. (2004). The emulation theory of representation: Motor control, imagery, and perception. Behavioral and Brain Sciences, 27(3), 377–396.

  28. Grush, R. (2008). Representation reconsidered by William M. Ramsey. Notre Dame Philosophical Reviews. http://ndpr.nd.edu/news/23327-representation-reconsidered/.

  29. Hamada, F., Rosenzweig, M., Kang, K., Pulver, S., Ghezzi, A., Jegla, T., et al. (2008). An internal thermal sensor controlling temperature preference in drosophila. Nature, 454(7201), 217–220.

  30. Hubel, D., & Wiesel, T. (1962). Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. The Journal of Physiology, 160(1), 106–154.

  31. Isaac, A. (2012). Objective similarity and mental representation. Australasian Journal of Philosophy, 1–22.

  32. Johnson-Laird, P. (1983). Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press.

  33. Kay, S. (1997). PAS, present and future: Clues to the origins of circadian clocks. Science, 276(5313), 753–754.

  34. Kent, C., & Lamberts, K. (2008). The encoding–retrieval relationship: Retrieval as mental simulation. Trends in Cognitive Sciences, 12(3), 92–98.

  35. Kirsh, D. (1990). When is information explicitly represented. In P. Hanson (Ed.), Information, language and cognition (pp. 340–365). Vancouver: University of British Columbia Press.

  36. Kosslyn, S., & Thompson, W. (2003). When is early visual cortex activated during visual mental imagery? Psychological Bulletin, 129(5), 723–746.

  37. Krantz, D., Luce, R., Suppes, P., & Tversky, A. (Eds.). (1971). The foundations of measurement. New York: Academic Press.

  38. Lehky, S., & Sejnowski, T. (1988). Network model of shape-from-shading: Neural function arises from both receptive and projective fields. Nature, 333(6172), 452–454.

  39. Lettvin, J., Maturana, H., McCulloch, W., & Pitts, W. (1959). What the frog’s eye tells the frog’s brain. Proceedings of the Institute of Radio Engineers, 47(11), 1940–1951.

  40. Lewis, D. (1971). Analog and digital. Noûs, 5(3), 321–327.

  41. Locke, J. (1689 [1975]). An essay concerning human understanding. Oxford: Oxford University Press.

  42. Miall, C., & Wolpert, D. (1996). Forward models for physiological motor control. Neural Networks, 9(8), 1265–1279.

  43. Millikan, R. (1984). Language, thought, and other biological categories. Cambridge, MA: MIT Press.

  44. O’Brien, G., & Opie, J. (2001). Connectionist vehicles, structural resemblance, and the phenomenal mind. Communication and Cognition, 34, 1–2.

  45. O’Keefe, J., & Dostrovsky, J. (1971). The hippocampus as a spatial map: Preliminary evidence from unit activity in the freely-moving rat. Brain Research, 34(1), 171–175.

  46. Palmer, S. (1978). Fundamental aspects of cognitive representation. In E. Rosch & B. Bloom-Lloyd (Eds.), Cognition and categorization (pp. 259–303). Hillsdale, NJ: Lawrence Erlbaum Associates.

  47. Ramsey, W. (2007). Representation reconsidered. Cambridge: Cambridge University Press.

  48. Rao, R., & Ballard, D. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1), 79–87.

  49. Rumelhart, D., Smolensky, P., McClelland, J., & Hinton, G. (1986). Schemata and sequential thought processes in PDP models. In J. McClelland, D. Rumelhart, & the PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition, Vol. 2: Psychological and biological models, Chap. 14 (pp. 7–57). Cambridge, MA: MIT Press.

  50. Schacter, D., Addis, D., & Buckner, R. (2008). Episodic simulation of future events: Concepts, data, and applications. Annals of the New York Academy of Sciences, 1124, 39–60.

  51. Schwartz, A., & Koller, D. (1986). Diurnal phototropism in solar tracking leaves of Lavatera cretica. Plant Physiology, 80(3), 778–781.

  52. Shagrir, O. (2012). Structural representations and the brain. The British Journal for the Philosophy of Science, 63(3), 519–545.

  53. Shepard, R., & Chipman, S. (1970). Second-order isomorphism of internal representations: Shapes of states. Cognitive Psychology, 1(1), 1–17.

  54. Sprevak, M. (2011). Review of William Ramsey, ‘Representation reconsidered’. The British Journal for the Philosophy of Science, 62(3), 669–675.

  55. Sterelny, K. (1995). Basic minds. Philosophical Perspectives, 9, 251–270.

  56. Swoyer, C. (1991). Structural representation and surrogative reasoning. Synthese, 87(3), 449.

  57. van Fraassen, B. (2008). Scientific representation. Oxford: Oxford University Press.

Acknowledgments

Special thanks to Bill Ramsey for writing such a stimulating book and for generous discussion. Thanks also to Paco Calvo, Frances Egan, Bob Matthews, Lisa Miracchi, Gualtiero Piccinini, Ron Planer, and an anonymous referee for helpful comments.

Author information

Corresponding author

Correspondence to Alex Morgan.

Cite this article

Morgan, A. Representations gone mental. Synthese 191, 213–244 (2014). https://doi.org/10.1007/s11229-013-0328-7

Keywords

  • Representation
  • Isomorphism
  • Psychological explanation
  • Mental models
  • Neural networks
  • Feature detectors
  • Circadian clocks
  • Dretske
  • Eliminativism