Similarity-based cognition is commonplace. It occurs whenever an agent or system exploits the similarities that hold between two or more items—e.g., events, processes, objects, and so on—in order to perform some cognitive task. This kind of cognition is of special interest to cognitive neuroscientists. This paper explicates how similarity-based cognition can be understood through the lens of radical enactivism and why doing so has advantages over its representationalist rival, which posits the existence of structural representations or S-representations. Specifically, it is argued that there are problems both with accounting for the content of S-representations and with understanding how neurally-based structural similarities can work as representations (even if contentless) in guiding intelligent behavior. Finally, with these clarifications in place, it is revealed how radical enactivism can commit to an account of similarity-based cognition in its understanding of neurodynamics.
Swoyer (1991) illustrates this familiar phenomenon with the following example: “By examining the behavior of a scale model of an aircraft in a wind tunnel, we can draw conclusions about a newly designed wing’s response to wind shear, rather than trying it out on a Boeing 747 over Denver. By using numbers to represent the lengths of physical objects, we can represent facts about the objects numerically, perform calculations of various sorts, then translate the results back into a conclusion about the original objects. In such cases we use one sort of thing as a surrogate in our thinking about another, and so I shall call this surrogative reasoning” (p. 449, emphasis original).
It is common in the literature to depict the mapping or mirroring relations in terms of isomorphism. However, current examples in neurocomputational theories of cognition appeal to highly abstract structure-preserving mapping relations that are considerably weaker than isomorphism (see Neander 2017, p. 176; see also Gładziejewski 2016; Morgan 2014). For our purposes, we can remain neutral with respect to this discussion. We will speak more generally and inclusively of structural similarities or resemblances (see O’Brien and Opie (2004), Shea (2013, 2018) for a technical and detailed analysis of these notions).
It is not obvious that place cells constitute a Cartesian coordinate system. For example, Bechtel (2016) has argued that “[w]hereas in a cartographic map the spatial locations between representations correspond, albeit only approximately and with distortions, to the spatial relations between the places represented, this is not true of the map realized in place cells” (p. 1297, emphasis added). Shea (2014) raises similar doubts, observing that “[t]he mechanism depends on place cell firing correlating reliably with location, but not on any relation between different place cells, nor on spatial relation between locations” (p. 126, emphasis added).
“Explanations that invoke S-representations should thus be construed as causal explanations that feature facts regarding similarity as an explanans and success or failure as an explanandum. To exploit structural similarity in this sense is to use a strategy whose success is causally dependent on structural similarity between the representational vehicle and what is represented” (Gładziejewski and Miłkowski 2017, p. 340, emphasis added).
The relationship between similarity and success is not a straightforward one. Consider a cartographic map. A cartographic map does not fully replicate the terrain it is meant to represent. On the contrary, it simplifies it—only including elements that are relevant for the function it was designed to achieve. A map that resembles its target too much would become excessively complex and thus useless. The same rationale applies to S-representations. As Gładziejewski and Miłkowski (2017) note, “too much similarity can render the S-representation inefficient at serving its purpose” (p. 344).
In order to recognize the scope of the job description challenge, it is important to note that it does not just trouble S-representational theories in cognitive neuroscience. Serious worries have also been raised in its wake about the tenability of the classical cognitivist conjecture that cognition is rooted in digital computation. For, even if cognition proves to be digitally instantiated, there are deeper unanswered puzzles about how representational contents could be causally efficacious, rather than being systematically screened off from playing any causal explanatory role.
As Thomson and Piccinini (2018) present it, the received view is that “[f]or something to count as a representation, it must have a semantic content (e.g., ‘‘there is yogurt in the fridge’’) and an appropriate functional role (e.g., to guide behavior with respect to the yogurt in the fridge)” (p. 193).
Invoking the much-discussed example of the thermostat, O’Brien seeks to demonstrate “the causal efficacy of content fixed by resemblance” (2015a, p. 9). As he tells us, the thermostat’s functioning is causally driven by the structural similarity holding between the curvature of the bi-metallic strip and the temperature of the room. Thus, if it is assumed that structural similarities are intrinsically contentful, it would follow that representational contents can be causally efficacious in guiding behavior.
Another, related, objection has to do with the fact that structural similarities, unlike representations, are symmetrical. A map structurally mirrors the layout of a city as much as the city structurally mirrors the layout of the map. If that is the case, S-representationalists have to conclude that the city represents the map too. To solve this problem, a number of authors have suggested rethinking the representation relation as a triadic relation, that is, as a relation that involves not only the representational state and its target but also a representational user or consumer (Millikan 1984; O’Brien 2015a). With this condition in hand, we can now say that what makes the map a representation of the city, and not the other way around, is the fact that the map is being used or consumed as such by a cognitive agent or system.
O’Brien (2015a, b; see also O’Brien and Opie 2015) proposes a similar solution to the content-specificity problem, putting emphasis on the interpretive activity of users. According to this idea, an S-representational state R of a system S is a representation of T if S’s responses to T are causally mediated by R. As he writes, “the behavioural dispositions of the system restrict the represented domain to [T], and the second-order resemblance relations determine what [features of T] each vehicle represents” (O’Brien 2015a, p. 11).
Ramsey (2016) holds that a neural state R is a representation of T if T caused R to come about and acquire the structure it has. Thus, if a particular S-representation “was developed in an effort to learn how to navigate a specific maze, then it is that particular maze that is the target [of this S-representation]” (p. 7). Accordingly, in such cases, S-representational content is not fixed solely by structural similarity relations; it is also fixed by the relevant causal relations that brought the S-representational vehicle into existence.
As Lee (2018) explains, “x bears natural information about y, iff x reliably covaries with y. In this case, x’s bearing information about y is dependent on a direct physical relationship. By contrast, x bears non-natural information about y iff x stands-in for y, where x’s tokening does not entail the truth of y. In this instance, x’s bearing information about y is not dependent on any direct physical relationship” (p. 8).
There is a tendency in the current literature to attempt to deflate the mainstream notion of mental representation. Egan (forthcoming) has suggested that we can treat representational content as an explanatory gloss. She proposes this maneuver as a way of retaining the notion of mental representation in the cognitive sciences while avoiding the seemingly intractable problem of providing a naturalistic explanation for the origin of representational contents. For detailed discussions of this kind of deflationary move see Ramsey (forthcoming) and Hutto and Myin (2018).
Interestingly, Jacobson justifies this idea by directly appealing to the explanatory role of similarity in cognitive neuroscience. As she writes: “With the rise of representational similarity and their elaboration of what representation in neuroscience amounts to, there seems no doubt now that cognitive neuroscientists have in mind a very different notion of representation … cognitive neuroscience is not employing contentful representations” (2015, p. 3).
As Shea (2018) explains, “the remarkable discovery of the location-specific sensitivity of place cells does not, by itself, show that rats have a cognitive map” (p. 115).
For their experiment, Pfeiffer and Foster (2013) recorded the activity of 250 place cells at short time scales (circa 20 ms). The sequences or sweeps measured by Pfeiffer and Foster occur during sharp-wave-ripple (SWR) events, that is, irregular bursts of brief (100–200 ms) high-frequency (140–200 Hz) neuronal activity. Place cell sweeps during SWR events are traditionally associated with processes of memory consolidation during sleep.
There is a growing literature in cognitive neuroscience that holds that a non-representational reading of forward-oriented neural activity is feasible. According to these views, it is possible to understand the contribution of the future-oriented neural activity to the system’s behaviour without assuming that this neural activity represents future events (see, e.g., Kirchhoff and Robertson 2018; Gallagher 2017; Stepp et al. 2011).
Anderson, M. L. (2014). After phrenology: Neural reuse and the interactive brain. Cambridge, MA: MIT Press.
Bechtel, W. (2016). Investigating neural representations: The tale of place cells. Synthese, 193(5), 1287–1321.
Burge, T. (2010). The origins of objectivity. Oxford: Oxford University Press.
Clark, A. (2016). Surfing uncertainty: Prediction, action, and the embodied mind. Oxford: Oxford University Press.
Cummins, R. (1994). Interpretational semantics. In S. P. Stich & T. A. Warfield (Eds.), Mental representation: A reader (pp. 297–298). Cambridge, MA: Blackwell.
Egan, F. (forthcoming). A deflationary account of mental representation. In J. Smortchkova, K. Dolega, & T. Schlicht (Eds.), What are mental representations? Oxford: Oxford University Press.
Engel, A. K., Maye, A., Kurthen, M., & König, P. (2013). Where’s the action? The pragmatic turn in cognitive science. Trends in Cognitive Sciences, 17(5), 202–209. https://doi.org/10.1016/j.tics.2013.03.006.
Fodor, J. A. (1987). Psychosemantics. The problem of meaning in the philosophy of mind. Cambridge, MA: MIT Press.
Fodor, J. A. (1990). A theory of content and other essays. Cambridge, Mass.: MIT Press.
Gallagher, S. (2017). Enactivist interventions. Rethinking the mind. Oxford: Oxford University Press.
Gallistel, C. R. (1990). The organization of learning. Cambridge: MIT Press.
Gallistel, C. R., & King, A. P. (2009). Memory and the computational brain: Why cognitive science will transform neuroscience. Malden, MA: Wiley-Blackwell.
Gładziejewski, P. (2016). Predictive coding and representationalism. Synthese, 193(2), 559–582.
Gładziejewski, P., & Miłkowski, M. (2017). Structural representations: causally relevant and different from detectors. Biology and Philosophy, 32(3), 337–355.
Godfrey-Smith, P. (2006). Mental representation, naturalism, and teleosemantics. In D. Papineau & G. MacDonald (Eds.), Teleosemantics: New philosophical essays (pp. 42–68). Oxford: Oxford University Press.
Godfrey-Smith, P. (2009). Representationalism reconsidered. In D. Murphy & M. A. Bishop (Eds.), Stich and his critics (pp. 30–46). Malden, MA: Wiley-Blackwell.
Goodman, N. (1968). Languages of art. London, UK: Oxford University Press.
Hutto, D. D. (2008). Folk psychological narratives. The sociocultural basis of understanding reasons. Cambridge, MA: MIT Press.
Hutto, D. D., & Myin, E. (2013). Radicalizing enactivism: Basic minds without content. Cambridge, MA: MIT Press.
Hutto, D. D., & Myin, E. (2017). Evolving enactivism: Basic minds meet content. Cambridge, MA: MIT Press.
Hutto, D. D., & Myin, E. (2018). Much ado about nothing? Why going non-semantic is not merely semantics. Philosophical Explorations, 21(2), 187–203.
Hutto, D. D., & Myin, E. (forthcoming). Deflating deflationism about mental representation. In J. Smortchkova, K. Dolega, & T. Schlicht (Eds.), What are mental representations? Oxford: Oxford University Press.
Hutto, D. D., Peeters, A., & Segundo-Ortin, M. (2017). Cognitive ontology in flux: the possibility of protean brains. Philosophical Explorations, 20(2), 209–223. https://doi.org/10.1080/13869795.2017.1312502.
Hutto, D. D., & Satne, G. (2015). The natural origins of content. Philosophia, 43(3), 521–536.
Jacobson, A. J. (2003). Mental representations: What philosophy leaves out and neuroscience puts in. Philosophical Psychology, 16(2), 189–203.
Jacobson, A. J. (2015). Three concerns about the origins of content. Philosophia, 43(4), 625–638.
Johnson, A., & Redish, A. D. (2007). Neural ensembles in CA3 transiently encode paths forward of the animal at a decision point. The Journal of Neuroscience, 27(45), 12176–12189.
Kiefer, A., & Hohwy, J. (2018). Content and misrepresentation in hierarchical generative models. Synthese, 195(6), 2387–2415. https://doi.org/10.1007/s11229-017-1435-7.
Kirchhoff, M. D., & Robertson, I. (2018). Enactivism and predictive processing: a non-representational view. Philosophical Explorations, 21(2), 264–281.
Knierim, J. J. (2015). From the GPS to HM: Place cells, grid cells, and memory. Hippocampus, 25, 719–725.
Kriegeskorte, N., & Kievit, R. A. (2013). Representational geometry: Integrating cognition, computation, and the brain. Trends in Cognitive Sciences, 17(8), 401–412.
Lee, J. (2018). Structural representations and the two problems of content. Mind and Language, 8, 1–21.
Miłkowski, M. (2015). Satisfaction conditions in anticipatory mechanisms. Biology and Philosophy, 30, 709–728.
Millikan, R. G. (1984). Language, thought and other biological categories. Cambridge, MA: MIT Press.
Morgan, A. (2014). Representations gone mental. Synthese, 191(2), 213–244.
Neander, K. (2017). A mark of the mental: In defense of informational teleosemantics. Cambridge, MA: MIT Press.
O’Brien, G. (2015a). How does mind matter? Solving the content causation problem. In T. K. Metzinger & J. M. Windt (Eds.), Open MIND. https://doi.org/10.15502/9783958570146.
O’Brien, G. (2015b). Rehabilitating resemblance redux. In T. K. Metzinger & J. M. Windt (Eds.), Open MIND. https://doi.org/10.15502/9783958571136.
O’Brien, G., & Opie, J. (2004). Notes toward a structuralist theory of mental representation. In H. Clapin, P. Staines, & P. Slezak (Eds.), Representation in mind (pp. 1–20). Oxford: Elsevier.
O’Brien, G., & Opie, J. (2006). How do connectionist networks compute? Cognitive Processing, 7(1), 30–41.
O’Brien, G., & Opie, J. (2009). The role of representation in computation. Cognitive Processing, 10(1), 53–62.
O’Brien, G., & Opie, J. (2015). Intentionality lite or analog content? Philosophia, 43(3), 723–729.
O’Brien, G., & Opie, J. (2010). Representation in analog computation. In A. Newen, A. Bartels, & E. M. Jung (Eds.), Knowledge and representation (pp. 109–129). Stanford: CSLI Publications.
O’Keefe, J. (1976). Place units in the hippocampus of the freely moving rat. Experimental Neurology, 51(1), 78–109.
O’Keefe, J., & Dostrovsky, J. (1971). The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Research, 34(1), 171–175.
O’Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. Oxford: Oxford University Press.
Pfeiffer, B. E., & Foster, D. J. (2013). Hippocampal place-cell sequences depict future paths to remembered goals. Nature, 497(7447), 74–79.
Piccinini, G. (2018). Computation and representation in cognitive neuroscience. Minds and Machines, 28(1), 1–6.
Ramsey, W. M. (2007). Representation reconsidered. Cambridge: Cambridge University Press.
Ramsey, W. M. (2016). Untangling two questions about mental representation. New Ideas in Psychology, 40, 3–12.
Ramsey, W. M. (2018). Maps, models and computational simulations in the mind. In M. Sprevak & M. Colombo (Eds.), Handbook of the computational mind (pp. 259–271). Abingdon: Routledge.
Ramsey, W. M. (forthcoming). Defending representational realism. In J. Smortchkova, K. Dolega, & T. Schlicht (Eds.), What are mental representations? Oxford: Oxford University Press.
Rescorla, M. (2016). Bayesian sensorimotor psychology. Mind and Language, 31(1), 3–36.
Roelofs, L. (2018). Why imagining requires content: A reply to a reply to an objection to radical enactive cognition. Thought: A Journal of Philosophy, 7(4), 246–254.
Rosenberg, A. (2015). The genealogy of content or the future of an illusion. Philosophia, 43(3), 537–547.
Rosenberg, A. (2018). How history gets things wrong: The neuroscience of our addiction to stories. Cambridge, Mass.: MIT Press.
Sachs, C. B. (2019). In defense of picturing. Sellars’s philosophy of mind and cognitive neuroscience. Phenomenology and the Cognitive Sciences, 18(4), 669–689.
Schmidt, B., & Redish, A. D. (2013). Navigation with a cognitive map. Nature, 497(7447), 42–43.
Shagrir, O. (2012). Structural representations and the brain. The British Journal for the Philosophy of Science, 63(3), 519–545.
Shea, N. (2013). Millikan’s Isomorphism Requirement. In D. Ryder, J. Kingsbury, & K. Williford (Eds.), Millikan and her critics. Malden, MA: Wiley-Blackwell.
Shea, N. (2014). Exploitable isomorphism and structural representation. Proceedings of the Aristotelian Society, 114, 123–144.
Shea, N. (2018). Representation in cognitive science. Oxford: Oxford University Press.
Sprevak, M. (2011). Review of William M. Ramsey. Representation reconsidered. The British Journal for the Philosophy of Science, 62(3), 669–675.
Stepp, N., Chemero, A., & Turvey, M. T. (2011). Philosophy for the rest of cognitive science. Topics in Cognitive Science, 3, 425–437.
Stich, S. P. (1983). From folk psychology to cognitive science: The case against belief. Cambridge: MIT Press.
Swoyer, C. (1991). Structural representation and surrogative reasoning. Synthese, 87, 449–508.
Thomson, E., & Piccinini, G. (2018). Neural representations observed. Minds and Machines, 28(1), 191–235.
Tolman, E. C. (1948). Cognitive maps in rats and men. Psychological Review, 55(4), 189–208.
Von Eckardt, B. (1993). What is cognitive science?. Cambridge, MA: MIT Press.
Williams, D. (2017). Predictive processing and the representation wars. Minds and Machines. https://doi.org/10.1007/s11023-017-9441-6.
Williams, D., & Colling, L. (2017). From symbols to icons: The return of resemblance in the cognitive neuroscience revolution. Synthese. https://doi.org/10.1007/s11229-017-1578-.
We would like to express our gratitude to the participants at the Neural Mechanisms Online Webinar Series (April 5th, 2019) for their useful comments and suggestions on an earlier version of this paper. Research for this article was supported by the Australian Research Council Discovery Project “Mind in Skilled Performance” (DP170102987).
Segundo-Ortin, M., Hutto, D.D. Similarity-based cognition: radical enactivism meets cognitive neuroscience. Synthese 198, 5–23 (2021). https://doi.org/10.1007/s11229-019-02505-1
- Similarity-based cognition
- Cognitive neuroscience
- Radical enactivism
- Job description challenge
- Hard problem of content