
Neural representations unobserved—or: a dilemma for the cognitive neuroscience revolution

  • Original Research
  • Published in Synthese

Abstract

Neural structural representations are cerebral map- or model-like structures that structurally resemble what they represent. These representations are absolutely central to the “cognitive neuroscience revolution”, as they are the only type of representation compatible with the revolutionaries’ mechanistic commitments. Crucially, however, these very same commitments entail that structural representations can be observed in the swirl of neuronal activity. Here, I argue that no structural representations have been observed in our neuronal activity, no matter the spatiotemporal scale of observation. My argument begins by introducing the “cognitive neuroscience revolution” (Sect. 1) and sketching a prominent, widely adopted account of structural representations (Sect. 2). Then, I will examine reports describing our neuronal activity at various spatiotemporal scales, arguing that none of them reports the presence of structural representations (Sect. 3). After having deflected certain intuitive objections to my analysis (Sect. 4), I will conclude that, in the absence of neural structural representations, representationalism and mechanism can’t go together, and so the “cognitive neuroscience revolution” is forced to abandon one of its commitments (Sect. 5).


Fig. 1

Source: Figs. 1 and 2b in (Tootell et al. 1998). Reproduced with permission. Copyright (1998) Society for Neuroscience

Fig. 2
Fig. 3

Data availability

Not applicable.

Code availability

Not applicable.

Notes

  1. Here, “cognitive neuroscience” and “cognitive science” will refer only to mainstream approaches—that is, representational and computational—in the respective disciplines. For non-mainstream alternatives, see (Anderson 2014; Bruineberg & Rietveld 2019; Chemero 2009; Kelso 1995; Van der Weel et al., 2022).

  2. Predictive Processing also admits non-representational interpretations, which (sadly) have remained quite marginal (see Downey 2018; Facchin 2021a).

  3. But see (Silberstein & Chemero 2013; Silberstein 2021) for a diverging opinion.

  4. This caveat is actually important: NSRs proper are relations between neural vehicles and their targets, so they can’t be observed just by observing neural goings-on. At best, then, observing neural goings-on lets us see one relatum, namely the relevant representational vehicles (the NSRVs).

  5. Though, as a reviewer noticed, this is not the only possible understanding of structural representations. See the Appendix at the end of the paper. Still, Gładziejewski’s account remains the one most typically referred to in the cognitive neuroscience revolution.

  6. As Kohar (2023) has persuasively argued, this is also the only relevant unpacking of the structural similarity.

  7. As an additional point, notice that (b) allows for  . So, the two relations can be identical. And that is exactly what happens with regular cartographic maps, in which spatial relations are involved on both sides of the mapping.

  8. At least partially. Other factors may be relevant in determining the content of V. For example, Shea (2018) calls upon teleological factors, whereas Piccinini (2020a; 2022) calls upon teleo-informational factors and factors concerning the embodiment and embeddedness of cognitive systems.

  9. This ceteris paribus clause is meant to exclude cases in which excessive degrees of similarity stand in the way of representational usage, as in the case of a hypothetical map at 1:1 scale.

  10. At least, in sufficiently complex systems: we surely could design a robot whose central control system allows the tokening of states satisfying (1)-(3) but not (4). However, since the paper focuses on brains (and brains are arguably sufficiently complex) I will take (4) to be entailed by (2).

  11. Notice that the point here is exclusively methodological. It should not be confused with an endorsement of an “indicator” view of representation, according to which neural activity represents what it is causally sensitive to or correlates with. On the relationship between structural representations and indicators, see the references given in (§3.1).

  12. Notice that the claims that neuronal maps and activation spaces are vehicles of NSRs are not similarly ambiguous: both claims express a form of population coding, which is a special case of rate coding. No interpretation of these claims in terms of single spike trains (or single spikes) is possible.

  13. One could still argue that individual neuronal responses represent what they represent because they are part of a larger structural representation. Notice, however, that, in such a case, individual neuronal responses would not be NSRVs, but only vehicle constituents of a larger NSRV. At any rate, §§ 3.2-3.4 will consider putatively larger vehicles, concluding that they don’t qualify as NSRVs either.

  14. Piccinini (2020a) might, under a certain reading, be an exception—but he really seems more concerned with populations of neurons rather than individual neurons. I will thus deal with his view in (§3.2).

  15. See also (Gładziejewski & Miłkowski 2017; Lee & Calder 2023) for other attempts to resist this view.

  16. Though see the post scriptum for one reason underpinning such a negative answer.

  17. See (Bechtel 2008; 2014; Thomson and Piccinini 2018) for a non-NSR-centric representational account of these neural structures.

  18. Though it should be noted that the experimental interventions in (Hartmann et al., 2016) are not interventions only on somatotopicity, as they always also change the artificial sensors from which neurons receive inputs. Here, I will ignore this complication for the sake of simplicity.

  19. More on this point below.

  20. One could object that the motor homunculus is not a good example, because it is not at all clear how the primary motor cortex represents our body and its movements (cf. Piccinini 2020a; Thomson and Piccinini 2018). This, however, is more a problem for the defender of NSRs than for me: how can they claim that the motor homunculus is an NSRV if they do not know what it is structurally similar to?

  21. Though others suggest that wiring length minimization does not strongly correlate with topographic organization (cf. Yarrow et al., 2014).

  22. Penfield was explicit on this point. He considered his homunculus as “a cartoon of representation in which scientific accuracy is impossible” intended to be used as an “aid to memory” (both quotes from Penfield and Rasmussen 1950, p.56).

  23. As an aside, notice that the same state of affairs prevents us from considering these neurons and neuronal regions indicators in any straightforward and intuitive way.

  24. For an exception to this general rule, see (Isaac 2013).

  25. Notice that an objector cannot deny this latter methodological point without thereby granting my point that NSRVs have not been observed. For, in the case of neuronal maps (and other bona fide NSRVs), it is standardly claimed that the relevant mapping has been discovered through such means. But, if these means were inadequate to observe NSRVs, then it clearly follows that we’ve not observed them—and this is exactly my point!

  26. A tempting and obvious solution to this problem is to resort to a form of informational (or information-based) semantics; that is, to claim that each neuron “maps onto” the stimulus about which it carries the most information (cf. Wiese 2017, pp. 219–223; also (arguably) Piccinini 2020b). However, such informational linkages seem unable to ascribe determinate contents (Artiga & Sebastian 2018; Rosche & Sober 2019). More generally, theories of structural representations interact poorly with informational accounts of content (cf. Facchin 2021a). A second solution is to appeal to the agent’s actual context (Ramsey 2007). But this solution can only work in some cases of successful online behavior. If the relevant vehicle is used in a decoupled manner, in service of offline cognition, then there is nothing in the agent’s context that can discriminate between Γ(TA,TB) and Γ(T@,TB)—else, the agent would not be decoupled from at least one of them. So, the solution does not generalize and fails to appropriately restore content determinacy. Other solutions are far less obvious, and thus cannot be considered here.

  27. See also (Rutar et al., 2022) for a more nuanced—and less structural-representationalist—treatment.

  28. Pitched at this level of generality, the claim is importantly contested (cf. Ritchie et al., 2019; Gessell et al. 2021). These critical arguments, however, do not apply to RSA, and so I will ignore them here.

  29. I will make a more general point about this issue in the post scriptum of this paper.

  30. Or non-representational computational states more generally (cf. Piccinini 2015).

  31. Notice that I’m writing “(TA,TB)” because the relation upon which the structural similarity is based is the same on both sides of the mapping.

  32. In all fairness, some philosophers have tried to elaborate a diachronic account of constitution (see Leuridan & Lodewyckx 2021; Kirchhoff and Kiverstein 2021; Kiverstein and Kirchhoff 2023) which may be used to counter my point. I’m skeptical about these accounts, and I would raise against them a modified version of Krickel’s (2023) objection. But I can’t articulate it here. So, I will only notice that defenders of the “cognitive neuroscience revolution” do not seem to be interested in such accounts, in a way that makes their view vulnerable to my objection.

  33. Of course, the same may not be true of non-implemented (purely mathematical) computational systems. But looking at such abstract entities could hardly allow us to observe neural representational vehicles.

  34. As a reviewer noticed, this also prevents defenders of the “cognitive neuroscience revolution” from categorizing inner simulations as structural representations, as they arguably should. One more problem for the cognitive neuroscience revolution.

  35. Indeed, Churchland's (1992) original structural similarity-based account of content was explicitly focused on multiple vehicles.

  36. On the concept of action-oriented representations, see (Clark 1997). Curiously, Clark’s original example of an action-oriented representation is Mataric’s (1991) “spatial map”—a robotic replica of the “spatial map” in the rat’s hippocampus. So, it seems that action-oriented representations were NSRs all along.

  37. But see Maley (2021b) for an argument to the effect that, in the case of analog representations (including structural ones), the difference between the implementational and the algorithmic level collapses.

  38. Or a series of ink marks, once the article is printed.

  39. See (Ramsey 2020) for acute criticism of some such accounts.

  40. I have an in-progress paper on this matter whose preprint can be consulted on my private website (https://marcofacchinmarcof.wixsite.com/site). Thanks to the anonymous referee for having motivated me to write it!

  41. This shouldn’t be read as entailing that it spells out only the functional profile. Presumably, the content of such structural representations is in fact grounded in the similarity they bear to their targets.

  42. Carriers of structural contents surfaced at many points in the argument developed in this paper, especially in (§§ 3.1 and 3.3). In all these cases, I argued that they are not structural representations in the relevant sense at play—that is, they don’t satisfy Gładziejewski’s account.

  43. And indeed, (Cummins 1989) is the account that Ramsey (2007) refers to when introducing structural representations in the context of classic, rule-and-representation-based theories of cognition. For another example, see Kosslyn’s (1983) “quasi-pictorial” representations.

  44. Though the two might be distinct. See the preprint I mentioned in footnote 40.

  45. At least unless defenders of the cognitive neuroscience revolution are willing to significantly modify and complexify the mechanistic metaphysics grounding their view, allowing for non-synchronic constitutive relations.

  46. Emulators and inner simulations may be one such case.

References

  • Aflalo, T. N., & Graziano, M. S. (2006). Partial tuning of motor cortex neurons to final posture in a free-moving paradigm. Proceedings of the National Academy of Sciences, 103(8), 2909–2914.

  • Albers, A. M., et al. (2013). Shared representations for working memory and mental imagery in early visual cortex. Current Biology, 23(15), 1427–1431.

  • Anderson, M. L. (2014). After phrenology. The MIT Press.

  • Anderson, M. L., & Champion, H. (2022). Some dilemmas for an account of neural representation: A reply to Poldrack. Synthese, 200(2), 169.

  • Artiga, M., & Sebastián, M. A. (2018). Informational theories of content and mental representation. Review of Philosophy and Psychology. https://doi.org/10.1007/s13164-018-0408-1

  • Barack, D. L., & Krakauer, J. W. (2021). Two views on the cognitive brain. Nature Reviews Neuroscience, 22(6), 359–371.

  • Baker, B., et al. (2022). Three aspects of representation in neuroscience. Trends in Cognitive Sciences, 26(11), 942–958.

  • Baumgartner, M., Casini, L., & Krickel, B. (2020). Horizontal surgicality and mechanistic constitution. Erkenntnis, 85(3), 417–430. https://doi.org/10.1007/s10670-018-0033-5

  • Bechtel, W. (2008). Mental mechanisms. Philosophical perspectives on cognitive neuroscience. Routledge.

  • Bechtel, W. (2014). Investigating neural representations: The tale of place cells. Synthese, 193, 1287–1321.

  • Bielecka, K., & Miłkowski, M. (2020). Error detection and representational mechanisms. In J. Smortchkova, K. Dolega, & T. Schlicht (Eds.), What are mental representations? (pp. 287–317). Oxford University Press.

  • Blauch, N. M., Behrmann, M., & Plaut, D. C. (2022). A connectivity-constrained computational account of topographic organization in primate high-level visual cortex. Proceedings of the National Academy of Sciences, 119(3), e2112566119.

  • Boone, W., & Piccinini, G. (2016). The cognitive neuroscience revolution. Synthese, 193(5), 1509–1534.

  • Born, R. T., & Bradley, D. C. (2005). Structure and function of visual area MT. Annual Review of Neuroscience, 28, 157–189. https://doi.org/10.1146/annurev.neuro.26.041002.131052

  • Brette, R. (2015). Philosophy of the spike: rate-based vs. spike-based theories of the brain. Frontiers in Systems Neuroscience, 9, 151.

  • Brette, R. (2019). Is coding a relevant metaphor for the brain? Behavioral and Brain Sciences, 42, e215.

  • Bruineberg, J., & Rietveld, E. (2019). What’s inside your head once you’ve figured out what your head’s inside of. Ecological Psychology, 31(3), 198–217.

  • Buckley, C. L., Kim, C. S., McGregor, S., & Seth, A. K. (2017). The free energy principle for action and perception: A mathematical review. Journal of Mathematical Psychology, 81, 55–79.

  • Burnston, D. C. (2016). A contextualist approach to functional localization in the brain. Biology & Philosophy, 31, 527–550.

  • Buzsaki, G. (2006). Rhythms in the brain. Oxford University Press.

  • Cao, R. (2022). Putting representations to use. Synthese, 200(2), 151.

  • Carlson, T. A., Ritchie, J. B., Kriegeskorte, N., Durvasula, S., & Ma, J. (2014). Reaction time for object categorization is predicted by representational distance. Journal of Cognitive Neuroscience, 26(1), 132–142.

  • Chakrabarty, S., & Martin, J. H. (2000). Postnatal development of the motor representation in primary motor cortex. Journal of Neurophysiology, 84(5), 2582–2594.

  • Chemero, A. (2009). Radical embodied cognitive science. The MIT Press.

  • Churchland, P. M. (1992). A neurocomputational perspective. The MIT Press.

  • Churchland, P. M. (1995). The engine of reason, the seat of the soul. The MIT Press.

  • Clark, A. (1997). Being there. The MIT Press.

  • Coelho Mollo, D. (2021). Deflationary realism: Representation and idealisation in cognitive science. Mind & Language, 37(5), 1048–1066.

  • Connolly, A. C., et al. (2012). The representation of biological classes in the human brain. Journal of Neuroscience, 32(8), 2608–2618.

  • Coraci, D. (2022). Representations and processes: What role for multivariate methods in cognitive neuroscience? Rivista Internazionale Di Filosofia e Psicologia, 13(3), 187–199.

  • Craver, C. F. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience. Clarendon Press.

  • Csibra, G. (2008). Action mirroring and action understanding: An alternative account. In P. Haggard, Y. Rossetti, & M. Kawato (Eds.), Sensorimotor foundations of higher cognition (pp. 435–459). Oxford University Press.

  • Cummins, R. (1989). Meaning and mental representation. MIT Press.

  • Cummins, R. (1996). Representations, targets, attitudes. The MIT Press.

  • Danks, D. (2014). Unifying the mind: Cognitive representations as graphical models. The MIT Press.

  • Davis, T., & Poldrack, R. A. (2013). Measuring neural representations with fMRI: Practices and pitfalls. Annals of the New York Academy of Sciences, 1296(1), 108–134.

  • Dayan, P., & Abbott, L. F. (2005). Theoretical neuroscience: computational and mathematical modeling of neural systems. MIT Press.

  • De Angelis, G. C., & Newsome, W. T. (1999). Organization of disparity-selective neurons in macaque area MT. Journal of Neuroscience, 19(4), 1398–1415.

  • Dennett, D. C. (1996). Darwin’s dangerous idea. Penguin.

  • de Wit, M. M., & Matheson, H. E. (2022). Context-sensitive computational mechanistic explanation in cognitive neuroscience. Frontiers in Psychology, 13, 903960.

  • Downey, A. (2018). Predictive processing and the representation wars: A victory for the eliminativist (via fictionalism). Synthese, 195, 5115–5139.

  • Dretske, F. (1988). Explaining behavior. The MIT Press.

  • Egan, F. (2020). A deflationary account of mental representations. In J. Smortchkova, K. Dolega, & T. Schlicht (Eds.), What are mental representations? (pp. 26–54). Oxford University Press.

  • Facchin, M. (2021a). Predictive processing and anti-representationalism. Synthese, 199(3–4), 11609–11604.

  • Facchin, M. (2021b). Structural representations do not meet the job description challenge. Synthese, 199(3), 5479–5508.

  • Favela, L. H., & Machery, E. (2023). Investigating the concept of representation in the neural and psychological sciences. Frontiers in Psychology, 14, 1165622.

  • Fodor, J. A. (1981). The mind-body problem. Scientific American, 244 (January 1981). Reprinted in J. Heil (Ed.) (2004), Philosophy of mind: A guide and anthology (pp. 168–182). Oxford University Press.

  • Frisby, S. L., et al. (2023). Decoding semantic representations in mind and brain. Trends in Cognitive Sciences, 27(3), 258–281.

  • Friston, K. (2005). A theory of cortical responses. Philosophical Transactions of the Royal Society B: Biological Sciences, 360(1456), 815–836.

  • Gessell, B., Geib, B., & De Brigard, F. (2021). Multivariate pattern analysis and the search for neural representations. Synthese, 199(5–6), 12869–12889.

  • Gładziejewski, P. (2015). Explaining cognitive phenomena with internal representations: A mechanistic perspective. Studies in Logic, Grammar and Rhetoric, 40(1), 63–90.

  • Gładziejewski, P. (2016). Predictive coding and representationalism. Synthese, 193(2), 559–582.

  • Gładziejewski, P., & Miłkowski, M. (2017). Structural representations: Causally relevant and distinct from detectors. Biology and Philosophy, 32(3), 337–355.

  • Gordon, E. M., et al. (2022). A mind-body interface alternates with effector-specific regions in motor cortex. Nature. https://doi.org/10.1038/s41586-023-05964-2

  • Graziano, M. S. (2011). Cables vs. networks: old and new views on the function of motor cortex. The Journal of Physiology, 589(Pt 10), 2439.

  • Graziano, M. S. (2016). Ethological action maps: A paradigm shift for the motor cortex. Trends in Cognitive Sciences, 20(2), 121–132.

  • Graziano, M. S., & Aflalo, T. N. (2007). Mapping behavioral repertoire onto the cortex. Neuron, 56(2), 239–251.

  • Grush, R. (2004). The emulation theory of representation: Motor control, imagery, and perception. Behavioral and Brain Sciences, 27(3), 377–396.

  • Grush, R., & Mandik, P. (2002). Representational parts. Phenomenology and the Cognitive Sciences, 1(3), 389–394.

  • Ha, D., & Schmidhuber, J. (2018a). Recurrent world models facilitate policy evolution. In S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, & R. Garnett (Eds.), Advances in neural information processing systems 31 (pp. 2451–2463). Curran Associates.

  • Ha, D., & Schmidhuber, J. (2018b). World models. Preprint. arXiv:1803.10122.

  • Hartmann, K., et al. (2016). Embedding a panoramic representation of infrared light in the adult rat somatosensory cortex through a sensory neuroprosthesis. Journal of Neuroscience, 36(8), 2406–2424.

  • Haruno, M., Wolpert, D. M., & Kawato, M. (2003). Hierarchical MOSAIC for motor generation. In T. Ono, G. Matsumoto, R. R. Llinas, A. Berthoz, R. Norgren, H. Nishijo, & R. Tamura (Eds.), Excerpta Medica International Congress Series (Vol. 1250) (pp. 575–590). Elsevier.

  • Haueis, P. (2018). Beyond cognitive myopia: A patchwork approach to the concept of neural function. Synthese, 195(12), 5373–5402.

  • Haxby, J., et al. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293(5539), 2425–2430.

  • Haxby, J. V., Connolly, A. C., & Guntupalli, J. S. (2014). Decoding neural representational spaces using multivariate pattern analysis. Annual Review of Neuroscience, 37, 435–456.

  • Hubel, D., & Wiesel, T. (1968). Receptive fields and functional architecture of monkey striate cortex. The Journal of Physiology, 195(1), 215–243.

  • Hurley, S. (1998). Consciousness in action. Cambridge University Press.

  • Hutto, D., & Myin, E. (2013). Radicalizing enactivism. The MIT Press.

  • Illari, P. (2013). Mechanistic explanation: Integrating the ontic and epistemic. Erkenntnis, 78, 237–255.

  • Isaac, A. M. (2013). Objective similarity and mental representation. Australasian Journal of Philosophy, 91(4), 683–704.

  • Itskov, P. M., et al. (2011). Hippocampal representation of touch-guided behavior in rats: Persistent and independent traces of stimulus and reward location. PLoS ONE, 6, e16462. https://doi.org/10.1371/journal.pone.0016462

  • Johnson-Laird, P. (1983). Mental models. Harvard University Press.

  • Kaplan, H. S., & Zimmer, M. (2020). Brain-wide representations of ongoing behavior: A universal principle? Current Opinion in Neurobiology, 64, 60–69.

  • Kelso, S. (1995). Dynamic patterns. The MIT Press.

  • Kilner, J. M., Friston, K. J., & Frith, C. D. (2007). Predictive coding: An account of the mirror neuron system. Cognitive Processing, 8(3), 159–166. https://doi.org/10.1007/s10339-007-0170-2

  • Kirchhoff, M. (2014). Extended cognition & constitution: Re-evaluating the constitutive claim of extended cognition. Philosophical Psychology, 27(2), 258–283.

  • Kirchhoff, M. D. (2015). Extended cognition & the causal-constitutive fallacy: In search for a diachronic and dynamical conception of constitution. Philosophy and Phenomenological Research, 90(2), 320–360.

  • Kirchhoff, M. D., & Kiverstein, J. (2021). Diachronic constitution. Preprint. http://philsci-archive.pitt.edu/19690/

  • Kiverstein, J., & Kirchhoff, M. D. (2023). Dissolving the causal-constitution fallacy: Diachronic constitution and the metaphysics of extended cognition. In M. O. Casper & G. F. Artese (Eds.), Situated cognition research: Methodological foundations. Springer.

  • Kohar, M. (2023). Neural machines: A defense of non-representationalism in cognitive neuroscience. Springer.

  • Kohler, E., et al. (2002). Hearing sounds, understanding actions: Action representation in mirror neurons. Science, 297(5582), 846–848.

  • Kosslyn, S. (1983). Ghosts in the mind’s machine. W.W. Norton.

  • Kraus, B. J., Robinson, R. J., White, J. A., Eichenbaum, H., & Hasselmo, M. E. (2013). Hippocampal “time cells”: Time versus path integration. Neuron, 78(6), 1090–1101.

  • Krickel, B. (2023). Extended cognition and the search for the mark of constitution—a promising strategy? In M. O. Casper & G. F. Artese (Eds.), Situated cognition research: Methodological foundations. Springer.

  • Kriegeskorte, N., et al. (2008). Representational similarity analysis—Connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience. https://doi.org/10.3389/neuro.06.004.2008

  • Kriegeskorte, N., & Kievit, R. A. (2013). Representational geometry: Integrating cognition, computation, and the brain. Trends in Cognitive Sciences, 17(8), 401–412.

  • Kriegeskorte, N., & Diedrichsen, J. (2019). Peeling the onion of brain representations. Annual Review of Neuroscience, 42, 407–432.

  • Krickel, B. (2018). Saving the mutual manipulability account of constitutive relevance. Studies in History and Philosophy of Science Part A, 68, 58–67.

  • Kwan, H. C., et al. (1978). Spatial organization of precentral cortex in awake primates. II. Motor outputs. Journal of Neurophysiology, 41(5), 1120–1131.

  • Lee, J. (2019). Structural representations and the two problems of content. Mind & Language, 34(5), 606–626.

  • Lee, J. (2021). Rise of the swamp creatures. Philosophical Psychology, 34(6), 805–828.

  • Lee, J., & Calder, D. (2023). The many problems with S-representation (and how to solve them). Philosophy and the Mind Sciences. https://doi.org/10.33735/phimisci.2023.9758

  • Lee, A. Y., et al. (2022). The structure of analog representation. Noûs, 2022, 1–28. https://doi.org/10.1111/nous.12404

  • Leuridan, B., & Lodewyckx, T. (2021). Diachronic causal constitutive relations. Synthese, 198, 9035–9065.

  • Maley, C. (2021a). Analog computation and representation. The British Journal for the Philosophy of Science. https://doi.org/10.1086/715031

  • Maley, C. J. (2021b). The physicality of representation. Synthese, 199(5–6), 14725–14750.

  • Maley, C. (2023). Icons, magnitudes and their parts. Forthcoming in Critica: Revista Hispanoamericana de Filosofia.

  • Martin, J. H., et al. (2000). Impairments in prehension produced by early postnatal sensorimotor cortex activity blockade. Journal of Neurophysiology, 83, 895–906.

  • Martin, J. H., et al. (2005). Effect of forelimb use on postnatal development of the forelimb motor representation in primary motor cortex of the cat. Journal of Neurophysiology, 93(5), 2822–2831.

  • Martinez, M., & Artiga, M. (2021). Neural oscillations as representations. The British Journal for the Philosophy of Science. https://doi.org/10.1086/714914

  • Mataric, M. (1991). Navigating with a rat’s brain: A neurobiologically inspired model for robot spatial representation. In J. A. Meyer & S. Wilson (Eds.), From animals to animats 1 (pp. 169–175). The MIT Press.

  • McClelland, J. L., et al. (1986). Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1). The MIT Press.

    Google Scholar 

  • McLendon, H. J. (1955). Uses of similarity of structure in contemporary philosophy. Mind, 64(253), 79–95.

    Article  Google Scholar 

  • McNamee, D., & Wolpert, D. M. (2019). Internal models in biological control. Annual Review of Control, Robotics, and Autonomous Systems, 2, 339–364.

    Article  PubMed  PubMed Central  Google Scholar 

  • Mesulam, M. (2008). Representation, inference, and transcendent encoding in neurocognitive networks of the human brain. Annals of Neurology, 64(4), 367–378.

    Article  PubMed  Google Scholar 

  • Morgan, A. (2014). Representations gone mental. Synthese, 191(2), 213–244.

    Article  Google Scholar 

  • Morgan, A., & Piccinini, G. (2018). Towards a cognitive neuroscience of intentionality. Minds and Machines, 28, 119–139.

    Article  Google Scholar 

  • Moser, E. I., Kropff, E., & Moser, M. B. (2008). Place cells, grid cells, and the brain spatiotemporal representation system. Annual Review Neuroscience, 31, 69–89. https://doi.org/10.1146/annurev.neuro.31061307.090723


  • Neander, K. (2017). A mark of the mental. The MIT Press.


  • Nieder, A., Diester, I., & Tudusciuc, O. (2006). Temporal and spatial enumeration processes in the primate parietal cortex. Science, 313(5792), 1431–1435.


  • Nirshberg, G. (2023). Structural resemblance and the causal role of content. Erkenntnis, 1–20.

  • Nirshberg, G., & Shapiro, L. (2020). Structural and indicator representations: A difference in degree, not in kind. Synthese. https://doi.org/10.1007/s11229-020-02537-y


  • O’Brien, G. (2015). How does mind matter? Solving the content causation problem. In T. K. Metzinger & J. M. Windt (Eds.), Open mind. Mind Group. https://doi.org/10.15502/9783958570146

  • O’Brien, G., & Opie, J. (2009). The role of representation in computation. Cognitive Processing, 10, 53–62.


  • O’Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. Clarendon Press.


  • O’Regan, K. (2011). Why red doesn’t sound like a bell: Understanding the feel of consciousness. Oxford University Press.


  • Orlandi, N. (2020). Representing as coordinating with absence. In J. Smortchkova, K. Dołega, & T. Schlicht (Eds.), What are mental representations? (pp. 101–134). Oxford University Press.


  • Peirce, C. S. (1931–1958). Collected papers of Charles Sanders Peirce (C. Hartshorne, P. Weiss, & A. Burks, Eds., Vols. 1–8). Harvard University Press.

  • Penfield, W., & Boldrey, E. (1937). Somatic motor and sensory representation in the cerebral cortex of man as studied by electrical stimulation. Brain, 60(4), 389–443.


  • Penfield, W., & Rasmussen, T. (1950). The cerebral cortex of man: A clinical study of localization of function. Macmillan.


  • Pezzulo, G. (2008). Coordinating with the future: The anticipatory nature of representation. Minds and Machines, 18, 179–225.


  • Piccinini, G. (2015). Physical computation. Oxford University Press.


  • Piccinini, G. (2020a). Neurocognitive mechanisms. Oxford University Press.


  • Piccinini, G. (2020b). Nonnatural mental representations. In J. Smortchkova, K. Dołega, & T. Schlicht (Eds.), What are mental representations? Oxford University Press.


  • Piccinini, G. (2022). Situated neural representations: Solving the problems of content. Frontiers in Neurorobotics. https://doi.org/10.3389/fnbot.2022.846979


  • Piccinini, G., & Craver, C. (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese, 183, 283–311.


  • Pickering, M. J., & Clark, A. (2014). Getting ahead: Forward models and their place in cognitive architecture. Trends in Cognitive Sciences, 18(9), 451–456.


  • Poldrack, R. (2020). The physics of representation. Synthese, 199, 1307–1325.


  • Quiroga, R., et al. (2005). Invariant visual representation by single neurons in the human brain. Nature, 435(7045), 1102–1107.


  • Raichle, M. E. (2015). The brain’s default mode network. Annual Review of Neuroscience, 38, 433–447.


  • Ramsey, W. (2003). Are receptors representations? Journal of Experimental & Theoretical Artificial Intelligence, 15(2), 125–141.


  • Ramsey, W. (2007). Representation reconsidered. Cambridge University Press.


  • Ramsey, W. (2016). Untangling two questions about mental representation. New Ideas in Psychology, 40, 3–12.


  • Ramsey, W. (2020). Defending representation realism. In J. Smortchkova, K. Dołega, & T. Schlicht (Eds.), What are mental representations? (pp. 54–84). Oxford University Press.


  • Ritchie, J. B., Tovar, D. A., & Carlson, T. A. (2015). Emerging object representations in the visual system predict reaction times for categorization. PLOS Computational Biology, 11(6), e1004316.


  • Ritchie, J. B., Kaplan, D. M., & Klein, C. (2019). Decoding the brain: Neural representation and the limits of multivariate pattern analysis in cognitive neuroscience. The British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axx023


  • Rizzolatti, G., & Sinigaglia, C. (2023). Mirroring brains. Oxford University Press.


  • Roche, W., & Sober, E. (2019). Disjunction and distality: The hard problem for purely probabilistic causal theories of mental content. Synthese. https://doi.org/10.1007/s11229-019-02516-y


  • Roskies, A. L. (2021). Representational similarity analysis in neuroimaging: Proxy vehicles and provisional representations. Synthese, 199(3–4), 5917–5935.


  • Rutar, D., Wiese, W., & Kwisthout, J. (2022). From representations in predictive processing to degrees of representational features. Minds and Machines, 32(3), 461–484.


  • Schieber, M. H. (2001). Constraints on somatotopic organization in the primary motor cortex. Journal of Neurophysiology, 86(5), 2125–2143.


  • Segundo-Ortin, M., & Hutto, D. D. (2021). Similarity-based cognition: Radical enactivism meets cognitive neuroscience. Synthese, 198(Suppl 1), 5–23.


  • Seth, A. K. (2015). The cybernetic bayesian brain. In T. Metzinger, J. Windt (Eds.), Open MIND. The MIND Group. https://doi.org/10.15502/9783958570108

  • Shagrir, O. (2012). Structural representations and the brain. The British Journal for the Philosophy of Science, 63(3), 519–545.


  • Shagrir, O. (2018). The brain as an input–output model of the world. Minds and Machines, 28, 53–75.


  • Shea, N. (2018). Representation in cognitive science. Oxford University Press.


  • Silberstein, M., & Chemero, A. (2013). Constraints on localization and decomposition as explanatory strategies in the biological sciences. Philosophy of Science, 80(5), 958–970.


  • Silberstein, M. (2021). Constraints on localization and decomposition as explanatory strategies in the biological sciences 2.0. In F. Calzavarini & M. Viola (Eds.), Neural mechanisms: New challenges in the philosophy of neuroscience. Springer.


  • Skyrms, B. (2010). Signals: Evolution, learning, and information. Oxford University Press.


  • Sprevak, M. (2013). Fictionalism about neural representations. The Monist, 96(4), 539–560.


  • Sterling, P., & Laughlin, S. (2015). Principles of neural design. MIT Press.


  • Sun, C., Yang, W., Martin, J., & Tonegawa, S. (2020). Hippocampal neurons represent events as transferable units of experience. Nature Neuroscience, 23(5), 651–663.


  • Swoyer, C. (1991). Structural representation and surrogative reasoning. Synthese, 87(3), 449–508.


  • Tani, J. (2007). On the interactions between top-down anticipation and bottom-up regression. Frontiers in Neurorobotics, 1, 2.


  • Tani, J. (2016). Exploring robotic minds. Oxford University Press.


  • Thomson, E., & Piccinini, G. (2018). Neural representations observed. Minds and Machines, 28, 191–235.


  • Tootell, R. B., Switkes, E., Silverman, M. S., & Hamilton, S. L. (1988). Functional anatomy of macaque striate cortex. II. Retinotopic organization. Journal of Neuroscience, 8(5), 1531–1568.


  • Tschantz, A., Seth, A. K., & Buckley, C. L. (2020). Learning action-oriented models through active inference. PLOS Computational Biology, 16(4), e1007805.


  • Van Bree, S. (2023). A critical perspective towards mechanisms in cognitive neuroscience: Towards unification. Perspectives on Psychological Science. https://doi.org/10.1177/17456916231191744


  • Van der Weel, F. R., Sokolovskis, I., Raja, V., & van der Meer, A. L. (2022). Neural aspects of prospective control through resonating taus in an interceptive timing task. Brain Sciences, 12(12), 1737.


  • Van Gelder, T. (1991). What is the “D” in “PDP”? A survey of the concept of distribution. In W. Ramsey, S. P. Stich, & D. E. Rumelhart (Eds.), Philosophy and connectionist theory. Routledge.


  • Vilarroya, O. (2017). Neural representation: A survey-based analysis of the notion. Frontiers in Psychology, 8, 1458.


  • Von Eckardt, B. (1996). What is cognitive science? The MIT Press.


  • Wassermann, E. M., et al. (1992). Noninvasive mapping of muscle representations in human motor cortex. Electroencephalography and Clinical Neurophysiology/Evoked Potentials Section, 85(1), 1–8.


  • Westlin, C., et al. (2023). Improving the study of brain-behavior relationships by revisiting basic assumptions. Trends in Cognitive Sciences, 27(3), 246–257.


  • Wiese, W. (2016). What are the contents of representations in predictive processing? Phenomenology and the Cognitive Sciences, 16, 715–736.


  • Wiese, W. (2017). Experienced wholeness. The MIT Press.


  • Williams, D. (2017). Predictive processing and the representation wars. Minds and Machines, 28(1), 141–172.


  • Williams, D., & Colling, L. (2017). From symbols to icons: The return of resemblance in the cognitive science revolution. Synthese, 195(5), 1941–1967.


  • Wood, E. R., et al. (1999). The global record of memory in hippocampal neuronal activity. Nature, 397(6720), 613–616.


  • Wood, E. R., et al. (2000). Hippocampal neurons encode information about different types of memory episodes occurring in the same location. Neuron, 27(3), 623–633.


  • Woolsey, C. N., et al. (1952). Patterns of localization in precentral and "supplementary" motor areas and their relation to the concept of a premotor area. Research Publications-Association for Research in Nervous and Mental Disease, 30, 238–264.


  • Yarrow, S., et al. (2014). Detecting and quantifying topography in neural maps. PLoS ONE, 9(2), e87178.



Acknowledgements

Thanks to (in random order) Marco Viola, Davide Coraci, Jonny Lee and Sanja Sreckovic for having read and commented upon several previous poorly written and half-baked versions of this paper. Thanks to (again, in random order) Erik Thomson, Bryce Huebner and Carl Sachs for an extremely insightful exchange via Twitter on cortical maps and structural representations. This paper has also been presented at a number of conferences and workshops—in particular: the 4th International Conference in Philosophy of Mind in Braga (Portugal), the Representational Penumbra workshop held in Valencia (Spain), the British Society for Philosophy of Science conference held in Bristol (UK), the European Congress of Analytic Philosophy held in Vienna (Austria), the European Society of Philosophy and Psychology conference in Prague (Czech Republic), and the first online conference of the International Society for Philosophy and the Mind Sciences. I wish to thank the audience of all these conferences for their engaging questions and challenges. A special thanks goes to: Marc Artiga, Manolo Martinez, Peter Schulte, Nick Shea, Rosa Cao and Krys Dolega (again, in random order) for the several challenges they raised to the arguments I present here. I swear I will answer them all in a follow-up paper (and there will really be a follow-up paper, go read the Appendix)! A thanks also to the anonymous referees—with an apology for having forced them to sit through this gargantuan paper multiple times.

Funding

This research was funded by the FWO grant "Towards a globally non-representational theory of the mind" (Grant Number 1202824N).

Author information

Authors and Affiliations

Authors

Contributions

MF is the sole author of the paper.

Corresponding author

Correspondence to Marco Facchin.

Ethics declarations

Conflict of interest

The author declares no conflict of interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: on distinguishing types of structural representations (and why it matters)

During the review process, a reviewer (whom I thank) met many of the claims made here with a number of reasonable observations on how structural representations are understood in the literature. And whilst (at least as far as this paper is concerned) the reviewer and I seem to have agreed to disagree, there is something to their observations—something that, I believe, points to the fact that, in the literature, the term "structural representation" is systematically ambiguous. Whilst this is not the place to dispel this ambiguity,Footnote 40 I wish to point it out—if anything, to address a number of potential objections to my claims, or misunderstandings of this paper.

Throughout the paper, I have relied on Gładziejewski's (2015, 2016) account of structural representations. Such an account is explicitly guided by the image of a cartographic map: a single vehicle whose constituents are enveloped in a web of relations that mirrors the web of relations among the constituents of a target, thereby making the former a representation of the latter. On such a—hopefully by now familiar—view of structural representations, the constituents of the whole vehicle are—in a way—representations too, whose representational status derives from the representational status of the whole vehicle (cf. Cummins, 1996). Given the popularity of Gładziejewski's (2015, 2016) account, and the fact that it is constantly referred to in the cognitive neuroscience revolution literature, it is reasonable to treat it as the standard understanding of structural representations (at least in that corner of philosophy). This understanding of structural representations has been the target of my attack, and I won't comment any further on it—except to notice two things: (a) the account spells out a specific "functional profile" for structural representations, telling us that they function as representations by functioning as maps (see Gładziejewski, 2015),Footnote 41 and (b) the account is markedly anti-symbolic. Thus understood, structural representations can't be arbitrary symbols, for their very physical shape connects them to their targets (cf. Williams & Colling, 2017). Insofar as "classic", rules-and-representations-based cognitive science is symbolic, then, this account of structural representations is anti-classical.

As the reviewer correctly noticed, however, entities satisfying the description above are not the only referents philosophers grace with the title of structural representations. William Ramsey (2007) and, more recently, Matej Kohar (2023) used the term to refer to what I'll here call (for reasons that will soon become manifest) carriers of structural contents.Footnote 42 According to their usage, the term "structural representation" refers to an individual vehicle belonging to a set of vehicles, the relations amongst which "mirror" the relations holding amongst the elements of some target domain. So, both on my (and Gładziejewski's) usage and on the structural content usage, the term "structural representation" refers to an individual vehicle. Yet, on my usage the structural similarity holds between an individual vehicle and its target, whereas on the structural content usage the similarity holds between the set each individual structural representation is part of and some target domain. These are clearly different things.

Why call the entities satisfying this description "carriers of structural contents"? Because what this account gives us is an account of why each individual vehicle of the set represents what it represents. Each vehicle represents what it represents because it is part of a set of vehicles, the relations amongst which make the whole set structurally similar to a target domain. Such a view assigns a content to each vehicle based on its "place" in the overall similarity, but it remains utterly silent about the vehicles' functional profile (which is left undefined) and their physical shape. Indeed, the vehicles carrying structural contents can be arbitrary—at least to the extent that their arbitrary physical shapes do not interfere with their standing in the appropriate relations to each other.

Carriers of structural contents can thus be coherently meshed with classical, symbolic, rules-and-representations-based cognitive science. To see why this is the case, it is sufficient to notice that Cummins's (1989) account of content for classical cognitive science is a particular incarnation of what I've been calling structural contents.Footnote 43 In the view Cummins originally proposed, computational states (the symbols of classical cognitive science) represent what they represent in virtue of the fact that the computational state transitions holding amongst them "mirror" certain relevant relations in a target domain. So, these vehicles represent what they represent in virtue of the fact that certain computational relations (mirroring the relevant relations of a target domain) hold among them. On some accounts, then, classical, symbolic representations can be carriers of structural contents—and can thus be called structural representations according to one usage of the term—which, however, is not (and indeed cannot be) the relevant usage made by defenders of the cognitive neuroscience revolution.

Similarly, indicators and detectors can qualify as carriers of structural contents—at least given the arguments offered by Facchin (2021b) and Nirshberg and Shapiro (2020).Footnote 44 On such views, individual indicators represent what they represent (and indicate what they indicate) in virtue of a specific structural similarity holding between the set of indicator states and the indicated target: indication is a special case of structural similarity (at least, if Facchin, Nirshberg and Shapiro are correct). Since, as argued in Sects. 3.1 and 3.3, individual indicator states can't be constituents of a larger vehicle, we're seemingly forced to interpret them as individual vehicles of structural contents. So, indicators and detectors too can be said to be structural representations in one sense of the term, though not in the sense relevant to the cognitive neuroscience revolution.

Such a distinction between structural representations and carriers of structural contents, I believe, can be mobilized to make sense of why structural representations seem both to be everywhere and to systematically elude our gaze (as I argued above).

Consider first neuronal responses—both individually and collectively (as they are considered, for example, in representational similarity analysis; see Sect. 3.3). Individual neuronal responses are naturally classified as indicators (cf. Sect. 3.1), and so as carriers of structural contents (at least, if Facchin, Morgan, Shapiro and Nirshberg are on the right track). Sets of neuronal responses are also naturally read as carriers of structural contents—at least insofar as the structural similarity holds between the entire set of responses and some target domain (cf. Sect. 3.5). So, whilst both are structural representations in some sense, they're not structural representations in the relevant, cognitive neuroscience revolution-validating sense.

Consider now inner simulations and emulations. Such representations are often invoked in cognitive neuroscience (e.g. Csibra, 2008; Grush, 2004) and are taken as bona fide cases of structural representations. And indeed, they are carriers of structural contents: individual states of the simulation or emulation need not structurally resemble anything—only the entire process must. And since the process can't plausibly be considered an individual vehicle (cf. Sects. 3.1 and 3.3), we're left with carriers of structural contents.Footnote 45 Again, simulations and emulations are structural representations in some sense, but that sense is not the one relevant to the cognitive neuroscience revolution. This, as the reviewer noticed, is a big problem for the revolution: arguably, its theoretical commitments make its defenders unable to capitalize on (and are actually incompatible with; see below) the most widespread type of structural representation in the current neuroscientific literature.

Consider, lastly, the fact that I've hunted for structural representations roughly at the implementation level, looking at the actual neural machinery (allegedly) doing the representing. Can't structural representations be found at higher, roughly algorithmic, levels of abstraction? Yes, but only in the sense that carriers of structural contents can be found at such levels.Footnote 46 For, in this case, the physical shape of the vehicles is not relevant to their being structural representations (i.e. carriers of structural contents)—only their relations are. In contrast, in the case of structural representations in the relevant sense, the physical shape of the vehicles is essential to their status as structural representations: their implementation matters for their representational status. Hence, they should be found at the implementation level.

The distinction between structural representations in the relevant sense and carriers of structural contents, then, allows us to make sense both of the seeming omnipresence of structural representations (indeed, carriers of structural contents appear to be widespread) and of their disappearance on closer inspection (nothing seems to satisfy Gładziejewski's account). A natural question, at this point, is whether the cognitive neuroscience revolution may ditch Gładziejewski's structural representations in favor of carriers of structural contents. The answer, I think, is negative. For carriers of structural contents are entirely compatible with classic cognitive science: by adopting them, the cognitive neuroscience revolution would stop being a revolution. Worse still, the contents carried by carriers of structural contents are independent of their vehicle properties, and so can't play the relevant causal role played by the content of structural representations (Sects. 1 and 2). As such, the contents of carriers of structural contents are not explanatory assets defenders of the cognitive neuroscience revolution can count upon.

Does this mean that structural representations, in the relevant sense discussed here, will never be observed? Not necessarily. Perhaps, as the reviewer suggests, we might be able to observe them thanks to a methodological shift—diverting our attention from neuronal responses (which, at best, carry structural contents) to spontaneous, endogenous and "decoupled", non-stimulus-driven neural activity. Whilst such a shift in attention faces some methodological challenges (see Sect. 3), it might be possible to meet them, and to observe structural representations in the relevant sense.

Even in this case, however, neural structural representations (in the relevant sense) would, for now, remain unobserved—they may populate our brains, but we have not seen them yet. What we're left with, then, are some thorny issues for the defenders of the cognitive neuroscience revolution to solve, together with the need to disentangle the various distinct senses of the term "structural representation". And the latter is definitely a task for a different paper.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Facchin, M. Neural representations unobserved—or: a dilemma for the cognitive neuroscience revolution. Synthese 203, 7 (2024). https://doi.org/10.1007/s11229-023-04418-6

