Structural representations do not meet the job description challenge

Abstract

Structural representations are increasingly popular in philosophy of cognitive science. A key virtue they seemingly boast is that of meeting Ramsey's job description challenge. For this reason, structural representations appear tailored to play a clear representational role within cognitive architectures. Here, however, I claim that structural representations do not meet the job description challenge. This is because even our most demanding account of their functional profile is satisfied by at least some receptors, which paradigmatically fail the job description challenge. Hence, the functional profile typically associated with structural representations does not identify representational posits. After a brief introduction, I present, in the second section of the paper, the job description challenge. I clarify why receptors fail to meet it and highlight why, as a result, they should not be considered representations. In the third section I introduce what I take to be the most demanding account of structural representations at our disposal, namely Gładziejewski's account. Having provided the necessary background, I turn from exposition to criticism. In the first half of the fourth section, I equate the functional profile of structural representations with that of receptors. To do so, I show that some receptors boast, as a matter of fact, all the functional features associated with structural representations. Since receptors function merely as causal mediators, I conclude that structural representations are mere causal mediators too. In the second half of the fourth section I make this conclusion intuitive with a toy example. I then conclude the paper, anticipating some objections my argument invites.


Code availability

Not applicable.

Notes

1.

    Importantly, these theories try to provide an account of original (non-derived or intrinsic) content. Roughly put, content is original when it is not grounded in some already contentful state, item or process. Notice further that the distinction between mental and public representations is orthogonal to the distinction between original and derived content according to at least some naturalistic accounts of content. For instance, according to Millikan’s teleosemantics, bee dances have original content, even if they are not mental representations (see Millikan 1984; see also Lyre 2016; Vold and Schlimm 2020 for other examples). There might even be mental representations whose content is not original (see Clark 2010 for a possible case). At any rate, in the following I will use “intentionality” and “content” as meaning “original intentionality” and “original content”, unless stated otherwise. I will also use the term “representation” as a shorthand for “representation with original content”, whether public or mental.

2.

    It should be noted, however, that such a similarity, albeit sufficient to meet the challenge, is not necessary to meet it. In fact, Ramsey seems to allow that certain posits actually qualify as genuinely representational mostly because of their explanatory role within a theory. Arguments by analogy, however, are by far the most popular way to confront the challenge, and therefore they will be the focus of the present treatment.

3.

    See also (Artiga and Sebastián 2018) for an argument to the same effect which is largely independent from Ramsey’s (2007) framework.

4.

Notice that according to both definitions, indication is not a causal notion. The fact that V indicates T might, but need not, obtain in virtue of a causal relation holding between V and T (Dretske 1981, pp. 26–39). Nor does "Shannon information" (around which the original notion of indication was modeled) necessarily depend on any straightforwardly causal link (Shannon and Weaver 1949). In fact, textbooks on information theory are silent on causality (e.g. Cover and Thomas 2006). All that matters seems to be uncertainty reduction.
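    For readers who want the standard formal gloss on "uncertainty reduction", here is a minimal sketch in textbook Shannon terms (not drawn from Dretske's own formulations): the information V carries about T is the mutual information, i.e. the reduction in the entropy of T obtained by observing V.

```latex
% Uncertainty reduction as mutual information (standard Shannon theory):
% H is the Shannon entropy; observing V reduces uncertainty about T by I(V;T).
\[
I(V;T) \;=\; H(T) - H(T \mid V)
       \;=\; \sum_{v,t} p(v,t)\,\log_2 \frac{p(v,t)}{p(v)\,p(t)} .
\]
```

    Nothing in this definition mentions a causal channel from T to V; any joint probability distribution with the right dependencies suffices.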

5.

    Importantly, as a reviewer noticed, taking entire structures as representations is a deviation from Dretske’s framework. In Dretske’s view, it is not correct to say that, for instance, a barometer represents the pressure. Rather, we should say that the barometer being in state s represents the fact that the pressure is n Pascals. However, this loose usage is not just prominent in the literature (e.g. Morgan 2014, pp. 231–232; Williams and Colling 2017, p. 1947), it also strikes me as entirely unproblematic. To continue with the previous example, the claim that a barometer represents the pressure is entirely intelligible and easily unpacked by saying that the barometer represents the pressure of a given environment by occupying, at any moment, the state that indicates the pressure at that moment.

6.

    In order to justify this claim, it is sufficient to notice that the level of the sea cannot misrepresent the position of the moon. But something can count as a representation only if it can misrepresent in at least some cases.

7.

Notice here that panrepresentationalism is a problem only because I'm assuming that the content at play here is original. There is, I believe, no problem of panrepresentationalism related to non-original (or derived) content, for each and every thing can, in principle, be assigned some derived content. We could surely stipulate, for instance, that a mug represents Napoleon, or that a pair of shoes represents Castor and Pollux. This also seems to be the reason why semioticians (who are interested in representations with both original and derived content) have no problem in saying, for instance, that a cigarette butt found at a crime scene represents the fact that the murderer is a smoker, or that finding my fingerprints on a surface signals the fact that I touched that surface. In all these cases, the relevant signs (or representations) are tied to their targets only by a loose causal connection. However, this does not generate any problem with panrepresentationalism because their content is derived, as it depends on the interpretation of some clever detective (or some other interpreter).

8.

In the original formulation, I employed the term "(neuro)psychological" instead of "psychological or cognitive". An anonymous reviewer noticed that the original formulation was too strong: what if intentionality were to be naturalized in purely causal-computational terms? I agree with the reviewer, and thus resort to the more neutral (and, I fear, more vague) "psychological or cognitive" formulation. Yet I wish to highlight that the same problem remains, regardless of which sort of account will finally succeed in fully naturalizing intentionality. For even if intentionality were to be fully naturalized in causal-computational terms, at least some causal-computational goings-on should turn out to be non-intentional; otherwise, the empirical adequacy of the account would be seriously threatened (minimally, because pan-intentionalism is not a desideratum of a naturalistic theory of intentionality).

9.

    Here I'm trading precision for clarity: in particular, I'm suppressing the set-theoretic lexicon of the original formulation in favor of intuitiveness and ease of exposition.

10.

    Notice that some would also suggest that decouplability (i.e. point (3)) is dispensable (e.g. Miłkowski 2017). See also (Chemero 2009, pp. 50–55).

11.

    I owe the phrasing of this point to my colleague Silvia Bianchi.

12.

Some examples, for the sake of intuitive clarity: the hair in a hair hygrometer gets longer as the humidity rises; the floating unit of a fuel gauge gets lower as the tank gets emptier; the return signal of a proximity sensor comes back faster as the target gets closer; and so on.

13.

    To be precise, Shea’s definition of exploitability also imposes that the features of the target and their relations must be of significance to the system, where “significance” is at least partially determined by the system’s functions. Given that Gładziejewski and Miłkowski (2017) do not discuss this aspect of exploitability and simply assume it obtains, I will assume it too.

14.

Notice that having the same kind of relations on both sides of the similarity is perfectly legitimate. Indeed, maps do represent spatial relations through spatial relations.

15.

    This sequence might also include ta. The point I’d like to make would not be challenged by its inclusion.

16.

To be sure, that would be a very thin structural similarity. Yet notice that the relevant definition of structural similarity Gładziejewski endorses quantifies only over "at least some" relations, and it thus seems satisfied by what is shown in my example.

17.

Notice that ta is included, as it was (by stipulation) the state occupied by the target at the beginning of the example.

18.

    Importantly, from this it follows that all systems relying on receptors to organize their behavior are exploiting at least this time-dependent structural resemblance, as it cannot be merely epiphenomenal.

19.

    The anonymous reviewer also suggested that in such a case the litmus paper would count as a decoupled receptor of pH-in-the-past. I think I disagree. It seems to me that litmus papers have, by design, the function of indicating the pH of substances in the present.

20.

Although they do hold that each call is structurally similar to one predator (see Nirshberg and Shapiro 2020, p. 16).

21.

    Notice also that, at least in this case, single calls afford the detection of representational error. It is in fact suggested that repeated mistokening of these calls might cause the “liar” vervet to be ignored by the pack (e.g. Cheney and Seyfarth 1985, p. 160).

22.

An anonymous reviewer suggested that Gładziejewski actually embraced the existence of decoupled receptors in his (2015a). Specifically, the reviewer argues that Gładziejewski (2015a) used to consider indications of interactive potentialities (see Bickhard 1993, 1999) as decoupled receptors (they are decoupled because they indicate future actions). I'm unsure whether this is the correct interpretation of Gładziejewski (2015a). In fact, it seems to me that Gładziejewski understood (and maybe still understands) indications of interactive potentialities as tacit representations rather than receptors (see Gładziejewski 2015a, p. 19). However, as far as I can see, nothing in the present essay hinges on this point.

23.

    Notice strong decouplability fails to obtain: the whole robot is coupled to the cube it's pushing.

24.

    Notice these nets lack both self-recurrent connections and hidden units: the typical resources that are considered representational vehicles in connectionist systems (e.g. Shea 2007; Shagrir 2012). Their activity is thus interpretable in a straightforwardly non-representational manner (Ramsey 1997).

25.

    After the learning period, in which the net learns the robot's sensorimotor contingencies (see O'Regan and Noë 2001): the ways in which stimulation changes as a consequence of movement.

26.

Technically, the architecture behaves as if it were detecting the mismatch between the received inputs and the inputs self-generated by a forward model (see Bovet 2007, pp. 79–106). This mismatch is ordinarily treated as prediction error in the predictive processing literature, and Gładziejewski (2015b, 2016) himself relies on this very same notion of error.
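    As a purely illustrative sketch of this notion of mismatch (not Bovet's actual architecture; all names and the linear forward model below are hypothetical), the idea can be rendered as follows:

```python
import numpy as np

# Purely illustrative sketch: the names and the linear "forward model" are
# hypothetical, not taken from Bovet (2007) or from Gladziejewski's papers.

def forward_model(state, action, weights):
    """A toy forward model: predicts the next sensory input from the current
    state and the action just performed (here, a simple linear map)."""
    return weights @ np.concatenate([state, action])

def prediction_error(received_input, predicted_input):
    """The mismatch between what the sensors actually deliver and what the
    forward model predicted; this difference is what the predictive
    processing literature calls 'prediction error'."""
    return received_input - predicted_input

state = np.array([0.2, 0.5])        # current sensory state (toy values)
action = np.array([1.0])            # action just performed
weights = np.zeros((2, 3))          # untrained model: predicts all zeros
received = np.array([0.3, 0.4])     # input actually received after acting

error = prediction_error(received, forward_model(state, action, weights))
print(error)  # [0.3 0.4]: the mismatch the architecture behaves as if detecting
```

    The sketch is only meant to show that the error signal is the difference between the actual and the self-predicted input.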

27.

Here my gratitude goes to an anonymous reviewer, to whom I owe both the objection and its brilliant framing.

28.

Notice that this is just a "ghost channel" in the sense of Dretske (1981, pp. 38–39): a set of statistically salient dependency relations between the states of two systems that are not in causal contact.

29.

Importantly, if, as Lee (2018) suggests, condition (4) can be dispensed with, the capacitor already is a structural representation. If, however, condition (4) cannot be dispensed with (as Gładziejewski surely holds), then a fairly simple modification of the system is needed.

30.

    Notice that in thermostats bi-metallic strips are used as switches in the same way.

31.

    Notice also that such a move would undermine the claim that SRs meet the job description challenge. In fact, to the best of my knowledge, that claim has only been supported by means of arguments by analogy.

32.

    See also (Wiese 2017) for a case of false but useful representations at the sub-personal level of explanation.

33.

    Perhaps indication is a special case of structural similarity, as not all structural similarities need to involve indication (see Shea 2018, p. 138 for one example). But special cases of structural similarities still are structural similarities.

34.

    Strikingly, most of the time SRs are defined in terms of representations (see Swoyer 1991; Ramsey 2007, pp. 77–92. See also the insightful discussion in Shea 2018 pp. 117–118).

35.

    Or, in more mundane terms, the cases in which the system malfunctions.

36.

Here, by substantial I mean "non-pragmatic". The pragmatic rationale behind (4) is fairly straightforward: (4) makes the account of SRs more robust, protecting it from trivializing counterexamples. Notice further that Gładziejewski (2016) simply takes error detection for granted, without offering any substantial justification for it. In fact, his own brief discussion of error detection might be leveraged as an argument against (4). If, as Gładziejewski (2016) insists, one cannot determine whether one's own pragmatic failures are due to the presence of misrepresentations or to the misapplication of correct representations, then one is not able to detect representational errors. Rather, one is able to detect pragmatic failures, which might be due either to representational errors or to misapplications of correct representations.

37.

To be fair to Bickhard, it is important to point out that the idea that genuine representations are representations for whole organisms is not the sole reason why he deems error-detection a necessary condition. The prospect of avoiding the problems of content indeterminacy seems to play an important role too. I do not see, however, how acknowledging this challenges my point: it still seems to me correct to say that, in the theoretical framework cognitive science offers, genuine representations do not need to be representations for entire organisms.

38.

Notice that I do not actually dispute this claim. Above, I have denied only that the fact that the accuracy conditions of a posit are causally relevant to a system's success is sufficient for that posit to qualify as a representation. But this clearly does not exclude that having causally relevant semantic properties is necessary in order for a posit to qualify as a representation.

39.

    Importantly, Egan (2020, pp. 43–45) seems to articulate precisely this idea.

40.

    Given the fairly widespread assumption that receptors do not meet it.

41.

Some readers might be shocked by this statement, as SRs and input–output representations are sometimes taken to be identical (e.g. Sprevak 2011). But identifying SRs with input–output representations seems to me a mistake. For one thing, input–output representations are essentially linked to computational accounts of cognition, whereas structural representations are not (e.g. Tolman 1948). Moreover, structural representations are necessarily structurally similar to their targets, whereas input–output representations need not structurally resemble what they represent. In fact, they might be arbitrary symbols.

42.

    I believe some “historical” clarifications are in order. As Ramsey (2007) presents them, input–output representations need not (albeit might) represent mathematical objects. The claim that the relevant representations involved in computational processes represent mathematical objects (namely, the arguments and values of the functions computed) is, to the best of my knowledge, a claim articulated independently by Frances Egan in a number of publications (e.g. Egan 2014). Recently, Ramsey (2020, pp. 72–73) has declared that Egan’s account captures, in a more sophisticated way, his notion of input–output representations. Here, I’m following Ramsey (2020).

43.

Importantly, this at least partially depends on the theory of computational implementation one endorses. Here, I will stay neutral on the issue. Notice, however, that many (I suspect the majority of) theories of computational implementation try to avoid pancomputationalism, namely the view that any complex physical system implements a number of (or perhaps all) computations (see Searle 1992 for the pancomputationalist challenge; see Copeland 1996; Scheutz 1999; Rescorla 2014; Piccinini 2015 for some ways to defuse it). The important point to notice, for present purposes, is this: many accounts of computational implementation would not deem it sufficient, for a physical system to compute a function, that the causal goings-on internal to the system systematically "mirror" the transitions between computational states. Thus, if the idea common to these accounts is correct, input–output representations need to be more than causal mediators allowing a system to "march in step" with some relevant computable function.

44.

    This claim is typically made by philosophers leaning towards antirepresentationalism (e.g. Chemero 2009; Ramsey 2017; Hutto and Myin 2020). But the rationale behind it works both ways: if antirepresentationalism is not an a priori truth, one ought to revise one’s own antirepresentationalist commitment in the light of the relevant empirical evidence.

References

  1. Anderson, M., & Chemero, T. (2013). The problem with brain GUTs: Conflation of different senses of "prediction" threatens metaphysical disaster. Behavioral and Brain Sciences, 36(3), 204–205.
  2. Anderson, M., & Chemero, T. (2019). The world well gained. In M. Colombo, E. Irvine, & M. Stapleton (Eds.), Andy Clark and his critics (pp. 161–173). New York: Oxford University Press.
  3. Artiga, M., & Sebastián, M. A. (2018). Informational theories of content and mental representation. Review of Philosophy and Psychology. https://doi.org/10.1007/s13164-018-0408-1.
  4. Bickhard, M. H. (1993). Representational content in humans and machines. Journal of Experimental and Theoretical Artificial Intelligence, 5, 285–333.
  5. Bickhard, M. H. (1999). Interaction and representation. Theory and Psychology, 9, 435–458.
  6. Bickhard, M. H. (2009). The interactivist model. Synthese, 166(3), 547–591.
  7. Bovet, S. (2007). Robots with self-developing brains. Dissertation, University of Zurich. https://www.zora.uzh.ch/id/eprint/163709/1/20080298_001884101.pdf. Accessed 25 Feb 2020.
  8. Bovet, S., & Pfeifer, R. (2005a). Emergence of delayed reward learning from sensorimotor coordination. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. https://doi.org/10.1109/IROS.2005.1545085.
  9. Bovet, S., & Pfeifer, R. (2005b). Emergence of coherent behaviors from homogeneous sensorimotor coupling. ICAR '05: Proceedings of the 12th International Conference on Advanced Robotics. https://doi.org/10.1109/ICAR.2005.1507431.
  10. Braitenberg, V. (1984). Vehicles: Experiments in synthetic psychology. Cambridge: The MIT Press.
  11. Brooks, R. (1999). Cambrian intelligence. Cambridge: The MIT Press.
  12. Chemero, A. (2009). Radical embodied cognitive science. Cambridge: The MIT Press.
  13. Cheney, D. L., & Seyfarth, R. M. (1985). Vervet monkey alarm calls: Manipulation through shared information? Behaviour, 94(1–2), 150–166.
  14. Churchland, P. M. (2012). Plato's camera. Cambridge: The MIT Press.
  15. Clark, A. (1993). Associative engines. Cambridge: The MIT Press.
  16. Clark, A. (1997). The dynamical challenge. Cognitive Science, 21(4), 461–481.
  17. Clark, A. (2010). Memento's revenge: The extended mind, extended. In R. Menary (Ed.), The extended mind (pp. 43–66). Cambridge: The MIT Press.
  18. Clark, A. (2013). Mindware: An introduction to the philosophy of cognitive science (2nd ed.). New York: Oxford University Press.
  19. Clark, A., & Grush, R. (1999). Towards a cognitive robotics. Adaptive Behavior, 7(1), 5–16.
  20. Clark, A., & Toribio, J. (1994). Doing without representing? Synthese, 101(3), 401–431.
  21. Copeland, J. B. (1996). What is computation? Synthese, 108(3), 335–359.
  22. Cover, T. M., & Thomas, J. A. (2006). Elements of information theory. New York: Wiley.
  23. Downey, A. (2018). Predictive processing and the representation wars: A victory for the eliminativist (via fictionalism). Synthese, 195(12), 5115–5139.
  24. Dretske, F. (1981). Knowledge and the flow of information. Cambridge: The MIT Press.
  25. Dretske, F. (1988). Explaining behavior. Cambridge: The MIT Press.
  26. Dretske, F. (1994). The explanatory role of information. Philosophical Transactions of the Royal Society of London, Series A: Physical and Engineering Sciences, 349(1689), 59–70.
  27. Egan, F. (2014). How to think about mental content. Philosophical Studies, 170(1), 115–135.
  28. Egan, F. (2020). A deflationary account of mental representations. In J. Smortchkova, K. Dolega, & T. Schlicht (Eds.), What are mental representations? (pp. 26–53). New York: Oxford University Press.
  29. Eliasmith, C. (2005). A new perspective on representational problems. Journal of Cognitive Science, 6(97), 97–123.
  30. Fodor, J. (1989). Semantics: Wisconsin style. In J. Fodor (Ed.), A theory of content and other essays (pp. 31–49). Cambridge: The MIT Press.
  31. Fodor, J. (1990). A theory of content and other essays. Cambridge: The MIT Press.
  32. Gallistel, C. R., & King, A. P. (2010). Memory and the computational brain. Oxford: Wiley.
  33. Gładziejewski, P. (2015). Explaining cognitive phenomena with internal representations: A mechanistic perspective. Studies in Logic, Grammar and Rhetoric, 40(1), 63–90.
  34. Gładziejewski, P. (2015). Action guidance is not enough, representations need correspondence too: A plea for a two-factor theory of representation. New Ideas in Psychology, 40, 13–25.
  35. Gładziejewski, P. (2016). Predictive coding and representationalism. Synthese, 193(2), 559–582.
  36. Gładziejewski, P., & Miłkowski, M. (2017). Structural representations: Causally relevant and different from detectors. Biology and Philosophy, 32(3), 337–355.
  37. Goodman, N. (1969). Languages of art. London: Oxford University Press.
  38. Gorman, R. P., & Sejnowski, T. J. (1988). Analysis of hidden units in a layered network trained to classify sonar targets. Neural Networks, 1(1), 75–89.
  39. Goschke, T., & Koppelberg, D. (1991). The concept of representation and the representation of concepts in connectionist models. In W. Ramsey, S. P. Stich, & D. E. Rumelhart (Eds.), Philosophy and connectionist theory (pp. 129–163). New York: Routledge.
  40. Grush, R. (1997). The architecture of representation. Philosophical Psychology, 10(1), 5–23.
  41. Harvey, I., et al. (1997). Evolutionary robotics: The Sussex approach. Robotics and Autonomous Systems, 20(2–4), 205–224.
  42. Harvey, I., Husbands, P., & Cliff, D. (1994). Seeing the light: Artificial evolution, real vision. In D. Cliff, P. Husbands, J. A. Meyer, & S. W. Wilson (Eds.), From animals to animats 3 (pp. 392–401). Cambridge: The MIT Press.
  43. Haugeland, J. (1991). Representational genera. In W. Ramsey, S. P. Stich, & D. E. Rumelhart (Eds.), Philosophy and connectionist theory (pp. 61–91). New York: Routledge.
  44. Hubel, D., & Wiesel, T. (1962). Receptive fields, binocular interaction, and the functional architecture of the cat's visual cortex. The Journal of Physiology, 160(1), 106–154.
  45. Hubel, D., & Wiesel, T. (1968). Receptive fields and functional architecture of monkey striate cortex. The Journal of Physiology, 195(1), 215–243.
  46. Husbands, P., Harvey, I., & Cliff, D. (1995). Circle in the round: State space attractors for evolved sighted robots. Journal of Robotics and Autonomous Systems, 15, 83–106.
  47. Hutto, D., & Myin, E. (2020). Deflating deflationism about mental representations. In J. Smortchkova, K. Dolega, & T. Schlicht (Eds.), What are mental representations? (pp. 79–100). New York: Oxford University Press.
  48. Kiefer, A., & Hohwy, J. (2018). Content and misrepresentation in hierarchical generative models. Synthese, 195(6), 2397–2415.
  49. Lee, J. (2018). Structural representation and the two problems of content. Mind and Language, 34(5), 606–626.
  50. Lettvin, J. Y., Maturana, H. R., McCulloch, W. S., & Pitts, W. H. (1959). What the frog's eye tells the frog's brain. Proceedings of the IRE, 47(11), 1940–1951.
  51. Lyre, H. (2016). Active content externalism. Review of Philosophy and Psychology, 7(1), 17–33.
  52. Maris, M., & Schaad, R. (1995). The didactic robots. Techreport No. IIF-AI-95.09, AI Lab, Department of Computer Science, University of Zurich.
  53. Maris, M., & te Boekhorst, R. (1996). Exploiting physical constraints: Heap formation through behavioral error in a group of robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 1655–1660). Piscataway: IEEE Press.
  54. Miłkowski, M. (2013). Explaining the computational mind. Cambridge: The MIT Press.
  55. Miłkowski, M. (2017). Szaleństwo, a nie metoda. Uwagi o książce Pawła Gładziejewskiego "Wyjaśnianie za pomocą reprezentacji mentalnych" [Madness, not method: Remarks on Paweł Gładziejewski's book "Explaining by means of mental representations"]. Filozofia Nauki, 25(3), 57–67.
  56. Millikan, R. G. (1984). Language, thought and other biological categories. Cambridge: The MIT Press.
  57. Morgan, A. (2014). Representations gone mental. Synthese, 191(2), 213–244.
  58. Moser, E. I., Kropff, E., & Moser, M. B. (2008). Place cells, grid cells, and the brain's spatial representation system. Annual Review of Neuroscience, 31, 69–89.
  59. Nieder, A., Diester, I., & Tudusciuc, O. (2006). Temporal and spatial enumeration processes in the primate parietal cortex. Science, 313(5792), 1431–1435.
  60. Nirshberg, G., & Shapiro, L. (2020). Structural and indicator representations: A difference in degree, not in kind. Synthese. https://doi.org/10.1007/s11229-020-02537-y.
  61. O'Brien, G. (2015). How does the mind matter? Solving the content-causation problem. In T. Metzinger & J. M. Windt (Eds.), Open MIND: 28(T). Frankfurt am Main: The MIND Group. https://doi.org/10.15502/9783958570146.
  62. O'Brien, G., & Opie, J. (2004). Notes towards a structuralist theory of mental representations. In H. Clapin, P. Staines, & P. Slezak (Eds.), Representation in mind: New approaches to mental representation (pp. 1–20). Oxford: Elsevier.
  63. O'Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. New York: Oxford University Press.
  64. O'Regan, J. K., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(5), 939–973.
  65. Orlandi, N. (2014). The innocent eye. New York: Oxford University Press.
  66. Pezzulo, G. (2008). Coordinating with the future: The anticipatory nature of representation. Minds and Machines, 18(2), 179–225.
  67. Pfeifer, R., & Bongard, J. (2007). How the body shapes the way we think. Cambridge: The MIT Press.
  68. Piccinini, G. (2015). Physical computation: A mechanistic account. New York: Oxford University Press.
  69. Plebe, A., & De la Cruz, M. V. (2017). Neural representations beyond "plus X". Minds and Machines, 28(1), 93–117.
  70. Ramsey, W. (1997). Do connectionist representations earn their explanatory keep? Mind and Language, 12(1), 34–66.
  71. Ramsey, W. (2003). Are receptors representations? Journal of Experimental and Theoretical Artificial Intelligence, 15(2), 125–141.
  72. Ramsey, W. (2007). Representation reconsidered. Cambridge: Cambridge University Press.
  73. Ramsey, W. (2015). Untangling two questions about mental representation. New Ideas in Psychology, 40, 3–12.
  74. Ramsey, W. (2017). Must cognition be representational? Synthese, 194(11), 4197–4214.
  75. Ramsey, W. (2019). Maps, models and computational simulations of the mind. In M. Sprevak & M. Colombo (Eds.), The Routledge handbook of the computational mind (pp. 259–271). New York: Taylor & Francis.
  76. Ramsey, W. (2020). Defending representation realism. In J. Smortchkova, K. Dolega, & T. Schlicht (Eds.), What are mental representations? (pp. 54–78). New York: Oxford University Press.
  77. Ramstead, M. J. D., Kirchhoff, M. D., & Friston, K. (2019). A tale of two densities: Active inference is enactive inference. Adaptive Behavior. https://doi.org/10.1177/1059712319862774.
  78. Rescorla, M. (2014). A theory of computational implementation. Synthese, 191(6), 1277–1307.
  79. Rupert, R. (2018). Representation and mental representation. Philosophical Explorations, 21(2), 204–225.
  80. Scheutz, M. (1999). When physical systems realize functions. Minds and Machines, 9(2), 161–196.
  81. Searle, J. (1992). The rediscovery of the mind. Cambridge: The MIT Press.
  82. Segundo-Ortin, M., & Hutto, D. (2019). Similarity-based cognition: Radical enactivism meets cognitive neuroscience. Synthese. https://doi.org/10.1007/s11229-019-02505-1.
  83. Shagrir, O. (2012). Structural representations and the brain. The British Journal for the Philosophy of Science, 63(3), 519–545.
  84. Shannon, C. E., & Weaver, W. (1949). The mathematical theory of communication. Urbana: University of Illinois Press.
  85. Sharot, T. (2011). The optimism bias. Current Biology, 21(23), R941–R945.
  86. Shea, N. (2007). Content and its vehicles in connectionist systems. Mind and Language, 22(3), 246–269.
  87. Shea, N. (2014). VI: Exploitable isomorphism and structural representation. Proceedings of the Aristotelian Society, 114(2), 123–144.
  88. Shea, N. (2018). Representations in cognitive science. New York: Oxford University Press.
  89. Shepard, R. N., & Chipman, S. (1970). Second-order isomorphism of internal representations: Shapes of states. Cognitive Psychology, 1(1), 1–17.
  90. Shi, Y. Y., & Sun, H. (2008). Image and video compression for multimedia engineering: Fundamentals, algorithms and standards (2nd ed.). New York: CRC Press.
  91. Smortchkova, J., Dolega, K., & Schlicht, T. (2020). Introduction. In J. Smortchkova, K. Dolega, & T. Schlicht (Eds.), What are mental representations? (pp. 1–26). New York: Oxford University Press.
  92. Spratling, M. W. (2015). Predictive coding. In D. Jaeger & R. Jung (Eds.), Encyclopedia of computational neuroscience (pp. 2491–2494). New York: Springer.
  93. Spratling, M. W. (2017). A review of predictive coding algorithms. Brain and Cognition, 112, 92–97.
  94. Sprevak, M. (2011). Review of Representation reconsidered. The British Journal for the Philosophy of Science, 62, 669–675.
  95. Swoyer, C. (1991). Structural representation and surrogative reasoning. Synthese, 87(3), 449–508.
  96. Taylor, S. (1989). Positive illusions: Creative self-deception and the healthy mind. New York: Basic Books.
  97. Tolman, E. C. (1948). Cognitive maps in rats and men. Psychological Review, 55(4), 189–208.
  98. Vold, K., & Schlimm, D. (2020). Extended mathematical cognition: External representations with non-derived content. Synthese, 197, 3757–3777.
  99. Wiese, W. (2017). Action is enabled by systematic misrepresentations. Erkenntnis, 82(6), 1233–1252.
  100. Williams, D. (2017). Predictive processing and the representation wars. Minds and Machines, 28(1), 141–172.
  101. Williams, D. (2018). Predictive minds and small-scale models: Kenneth Craik's contribution to cognitive science. Philosophical Explorations, 21(2), 245–263.
  102. Williams, D., & Colling, L. (2017). From symbols to icons: The return of resemblance in the cognitive science revolution. Synthese, 195(5), 1941–1967.


Acknowledgements

Early versions of this essay were presented at a number of conferences, namely the OZSW Graduate Conference in Theoretical Philosophy (Tilburg University, 19/12/2019), the workshop Representation in Cognitive Science (Ruhr-Universität Bochum, 3–4/02/2020), and the 10th European Congress of Analytic Philosophy. I wish to thank the audiences of these conferences for their helpful comments. A special thanks goes to Krys Dolega for his encouraging words. I also wish to thank both anonymous reviewers for their insightful comments on earlier versions of this essay.

Funding

This work has been funded by the PRIN Project “The Mark of Mental” (MOM), 2017P9E9N, active from 9.12.2019 to 28.12.2022, financed by the Italian Ministry of University and Research.

Author information

Contributions

Marco Facchin is the sole author of the paper.

Corresponding author

Correspondence to Marco Facchin.

Ethics declarations

Conflict of interest

The author declares no conflict of interest.

Availability of data and material

Not applicable.



About this article


Cite this article

Facchin, M. Structural representations do not meet the job description challenge. Synthese (2021). https://doi.org/10.1007/s11229-021-03032-8


Keywords

  • Sub-personal representations
  • Structural representations
  • Feature detectors
  • Eliminativism
  • Job description challenge