Some philosophers have offered structural representations as an alternative to indicator-based representations. Motivating these philosophers is the belief that an indication-based analysis of representation exhibits two fatal inadequacies from which structural representations are spared: such an analysis cannot account for the causal role of representational content and cannot explain how representational content can be made determinate. In fact, we argue, indicator and structural representations are on a par with respect to these two problems. This should not be surprising, we contend, given that the distinction between indicator and structural representations is better conceived as one involving degree rather than kind.
The precise target of the objections that structural representationalists raise is actually not so clear. They criticize “dyadic” approaches, but also theories like Dretske’s, which (as we will see) is best construed as a “triadic” approach. Likewise, they raise objections to causal theories of representation but, again, seem to think that these objections apply to theories like Dretske’s, which is not a causal theory.
When offering a naturalistic theory of representational content, one must avoid taking the exploitative act to be itself a mental act, such as thinking, because doing so incorporates into the analysis of mental representation just the kind of capacity that one wishes to explain (Opie and O'Brien 2004, pp. 4–5).
Dretske later suggests that he is open to weakening the strength of the connection between the representational vehicle and its content to something less than 1 (Dretske 1994, p. 62). The important point is that indication is a relation between a representational vehicle and its content that observes some sort of nomic regularity. Thus, although a theory of representation like Fodor’s (1990) does not explicitly mention conditional probabilities, its reliance on a nomic regularity between a representational vehicle and its content suffices to make it a species of an indicator account of representation.
Dretske (1988) distinguishes between cases in which natural selection is responsible for the recruitment of an indicator and cases in which the recruitment occurs as a result of learning. Although important, this distinction can be set aside for present purposes.
‘Correspondence’ in this paragraph may appear ambiguous, meaning either something like a probabilistic relationship between a vehicle and its content or something more like an isomorphism (or homomorphism) between properties. As we will argue later, we deny this ambiguity, for the correspondences in which indicators participate are themselves isomorphisms (or homomorphisms) of a sort.
Opie and O'Brien (2004) discuss the causal constraint on a theory of mental representation, which requires that content be causally efficacious. However, the reasons they offer for doubting the relevance of content seem to involve suspicions about circularity rather than the inability of a relation between a vehicle and its content to play a causal role in the production of behavior. The circularity involves an analysis of content that looks to the dispositions that contentful states cause as being themselves constitutive of content (Opie and O'Brien 2004, pp. 2–3).
Exactly how the correspondence could persist while curvature was no longer under the control of temperature is unclear.
Or, at any rate, something close to Prob = 1. Nothing that follows depends on how close to perfect correlation the indication relation approaches.
Isaac (2013) stands as a possible exception to structural representationalists who neglect the significance of use in addressing the problem of misrepresentation.
Whether in a first- or second-order way.
We should note that the present comments on this first horn of the dilemma do not adequately address Cummins’ full defense of structural representations. Though isomorphism is sufficient for representation on Cummins’ view, he is aware of the need to limit content ascription. His strategy involves invoking the function of tokening a particular representation on an occasion of use. Though it’s beyond the scope of this paper to address Cummins’ view in any detail, we believe that the above strategy would result in Cummins’ account impaling itself on the second horn of the dilemma, which we outline below.
Philosophers of mind often distinguish two questions about mental representations (Ramsey 2007, 2016). The first is a question about representational status: why think some state or structure is genuinely representational? The second is a question about representational content: what makes it the case that the representation has the content that it does? It is unclear whether the acknowledgment that structural resemblance is insufficient for representation is intended to address the former question or the latter (or both). Regardless of intent, we take it that a theory of representation is not complete until both questions have been answered, and, further, that exploitation is the best candidate available to the structural representationalist for fixing representational content.
Though Gładziejewski and Miłkowski (2017), Shea (2014) and O’Brien (2016) incorporate exploitability to, in part, address concerns of “panrepresentationalism,” they are not explicit about whether exploitation is also the means by which they intend to account for misrepresentation. However, we take the problem of panrepresentationalism and the disjunction problem to be two sides of the same coin.
Ramsey (2016) argues that indicator and structural representation theories need not be seen as competing theories of representation, but, rather, as complementary answers to distinct questions about representation. Specifically, he suggests that structural representations are more naturally construed as playing a genuinely representational role, while indicator theories can provide a better account of content determination. A detailed discussion of this possibility is outside the bounds of this section (which is concerned exclusively with the question of content determination); however, we direct the reader to Rupert (2018) for an argument that indicators can meet what Ramsey (2007) sometimes refers to as the job description challenge. In addition, in our next section we will argue that indicators are structural representations. Though our paper isn’t intended as a defense of indicators as such, if one is already convinced that structural representations can meet the job description challenge, it ought not be surprising that indicators can do so too.
Gładziejewski and Miłkowski’s full defense of the distinction between structural and indicator representations involves establishing two crucial differences between them. One difference is captured by the argument above, having to do with the role that resemblance plays in structural, but not indicator, representations. The other purported difference is that structural representations involve an endogenous source of cognitive and behavioral control, whereas indicators are purely reactive (2017). The argument we develop in the rest of the paper focuses on disputing the plausibility of the former difference. However, see Rupert (2018) for a response disputing the tenability of the latter distinction. Rupert argues that indicator-based theories (in particular, Dretske’s) give a more robust role to indicator representations than Gładziejewski and Miłkowski recognize, thus elevating them from being merely “reactive”.
Some structural representationalists acknowledge that something like the above example would count as a second-order structural resemblance (see Isaac 2013, p. 700). We of course welcome this conclusion, and view it as further support for our claim that indicator and structural representations do not differ in kind.
Shea, in using this example, isn’t interested in the difference between structural and indicator representations. His primary aim is to establish the need for an exploitation condition for structural representations. A relation of structural resemblance between a set of vehicles and a set of objects is only a structural representation if it is exploited in an appropriate way by the system.
We specify “natural” forms of representation, for conventional representations, e.g. “Let this egg represent Earth,” create correspondences rather than being recruited for correspondences that occur in nature. For more on conventional representation, see Dretske (1988: 52 ff.).
Bechtel, W. (1998). Representations and cognitive explanations: Assessing the dynamicist challenge in cognitive science. Cognitive Science, 22(3), 295–317.
Cummins, R. (1996). Representations, targets, and attitudes. Cambridge, MA: MIT Press.
Dennett, D. (1982). Styles of mental representation. Proceedings of the Aristotelian Society, 83, 213–226.
Dretske, F. (1988). Explaining behavior. Cambridge, MA: MIT Press.
Dretske, F. (1994). The explanatory role of information. Philosophical Transactions: Physical Sciences and Engineering, 349, 59–70.
Fodor, J. (1984). Semantics, Wisconsin style. Synthese, 59(3), 231–250.
Fodor, J. (1987). Psychosemantics. Cambridge, MA: MIT Press.
Fodor, J. (1990). A theory of content and other essays. Cambridge, MA: MIT Press.
Gładziejewski, P., & Miłkowski, M. (2017). Structural representations: Causally relevant and different from detectors. Biology and Philosophy, 32(3), 337–355.
Hardwick, C. (Ed.). (1977). Semiotic and significs: The correspondence between Charles S. Peirce and Victoria Lady Welby. Bloomington, IN: Indiana University Press.
Isaac, A. M. C. (2013). Objective similarity and mental representation. Australasian Journal of Philosophy, 91(4), 683–704.
Kosslyn, S. (1983). Ghosts in the mind’s machine. New York, NY: W. W. Norton.
Morgan, A. (2014). Representations gone mental. Synthese, 191(2), 213–244.
O’Brien, G. (2016). How does mind matter? Solving the content causation problem. In T. Metzinger (Ed.), Open MIND: Philosophy and the mind sciences in the 21st century (Vol. 2, pp. 1137–1150). Cambridge, MA: MIT Press.
Opie, J., & O’Brien, G. (2004). Notes toward a structuralist theory of mental representation. In H. Clapin, P. Staines, & P. Slezak (Eds.), Representation in mind: New approaches to mental representation. Amsterdam: Elsevier.
Ramsey, W. (2007). Representation reconsidered. Cambridge: Cambridge University Press.
Ramsey, W. (2016). Untangling two questions about mental representation. New Ideas in Psychology, 40(A), 3–12.
Rupert, R. (2018). Representation and mental representation. Philosophical Explorations, 21(2), 204–225.
Shea, N. (2014). Exploitable isomorphism and structural representation. Proceedings of the Aristotelian Society, 114(2), 123–144.
Shea, N. (2018). Representation in cognitive science. Oxford: Oxford University Press.
Von Eckardt, B. (1993). What is cognitive science? Cambridge, MA: MIT Press.
Woodward, J. (2003). Making things happen: A theory of causal explanation. Oxford: Oxford University Press.
Thanks to two very thorough referees for comments that greatly improved this paper. Thanks also go to Gerard O’Brien for useful discussion, and to Rob Rupert for comments on an earlier draft.
Nirshberg, G., Shapiro, L. Structural and indicator representations: a difference in degree, not kind. Synthese (2020). https://doi.org/10.1007/s11229-020-02537-y
- Structural representation
- Disjunction problem
- Content determinacy