Abstract
It is widely held in contemporary philosophy of mind that states with underived representational content are ipso facto psychological states. This view—the Content View—underlies a number of interesting philosophical projects, such as the attempt to pick out a psychological level of explanation, to demarcate genuinely psychological from non-psychological states, and to limn the class of states with phenomenal character. The most detailed and influential theories of underived representation in philosophy are the tracking theories developed by Fodor, Dretske, Millikan and others. Tracking theorists initially hoped to ‘naturalize’ underived representation by showing that although it is distinctively psychological it is not irreducibly so, yet they ended up developing theories of representation that by their own lights don’t pick out a distinctively psychological phenomenon at all. Burge (Origins of objectivity, Oxford University Press, Oxford, 2010) sets out to develop a theory of underived representation that does pick out a distinctively psychological phenomenon. His theory promises to vindicate the Content View and the various philosophical projects that depend on it. In this paper I argue that Burge’s theory dementalizes representation for the same reason tracking theories do: These theories hold that representations are states with underived accuracy conditions, yet such states are found in all sorts of mindless systems, like plants.
Notes
Thanks to an anonymous referee for emphasizing the importance of addressing these methodological points. See also note 6.
Note that the phenomenon of representing something that doesn’t exist is often conflated with misrepresentation. This is a mistake. A state might be about something that doesn’t exist without thereby misrepresenting it. One might desire to meet Elvis without misrepresenting the rhinestoned crooner. Desires exhibit aboutness but just aren’t in the business of representing how the world is. Even beliefs, which are in the business of representing the world, might be directed at non-existents without representational failure; consider the true belief that Elvis is dead.
There are of course important differences between the theories of Dretske (1988), Fodor (1990), and Millikan (1984). I deliberately abstract away from these differences to reveal what I take to be important underlying commonalities. I lack the space to fully justify my grouping of these theories together, but I note that this classification is now common in the literature (e.g. Burge 2010; Kriegel 2012; Mendelovici 2013). The two most serious objections to this classification are as follows: First, one might point out that Fodor famously argues against appealing to biological function in the context of naturalizing intentionality. This is true as far as his theory of content goes, but Fodor (1990) in fact allows that biofunction might play a central role in explaining other aspects of representation, such as the capacity for misrepresentation. Second, one might point out that Millikan (1984) denies that information or indication plays a direct role in fixing content. Nevertheless, Shea (2007) argues convincingly that Millikan must appeal to an informational ‘input condition’ to avoid well-known objections to her view.
Millikan (1989) also emphasizes the importance of attending closely to how a representation is ‘consumed’, i.e. how it is used “as a representation” (p.285). But she says little about what consumption consists in, and her conception of consumption is so liberal that the ‘pulling’ of magnetosomes within the intracellular matrix of a bacterium seems to qualify (see p. 290). But one might wonder if Dretske’s use condition is similarly liberal; after all, it was Dretske (1986) who introduced magnetosomes into the philosophical literature as a primitive example of misrepresentation. Whatever Dretske in fact thought about magnetosomes, I think his mature theory has the resources to exclude them in a way that Millikan’s theory does not. The central reason is that magnetosomes cannot occupy distinct indicator states that can be enlisted to play differential roles in behavioral control.
As I discuss here and elsewhere in the text, there is evidence that plants exhibit remarkably rich forms of adaptive plasticity. Some philosophers and biologists have recently taken this to show that plants are minimally cognitive or intelligent (Gagliano 2017; Calvo Garzón and Keijzer 2011; Maher 2017). It is not always clear whether these proponents of ‘plant intelligence’ claim that plants literally have minds; one might agree with Calvo Garzón and Keijzer (2011) that plants are “cognitive in a minimal, embodied sense” (p. 166), yet deny that plants are literally psychological subjects with mental states. But those who clearly do argue that plants have mental states tend to do so on grounds that phenomena like representation and learning are inherently psychological (e.g. Maher 2017), which begs the present question. So I think I’m entitled to assume that plants don’t have minds for present purposes. Whether we’re entitled to assume this generally is an interesting question for another occasion.
Of the most prominent tracking theorists, I think Dretske (1988) has the best grounds for rejecting the charge of dementalization, though still not good grounds. Dretske is centrally interested in how representational content can causally explain the behavior of individual organisms. He thus distinguishes between representations that are selected for behavioral control over phylogeny, and representations that are selected within an organism’s lifetime. He focuses on the latter kind of ‘ontogenetic’ representation, and characterizes it in terms of a specific kind of learning process: discrimination learning. But this is not the only kind of learning whereby indicators are ontogenetically selected for behavioral control. Gagliano et al. (2016) show that plants can be classically conditioned to grow towards a source of wind that was previously paired with a source of light. For this to occur, the plant must presumably contain some internal indicator of wind direction that is selected for controlling the direction of plant growth, in virtue of what it indicates, within the plant’s lifetime. This mindless tracking seems to qualify as ‘ontogenetic’ representation in Dretske’s sense.
I am here using Burge’s convention of underlining to indicate the names of concepts.
Burge (2010) is quite explicit that the distinction between constancy capacities and mere sensory registration “hinges on the nature of the internal transformations” (p. 424). See also pp. 408–410 and 424.
See Burge (2010), p. 285.
See, especially, Burge (2010) pp. 319–324 and 369–370. Burge thinks that this idea dissolves the infamous ‘disjunction problem’ that bedeviled tracking theories. His proposal goes something like this: A rabbit perceives a snake as a snake rather than as undetached snake-parts because its sensory systems are biased to track the specific, biologically salient macro-scale features of the rabbit’s environment that help to explain the rabbit’s adaptive success. I confess that I don’t see how this proposal scratches the disjunction problem. Why doesn’t the rabbit see the snake as, say, a death-bringer? Construed extensionally, that is just as much an adaptively salient macroscopic feature of the rabbit’s environment as a snake. While this specific example might be challenged, the general point is hopefully clear: As Fodor (1990) pointed out long ago, the macroscopic features of an organism’s environment fall under indefinitely many kinds, which might equally well explain an organism’s adaptive success.
Thanks to two anonymous referees for highlighting the importance of addressing the interpretive issues raised by this passage and others like it.
Burge (2010) writes that his “aim is to understand the nature of representational mind at its lower border. A corollary of this primary aim is to explain the extreme primitiveness of conditions necessary and sufficient for [perceptual] representation” (p. xi).
See esp. Burge (2010) pp. xv–xvi.
Ganson et al. (2012) also argue that constancies as Burge characterizes them do not suffice for genuine perceptual representation, though they don’t really explore the implications for Burge’s view about the origins of representational mind. Their primary critique points in the other direction: They argue, in a nutshell, that Burgean accuracy-conferring constancies are not necessary for perception, and hence that Burge offers no reason to think that perceptual states are essentially contentful. This is distinct from, but compatible with, the argumentative thrust of this paper.
He writes that assumptions or formation principles are implemented by “effective procedures, procedures that follow an algorithm” (p. 95).
This is the primary target of Ganson et al. (2012). See note 17.
See note 13 for references.
The cogency of ascribing accuracy to perceptual states is debated (Brewer 2006; Travis 2004), but the cogency of ascribing accuracy to watches and the like is surely uncontroversial. One might point out that the accuracy conditions of watches and similar artifacts derive from the mental states of agents. I agree, but this is irrelevant to the point that the ordinary notion of accuracy applies intelligibly to mindless systems.
Burge (2010) writes of a “normal notion of veridicality… evident in the explanatory practice of perceptual psychology” (p. 308, my emphasis).
This notion of computing a distal feature from ambiguous sensory information is central to Burge’s characterization of constancies (Burge 2010, p. 352). One might worry about applying the notion of computation to plants, whose states presumably don’t have the syntactic structure widely assumed to be essential to computation. However, Burge rejects a narrow syntactic construal of computation in favor of the more liberal notion of implementing a computable function (ibid., p. 95). All sorts of systems, including plants, might compute in this more liberal sense (Maclennan 2004), so appealing to computation won’t help Burge here.
Here’s a more subtle reply suggested by an anonymous referee. I’ve characterized Burge’s argument that perceptual states have underived accuracy conditions in terms of the indispensability of accuracy conditions as explanantia in perceptual psychology. But Burge clearly thinks that accuracy conditions also play a role in specifying the explananda of perceptual psychology; he holds that one of the central explanatory goals of perceptual psychology is to explain how animals can accurately perceive specific features of their environments (e.g. Burge 2010, p. 342). This suggests that Burge might reply to the present objection by insisting that the explananda of plant chronobiology are not themselves representational in nature. But, again, this begs the present question. Anyone who thought that accuracy conditions play an indispensable role in explaining plant activities would presumably also think that the activities to be explained are themselves representational. It seems that one of the central goals of plant chronobiology is to explain how plant clocks can accurately represent the day-night cycle. Burge cannot simply deny the appearances here—he needs some reason to believe they’re illusory.
This mereological usage is quite explicit, for example, in Dretske (1988, p. 3).
See Siegel (2006) for rich discussion of objectivity in the relevant sense.
For one version of how this idea might be developed, see Grush (2007).
References
Adams, F., & Aizawa, K. (1994). Fodorian semantics. In S. Stich & T. Warfield (Eds.), Mental representation: A reader (pp. 223–242). Oxford: Blackwell.
Adams, F., & Aizawa, K. (2008). The bounds of cognition. Malden, MA: Blackwell.
Afraz, S., Kiani, R., & Esteky, H. (2006). Microstimulation of inferotemporal cortex influences face categorization. Nature, 442(7103), 692–695.
Allen, C., & Hauser, M. (1993). Communication and cognition: Is information the connection? Philosophy of Science, 2(8), 81–91.
Andersen, R., & Cui, H. (2009). Intention, action planning, and decision making in parietal–frontal circuits. Neuron, 63(5), 568.
Bechtel, W. (2007a). Biological mechanisms: Organized to maintain autonomy. In F. Boogerd, F. Bruggeman, J. Hofmeyr, & H. Westerhoff (Eds.), Systems biology: Philosophical foundations (pp. 269–302). New York, NY: Elsevier.
Bechtel, W. (2007b). Reducing psychology while maintaining its autonomy via mechanistic explanation. In M. Schouten & D. Jong (Eds.), The matter of the mind: Philosophical essays on psychology, neuroscience, and reduction (pp. 172–198). Oxford: Blackwell.
Bechtel, W. (2011). Representing time of day in circadian clocks. In A. Newell, A. Bartels, & E. Jung (Eds.), Knowledge and representation. Palo Alto, CA: CSLI Publications.
Bermúdez, J. (1995). Nonconceptual content: From perceptual experience to subpersonal computational states. Mind and Language, 10(4), 333–369.
Blakemore, R., & Frankel, R. (1981). Magnetic navigation in bacteria. Scientific American, 245(6), 58–65.
Bourget, D. (2010). Consciousness is underived intentionality. Noûs, 44(1), 32–58.
Brentano, F. (1874 [1995]). Psychology from an empirical standpoint. London: Routledge.
Brewer, B. (2006). Perception and content. European Journal of Philosophy, 14(2), 165–181.
Burge, T. (2010). Origins of objectivity. Oxford: Oxford University Press.
Burge, T. (2014). Perception: Where mind begins. Philosophy, 89(3), 385–403.
Calvo Garzón, P., & Keijzer, F. (2011). Plants: Adaptive behavior, root-brains, and minimal cognition. Adaptive Behavior, 19(3), 155–171.
Cavallari, N., Frigato, E., Vallone, D., Fröhlich, N., Lopez-Olmeda, J., Foà, A., et al. (2011). A blind circadian clock in cavefish reveals that opsins mediate peripheral clock photoreception. PLoS Biology, 9(9), e1001142.
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19.
Cohen, Y., & Andersen, R. (2002). A common reference frame for movement plans in the posterior parietal cortex. Nature Reviews Neuroscience, 3(7), 553–562.
de Regt, H. (2009). The epistemic value of understanding. Philosophy of Science, 76(5), 585–597.
Diéguez, A. (2013). Life as a homeostatic property cluster. Biological Theory, 7(2), 180–186.
Dretske, F. (1981). Knowledge and the flow of information. Oxford: Blackwell.
Dretske, F. (1986). Misrepresentation. In R. Bogdan (Ed.), Belief: Form, content, and function (pp. 157–173). Oxford: Clarendon.
Dretske, F. (1988). Explaining behavior: Reasons in a world of causes. Cambridge, MA: MIT Press.
Ebner, M. (2007). Color constancy. Sussex: Wiley.
Evans, G. (1982). The varieties of reference. Oxford: Oxford University Press.
Fodor, J. (1990). A theory of content and other essays. Cambridge, MA: MIT Press.
Fodor, J., & Pylyshyn, Z. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3–71.
Gagliano, M. (2017). The mind of plants: Thinking the unthinkable. Communicative & Integrative Biology, 10(2), e1288333.
Gagliano, M., Vyazovskiy, V., Borbély, A., Grimonprez, M., & Depczynski, M. (2016). Learning by association in plants. Scientific Reports, 6, 38427.
Ganson, T., Bronner, B., & Kerr, A. (2012). Burge’s defense of perceptual content. Philosophy and Phenomenological Research, 88(3), 556–573.
Gardner, M., Hubbard, K., Hotta, C., Dodd, A., & Webb, A. (2006). How plants tell the time. Biochemical Journal, 397(1), 15–24.
Godfrey-Smith, P. (1992). Indication and adaptation. Synthese, 92(2), 283–312.
Goodspeed, D., Chehab, E., Min-Venditti, A., Braam, J., & Covington, M. (2012). Arabidopsis synchronizes jasmonate-mediated defense with insect circadian behavior. Proceedings of the National Academy of Sciences, 109(12), 4674–4677.
Gould, P., Diaz, P., Hogben, C., Kusakina, J., Salem, R., Hartwell, J., et al. (2009). Delayed fluorescence as a universal tool for the measurement of circadian rhythms in higher plants. The Plant Journal, 58(5), 893–901.
Gregory, R. (1980). Perceptions as hypotheses. Philosophical Transactions of the Royal Society B: Biological Sciences, 290(1038), 181–197.
Grush, R. (2007). Skill theory v2. 0: Dispositions, emulation, and spatial perception. Synthese, 159(3), 389–416.
Hatfield, G. (2002). Perception as unconscious inference. In D. Heyer & R. Mausfeld (Eds.), Perception and the Physical World (pp. 113–143). New York: Wiley.
Hurlbert, A., & Wolf, C. (2002). Contribution of local and global cone-contrasts to color appearance: A retinex-like model. Human Vision and Electronic Imaging VII, 4662, 286–298.
Jacob, P. (1997). What minds can do: Intentionality in a non-intentional world. Cambridge: Cambridge University Press.
Knill, D., & Richards, W. (Eds.). (1996). Perception as Bayesian inference. Cambridge: Cambridge University Press.
Kriegel, U. (2002). PANIC theory and the prospects for a representational theory of phenomenal consciousness. Philosophical Psychology, 15(1), 55–64.
Kriegel, U. (2012). Personal-level representation. Protosociology, 28, 77–114.
Land, E., & McCann, J. (1971). Lightness and retinex theory. Journal of the Optical Society of America, 61(1), 1–11.
Maclennan, B. (2004). Natural computation and non-Turing models of computation. Theoretical Computer Science, 317(1–3), 115–145.
Maher, C. (2017). Plant minds: A philosophical defense. New York, NY: Routledge.
Más, P. (2005). Circadian clock signaling in Arabidopsis thaliana: From gene expression to physiology and development. International Journal of Developmental Biology, 49(5–6), 491.
McDowell, J. (1994). The content of perceptual experience. Philosophical Quarterly, 44(175), 190–205.
Mendelovici, A. (2013). Reliable misrepresentation and tracking theories of mental representation. Philosophical Studies, 165(2), 421–443.
Millikan, R. (1984). Language, thought, and other biological categories. Cambridge, MA: MIT Press.
Millikan, R. (1989). Biosemantics. The Journal of Philosophy, 86(6), 281–297.
Millikan, R. (2000). Naturalizing intentionality. In B. Elevitch (Ed.), The proceedings of the twentieth world congress of philosophy (Vol. 9, pp. 83–90). Philosophy Documentation Center.
Morgan, A. (2014). Representations gone mental. Synthese, 191(2), 213–244.
Morsella, E. (2005). The function of phenomenal states: Supramodular interaction theory. Psychological Review, 112(4), 1000–1021.
Nagel, E. (1961). The structure of science: Problems in the logic of scientific explanation. New York, NY: Harcourt.
Palmer, S. (1999). Vision science: Photons to phenomenology. Cambridge, MA: MIT Press.
Peacocke, C. (1992). Scenarios, concepts and perception. In T. Crane (Ed.), The contents of experience: Essays on perception (pp. 105–135). Cambridge: Cambridge University Press.
Ramsey, W. (2007). Representation reconsidered. Cambridge: Cambridge University Press.
Rorty, R. (1979). Philosophy and the mirror of nature. Oxford: Blackwell.
Schwartz, A., & Koller, D. (1986). Diurnal phototropism in solar tracking leaves of Lavatera cretica. Plant Physiology, 80(3), 778–781.
Shea, N. (2007). Consumers need information: Supplementing teleosemantics with an input condition. Philosophy and Phenomenological Research, 75(2), 404–435.
Shepard, R. (2001). Perceptual-cognitive universals as reflections of the world. Behavioral and Brain Sciences, 24(4), 581–601.
Siegel, S. (2006). Subject and object in the contents of visual experience. Philosophical Review, 115(3), 355–388.
Sterelny, K. (1995). Basic minds. Philosophical Perspectives, 9, 251–270.
Strawson, P. (1959). Individuals: An essay in descriptive metaphysics. London: Routledge.
Travis, C. (2004). The silence of the senses. Mind, 113(449), 57–94.
Tye, M. (2000). Consciousness, color, and content. Cambridge, MA: MIT Press.
van Gelder, T. (1995). What might cognition be, if not computation? The Journal of Philosophy, 92(7), 345–381.
von Helmholtz, H. (1866 [1924]). Concerning the perceptions in general. In J. Southall (Ed.), Treatise on physiological optics (3rd ed., Vol. 3, pp. 143–172). Rochester, NY: Optical Society of America.
Wu, W. (2011). Confronting many-many problems: Attention and agentive control. Noûs, 45(1), 50–76.
Acknowledgements
Many of the ideas in this paper percolated under the auspices of the Philosophy of Neuroscience Group at the University of Tübingen. I would like to thank the members of that group, and especially the group leader Hong Yu Wong, for helpful discussion. I have presented this material at Rice University, the University of Graz, and the University of São Paulo, and I am grateful to audiences there for helpful feedback. I am especially grateful to Michael Barkasi, Tyler Burge, Gualtiero Piccinini, Charles Siewert, and two anonymous referees for comments and/or discussion that led to substantial improvements in the text.
Cite this article
Morgan, A. Mindless accuracy: on the ubiquity of content in nature. Synthese 195, 5403–5429 (2018). https://doi.org/10.1007/s11229-018-02011-w