
Functional kinds: a skeptical look


Abstract

The functionalist approach to kinds has suffered recently due to its association with law-based approaches to induction and explanation. Philosophers of science increasingly view nomological methods as inappropriate for the special sciences like psychology and biology, which has led to a surge of interest in approaches to natural kinds that are more obviously compatible with mechanistic and model-based methods, especially homeostatic property cluster theory. But can the functionalist approach to kinds be weaned off its dependency on laws? Dan Weiskopf has recently offered a reboot of the functionalist program by replacing its nomological commitments with a model-based approach more closely derived from practice in psychology. Roughly, Weiskopf holds that the natural kinds of psychology will be the functional properties that feature in many empirically successful cognitive models, and that those properties need not be localizable to parts of an underlying mechanism. I here skeptically examine the three modeling practices that Weiskopf thinks introduce such non-localizable properties: fictionalization, reification, and functional abstraction. In each case, I argue that recognizing functional properties introduced by these practices as autonomous kinds comes at clear cost to those explanations’ counterfactual explanatory power. At each step, a tempting functionalist response is parochialism: to hold that the false or omitted counterfactuals fall outside the modeler’s explanatory aims, and so should not be counted against functional kinds. I conclude by noting the dangers this attitude poses to scientific disagreement, inviting functionalists to better articulate how the individuation conditions for functional kinds might outstrip the perspective of a single modeler.


Notes

  1. In principle, there are two independent debates here: mechanism versus functionalism about explanation and mechanism versus functionalism about kinds. However, while one could be committed on one dispute and agnostic on the other, mechanism about kinds fits most naturally with mechanism about explanation because of their common emphasis on localization.

  2. As we will see, we need to distinguish two different notions of “complete” here. In the first sense, an explanation is complete if it is maximally detailed (it omits no relevant specifics). In the second sense, an explanation is complete if it is unlikely to be revised in future iterations of a progressing research program. Both senses are relevant to kindhood, and both will be considered below.

  3. However, absent significant complications, microessentialism is probably not plausible even for ‘water’; see van Brakel (2000) and Needham (2011).

  4. By today’s lights, there are obvious problems with this classic example: it narrowly targets the Nagelian bridge-law program, whereas more permissive accounts of interlevel kind criteria (e.g. the HPC view) are more promising; and Fodor requires that all examples of money share intrinsic physical similarities, whereas what it is to be money may depend on extrinsic psychological or institutional relations, which some mechanistic accounts of natural kinds (e.g. the HPC view) permit.

  5. In other words, the difference between a mechanism sketch and a non-mechanistic functional model concerns whether the components of the model could (at least in principle) be localized to parts of an underlying mechanism. If they cannot, then the model cannot be regarded as a (successful) mechanism sketch. (In Sects. 4.3 and 5, I explore the consequences for Weiskopf’s view if he concedes that the components of his cognitive models can in principle be localized.)

  6. In this paper, when I write of “cognitive models” I am using the label as a technical term as defined by Weiskopf; I here take no stand on whether non-representational models should be regarded as ‘cognitive’ in any other sense.

  7. Weiskopf concedes that psychology may make use of neural evidence (i.e. he does not endorse a strict reading of evidential autonomy—see Weiskopf, forthcoming), but only as a guide to or proxy for psychological findings.

  8. Craver and Weiskopf actually use the word ‘noncomponential’ here, but since Weiskopf later writes about the functional components of models that are noncomponential in this sense (a practice I follow here), I have used the word ‘nonlocalized’ to avoid confusion.

  9. To qualify this critical consensus, there are a few interesting arguments in defense of the biological plausibility of backpropagation: some have suggested that backpropagation may be plausible if nodes are regarded not as individual neurons but rather as neural assemblies with recurrent connections (Stork 1989), and others have concluded on the basis of neuroanatomical studies that something like an error signal—synaptic depression—might be transmitted backwards along individual synapses, though perhaps at time scales inconsistent with backpropagation (Fitzsimonds et al. 1997).

  10. Of course, no actual experiment can be conducted involving an infinite number of nodes and an infinite training set, and actual neural network implementations of Turing machines have been built by hand (e.g. Siegelmann and Sontag 1991). It is a separate question which functions neural networks with a fixed number of nodes, a particular learning rule, a plausible training set, and a fixed learning period can learn easily. The point stands, however, that these parameters exhibit a high degree of variability in the literature, and the number of functions that can be approximated by backpropagation-trained neural networks within this space is considerable.

  11. Schindler (2014, p. 1746) notes that it is ultimately the quantum mechanical models, together with a ‘translation key’, that end up playing this justificatory role in Bokulich’s analysis of periodic orbits in physics.

  12. Weiskopf (2011a, p. 318) argues that we should distinguish “allowing control and manipulation” from “being able to answer counterfactual questions”, recommending a metric of normative assessment for explanations that is neutral between the two. However, counterfactuals about the results of interventions are still counterfactuals, and even a neutral metric would disadvantage models that do not capture the results of interventions on a system’s behavior.

  13. For example, Hummel and Biederman (1992, p. 511) themselves espouse an interest in the way that visual attention may help avoid accidental synchronization, an interest not addressed by the use of FELs.

  14. Indeed, it seems an overreach to read this parochialism into Hummel and Biederman, who at times espouse agnosticism regarding the interpretation of FELs, noting that it “remains an open question whether a neuroanatomical analog of FELs will be found to exist” (1992, p. 510).

  15. Though FELs have been conserved in later iterations of JIM, they have not appeared in any other object categorization models—and indeed have been noted as a weakness of this model by critics (e.g. Robbins 2004).

  16. Important recent explications of ‘explanatory power’ have valued the role of familiarity (e.g. see Ylikoski and Kuorikoski 2010 on ‘cognitive salience’), but only for the pragmatic benefit that it is easier to infer counterfactuals from a familiar model, not because familiarity is an explanatory good in its own right.

  17. The most plausible examples of in-principle non-localizable models in cognitive science are dynamical models from systems neuroscience in which “super- and subordinate levels are indistinct, most interactions are circular, and control is decentralized” (Sporns 2011, p. 193). However, such models do not easily fit the mold of Weiskopf’s cognitive models, for they resist even functional decomposition and their main proponents eschew representational interpretation entirely (e.g. Stepp et al. 2011; Silberstein and Chemero 2013). For further arguments that such dynamical models fail to explain if they are non-mechanistic, see Kaplan and Bechtel (2011).

  18. That such a standoff is unlikely to resolve the dispute is evidenced by the number of cases in which philosophers agree on all the details but disagree on their interpretation; e.g., on lateral inhibition compare Shapiro (2004, pp. 117–120) to Weiskopf (2011b, pp. 236–239), or on network neuroscience compare Bechtel (2011, p. 553) to Silberstein and Chemero (2013, pp. 965–966).

  19. Levy and Bechtel emphasize that network motif models highlight the organization of neural mechanisms while omitting structural detail of the parts so organized. Such models are to be distinguished from nonmechanistic decompositions because systems can only be organized in the relevant sense if they “exhibit a certain form of dependency of the whole on its parts” (2013, p. 244). Components in abstract mechanistic models must at least in principle be localizable, even if such detail is irrelevant to the modeler’s current explanatory purposes.

  20. Throughout this section, I use talk of “higher” and “lower” levels to discuss this functionalist rejoinder without ultimately endorsing the intelligibility of such talk. For skepticism about such terminology, see Craver (2007, Chap. 5).

  21. A commonly overlooked issue here is that mechanists about kinds typically concede that the mechanisms securing the homeostatic stability of a kind may be located externally to the system depicted—e.g., constraints on reproduction or predation may ensure that members of a biological species reliably possess their characteristic phenotypic properties (Boyd 1999).

  22. For some discussion as to how such a course of investigation might play out for some important psychological kinds, see Buckner (2011, 2013).

  23. There is often a complex interplay between our attempts to identify the boundaries of a psychological phenomenon and the boundaries of its underlying mechanism. For a recent discussion of “lumping and splitting” that illustrates how far the discussion of special science taxonomy has moved beyond the classical Fodorian frame, see Craver and Darden (2013).

  24. For example, in cases where two researchers from different epistemic perspectives attribute two different functional profiles to the same underlying kind, we might treat those functional profiles as explanatory heuristics that can be revised and improved through collaborative critical interaction (e.g. Hong and Page 2001). What remains to be articulated are the conditions under which, for the functionalist, such fusing should be judged the correct outcome rather than a mistake (or a changing of the subject).

References

  • Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33(4), 245–266.

  • Bechtel, W. (2007). Biological mechanisms: Organized to maintain autonomy. In F. C. Boogerd, F. J. Bruggeman, J.-H. Hofmeyr, & H. V. Westerhoff (Eds.), Systems biology: Philosophical foundations (pp. 269–302). Amsterdam: Elsevier.

  • Bechtel, W. (2010). Dynamic mechanistic explanation: Computational modeling of circadian rhythms as an exemplar for cognitive science. Studies in History and Philosophy of Science, 41, 321–333.

  • Bechtel, W. (2011). Mechanism and biological explanation. Philosophy of Science, 78(4), 533–558.

  • Bechtel, W., & Mundale, J. (1999). Multiple realizability revisited: Linking cognitive and neural states. Philosophy of Science, 66(2), 175–207.

  • Bechtel, W., & Richardson, R. C. (2010). Discovering complexity: Decomposition and localization as strategies in scientific research. Cambridge, MA: MIT Press.

  • Bickle, J. (2010). Has the last decade of challenges to the multiple realization argument provided aid and comfort to psychoneural reductionists? Synthese, 177(2), 247–260.

  • Bokulich, A. (2008). Can classical structures explain quantum phenomena? The British Journal for the Philosophy of Science, 59(2), 217–235.

  • Bokulich, A. (2011). How scientific models can explain. Synthese, 180(1), 33–45.

  • Bokulich, A. (2012). Distinguishing explanatory from nonexplanatory fictions. Philosophy of Science, 79(5), 725–737.

  • Boyd, R. (1991). Realism, anti-foundationalism and the enthusiasm for natural kinds. Philosophical Studies, 61, 127–148.

  • Boyd, R. (1999). Kinds, complexity, and multiple realization. Philosophical Studies, 95(1), 67–98.

  • Buckner, C. (2011). Two approaches to the distinction between cognition and ‘mere association’. International Journal of Comparative Psychology, 24(4).

  • Buckner, C. (2013). A property cluster theory of cognition. Philosophical Psychology, 1–30.

  • Burge, T. (2010). Origins of objectivity. New York: Oxford University Press.

  • Clark, A. (1991a). Systematicity, structured representations and cognitive architecture: A reply to Fodor and Pylyshyn. In T. Horgan et al. (Eds.), Connectionism and the philosophy of mind (pp. 198–218). New York: Springer.

  • Clark, A. (1991b). Microcognition: Philosophy, cognitive science, and parallel distributed processing. Cambridge, MA: MIT Press.

  • Craver, C. F. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience. Oxford: Oxford University Press.

  • Craver, C. F., & Darden, L. (2013). In search of mechanisms: Discoveries across the life sciences. Chicago, IL: University of Chicago Press.

  • Cummins, R. (1977). Programs in the explanation of behavior. Philosophy of Science, 44, 269–287.

  • Cummins, R. C. (1983). The nature of psychological explanation. Cambridge, MA: MIT Press.

  • Fitzsimonds, R. M., Song, H. J., & Poo, M. M. (1997). Propagation of activity-dependent synaptic depression in simple neural networks. Nature, 388(6641), 439–448.

  • Fodor, J. A. (1974). Special sciences (or: The disunity of science as a working hypothesis). Synthese, 28(2), 97–115.

  • Fodor, J. A. (1997). Special sciences: Still autonomous after all these years. Noûs, 31(s11), 149–163.

  • Forster, M., & Sober, E. (1994). How to tell when simpler, more unified, or less ad hoc theories will provide more accurate predictions. The British Journal for the Philosophy of Science, 45(1), 1–35.

  • Gluck, M. A., & Myers, C. E. (2001). Gateway to memory: An introduction to neural network modeling of the hippocampus and learning. Cambridge, MA: MIT Press.

  • Greenwood, J. D. (1999). Understanding the “cognitive revolution” in psychology. Journal of the History of the Behavioral Sciences, 35(1), 1–22.

  • Griffiths, P. E. (1997). What emotions really are: The problem of psychological categories. Chicago: University of Chicago Press.

  • Haykin, S. S. (2009). Neural networks and learning machines (3rd ed.). Upper Saddle River: Pearson Education.

  • Hong, L., & Page, S. E. (2001). Problem solving by heterogeneous agents. Journal of Economic Theory, 97(1), 123–163.

  • Hummel, J. E., & Biederman, I. (1992). Dynamic binding in a neural network for shape recognition. Psychological Review, 99, 480–517.

  • Just, M. A., & Carpenter, P. A. (1992). A capacity theory of comprehension: Individual differences in working memory. Psychological Review, 99, 122–149.

  • Just, M. A., Carpenter, P. A., & Varma, S. (1999). Computational modeling of high-level cognition and brain function. Human Brain Mapping, 8, 128–136.

  • Kaplan, D. M., & Bechtel, W. (2011). Dynamical models: An alternative or complement to mechanistic explanations? Topics in Cognitive Science, 3(2), 438–444.

  • Kaplan, D. M., & Craver, C. F. (2011). The explanatory force of dynamical and mathematical models in neuroscience: A mechanistic perspective. Philosophy of Science, 78(4), 601–627.

  • Kruschke, J. K. (1992). ALCOVE: An exemplar-based connectionist model of category learning. Psychological Review, 99, 22–44.

  • Levy, A., & Bechtel, W. (2013). Abstraction and the organization of mechanisms. Philosophy of Science, 80(2), 241–261.

  • Love, B. C., & Gureckis, T. M. (2007). Models in search of a brain. Cognitive, Affective, & Behavioral Neuroscience, 7(2), 90–108.

  • Love, B. C., Medin, D. L., & Gureckis, T. M. (2004). SUSTAIN: A network model of category learning. Psychological Review, 111, 309–332.

  • Machery, E. (2005). Concepts are not a natural kind. Philosophy of Science, 72(3), 444–467.

  • Millikan, R. G. (2012). Are there mental indexicals and demonstratives? Philosophical Perspectives, 26(1), 217–234.

  • Needham, P. (2011). Microessentialism: What is the argument? Noûs, 45(1), 1–21.

  • Piccinini, G., & Craver, C. (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese, 183(3), 283–311.

  • Prinz, A. A., Bucher, D., & Marder, E. (2004). Similar network activity from disparate circuit parameters. Nature Neuroscience, 7(12), 1345–1352.

  • Quine, W. V. O. (1969). Natural kinds. In N. Rescher et al. (Eds.), Essays in honor of Carl G. Hempel: A tribute on the occasion of his sixty-fifth birthday (Vol. 24). Dordrecht: Springer.

  • Robbins, S. E. (2004). On time, memory and dynamic form. Consciousness and Cognition, 13(4), 762–788.

  • Schindler, S. (2014). Explanatory fictions—for real? Synthese, 191(8), 1741–1755.

  • Selverston, A. I. (1980). Are central pattern generators understandable? Behavioral and Brain Sciences, 3(4), 535–540.

  • Shapiro, L. A. (2004). The mind incarnate. Cambridge, MA: MIT Press.

  • Shea, N. (2007). Content and its vehicles in connectionist systems. Mind & Language, 22(3), 246–269.

  • Siegelmann, H. T., & Sontag, E. D. (1991). Turing computability with neural nets. Applied Mathematics Letters, 4(6), 77–80.

  • Silberstein, M., & Chemero, A. (2013). Constraints on localization and decomposition as explanatory strategies in the biological sciences. Philosophy of Science, 80(5), 958–970.

  • Sporns, O. (2011). Networks of the brain. Cambridge, MA: MIT Press.

  • Stepp, N., Chemero, A., & Turvey, M. T. (2011). Philosophy for the rest of cognitive science. Topics in Cognitive Science, 3(2), 425–437.

  • Stork, D. G. (1989). Is backpropagation biologically plausible? In Proceedings of the International Joint Conference on Neural Networks (IJCNN) (pp. 241–246). New York: IEEE.

  • Trout, J. D. (2002). Scientific explanation and the sense of understanding. Philosophy of Science, 69(2), 212–233.

  • Van Brakel, J. (2000). Philosophy of chemistry. Leuven: Leuven University Press.

  • Walmsley, J. (2008). Explanation in dynamical cognitive science. Minds and Machines, 18(3), 331–348.

  • Weiskopf, D. (2011a). Models and mechanisms in psychological explanation. Synthese, 183, 313–338.

  • Weiskopf, D. (2011b). The functional unity of special science kinds. British Journal for the Philosophy of Science, 62, 233–258.

  • Weiskopf, D. (forthcoming). The reality of cognitive models. In D. Kaplan (Ed.), Integrating mind and brain science: Mechanistic perspectives and beyond. Oxford: Oxford University Press.

  • Woodward, J. (2005). Making things happen: A theory of causal explanation. Oxford: Oxford University Press.

  • Ylikoski, P., & Kuorikoski, J. (2010). Dissecting explanatory power. Philosophical Studies, 148(2), 201–219.


Acknowledgments

I am grateful to Ken Aizawa, Colin Allen, Petri Ylikoski, audiences at Ruhr-University Bochum and University of Colorado-Boulder, and three anonymous reviewers for discussion and feedback on earlier drafts of this paper. This work was supported in part by a fellowship from the Alexander von Humboldt Foundation.

Author information

Correspondence to Cameron Buckner.


Cite this article

Buckner, C. Functional kinds: a skeptical look. Synthese 192, 3915–3942 (2015). https://doi.org/10.1007/s11229-014-0606-z
