William C. Wimsatt: Re-engineering philosophy for limited beings: piecewise approximations to reality
- Cite this article as: Rosenberg, A. Biol Philos (2011) 26: 261. doi:10.1007/s10539-010-9199-1
Citing Archilochus as his source, Isaiah Berlin famously divided thinkers into hedgehogs and foxes: the former know one big thing and the latter know many small things. (To get a handle on the distinction, Plato was in Berlin’s view a hedgehog, while Aristotle was a fox.) Berlin did not seem to allow for the possibility of what we might call “hedgefoxes”—that rare thinker who knows several big things. Among philosophers of biology, and more generally among philosophers of science, William Wimsatt comes closest to filling that bill.
If Wimsatt had only introduced us to the notion of generative entrenchment, he would have had an enduring impact on the subject. But he did not stop there. He was perhaps the first to make much of the roles of robustness, heuristics, mechanisms, aggregativity, and complexity in biology and in our understanding of it. Some of this happened so long ago, way back among the first generation of latter-day philosophers of biology, that it has become part of the common patrimony of the subject. But all these big things were originally Bill’s ideas. In our subject no one is more of a hedgefox than Wimsatt.
In Re-engineering Philosophy for Limited Beings Wimsatt brings together and substantially revises, expands, and updates many of the essays in which he introduced these and other concepts now in daily use by philosophers of biology and the special sciences. The original versions of some of the chapters date back to papers from the early 1970s; most appeared as book chapters or in PSA proceedings volumes. To these now revised pieces, Wimsatt has added five new chapters, along with material that organizes this sample of his oeuvre.
As the title makes clear, Wimsatt’s aim, here and throughout his career, has been to identify methodological tools appropriate for well-adapted but error-prone cognitive agents like us. “This must be central to any naturalistic account” of how people think—whether scientists, engineers, historians or sociologists of science, or philosophers of science, for that matter (p. 5). It is more than doubtful that all of these very different agendas of inquiry really require the same methodological tools. So it is no surprise that in what follows Wimsatt’s introduction the focus is almost exclusively on the conceptual tools and research strategies of the empirical scientist.
The first part of the book’s four-part division offers a fallibilist heuristic methodology for realist scientists. The second part provides five chapters on the real-world problem-solving devices scientists use in the process of science—devices that philosophers largely neglect in their accounts of the product of science: how to tell the natural from the artifactual, how to gauge reality knowing that one’s methods are error-prone, the way in which false and idealized models enable us to grasp the character of the real world, and how generative entrenchment—as a fact about the history of science, not just (although that would have been enough) a fact about the biology of developmental processes—captures the inertia of normal science.
Part three applies many of these insights to offer an account of emergence and reduction quite different from what the philosophy of biology has produced over the last generation. Here the focus is on how to integrate differing and sometimes incommensurable theories about phenomena at different levels, or the same processes viewed from different perspectives, or as a process cut out from “causal thickets”—situations of disorder with boundary ambiguities (yet another idea original to Wimsatt). Here is developed a functional account of the process of reduction in science and the correlative notions of “aggregativity” and “emergence”. I will have more to say about some of the chapters in this part of the book below.
It should be evident why a philosopher eager to reintroduce limited beings to real methodologies that recognize fallibility and exploit it to push back the frontiers of research will not only be uninterested in, but will disparage, arguments that employ such demons to attain otherwise unreachable conclusions by making epistemically indeterminable assumptions about unlimited computational powers. As one who has frequently helped himself to such demons, in what follows I will focus on Wimsatt’s claims about reduction and aggregativity, chapters 11 and 12, respectively among the oldest and the newest of Wimsatt’s papers here revised and republished.
Dawkins/Kitcher/Sterelny “bookkeeping demon”: Keeps track of all contexts of all genes (including their genetic contexts to adjust for epistatic interactions) in all organisms so as to be able to calculate and update as necessary each generation (asynchronously, for the different generation times of different organisms) the net selection coefficients of all genes so as to plug into the “bottom up” genic theory of natural selection required for Dawkins’s reductionistic vision. (p. 363)
Unlike many, including the present writer, Wimsatt treats reduction as a process, not a product or a relationship between theories or explanations, still less as a relation between sentences in a deductive system. His aim is two-fold: to identify salient structural features of the practice of reducing one theory or model to another as an instrumental strategy that promotes the aims of science, particularly explanation (269), and to employ it to give an account of the nature of scientific change—a historico-descriptive aim.
As an instrument of scientific progress, Wimsatt writes, reduction becomes appropriate only when we cannot explain a phenomenon adequately “as the product of causal interactions at its own level.” This claim is far from anodyne. Some reductionists will argue that the explanatory adequacy of a “higher level” theory, T′, is no reason to surrender the search for a more fundamental theory, T″, about lower-level constituents that will underwrite the explanatory adequacy of T′, while showing it to be a special case, or one instance of the same processes operating elsewhere.
Still other reductionists will demur from Wimsatt’s invocation of levels, in favor of a treatment of reduction as the discovery of how lower-order properties realize processes described by higher-order predicates, along the lines Kim (2000, 2007) has suggested. Kim’s approach, which rejects the description of processes in terms of levels in favor of descriptions in terms of predicates of differing orders of quantificational complexity (like the Ramsey-sentences of yore), all picking out properties on the same level, has some important metaphysical consequences (it avoids unacceptable eliminativism about “higher level” properties through “causal drainage”; Block 2003). These are advantages to which an epistemically motivated approach to reduction will be indifferent, of course.
If some phenomenon does not admit of an explanation in terms of causal interactions at its own level, then “we wish to be able to [show]… how it is a product of causal interactions at lower levels (a micro-level or reductive explanation), or at least probably and desirably in our reductionist conceptual scheme (but absolutely unavoidably in a world of evolution driven by selection processes), how it is a product of causal interactions at higher levels (more commonly a functional explanation)”.
This is a hard passage to understand, despite its importance for Wimsatt’s treatment of reductionism as a research strategy. Besides the problem that he rejects as superfluous the nesting of higher-level explanations in lower-level ones, Wimsatt seems to be saying that in the absence of an explanation at level N, we should seek either (a) a lower-level one, say at N − i, or (b) an even higher-level functional or selected-effects explanation, at N + i (for some i equal to or greater than 1). But besides the fact that explanations at N + i could hardly be reductive, the notion that functional explanations will be evolutionary needs argument (though I have myself tried to provide such an argument, most recently in Neander and Rosenberg 2009). More important, the unstated assumption, here and throughout the book, that selectionist explanations for processes at level N must always be at some higher level N + i is in real need of an argument. Indeed, the pay-off from such an argument would be great: providing one would enable Wimsatt to argue for a very strong metaphysical thesis of the irreducibility of the biological.
Wimsatt asserts as a methodological dictum that “When a macro-regularity has relatively few exceptions, redescribing a phenomenon that meets the macro-regularity in terms of an exact micro-regularity provides no (or negligible) further explanation.” The argument for this claim is simply that the macro-regularity’s predictive and explanatory power screens off (in Salmon’s and subsequently Brandon’s terms) the lower-level description. This will work as an argument only if one adopts a thoroughly instrumentalist view of explanation as providing predictive resources. Otherwise, statistical screening off by itself is a poor reason to deny the systematic import of a reduction, even of close-to-exceptionless higher-level (or, as I prefer, higher-order) generalizations.
When higher-level or macro-regularities are subject to serious exceptions (anomalies, in Wimsatt’s terms), then, he says, recourse to micro-reductions is in order, to seek stronger or stricter explanatory laws, especially when these partition the higher-level cases into ones that conform to the macro-law and ones that do not. This provides a micro-explanation of the deviant cases, presumably leaving the non-deviant ones to be adequately explained by the gappy macro-regularity. Wimsatt does not address the alternative that such a micro-explanation shows that there never was a higher-level macro-regularity to begin with, but only some accidental or artifactual or temporary regularity (of the sort Beatty’s (1995) evolutionary contingency thesis would lead us to expect). Wimsatt’s faith in the upper or macro-level as adequately explanatory, even in the presence of deviant cases, is strong indeed.
According to Wimsatt, several morals are to be drawn from this insight: (1) most identity claims in science will turn out to be false, owing to the fact that (2) the strength of a numerical identity claim makes it easy to falsify; (3) identity claims will always look wildly irresponsible to an inductivist, since they are made on the basis of the extensional equivalence of a small number of properties; (4) the fragility of identity claims is hidden by what amounts to a Kuhnian retrospective redefinition of the original objects of inquiry, which hides discontinuities and incommensurabilities (of this thesis, more below); and finally, (5) it is no surprise that scientists prefer identity claims to merely correlational ones, not because of considerations of simplicity (here Wimsatt cites Kim 1966), but because they prefer stronger tools to weaker ones: “[I]n a dynamic view of science, only identity claims can effectively move science forward” (269). Does Wimsatt mean that without according identity claims the sole power to move science, no view of science can be dynamic? This is a pretty strong claim, and I do not see how it can be right. Presumably Wimsatt means that the only way science can change is via the use of strong identity claims. This seems equally hard to defend, however. In any case, the morals Wimsatt draws can, I think, illuminate much of the development of molecular genetics and its quest for the chemical nature of the gene; it would have been a wonderful opportunity to apply this analysis. The philosophers’ debate as to whether the quest for the gene’s identity so reshaped the concept that the result was really a replacement and not a reduction would strongly vindicate the fourth of the five morals Wimsatt draws from his treatment of the role of identifications in reduction.
What consequences for reduction in biology follow if, as I suspect, changes in the type identity-conditions for genes did not result in a replacement of the concept over the 20th century (but see Griffiths and Neuman 1999)?
An identity claim, with its subsequent application of Leibniz’s law, provides the most rigorous detector of possible error or of a failure of fit of applicable descriptions at different levels…. The identity claim is… a tool to ferret out the source of explanatory failures, which, by its transitivity, allows one to delve an arbitrary number of levels lower if need be to pinpoint the mismatch, or, by its scope, to any properties—however diffuse or relational—to detect a relevant but ignored interaction (266–267).
In chapter 12 Wimsatt advances an account of when reduction is feasible and a correlative account of the notion of emergence. No claim is made that either account provides necessary and sufficient conditions for reductive explanations or emergent properties, but it is reasonable to assume that Wimsatt holds that both accounts accommodate a significant number of cases of each. We need to ask whether they really do. Wimsatt begins by noting that reductionists treat (or should treat?) emergence as a purely epistemic property of properties—it is a “temporary confession of ignorance” (274). Thus he reconciles reductionism with the existence of emergent properties, and concludes that misunderstanding of the epistemic character of emergence is the source of its appearance of mystery to reductionists and of some of the opposition to reduction by emergentists. If only the matter could be so easily settled. But many if not most parties to the question of whether there are emergent properties treat the matter as a metaphysical issue, not an epistemic one.
Nevertheless, observes Wimsatt, “some rather nice things fall out of a reductive account of emergence. …A reductive explanation of a behavior or property of a system is one that shows it to be mechanistically explicable in terms of the properties and interactions among the parts of the system” (275, italics in original). If this is a definition, it rules out by fiat the possibility of reductive explanations that appeal to the operation of “non-mechanistic” regularities—those of natural selection, for example—operating at the reducing level. If the claim is not a stipulation, it seems disconfirmed by successful reductions as old as the explanation of oxygen transport by red blood cells.
Wimsatt identifies four conditions a system property must satisfy to count as aggregative:
- Intersubstitution—invariance of the system property under rearrangement or interchange of parts.
- Size scaling—qualitative similarity of the system property under addition or subtraction of parts.
- Decomposition and reaggregation—invariance of the system property under decomposition and reaggregation of parts.
- Linearity—no cooperative or inhibitory interactions among the parts of the system affect the property.
Wimsatt does not consider the answer that the failure of aggregativity, as defined above, is not a mark of emergence, either in the view of scientists or in that of many philosophers of science. Instead, he suggests that the reason is the failure of scientists (and philosophers?) to recognize that some properties “look aggregative for some decompositions, but reveal themselves as emergent or organization-dependent for other decompositions or conditions” (304, italics in original). But this raises a further puzzle, which Wimsatt himself poses:
So why then is the temptation of “Nothing but-ism” – [Dan Dennett’s (1995)] greedy reductionism – so strong? We see statements quite regularly in science like “Genes are the only units of selection,”… “The mind is nothing but neural activity,”… If total aggregativity is so rare, why are claims like these so common? (304)
His answer:
…we will tend to see aggregative decompositions or more aggregative decompositions and their parts as instances of natural kinds, because these decompositions provide simpler and less context-dependent regularities, theories, and mathematical models for the behaviors they capture (304, italics in original).
This would be right as a claim about successful reduction, but given that aggregativity is almost entirely absent above the level of mechanics, it makes no sense as an explanation of why “Nothing but-ism” is so widely to be met with in biology, neuroscience, and economics (vide the anti-Keynesian economists’ demand for micro-foundations).
Students of scientific change may agree with Wimsatt’s Kuhnian diagnosis of why we have such a high opinion of reductionism without, however, embracing his qualification of this high opinion. In particular, the fact that philosophers and others have become adept at identifying the disguised changes in theory enforced by the reductionist paradigm of the natural sciences need not encourage them to treat the scientist’s respect for it as exaggerated. It is a heuristic that has withstood the test of four centuries despite repeated and ever-accelerating announcements of its demise. As such it deserves Wimsatt’s approbation, not his unmasking.
A crucial property of heuristics (one of six, see appendix A) is that a heuristic principle succeeds in part by transforming a problem into a different but related problem that is easier to solve. But if it does so very effectively, there will be a strong tendency to identify the new problem as the old one—saying “now that we have clarified the problem so it can be solved…”… In this way quite substantial changes in a paradigm can be hidden—particularly a cumulating string of such changes, each too small to be regarded as “fundamental.” I think this ex post facto reification is central to the exaggeratedly high opinion we have of reductionist methodologies… (310)
If this review has suggested that Wimsatt is a fallible creature, he will not be surprised, for he has always rightly held that we are all fallible, and it has been his aim to carve out an epistemology for creatures such as him and me. His job is well begun. Long may he persevere.