Biology & Philosophy (2011), Volume 26, Issue 2, pp 305–313
DOI: 10.1007/s10539-010-9245-z

Review Essay

Review of Sandra D. Mitchell: Unsimple truths: science, complexity, and policy
The University of Chicago Press, 2009

D. R. Crawford
Departments of Philosophy and Biology, Duke University

Abstract

In Unsimple truths, Sandra D. Mitchell examines the historical context of current scientific practices and elaborates the challenges complexity has since posed to status quo science and policymaking. Mitchell criticizes models of science inspired by Newtonian physics and argues for a pragmatistic, anti-universalist approach to science. In this review, I focus on what I find to be the most important point of the book, Mitchell’s argument for the conceptual independence of compositional materialism and descriptive fundamentalism. Along the way, I provide a description of Mitchell’s overall project and a road map of the book.

Keywords

Complexity · Compositional materialism · Contingency · Descriptive fundamentalism · Emergentism · Reductionism · Scientific laws · Scientific method

Introduction

Sandra D. Mitchell begins Unsimple Truths with the observation that “[c]omplexity is everywhere” (p. 1).1 Complexity spurs Mitchell to examine the historical context of current scientific practices and to elaborate the challenges complexity has since posed to status quo science and policymaking. Throughout the book, Mitchell attempts to de-normalize models of science inspired by Newtonian physics. In opposition to the post-Newtonian reductionist tradition, she argues for a pluralistic, pragmatistic, anti-universalistic approach to science. Mitchell’s “jointly normative and descriptive project” (p. 4) has more to say to her reductionist foil, however, than “The world is messy; to each phenomenon its own theory.” Her position extends from her work elsewhere on ‘integrative pluralism’ and complexity and diversity in the biological world (e.g., Mitchell 2003).

Mitchell spends most of the book rebutting three arguments from the tradition of “new-age Isaac Newtons” (p. 18) in the philosophy of science. In this review, I focus on what I find to be the most important point of the book, Mitchell’s argument against the second of these views. Along the way, I provide a description of Mitchell’s overall project and a road map of the book.

1

Mitchell begins her project by bringing attention to an important and well-known point: cross-talk and feedback are crucial for a healthy science and its philosophy. For example, Sklar points out that twentieth-century physics engaged in such a reciprocal relationship with philosophy in dealing with fundamental concepts such as causation and probability (Sklar 1992). Mitchell’s crucial contribution to this line of thought is her extension of this relationship across multiple scientific domains. Mitchell examines this process not only in physics but also in biology. Individually, these domain-specific cases demonstrate the feedback relationship; together, these cases from different domains demonstrate that this feedback may yield a more plural view of science than one based on physics alone. The question for Mitchell is not one of what science can tell us about the world, but one of what each science can tell us about each part of the world, and what an examination of sciences from different domains can tell us about scientific practice and knowledge in general.

Mitchell is concerned with the question of how Newtonian mechanics (or at least the method attributed to Newton) has influenced traditional views of science and knowledge. She frames her account with a meta-scientific observation:

This book is designed to begin the discussion of an expansion and revision of the traditional views of science and knowledge, codified in the nineteenth century by English philosophers, trying to make all scientists into new-age Isaac Newtons. (p. 18)

A “new-age Isaac Newton” holds some or all of the following views about Newtonian mechanics: (1) that it is the source of fundamental laws for all of science; (2) that it serves as a model of a fundamental, physical, nomological base for all of science; and (3) that it serves as a model for laws in general (whether fundamental or not) in the sciences. It isn’t hard to see how these views could go together: a simple, elegant, powerful set of laws such as Newton’s could easily suggest itself as the basis for other laws of nature, and indeed the nature of other laws.

Mitchell moves quickly through the faults of the view that Newtonian mechanics is the source of fundamental laws for all of science. It is widely understood that Newtonian mechanics has serious limitations. Although it has useful applications and is measurably valid “within the physical limits of normal human experience,” it gives poor results outside those limits, in regimes better described by quantum mechanics or relativistic physics (Jadrich 1996). Newtonian mechanics loses its grip when things are too small or moving too quickly. However, even after scientists recognized the limitations of Newtonian mechanics, its influence on scientific practice and theorizing remained. Mitchell works to de-normalize such widespread epistemological and metaphysical remnants of Newtonianism. Newtonian mechanics is powerful in many contexts, but “much of the world escapes” its concepts and methods (p. 12). The explanatory scope of Newtonian mechanics is limited, as is its legitimacy as a model of science, that is, its right to delimit the “scope of what counts as reliable knowledge of our complex world” (p. 12).

2

The second view Mitchell argues against is still popular today. Even if Newtonian mechanics has a rather limited scope, some other theory may fulfill its (supposed) promise of providing a physical basis for all of science. For example, Nagel recognizes the downfall of Newtonian “imperialism” in the nineteenth century, but nonetheless suggests that we may develop a “universal physical science” capturing “the ideal of a comprehensive theory which will integrate all domains of natural science in terms of a common set of principles and will serve as the foundation for all less inclusive theories” (Nagel 1961, 336). It is the possibility of such a universal physical science which underlies physicalist reductionism.

Mitchell’s major foil is Kim’s reductionism (Kim 1999), which is markedly different from classic Nagelian reductionism. Nagel argues that reduction and emergence concern logical relations: science α is reducible to science β if the experimental laws (and an adequate theory, if present) of science α are the logical consequences of the theoretical assumptions of science β (Nagel 1961, 352). The derivation of laws in science α from the theoretical assumptions of science β may require ‘bridging’ assumptions to establish logical relations between terms that appear in science α (but not science β) and terms in science β (Nagel 1961, 352–354). Emergentist arguments concern the impossibility of logical derivation relationships between statements couched in specific theories: statements in theory α describe properties which are emergent with respect to theory β if those statements cannot be logically derived from theory β. Accordingly, emergence claims are theory-relative, but not claims of partial knowledge. Since theories themselves are dynamic, historical entities, their evolution can affect emergence claims; as Hempel puts it, “what is emergent with respect to the theories available today may lose its emergent status tomorrow” (Hempel 1965, 263).
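
Put schematically (my own gloss, in notation that appears in neither Nagel nor Mitchell), Nagelian reduction is a derivability claim, and emergence is its failure:

```latex
% Reviewer's schematic rendering of Nagelian reduction (not Nagel's notation).
% T_B : the theoretical assumptions of the reducing science B
% Br  : the bridge assumptions linking terms of A absent from B to terms of B
% L_A : any experimental law of the reduced science A
\[
  \text{$A$ reduces to $B$} \quad\Longleftrightarrow\quad
  \text{for every law } L_A \text{ of } A:\;\; T_B \cup Br \vdash L_A
\]
% Correspondingly, on the usage described above, a statement S_A of A describes
% a property that is emergent with respect to B when it cannot be derived:
\[
  \text{$S_A$ is emergent with respect to $B$} \quad\Longleftrightarrow\quad
  T_B \nvdash S_A
\]
```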

Kim argues that Nagel’s inter-theoretical reduction is problematic because it admits “bridge laws as brute unexplained primitives,” and this makes it consistent with substance dualism. Kim writes that bridge laws

are standardly conceived as empirical and contingent, and must be viewed as net additions to our theory about the reduction base, which means that the base theory so augmented is no longer a theory exclusively about the originally given base domain (Kim 1999, 14; original italics).

For Kim, determining bridge laws may enable one to inductively predict events described by the (Nagel-style) reduced theory using the (Nagel-style) reducing theory, but bridge laws will not enable similar theoretical predictions. Reduction for Kim means conceptually connecting elements of the reduced theory to elements of the reducing theory such that the reducing theory provides a theoretical basis for predicting the bridge laws themselves (Kim 1999, 12–15). Kim turns Nagel’s bridge laws upside-down: for Kim, they are the prime candidates for emergence claims.

Kim’s physicalist reductionism is based on two assumptions. The first is what Mitchell labels descriptive fundamentalism: “every material object has a unique complete” and exhaustive “microstructural description” in terms of the intrinsic properties of its elements and their relations to one another, which gives us “the total ‘relatedness’ of basal constituents” and the “total microstructural (or micro-based) property” of the system (Kim 1999, 6–7; original italics). The second is mereological supervenience (cf. Mitchell’s “compositional materialism”): “Systems with an identical total microstructural property have all other properties in common. Equivalently, all properties of a physical system supervene on, or are determined by, its total microstructural property” (Kim 1999, 7). Given these two assumptions, it is easy to see how his physicalist reductionism falls out. Since everything is physical, and physics completely captures the physical, “all things and phenomena… are explainable and predictable ultimately in terms of fundamental physical laws” (Kim 1999, 20).
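
In schematic form (again my notation, not Kim’s or Mitchell’s), writing M(x) for the total microstructural property of a system x and letting P range over all of the system’s properties, mereological supervenience reads:

```latex
% Reviewer's schematic rendering of Kim's mereological supervenience.
% M(x) : the total microstructural property of system x
% P    : any property of the system
\[
  \forall x\,\forall y\,\bigl[\, M(x) = M(y) \;\rightarrow\; \forall P\,\bigl(P(x) \leftrightarrow P(y)\bigr) \bigr]
\]
% Descriptive fundamentalism is the further claim that M(x) exists and is a
% unique, complete, and exhaustive description for every material object x.
```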

According to Kim, emergentists share the reductionist’s commitment to compositional materialism and descriptive fundamentalism. Emergentists hold however that some higher-than-base-level properties reflect the fact that “the phenomena of this world are organized into autonomous emergent levels” (Kim 1999, 5) and are thus neither reducible to, explainable on the basis of, nor predictable from, laws governing lower-level entities, the subject matter of basic physics (Kim 1999, 10). Based on the emergentist position he describes, Kim’s objection to emergentist physicalism is clear: if higher-level entities are composed of nothing but lower-level entities, there isn’t much room for novel causal powers or unpredictable and unexplainable properties unless we introduce a richer (dualistic) ontology. Accordingly, Kim argues that emergentism is an unstable “halfway house”, precariously positioned between reductive physicalism and “more serious forms of dualism” (Kim 1999, 5).

Mitchell’s strategy for refuting Kim’s arguments against emergentism is first to establish the conceptual independence of compositional materialism and descriptive fundamentalism, and second to argue against the latter. (Compositional materialism is simply the widespread monistic position that everything is made up of the same basic physical stuff. This point isn’t at issue in this debate.) Mitchell argues that a commitment to compositional materialism does not entail a commitment to descriptive fundamentalism because many different views of description are consistent with compositional materialism. This first step in Mitchell’s argument against a “universal physical science” is, by itself, an important contribution to the debate between reductionists and emergentists. In effect, Mitchell reveals Kim’s argument to be enthymematic in a crucial way: Kim assumes that compositional materialism and descriptive fundamentalism must come together, or at least that emergentists accept that they do.

Mitchell defends an alternative to descriptive fundamentalism which is shared by a number of philosophers of science: descriptions (or representations) are “at best partial, idealized, and abstract” (p. 13). Even though everything is made up of the same stuff, she argues, it isn’t possible to capture everything in a single, universal theory:

The view that there is only one true representation of the world exactly mapping onto its natural kinds is hubris. Any representation is at best partial, idealized, and abstract… These are features that make representations usable, yet they are also features that limit our claims about the completeness of any single representation (p. 13).

Although physics describes the physical to a great extent at one level, it doesn’t necessarily capture the physical at every level. As Mitchell points out, the “physics” facts don’t capture all “physical” facts (p. 33). Importantly, this by no means entails the ontological feature of Kim’s emergentist position, the claim that “the phenomena of this world are organized into autonomous emergent levels”, insofar as that position conflicts with compositional materialism. Rather, the emergentist position it supports is one that recognizes the autonomy of different theories describing different domains. Kim’s strategy is not “wrong-headed”, but “incomplete” (p. 22). Mitchell is careful to point out that her pluralistic position is not “naïve relativism”: standards of “predictive use, consistency, robustness, and relevance” both justify a representation and guide its application (p. 14).

Mitchell’s rejection of descriptive fundamentalism allows her to separate the ontological position Kim assigns to emergentism from the metascientific one: emergent properties can be irreducible to properties described by physics even if, in the end, they’re made up of physical stuff. We can have domain-specific laws and keep our compositional materialism too.

3

In chapter 3, Mitchell builds on her critique of overzealous Newtonians and argues for an expansion of the view that proper scientific laws should resemble Newton’s laws of motion. She puts the 19th-century codification of this view as follows: “To be a law, a generalization must be universally true, exceptionless, and naturally necessary” (p. 49). She argues not only that some laws are domain-specific, but also that laws within and between domains may differ in the extent to which they are universal, exceptionless, and naturally necessary.

Mitchell argues for the third of three potential views of scientific laws: the normative view, whereby one bases nomological analyses on a definition of lawfulness; the paradigmatic view, whereby one bases nomological analyses on comparisons to exemplar laws; and the pragmatic view, whereby one bases nomological analyses on the role of laws (pp. 50–51). The normative and the paradigmatic views share much with Mitchell’s Newtonian foil. Newton’s laws of motion have traditionally served as the basis for norms defining lawhood or as exemplars of lawhood.

From the pragmatic position, Mitchell argues that the strictures the normative and paradigmatic views place on laws are too tight. Mitchell “re-envisage[s] lawfulness functionally” (p. 50): laws are “[c]laims that function reliably in scientific explanations and predictions” (p. 51). If non-universal, exception-ridden generalizations can explain and predict just as well as universal, exceptionless ones, then why call only the latter laws? After all, “analysis of the types of generalizations that are common in sciences other than fundamental physics… shows that most accepted generalizations about the world are contingent, of limited scope, and exception-ridden” (pp. 14–15).

Universality is a relatively straightforward property ascribed to laws in the traditional view: a law is universal if it applies without spatiotemporal limits. If such a law is also true, then it is exceptionless (p. 49). For Mitchell, the more difficult notion involved in the traditional view of laws is natural necessity. The intuition seems clear enough (Mitchell uses Goodman’s classic example of two spheres, one of gold, the other of uranium): it seems that accidental truths, unlike necessary truths, could have been otherwise. However, Mitchell argues that the “importation of the logical dichotomy [between logical necessity and logical contingency] to partition empirical scientific generalizations into naturally necessary or contingent fails” (p. 57). Firstly, Mitchell argues that all laws are logically contingent: “scientific laws describe our world, not a logically necessary world” (p. 56). Secondly, she argues that all laws are historically contingent to some extent, to the degree that they rely on certain conditions obtaining (pp. 55–56). Accordingly, the necessity/contingency dichotomy does little work when it comes to sorting laws of nature.

The necessity/contingency dichotomy fails not only because laws of nature fail to be logically necessary, but also because the dichotomy cannot capture differences in contingency which aren’t “all-or-nothing” (p. 57). These differences are important. Contingency affects the stability of the conditions under which laws are applicable. More specifically, there are different kinds of contingency, and they confer different degrees of stability (p. 63). Mitchell argues that by ignoring these differences, and instead treating contingency as part of a simple logical binary, we miss important features of laws that affect their functioning. The details of contingency tell us more about laws than that they aren’t logically necessary—they tell us something about the biological systems they describe and they affect our ability to predict and control these systems. For example, historical contingency characterizes to some extent all of the causal structures laws describe, but

Not all causal structures are equally historically contingent: some were fixed in the first three minutes after the big bang…and others are more recent and ephemeral, like retroviruses or human social arrangements (pp. 16–17).

Increasing our understanding of the historical contingency of laws may help us to discover and understand the conditions on which they are dependent and how those dependencies work (p. 63).

4

In the fourth chapter, Mitchell argues that accepting contingency and context-sensitive causal behavior as part of the world threatens a simple and monistic notion of “the scientific method” (p. 65). Mitchell begins with a discussion of genetic knockout experiments, which she argues follow John Stuart Mill’s method of inference:

If you can observe two systems that differ with respect to only one factor, all others being in agreement, then the differences in the effects between those two systems can be attributed to the differing test factor (p. 67).

Mitchell explains that interpreting the results of knockout experiments is complicated by well-known features of genetic systems such as pleiotropy and epistasis. Additionally, at the network level things only get more complicated. Genetic networks show dynamic plasticity, maintaining functions despite network perturbations. Mitchell writes that this robustness (or “degeneracy”) of networks, displayed, for example, by flexible behavior or homeostatic mechanisms, “appears to be both ubiquitous and significant for biological systems” (p. 71).
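
A deliberately artificial sketch (my own, not an example from the book; the genes and the phenotype function are invented) shows why such robustness frustrates Mill-style inference: when two genes redundantly drive the same phenotype, a single knockout satisfies the “differ with respect to only one factor” condition yet produces no observable difference at all.

```python
# Toy illustration (reviewer's own, not Mitchell's): two redundant genes
# drive the same phenotype, so single knockouts are invisible to Mill-style
# difference reasoning even though both genes contribute causally.

def phenotype(gene_a: bool, gene_b: bool) -> bool:
    """Phenotype is expressed if either (hypothetical) gene is functional."""
    return gene_a or gene_b

wild_type       = phenotype(gene_a=True,  gene_b=True)   # True
knockout_a      = phenotype(gene_a=False, gene_b=True)   # True: no difference observed
knockout_b      = phenotype(gene_a=True,  gene_b=False)  # True: no difference observed
double_knockout = phenotype(gene_a=False, gene_b=False)  # False: effect appears only now

print(wild_type, knockout_a, knockout_b, double_knockout)  # True True True False
```

Real networks are of course vastly more intricate, but even this toy case suggests why single, one-factor perturbations can mislead.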

James Woodward’s interventionist account of causation, “a sophisticated intellectual heir of Mill’s methods that focuses on how disruption of a cause, or intervention on a variable, makes a difference,” (p. 74) serves as Mitchell’s major foil in this chapter. According to Woodward’s model, explanatory cause-effect relationships are invariant, but not necessarily universally so, and thus their invariance may come in degrees. This accommodation of context-dependence fits nicely with Mitchell’s understanding of causal relationships in biology.

Mitchell argues that while Woodward makes some advances over Mill, his account fails to generalize to many (most?) cases of biological causation. Mitchell takes issue with Woodward’s position on a second type of stability. Woodward claims that causal processes possess modularity, i.e., “the property of separability of the different causal contributions to an overall effect” (p. 76). Modular causes are “separately disruptable” (p. 77), and so amenable to study using simple interventions. Mitchell argues that this requirement is too strong—cases such as the one she gives regarding genetic knockout studies show that modularity fails to characterize many significant biological processes.

Mitchell sees three ways out of this conflict between complex networks and the modularity requirement. The first way is to “bite the bullet” and simply accept that biological networks are not composed of causal components, because those components fail to satisfy the modularity requirement. She rejects this option because accepting it would betray her “naturalistic strategy”—she refuses to deny the scientific status of complex networks on the basis of a philosophical framework (p. 78). The second way involves saving both the empirical status of networks and the concept of causality by re-describing the networks to satisfy the modularity conditions. She rejects this option because it invites situations where a particular component is considered causal in some circumstances but not in others (e.g., where it isn’t modular). The third way, which Mitchell (unsurprisingly) endorses, is to admit that “modular causes do not exhaust all the types of causality found in nature” (p. 78). Physical systems with “complex feedback mechanisms” will fail to be modular, but it would be a mistake to assume thereby that they don’t contain “component causes” (p. 82).

A corollary of this position is that the causal relations in complex networks may not be resolvable using “one-shot perturbation studies”—if the causes aren’t modular, they aren’t separately disruptable. Mitchell points to approaches such as “multifactorial perturbation” or those employed in systems biology as possible ways of acquiring knowledge about complex networks (pp. 83–84). Mitchell ends the fourth chapter with a pluralistic, pragmatic moral: there are different types of causation in the world, and they may require different scientific methods for their study (p. 84).

5

In the penultimate chapter, Mitchell advocates a revision for policymaking much like the one she advocates for scientific practice—in neither domain should we expect to encounter simple, infallible rules. Mitchell argues that the status quo “predict-and-act” models of policymaking, in which one acts on predictions in the form of value-and-probability-weighted outcomes (pp. 86–87), are insufficient for dealing with many parts of the world:

Having a simple and effective calculus for deciding on the best actions would be a great boon for humanity. Unfortunately, the uncertainties in a world of context-sensitive, dynamically responsive complexity present substantial challenges to aspects of the standard methods that scholars have developed for modeling and informing decision making (p. 85).

In a complex world, it is difficult (if not impossible) to individuate multifarious causal factors and reduce uncertainty to the point where one can produce useful value-and-probability-weighted outcome predictions. Importantly, Mitchell’s argument relies on her earlier point about ineliminable uncertainty in the world—ineliminable uncertainty in predictions means ineliminable risk in decisionmaking (p. 87). Although increasing knowledge and computational power can help to reduce and manage uncertainty, they cannot remove it: it reflects an inherently chancy ontology, not epistemic limitations. Just as scientists should adjust their expectations of scientific laws and theories, so should policymakers adjust their expectations of scientific knowledge, and reassess their models accordingly. Indeed, Mitchell points out that if policymakers fail to adjust their expectations of science, their position may “undermin[e] any scientific contribution to public policy decisions” (p. 91). When we’re dealing with complex systems, all-or-nothing decisionmaking won’t get us very far.

Mitchell spends much of this chapter introducing the reader to contemporary research in decision theory, risk analysis, and policymaking. She elaborates a few models which show some promise in accommodating the complexity of nature. More realistic models trade in the predict-and-act framework for one along the lines of evaluate-scenarios-and-adaptively-manage. Beyond prediction, scenario evaluation involves judging policies “by how well they perform over the entire scope of possible futures” (p. 90). These models embrace the ineliminable uncertainty inherent in complex systems and respond by determining the best course of action based not on a single most likely outcome but on the occurrence of “satisfactory outcomes in the largest range of future possibilities” (p. 93). Beyond action recommendation, adaptive management involves not only considering the dynamic nature of systems under study, but also accommodating changes in our knowledge of those systems. Thus, adaptive management is “an iterative process of predict, act, establish metrics of successful action, gather data about consequences, predict anew” (p. 97).
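
A minimal sketch (mine, not from the book; the scenarios, payoffs, and threshold are all invented) of how scenario evaluation differs from predict-and-act: the first strategy optimizes against the single most likely future, while the second prefers the policy that remains satisfactory across the widest range of futures.

```python
# Toy sketch (reviewer's own): "predict-and-act" versus robustness-based
# scenario evaluation. All scenario names, probabilities, and payoffs are
# hypothetical placeholders.

scenarios = ["mild", "moderate", "severe"]            # hypothetical futures
likelihood_guess = {"mild": 0.6, "moderate": 0.3, "severe": 0.1}

# Hypothetical payoffs of each candidate policy under each scenario.
payoffs = {
    "do_nothing": {"mild": 10, "moderate": -5, "severe": -50},
    "hedge":      {"mild":  6, "moderate":  4, "severe":   2},
}

SATISFACTORY = 0  # threshold for an acceptable outcome

# Predict-and-act: optimize against the single most likely scenario.
most_likely = max(scenarios, key=likelihood_guess.get)
predict_and_act_choice = max(payoffs, key=lambda p: payoffs[p][most_likely])

# Scenario evaluation: prefer the policy satisfactory in the most scenarios.
robust_choice = max(
    payoffs,
    key=lambda p: sum(payoffs[p][s] >= SATISFACTORY for s in scenarios),
)

print(predict_and_act_choice)  # "do_nothing" -- best in the most likely future
print(robust_choice)           # "hedge"      -- satisfactory across all futures
```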

Mitchell argues that by changing the way science informs policymaking in the face of ineliminable uncertainty, not only will scientific contributions not be overlooked because they fail to provide absolute certainty, but the policymaking they inform can go beyond precautionary guides demanding proof of risk or of no risk and endorse “a more flexible policy response than total ban or complete hands-off nonregulation” (p. 101).

Closing remarks

Mitchell’s book provides a concise version of an often overlooked viewpoint in contemporary philosophy of science. The book is written to be accessible to a broad audience—philosophers, scientists, policymakers—and should serve as an argumentative complement to studies of physicalist reductionism. Although the book’s smaller size may leave some readers wanting further detail and elaboration, it is a confidently self-avowed beginning of a revisionary and expansionary project (p. 18); Unsimple Truths makes up for its brevity with its clarity and ample references to more extensive treatments of the topics it introduces.

Footnotes

1. All references to Mitchell are to Mitchell (2009) unless otherwise noted.
