1 Introduction

Consider Kleiber’s (1932) law in macroecology. It states that the metabolic rate of an organism, B, is proportional to the organism’s mass, M, raised to the power of three quarters, with a proportionality constant $B_{0}$:

$$B = B_{0} M^{3/4}$$

That is, the rate of metabolic chemical reactions (breaking down organic matter and building cell components), per mass of the animal, is greater in smaller organisms. So even though the mass of a cat is one hundred times that of a mouse, its metabolic rate is only thirty-two times greater, i.e. smaller organisms have greater rates of cell respiration per unit mass.
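
To make the scaling arithmetic concrete, here is a minimal numeric sketch in Python; the absolute masses and the value of $B_{0}$ are placeholder figures, since only the ratios matter:

```python
# Minimal sketch of Kleiber scaling, B = B0 * M**(3/4).
# B0 and the absolute masses are placeholder values; only the ratios matter.

def metabolic_rate(mass_kg: float, b0: float = 3.4) -> float:
    """Whole-organism metabolic rate under Kleiber's law (arbitrary units)."""
    return b0 * mass_kg ** 0.75

mouse_mass, cat_mass = 0.04, 4.0   # the cat is one hundred times the mouse's mass
ratio = metabolic_rate(cat_mass) / metabolic_rate(mouse_mass)
print(f"whole-body ratio: {ratio:.1f}")           # 100**0.75 ~ 31.6, i.e. ~32x
print(f"per-unit-mass ratio: {ratio / 100:.2f}")  # ~0.32: the mouse respires ~3x faster per gram
```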

Kleiber’s law is a non-fundamental law of nature, or a special science generalisation (SSG). As such, it is projectible (and supports counterfactuals, as well as underwriting explanations and predictions at the physiological level). However, at the fundamental level, the particles within the organisms it generalises over move about on trajectories that are determined solely by the laws of physics. How is it that these fundamental particles, while being subject only to the laws of physics, manage to harmonise with each other in order to additionally bring about projectible generalisations at other levels? How do the particles which constitute an antelope ‘know’ to behave in such a way as to make Kleiber’s law turn out just right? What other information or constraints are they subject to? And how could they be subject to anything else if the domain of fundamental physics is closed? This apparent microscopic conspiracy can be found within any special science generalisation, and stands in need of explanation.

This article aims to dispel the mystique around the projectibility of SSGs and disarm the myth that fundamental particles are required to conspire with each other in order to produce and maintain patterns at higher nomological levels. I will argue that prominent accounts of the projectibility of SSGs fail to “naturalise” laws of nature sufficiently to incorporate the manifest interconnectedness of the subsystems which are described by the patterns we observe. A “naturalised” account of the relationship between observed generalisations draws on scientific facts about the connections between different subsystems, and recommends that cosmology, chemistry, and biology, say, be foregrounded in conceiving of the relations between special science laws and fundamental laws.

First, there are existing accounts to consider. Humeans divide into two camps as regards the relationship between SSGs and the fundamental physics. Some are “anarchists”; others are “imperialists”.Footnote 1 Imperialists (e.g. Albert 2000; Loewer 2007, 2009) believe in the supremacy of fundamental physics, and take a strict reductionist view of the special sciences. Anarchists (e.g. Callender and Cohen 2010; Cohen and Callender 2009; Weslake 2014; Frisch 2011) reject the supremacy of physics. Though they may accept that all things supervene on the physical, they believe that higher-level laws are in some ways independent of the base level.

In what follows, I’ll advance an approach to imperialism that embraces anarchism, i.e. one that permits and explains special-science lawhood without divorcing the special sciences from more fundamental laws and matters of fact. My conception of lawhood will be piloted by pragmatism—I will be concerned with optimising the explanatory utility of the laws—but it will accommodate the deeper need to explain their explanatory utility.

I will begin by offering a defence of Albert and Loewer’s (hereafter: AL) account of special science lawhood against the criticisms of Cohen and Callender (hereafter: CC). I will argue that AL’s claim—that particular initial conditions are imperative to an understanding of the origins and workings of SSGs—is defensible if understood as a heuristic, since in the strong sense they favour, it is difficult to ground. I will nonetheless endorse the anarchism of the “Better Best System” (BBS) theory of CC as the best way of understanding the lawhood of the special sciences. I will argue that the most productive way forward in tackling the conspiracy requires attention to the restricted space of possibilities that follows from the initial conditions of the universe, and from the relations that the subsequent subsystems bear to one another. This naturalised approach offers conceptual demonstrations of typicality for special science generalisations. I call this the “Subsystem Genealogy” (SG) amendment, and contend that it closes vital explanatory lacunae in otherwise appealing anarchist theories, and has the potential to offer a conceptual basis for the claims made by the AL theory. My objectives are therefore to:

  1. Dissect the microscopic conspiracy and clearly state the ‘mystery’ that is under study (Sect. 2).

  2. Improve upon CC’s set of desiderata for a theory of special science projectibility (Sect. 3).

  3. Rehearse the AL account of special-science lawhood and defend it against CC’s objections (Sect. 4).

  4. Rehearse CC’s “Better Best System” account of special-science lawhood and outline its failings (Sect. 5).

  5. Propose a naturalised perspective—the “subsystem genealogy” view—which takes into account the origins of SSGs by reference to their subsystems’ histories, and the relation they bear to the initial conditions of the universe, and apply this to an example (Sect. 6).

  6. Conclude by proposing an understanding of special science lawhood that is Humean, anarchist, pragmatist, and imperialist (Sect. 7).

2 The Microscopic Conspiracy

Fodor has another way of expressing the problem I described in relation to Kleiber’s law. He asks:

why is there anything except physics? […] Well, I admit that I don’t know why. I don’t even know how to think about why. I expect to figure out why there is anything except physics the day before I figure out why there is anything at all, another (and, presumably, related) metaphysical conundrum that I find perplexing.

I admit, too, that it’s embarrassing for a professional philosopher […] to know as little as I do about why there is macrostructural regularity instead of just physical regularity (Fodor 1997, 161).

For indeed, given just the fundamental particles and the laws that describe their behaviour, the existence of SSGs is surprising. And whilst it might be the case that SSGs are never inconsistent with the laws of physics, so that no alarm is due in that regard, it remains the case that the laws of physics are silent on the details and projectibility of these higher-level generalisations. One prima facie consequence of this reasoning is that the SSGs might have been different whilst the laws of physics remained the same; they are under-determined by the fundamental physics. Further, it seems that the SSGs could exist with a different supervenience base.

On closer inspection, dispelling Fodor’s mystery seems to require consideration of the initial conditions. Whilst the fundamental laws alone cannot explain the existence of high-order structures, when coupled with the initial conditions, the link is clearer. At the very least, the charge of underdetermination is removed, because the exact higher-level structure could presumably be determined by the initial conditions. The outstanding question is then: how do the fundamental particles ‘know’ about the initial conditions? More to the point, how do the non-projectible initial conditions manage to constrain projectible generalisations much later, and at higher levels?

3 A Check-List for a Theory of Special-Science Lawhood

There are two ways to respond to the microscopic conspiracy:

  1. Accept it, as Fodor does, at face value: “the world, it seems, runs in parallel, at many levels of description. You may find that perplexing; you certainly aren’t obliged to like it. But I do think we had all better learn to live with it” (Fodor 1997, 162).Footnote 2

  2. Deny it and attempt to provide a non-conspiratorial account of special-science projectibility which eliminates the underdetermination of the SSGs by the fundamental physics. Two theories take up the challenge: Albert (2000) and Loewer’s (2007, 2009) account of SSGs as probabilistic corollaries of the fundamental laws and initial conditions, and Callender and Cohen’s (2009, 2010) “Better Best System” account of lawhood.

Fodor is manifestly right that the world runs at many levels of description, but those levels are intimately connected in ways which seem to challenge the existence of a real ‘conspiracy’. Since the alternative is tolerating mystery, we should take the second option.

In the following two sections I adjudicate the theories just mentioned on the basis of how well they tackle the conspiracy, and ultimately find them incomplete. These adjudications are made on the basis of their performance with respect to the following criteria, which are adapted from desiderata for a theory of special-science lawhood offered by Callender and Cohen (2010, 249) in confronting the very same problem. I have amended these criteria to make them stricter, better-defined, and more ambitious, with the aim of rendering any successful theory optimally able to fend off the threat of the apparent conspiracy. As we shall see, these stricter desiderata end up ruling out the BBS theory from being a complete account of special science lawhood, and casting light on lacunae in the AL theory. In what follows, I paraphrase CC’s criteria, and present significant amendments in italics for clarity and emphasis. Therefore, a satisfactory theory of special-science lawhood (and, ipso facto, laws simpliciter) must meet the following criteria:

  • Projectibility of SSGs must be permitted and explained. Our theory of lawhood must not only permit special science laws in addition to fundamental laws, it must also explain why it is that these generalisations are projectible.

  • Supervenience must be respected. Higher-level kinds supervene on lower-level kinds: any macrostate is fixed by facts at the microscopic level without the need for additional ontology.

  • The laws of the special sciences must be autonomous, where autonomy is carefully defined. That is, the lawhood status of SSGs should not depend on the fundamental physics: lawhood is earned at the level at which the law is relevant/useful. Specifically, I distinguish three senses of autonomy:

    (a) Methodological autonomy is the requirement that the selection criteria for deciding which SSGs are laws of nature do not make reference to the fundamental physics; lawhood is earned at the level of relevance.

    (b) Explanatory autonomy holds that explanations given at higher levels are autonomous from the fundamental physics. Even if certain higher-level phenomena are ontologically reducible to the fundamental physics, in general, the explanations for those phenomena are not.

    (c) Ontological autonomy is the idea that special science objects and generalisations are autonomous from the fundamental physics in terms of what they are. Our theory should forbid ontological autonomy, as that would seem to preclude explaining how and why special science objects exist.

  • No conspiracies. Our theory of lawhood must not posit conspiracies between fundamental particles, but further: the theory must also explain away the illusion of conspiratorial behaviour.

4 The Albert–Loewer (AL) Theory of Special Science Lawhood

I will begin by giving a brief account of the AL theory, followed by Cohen and Callender’s objections (Sect. 4.1), and my responses (Sect. 4.2).

Albert and Loewer build upon a classical “Mill–Ramsey–Lewis” (MRL) “Best System” model of lawhood, within which laws of nature are those true, contingent, universal generalisations which belong to the deductive systems which have the most favourable combination of simplicity and strength. Systems compete against each other to be crowned optimally simple and strong summaries of the matters of fact. Comparison in these competitions requires that systems are expressed in languages which share common predicates. The traditional formulation of MRL therefore stipulates that the axioms of the system express properties that are, in Lewis’ terminology, “perfectly natural” (Lewis 1983).Footnote 3

Albert (2000) and Loewer (2009) propose that SSGs are obtained as probabilistic entailments from the fundamental laws combined with particular postulates over the initial conditions of the universe. That is, the current Best System is supplemented with two additional facts: (1) that the macrostate of the early universe was one of very low entropy, the “Past Hypothesis” (PH), and (2) that there was a probability distribution over the early universe which assigns a probability of nearly one to the macrostate associated with the PH, and is uniform over the microstates which constitute this macrostate: the “Statistical Postulate” (SP). These facts ensure that the Best System is able to additionally account for thermodynamic generalisations, which amounts to a substantial increase in the strength of the system at a small price in simplicity. Albert and Loewer propose that these additions to the Best System confer high probability upon the generalisations of the special sciences.

That the AL account is able to underwrite thermodynamic generalisations is its programmatic core: the spirit, if not the letter, of their theory is widely accepted.Footnote 4 Consider the following example. Drop a few cubes of ice into a glass of water, and observe that, two hours later, the ice has completely melted and the water is once again at room temperature. Multiple incidents of such behaviour lead to non-fundamental generalisations about the behaviour of ice in water. These generalisations are easily explained, say Albert and Loewer, by reference to the fundamental dynamical laws and the initial conditions of the universe. How does this relate to the PH and SP? Well, they set up a world in which such things—i.e. thermodynamic things—do happen. Placing a uniform probability distribution over the initial macrostate—specifically, the Lebesgue measure—means that all microstates are weighted the same, which means that those macrostates that are associated with greater measures of the phase space—that is, a greater proportion of microstates—are (almost certainly) going to be the ones that are instantiated. Thermodynamic behaviour is typical behaviour with respect to the probability distribution resulting from the Lebesgue measure.
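
The counting idea behind this can be illustrated with a toy system far simpler than ice in water: N two-state particles (say, each in the left or right half of a box) under a uniform measure over microstates. This is only an illustrative sketch, not Albert and Loewer’s own construction:

```python
# Toy illustration: under a uniform measure over microstates, macrostates
# comprising more microstates dominate. N particles, each independently in
# the left or right half of a box; the macrostate is the count on the left.
from math import comb

N = 1000
total_microstates = 2 ** N
# Probability that the left-half count lies within 5% of an even split:
near_equilibrium = range(int(0.45 * N), int(0.55 * N) + 1)
p = sum(comb(N, k) for k in near_equilibrium) / total_microstates
print(f"P(45-55% split) = {p:.4f}")  # ~0.998 even for a mere 1000 particles
```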

It is a straightforward matter to relate thermodynamic generalisations to the PH and SP, since these postulates are themselves thermodynamic in nature: they use the same kinds. Further, thermodynamics is a pseudo-fundamental science: it is not truly fundamental, since it supervenes on the mechanics of particles, and of course entropy is not a “perfectly natural” property in Lewis’ (1983) sense. Yet it supervenes on the mechanics of particles directly: so it is nearly fundamental, or has something like ‘fundamentality-once-removed’. This means that the non-conspiratorial projectibility of thermodynamic SSGs is secured to a good level of confidence simply by making explicit the association between thermodynamic behaviour and the most likely behaviour given the particular probability distribution and initial conditions, as specified by the PH and SP. But how does this strategy carry over to other SSGs—those that are further removed from the fundamental physics than is thermodynamics?

Albert and Loewer maintain that their theory is able to provide probabilistic grounding for other generalisations by an extension of the same strategy, where the extension is in the direction of ascending nomological levels. As with thermodynamics, if they can show that the behaviour described by the SSGs corresponds to objectively probable histories, then the threat of conspiracy is eliminated. In fact, claims Albert, the theory can be used to assign high probabilities to all of the generalisations that we observe, whether laws or not. If this is so, the AL theory is enormously predictive and explanatory. Consider this excerpt:

Suppose I come upon an apartment about which I happen to have no direct empirical knowledge whatsoever other than the details of its architectural design and the fact that it contains a spatula. […] [I]f the distribution I use is the one that’s uniform over those regions of the phase space of the universe which are compatible both with everything I have yet been able to observe of its present physical condition and with its having initially started out with a big bang, then (and only then) there is going to be good reason to believe that (for example) spatulas typically get to be where they are in apartments only by means of the intentional behaviours of human agents, and that what human agents typically intend vis-à-vis spatulas is that they should be in kitchen drawers (Albert 2000, 94–95, emphasis in the original).

In other words, their proposal is that special-science patterns are simply those that are (albeit indirectly) heavily weighted with respect to the same probability distribution over initial conditions. So, in accordance with the fundamental laws plus this “primordial chance”, systems are overwhelmingly likely to follow trajectories that result in them jointly realising particular higher-order generalisations: our SSGs.

4.1 Problems with the AL Theory?

The extension from thermodynamic generalisations to higher-level SSGs is CC’s point of departure. They acknowledge the success of the AL theory in explaining the projectibility of thermodynamic generalisations, but reject the application to other SSGs, because they do not believe that non-thermodynamical SSGs are “likely according to the chance posited by physics” (Callender and Cohen 2010, 437). Their dissatisfaction with, and ultimate rejection of, the AL theory is built on four (related) claims.

4.1.1 Typicality is Lost in Translation

While it is possible to understand thermodynamics in terms of the trajectories of fundamental particles, and to therefore explain thermodynamic generalisations using a particular probability distribution (i.e. the Lebesgue measure), it is not so easy to similarly ‘translate’ higher-level special sciences. As to whether these SSGs would turn out to be likely on that probability distribution, CC claim that they “have no idea, and neither does anyone else” (Callender and Cohen 2010, 437). They note that trajectories that are typical with respect to one probability measure can become atypical when the probability measure is changed. And since different special sciences do use different probability measures, typicality is not automatically transferred to higher-level SSGs.
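
CC’s observation about measure-relativity is easy to verify in a toy setting: a property that is typical under the uniform (Lebesgue) measure on the unit interval becomes atypical under a measure concentrated near zero. The Beta(0.05, 1) distribution below is an arbitrary choice of skewed measure, used purely for illustration:

```python
# Typicality is measure-relative: the set {x > 0.1} is typical under the
# uniform measure on [0, 1] (probability 0.9) but atypical under a measure
# piled up near zero. Beta(0.05, 1) is an arbitrary skewed measure.
import random

random.seed(0)
n = 100_000
uniform_freq = sum(random.random() > 0.1 for _ in range(n)) / n
skewed_freq = sum(random.betavariate(0.05, 1) > 0.1 for _ in range(n)) / n
print(f"uniform measure: P(x > 0.1) ~ {uniform_freq:.3f}")  # ~0.90 (typical)
print(f"skewed measure:  P(x > 0.1) ~ {skewed_freq:.3f}")   # ~0.11 (atypical)
```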

4.1.2 Too Much Typicality

Even if some higher-level generalisations did turn out to be likely according to the Lebesgue measure on the phase space, the class of SSGs is so enormous that it seems unreasonable to expect that all of them should be typical. To hold such a view is “fantastic” and “extremely optimistic” (Callender and Cohen 2010, 440). Yet for AL’s theory to work as they propose, every SSG must be rendered typical.

4.1.3 Autonomy Threatened

Connecting the projectibility of SSGs to the fundamental physics, as the AL theory does, threatens the autonomy of the special sciences, since it seems to “amount to a long run constraint on the acceptability of laws in the special sciences” in that “those generalizations that fail to be probabilistic corollaries of fundamental physics are ipso facto not laws” (ibid., p. 439). Lawhood is not something that SSGs can earn on their own terms: whatever their merits in their own explanatory realm, it is their reduction to fundamental physics that will determine their eligibility for lawhood.

4.1.4 Too Strict, Yet Too Permissive

Finally, they make the closely related point that the AL theory implies that SSGs which are successful on other counts (i.e. those with considerable explanatory power, or the ability to support counterfactuals) would not qualify as projectible generalisations in the same sense as those which are probabilistic corollaries of the fundamental physics. “To impugn [such a generalisation] for reasons entirely external to the science in which it plays a role strikes us as very much against the spirit of autonomy” (Callender and Cohen 2010, 439). By the same token, a generalisation that is typical with respect to the Lebesgue measure may end up being classed as a law of the special sciences “even though no scientist would have reason to think it is” (Callender and Cohen 2010, 438).

4.2 Defending the AL Theory

I contend that most of these claims stem from a misrepresentation or misunderstanding of Albert and Loewer’s theory, and tackle each in turn.

4.2.1 Typicality is Lost in Translation

True, other SSGs are more ‘nomologically distant’ from fundamental physics than are the generalisations of thermodynamics. Further, there is no doubt that whichever probability distributions may be appropriate when using higher-level special sciences in their own context will likely differ from the probability distribution used in thermodynamics. In order for AL to account for the projectibility of, say, the fact that spatulas are invariably found in kitchen drawers, they need to be able to deduce from the Lebesgue measure that the trajectories of spatulas are overwhelmingly likely to be located in particular spatial regions known as kitchens.

Whether or not spatulas being in kitchen drawers is typical according to the Lebesgue measure is an empirical question. It is one that could, in principle, be settled. But if we are speaking in empirical terms, let’s be cognisant of all practical considerations. Even for a gas expanding in a chamber, it is important to remember that Boltzmann’s typicality account, upon which AL draw,

is an explanation or an explanatory scheme—not a proof. As plausible as the conclusion may be, proving it in a rigorous fashion for any particular (reasonably complex) model remains an extremely difficult and largely unresolved problem in mathematical physics (Lazarovici and Reichert 2015, 694).

Even the simplest of typicality calculations requires solving the equations of motion for $10^{23}$ particles. Expecting that such a calculation could be carried out for something as complicated and multiply realised as a spatula is unrealistic. The problem seems to be that AL are asking us to nonetheless believe that spatulas being found in kitchen drawers will be typical according to the fundamental laws and initial conditions, and CC are very reasonably protesting that such a thing cannot be shown. AL are asking for us to accept that it is very likely that SSGs will turn out to be very likely according to their schema; CC respond that they see no reason “for such confidence” (Callender and Cohen 2010, 437). Both are trading intuitions.

It is helpful at this point to be clear on what a typicality approach promises to deliver. As Lazarovici and Reichert (2015, 712–714) convincingly argue, the expectationFootnote 5 that typicality explanations will use precise microphysical assumptions to logically imply thermodynamic behaviour is impractical (for the reasons just given) and is in any case undesirable. It is undesirable because any calculation we can carry out will require the use of highly idealized models. Since we want a theory we can use in the real world (and apply to generalisations which don’t use (perfectly) natural kinds, and which have ceteris paribus clauses), it is preferable that it does not “depend too rigidly on any such specific and narrowly defined mathematical premise” (Lazarovici and Reichert 2015, 714), or our tremendous efforts will have very limited applicability.

If we cannot hope for rigorous mathematical explication of the typicality of SSGs, what can we do to move those who share CC’s scepticism? We can gesture towards effectively intractable mathematical problems until we’re blue in the face, or we can settle for something less rigorous but more attainable, and instead focus on showing that subsystems really do appear to be connected to the fundamental physics and initial conditions in roughly the way AL describe, not as a result of the equations of motion of fundamental particles, but as a result of what our best science tells us about the story of the universe and its subsystems. In the case under study, we must look carefully at the origins of the objects involved: spatulas, kitchen drawers, and the humans responsible for both, and their relations to other subsystems and to the initial conditions of the universe. From there, it is easier to see that once the fundamental laws and initial conditions are taken for granted, there is a story to be told according to which it is possible (if not probable) that complex structure (within which: intelligent beings) should exist somewhere in the universe. Given that much, it isn’t all that surprising to find that such beings produce objects with functional roles, and that these objects are generally grouped by function and location of usage. Sure, the story isn’t as clear-cut as that of the diffusion of gas through a chamber, but then nor is the generalisation itself so dependable—spatulas are also found on dining room tables, in shops, and in warehouses. Section 6 will be devoted to describing this method of naturalising the projectibility of SSGs in more detail.

A more pressing, related concern is that raised by multiple realisability. Specifically, if an SSG refers to a property or relation that is multiply realised, then the number of fundamental properties acting as realisers could in principle be open-ended, which would preclude the possibility of applying a probability measure (Callender and Cohen 2010; Fodor 1974, 1997; Weslake 2014). This is much more alarming, and gives additional reason for favouring a less formal, more pragmatic grounding of SSGs.

4.2.2 Too Much Typicality

CC bemoan the fact that “there are so many cases to cover” (Callender and Cohen 2010, 438): they are incredulous as to how it could be that all SSGs are weighted heavily with respect to the Lebesgue measure. This view overlooks the relations between these generalisations, and instead treats them as isolated, individual patterns. When one sees generalisations as part of an interconnected web of patterns, it seems reasonable that typicality in just a few—even just one (i.e. the second law of thermodynamics)—of those generalisations is sufficient to affect the probabilities of all of the others in ways that favour AL’s theory. In fact, on this view the tables turn: noting the connectedness of subsystems would leave one surprised if it were not the case that all SSGs were heavily weighted with respect to the same measure. Consider an SSG which determines the observed movements of vertebrates, another that determines the behaviour of neuromuscular junctions,Footnote 6 and then the fundamental laws that determine the behaviour of Ca²⁺ ions. These three (sets of) laws are connected in a specific way: the third determines the second, which determines the first. Understanding why these three sets of patterns are typical in our world should not be taken as three individual tasks, but one (albeit complicated) undertaking.

4.2.3 Autonomy Threatened

It pays here to examine CC’s insistence on the autonomy of SSGs from fundamental physics. They demand that the “correctness”—by which they presumably mean qualification as laws of nature—of SSGs should not “depend on their eventual vindication by the metrics of physics” (Callender and Cohen 2010, 442), because practitioners of the special sciences do not, and should not, consult fundamental physics as part of their methodological practice. Similarly, Schrenk states that the fundamental laws and properties “are not consulted in the finding process of the less fundamental [laws]. They play no role whatsoever. […] Thus, the autonomy of [special science] laws’ nomicity is guaranteed” (Schrenk 2017, 474). This is critical to CC’s theory of lawhood, which I describe in the next section. However, claiming that, in practice, SSGs are not, and should not be, formulated by consulting fundamental physics (which is both true, and sound advice), is different to claiming that SSGs are ontologically independent of the fundamental physics (which is patently false).

Without entering into the thick of the debate on reductionism, a few words as to why I think CC have unreasonable expectations of the degree of autonomy that we should grant to SSGs. SSGs do not ‘float free’; they are observation statements about classes of objects in the world, and those classes of objects bear relations to other classes of objects, some of which are more fundamental. True, special scientists need not worry about the most fundamental of those classes, but that does nothing to alter the fact that the relations exist. If you have a law which governs the behaviour of shrews, you cannot deny (even if you can, to an extent, ignore) that shrews have bodily processes and organs which are subject to strict biochemical constraints in accordance with more fundamental physical laws. So agreed, they do not consider fundamental physics, but it nonetheless bears an important relation to the objects they do study. Further: agreed, they should not consider fundamental physics. Whilst I maintain that SSGs can be derived from the laws of physics, the advantages of carrying out this derivation are few and academic, while many explanatory gains are made by operating solely at the level of interest of the SSG. And this is the most important—perhaps only important—sense in which the SSGs are autonomous: explanations for special science phenomena cannot be successfully reduced.

Consider Gresham’s law in economics, which states that “bad money drives out good” (Fetter 1932, 480). In other words, where two (or more) legal currencies exist, even if they have similar exchange value, the one that is considered to have greater intrinsic value (i.e. gold rather than paper money) is more likely to be hoarded, and thereby disappear from general circulation. Gresham’s law is an archetypal special-science law: its subject is money, which is a non-fundamental property and is multiply realised (indeed, the multiple realisation of money is precisely what the law depends upon!). Now consider that Gresham’s law ceased to hold in Zimbabwe in 2009.Footnote 7 Why? Two responses are available. The first specifies the positions and momenta of all relevant fundamental particles at some point in the 1990s, and applies the laws of physics to evolve these forward in time, with the resulting microstate corresponding to a macrostate for which—when the appropriate translations between nomic levels are applied—Gresham’s law is observed to no longer hold. The second explanation begins by citing the fact that following Mugabe’s land reforms, food and manufacturing output declined significantly, while money continued to be printed, causing the value of the Zimbabwe dollar to dwindle. In 2009, with the currency hyperinflated, it became legal to trade in other currencies. Uncertain about its future value, people began to refuse the Zimbabwe dollar, and to insist on currencies with more stable long-term values. This meant that the currency with lower perceived value (the Zimbabwe dollar) was driven out of circulation, while currencies of greater perceived value entered circulation: the reverse of Gresham’s law.
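
The mechanism in both the ordinary and the reversed case is simple enough to caricature in a toy agent model, in which agents spend whichever accepted currency they value less and hoard the other. The rule and parameters below are invented for illustration; this is a sketch of the mechanism, not a model of the Zimbabwean economy:

```python
# Toy caricature of Gresham's mechanism: spend the currency you perceive as
# less valuable, hoard the other. The 'bad' money ends up dominating
# circulation. All parameters are invented for illustration.
import random

random.seed(1)
circulation = {"bad": 0, "good": 0}  # transactions settled in each currency

for _ in range(10_000):
    wallet = {"bad": random.randint(0, 5), "good": random.randint(0, 5)}
    if wallet["bad"] > 0:        # spend 'bad' money whenever you hold any...
        circulation["bad"] += 1
    elif wallet["good"] > 0:     # ...part with 'good' money only as a last resort
        circulation["good"] += 1

total = sum(circulation.values())
for currency, count in circulation.items():
    print(f"{currency} money share of circulation: {count / total:.2f}")
# 'bad' share ~0.86. If agents may refuse the 'bad' currency outright (as
# became legal in Zimbabwe in 2009), the preference inverts and the law reverses.
```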

These explanations both refer to matters of fact that are true, but only the second is informative, in that it adequately answers the why-question posed. This is because “people”, “money”, “perceptions” etc. are not meaningful basic kinds at the fundamental level, even though they can in principle be reduced, as objects or collections of objects, to that level. Gresham’s law is expressed in terms of these kinds, the question about its violation is posed in terms of these kinds, therefore a meaningful explanation must also be given in terms of these kinds. Further, the first ‘explanation’ fails to provide important context which is imperative to the explanation: for one, it does not tell us why Gresham’s law failed to apply in 2009, rather than at any other time. In other words: “the explanation of the higher order state will not proceed via the microexplanation of the microstate which it happens to “be”. Instead, the explanation will seek its own level” (Garfinkel 1981, 59). This, as far as I can see, is entirely compatible with the sort of autonomy that Cohen and Callender encourage (2009), upon which their BBS is based. It is unclear to me why, in light of this, they should be concerned about ontological relatedness, given that it need not threaten the nomicity of SSGs.

To summarise: SSGs are reducible ontologically—i.e. in terms of what they are—to microphysics and the initial conditions, but their basic kinds, and the explanations that are expressed in terms of these kinds, are not eliminable by the fundamental physics. From the scientist’s perspective, this second sort of autonomy is surely more valuable. But when we ask, why is such-and-such law projectible, that question must be answered in terms of what the law—that is, the objects it cites and the generalisation it expresses—consists in. In this way, Albert and Loewer’s desire to reference fundamental physics and initial conditions is entirely appropriate.

4.2.4 Too Strict, Yet Too Permissive

CC are concerned that the stipulation that SSGs are probabilistic corollaries from the fundamental physics will (a) automatically deprive some ‘true’ SSGs of candidacy for lawhood, despite their other ‘lawlike’ virtues, while (b) allowing other generalisations, which play no role in science, automatic elevation to lawhood status simply on the basis of their typicality with respect to the Lebesgue measure. The class specified by (a) is empty: there are no such SSGs. All ‘true’ SSGs—if ‘true’ means formulated by the normal practices of special scientists, which do not take into account fundamental physics—will be typical with respect to the Lebesgue measure, for the reasons given in the previous three points. As for (b), other than ‘true’ SSGs, I’m not entirely sure what kind of generalisations would fall within such a class. But in any case, nobody has suggested that every generalisation that is typical with respect to the Lebesgue measure is a law of nature. After all, Albert and Loewer, like CC, subscribe to a Best-Systems theory of lawhood, and would therefore be looking to considerations of simplicity and strength, rather than typicality, in order to discover laws of nature. They simply claim that the laws of nature resulting from the Best-Systems analysis are necessarily typical at the fundamental level.

None of the above is to suggest that the AL theory is trouble-free. AL rely on microphysical probabilities to ensure that the SSGs are projectible. They are not wrong, but, as I have discussed, the task of demonstrating that they are right—that is, of showing how statistical mechanical probability distributions render macroscopic propositions likely—is a strange and difficult one, and they do not give a rigorous account of how this might be done.

In principle, I think Albert and Loewer have the correct idea: the objects and patterns described by SSGs are traceable to early conditions of the universe. However, I think they can achieve the same effect by pursuing the historical relations between SSGs and the early universe, rather than pursuing the elusive, and so far vaguely-stated, statistical link. Further, they do not offer an account of special-science lawhood that recognises the explanatory autonomy of SSGs. Whilst this may seem to be a very weak demand compared with the mammoth task of connecting SSGs of modern subsystems to the conditions of the early universe, it is a vital one, for without it we are left only with the promise of intractably complex statistical mechanical explanations for macroscopic events. In short, whilst the AL theory may be addressing real relations between levels, it cannot be the entire story, since our science obviously operates in ways that entirely bypass their theory. In other words, something in the spirit of the AL strategy seems to be necessary for accounting for SSG projectibility, but it is not sufficient.

Compare the AL theory with the desiderata posed in Sect. 3:

  • Projectibility. SSGs are projectible within the AL theory because the probability of the generalisations holding is always high when conditionalised on the PH and SP. SSGs simply trace out trajectories in state space that are objectively likely.

  • Supervenience. AL is compatible with supervenience, since it does not require anything over and above the Humean mosaic.

  • Autonomy. AL does not explicitly support methodological or explanatory autonomy of the special sciences, and, depending on how one conceives of the theory, may preclude methodological autonomy. These remain open questions. Ontological autonomy is denied.

  • Conspiracy. The AL theory is conspiracy-free, since it shows that particles simply move to and fro according to the fundamental laws, and patterns emerge because certain macro-behaviours turn out to be more likely than others when conditionalised on PH and SP.

5 CC’s Theory of Special Science Lawhood

CC’s theory of lawhood—the “Better Best System” (BBS) approach—rivals the AL theory in its ability to cover the projectibility of SSGs. Their theory is also based on the MRL Best System model of lawhood, but instead of the traditional “stipulative” MRL model, in which a universal set of natural kinds for lawhood is specified, CC propose a “relativised” MRL, within which the set of kinds to which the axioms of the system may refer is selected on a case-by-case basis, relative to the particular explanatory needs presented by the situation.Footnote 8 With K as the chosen set of basic kinds, the approach is summarised thus:

a true generalization is a law relative to K (/PK [a specific choice of basic predicates]) just in case it appears in all the immanently Best Systems relative to the basic kinds K (/basic predicates PK) (Cohen and Callender 2009, 20).

BBS is committed to embracing the diversity of kinds within science. Therefore, rather than pursuing a single set of laws of nature, articulated in a particular set of (e.g. perfectly natural) kinds, BBS encourages the pursuit of multiple sets of laws, each successful with respect to regional—i.e. domain-specific—MRL simplicity-strength-balance competitions. Then laws of nature are declared relative to each area of science. This sits well with the practices of science: biologists do not consider physics when devising laws to best systematise biological patterns.

CC do not aim to resolve the conspiracy problem, but to dissolve it: to come to terms with conspiracies and learn to accept them. They “believe that a certain amount of ‘conspiratorial’ behavior is virtually inevitable if we demand that the higher-level is supervenient upon and yet autonomous from the lower-level but refuse to add anything designed to nomically ensure cooperation between levels” (Callender and Cohen 2010, 443). At this admission, their account begins to collapse into Fodor’s “embarrassing” resignation.

They suggest an interpretation which embraces the relativism of conspiracies. Ecologists, they say, are faced with other conspiracies: “Why do the rabbits fall down rather than up? Why do the rabbits’ speeds attain a maximum value where they do?” (Callender and Cohen 2010, 444). SSGs make no mention of dynamical or gravitational laws, yet massive special-science objects mysteriously follow predictable trajectories. From the vantage point of one scientific domain, projectible behaviour in (almost all) other domains will appear conspiratorial. Their conclusion is that “if every Best System has a conspiracy problem, then no Best System does” (Callender and Cohen 2010, 445).

I think they have sidestepped the problem, and offered solidarity rather than solution. For one, there is no need to “add anything” to ensure “cooperation between levels” because the cooperation is already there: systems are not isolated; they interact with other systems and are composed of subsystems. Epidemiologists who deal in regularities regarding disease transmission within populations must necessarily consider the nature and properties of the disease vector, and their epidemiological laws are directly contingent on those microscopic details; market researchers expect that market-level trends will be connected with patterns in the behaviours of consumers. CC’s multiplying of conspiracies does nothing but obfuscate matters: there is no real conspiracy at any level, and the illusion must be explained away. The fecundity of conspiracies only makes it more urgent that they be explained. Referring to the practices of scientists in particular limited situations isn’t helpful or representative: it is testament to the independence of domains that they can be identified and put to use while treating the remainder of science as a black box, but the black-box idealisation is just that, and relations to other domains can be reinstated if required. Special-science laws were welcomed in order to pay tribute to the (explanatory) value of different areas/levels of science, not to set up misunderstandings where there are none.

Central to the BBS theory is the democracy of domains of study within the composite field that is science; CC do not recognise the nomological hierarchy that is often taken for granted in mainstream philosophy of science. For this reason, to ask that their theory explain the relationship between fundamental physics and SSGs is to miss their point. Moreover, the conspiracy “problem” is only such if you have some prior, independently justified, idea that (a) there are levels in science, and (b) there are links between them, and a privileged direction of fundamentality (as a structural feature, if nothing else). CC do not acknowledge either of these, and are therefore under no obligation to account for apparent microscopic conspiracies. However, the flip side is that should there be good evidence for a link between the levels—a failure of autonomy of some kind—BBS will fall fatally short in accommodating this. As far as I can see, there is good evidence for both (a) and (b), which means that the unmodified BBS needs to say more. I’ll describe this evidence in Sect. 6. Meanwhile, there are several other concerns with their account.

First, and in spite of the democracy just described, CC patently steer clear of full-blooded pluralism with respect to the domains of science. Their desiderata betray this, and they are led into contradiction: first, they ask for supervenience to be respected—“we assume that ecological (and other higher-level) kinds supervene on lower-level physical kinds—there is no élan lapin whose exemplification fails to be fixed by the distribution of fundamental physical kinds” (Callender and Cohen 2010, 429)—yet they go on to claim that there is no “unique locus of projectibility” (Callender and Cohen 2010, 445); second, they ask for a theory of SSG projectibility that avoids “positing outlandish conspiracies” (Callender and Cohen 2010, 429), yet they later conclude that conspiracies are “virtually inevitable” (Callender and Cohen 2010, 445). A pluralist (i.e. someone who believes there is no privileged set of laws and kinds that trumps, or underwrites, other laws and kinds) might be expected not to engage with the worry about conspiracy between domains, but such a person could not then demand supervenience of all domains upon a privileged set of laws or kinds. This tension leaves CC’s conception of the BBS theory looking confused.

CC are Humeans: they start from the fact that the matters of fact in the world may be systematised in a number of different ways according to one’s choice of base kinds, and no one systematisation is better than any other. I agree with them: there is simply no basis for claiming that one system is a superior system, or the ‘true’ system. To posit that would require something independent of the mosaic, which negates the exercise. However, I do not understand why a Humean would deny that there are explicable relations between different systematisations of the matters of fact. We should expect relations between the levels: the data is the same! As Callender himself says elsewhere, “they are each just different windows onto the same world” (Callender 2011, 112). The aim of a Humean scientist is to discover patterns in the mosaic and develop systems which describe those patterns. When a Humean scientist discovers patterns within a domain that are not accounted for by that system—but which would critically harm the simplicity of the system if directly incorporated—she should look for the system that describes those patterns, and seek out the relation between the two systems. Neither system is devalued by such a relation, nor is one system privileged above the other. Humeans deal in patterns; they should be consistent and extend this analysis to meta-patterns.

As before, BBS can be measured against the desiderata for a theory of special-science projectibility:

  • Projectibility. SSGs are permitted to be laws if they triumph over competitor regularities in their nomic peer group in terms of simplicity and strength. However, no explanation of SSG projectibility is offered.

  • Supervenience. BBS is compatible with supervenience, since it caters for the projectibility of SSGs without requiring anything over and above the Humean mosaic.

  • Autonomy. Laws are selected as a result of competitions within their own area of relevance; fundamental physics is at no point consulted: so we have methodological autonomy and we have explanatory autonomy. However, we also seem to have ontological autonomy, or, at the very least, an account of ontological dependence is not offered.

  • Conspiracy. BBS accepts (indeed, expects) inter-system conspiracies, and does not deem this to be problematic.

6 Buttressing AL by Considering the Genealogy of Subsystems

While an anarchist theory (such as BBS) should be adopted as a basis for understanding the lawhood of SSGs, because it accords with the way in which we develop and use generalisations of higher-level patterns, such a theory does not provide any understanding of the patterns themselves, or of how they relate to one another. These remaining desiderata are vital to a complete theory of special-science lawhood: without them, the patterns emerging from the Humean mosaic have not been fully characterised, but are arbitrarily cut short. Since the aim of the Humean is to systematise patterns, the project is not complete until all patterns have been accounted for. We can meet these aims by couching the BBS approach within a theory that explains the relationship between SSGs and the fundamental laws and initial conditions. Such a theory need not—indeed, should not—privilege any set of laws over any other, but seek only to understand the projectibility of SSGs, and to explain why each SSG takes on the form it does, rather than any other. The AL theory promises such a link, but does not make clear how the chances used in statistical mechanics are to be applied to SSGs. In this section I attempt to rehabilitate the AL theory by using a different strategy to motivate the typicality claim and thereby dispel the conspiracy.

I am not the first to attempt such a task. Schrenk’s (2017) project is similar to mine: he too finds BBS to be an attractive way of allowing SSGs to autonomously earn their status as laws, but looks elsewhere to ground their projectibility. His strategy is to describe an emergence relation which shows how SSGs might supervene upon more fundamental laws. In brief, he suggests that for any given SSG of the form ‘all Fs are Gs’, the properties F and G have constituent parts which are subject to their own lower-level laws (generated by other BBS competitions). The possibility space of the properties of these constituent parts is restricted by what is nomologically possible according to these lower-level laws, and those laws restrict the possibilities for the F and G that they constitute such that it turns out that all Fs are Gs. Schrenk thereby offers a possible dependence relation for one set of laws upon another set of laws. He is careful not to overstate the clout of his relation, since whether or not it holds “is a contingent, empirical question on which I wish to remain silent” (Schrenk 2017, 7); the point is to show that the autonomy of SSGs and their dependence on more fundamental laws can be conferred independently.

My approach differs from Schrenk’s, but I see no reason why the two accounts should not be compatible. He proposes a formal dependence relation but does “not make the factual claim that it is actually the case” (Schrenk 2017, 475, Fn. 7). In what follows, I propose an informal dependence relation which operates via conceptual, scientific facts to show how SSGs might be grounded in both fundamental laws and the initial conditions. My approach is more empirical, but is ultimately, like his, merely illustrative. One advantage my account may have over Schrenk’s is that his falls prey to one of the worries that plagues the AL theory: there is no reason to believe that the work needed to demonstrate that his supervenience relation holds (even for just one case) is any less onerous than the work needed to demonstrate typicality of SSGs. Anarchist sceptics will likely be unmoved.

My “naturalised” approach is to focus on explicating the relationship between levels within the sciences using what science itself tells us about those relations. My strategy for doing so is to probe the origins of the special-science subsystems, both ontologically and historically. In a previous section I showed that CC’s objections to the AL theory are for the most part insupportable. Here, my aim is to show that the spirit of the AL theory—that SSGs are likely because of their relationship to the fundamental laws and initial conditions—can be grounded via another method. As I have said, calculations required to rigorously demonstrate that all SSGs are typical with respect to the Lebesgue measure are too lengthy and complex to be tenable. There is no substitute for such quantitative demonstration, but there are ways of convincing oneself, heuristically, that the apparent conspiracy is no such thing, because all SSGs can be linked to the PH, SP, and the history of the world from the Big Bang until the times at which they obtain, in a way which renders something like the AL theory very probably the correct one. Similar ideas, albeit in different contexts, may be found in MillikanFootnote 9 (1999) and Rohrlich (1988). I call my amendment the “subsystem genealogy” (SG) amendment to the AL theory, and note its compatibility with CC’s BBS in offering a Humean theory of lawhood that permits, and explains the existence of, laws of the special sciences.

The idea is that a subsystem of the universe to which an SSG applies has a unique phase-space trajectory that is determined by its causal history: by its current environment, and by the historical interactions of both the system and its environment. It is isolated in neither space nor time. This means that not “anything” goes: the subset of physically possible configurations of a subsystem now (i.e. of how laboratory gases, plants and stars actually behave) is narrow compared with the set of logically possible configurations. Certain behaviour isn’t observed because it isn’t consistent with the occurrence of earlier cosmological events, or with the existence of other subsystems to which it is related. One way to understand a subsystem’s current properties is to trace its thermodynamic ancestry.

The trick is to find the story of each SSG in the broader historical development of modern structures and processes—i.e. what its fundamental physical ancestry is—and to show that its existence is critically dependent on the PH and SP. I will take an example from biology and attempt to trace its physical analysis and ancestry.

6.1 Example: Breaking Down Metabolism

Recall that Kleiber’s law states that the metabolic rate of an organism, B, is proportional to the organism’s mass, M, raised to the power of three quarters, with a proportionality constant $B_{0}$:

$$B = B_{0} M^{3/4}$$

I will use this example to attempt to tease out what is meant by a “conspiracy” in the sense of Fodor (1997) and Loewer (2009), and to try to dissolve it. In this case, the alleged “conspiring” goes on between fundamental particles in cell organelles, which somehow ‘know’ to scale the rate of the chemical reactions within the organism to which they belong according to its total mass. One might be incredulous as to how they ‘know’ the total mass, or how they communicate that knowledge to each other in order to pull off the harmonised stunt. Put this way, there is indeed a sense in which the science seems to have gone awry: the particles behave as though they were ‘aware’ that they must cooperate to bring about certain stable, macroscopic patterns. Worse, not only must the regularity be operational within single organisms, it must also apply as a general allometric law across almost all animals. So the conspiracy in one organism must be calibrated with the conspiracies in all others! No wonder that Fodor is in awe of how these “unimaginably complicated to-ings and fro-ings of bits and pieces at the extreme micro-level manage somehow to converge on stable macro-level properties” (Fodor 1997, 160).

Of course, the conspiracy is seen as such only by those who are gazing myopically at the fundamental particles and trajectories, and neglecting to consider other important matters of fact: notably, that the particles are part of a system which is in turn part of a larger system. Had the fundamental particles been placed in a container and spontaneously united to produce a number of organisms, all with masses and metabolisms obeying Kleiber’s law, surprise would be in order. But note the following:

  1. The particles belong to a system which is not in equilibrium with its environment: its entropy is, in general, lower than that of its surroundings.

  2. The system maintains its low entropy state by undergoing metabolic processes, whereby low-entropy matter is ingested, and high-entropy waste products are excreted (the second law of thermodynamics dictates that this is the only way that any system can both do work and maintain a lower entropy than its surroundings).

  3. The metabolic processes are controlled by complex feedback mechanisms,Footnote 10 otherwise the low-entropy state could not be maintained for reproductive timescales, which would have precluded the organism’s evolution. These metabolic pathways were settled upon and refined by the ordinary natural selection process, by which the genetic material which corresponded to feedback pathways that were most advantageous to the survival and reproduction of the individual was most likely to be present in successive gene pools.

  4. Smaller organisms tend to have larger surface-area-to-volume ratios (bear in mind that physiological features are not as varied between animals as is their mass), and therefore dissipate heat faster, per unit mass, than larger organisms. This means they must generate heat faster (per unit mass) than larger organisms in order to maintain homeostasis and thereby sustain life. So as heat is radiated away, the feedback mechanisms detect changes in temperature, and adjustments are made to the rate of reactions within cells. Therefore, in smaller organisms, there will be a higher rate of exothermic reactions (i.e. an increased basal metabolic rate) per unit mass (see the numeric sketch after this list).
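
Point 4’s geometry is easy to check numerically. Treating organisms very crudely as spheres of uniform, water-like density (an assumption made purely for illustration), surface area per unit mass falls as size grows:

```python
# Crude check of point 4: for spheres of equal density, surface area per unit
# mass shrinks as size grows, so small bodies shed heat faster per gram.
from math import pi

DENSITY = 1000.0  # kg/m^3, roughly that of water; a crude stand-in for tissue

def area_per_mass(radius_m: float) -> float:
    area = 4 * pi * radius_m ** 2
    mass = DENSITY * (4 / 3) * pi * radius_m ** 3
    return area / mass  # equals 3/(DENSITY * r): scales as M**(-1/3)

for label, r in [("mouse-sized", 0.02), ("cat-sized", 0.09), ("antelope-sized", 0.4)]:
    print(f"{label:>14}: {area_per_mass(r):.4f} m^2 per kg")
# Falls roughly twentyfold from mouse-sized to antelope-sized: larger bodies
# lose less heat per unit mass, so they need a lower mass-specific rate.
```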

Yet why does Kleiber’s law take the specific mathematical form that it does? I see this as a particularly pertinent question in the context of AL’s inability to mathematically ground their theory, since, if it can be answered, it gestures to the way in which mathematical regularities (presumably including probabilistic ones) may be conceptually ‘translated’ between nomological levels.

To understand this, one needs to appreciate the intricacies of the geometry of evolved vascular systems. First, the rate of metabolism is determined by the flow of resources through the organism. In practice, this means the rate at which sugars are transported in the bloodstream to respiring cells.

Contrary to the most obvious intuition, one cannot derive the correct exponent simply by scaling the vascular system with size, which would assume that the basal metabolic rate was proportional to the surface area of the organism. Rather, the resource distribution system is found to be a “hierarchical branching network”.Footnote 11 Transport networks in organisms, through their fractal structure, create an additional de facto “internal” dimension. This means that the effective area of exchange scales as $r^{3}$ and the effective volume of exchange as $r^{4}$. Since the basal metabolic rate is proportional to this effective exchange area, it turns out that $B \propto M^{3/4}$. Why the fractal? Natural selection favours maximised metabolic capacity by maintaining transport networks that occupy a fixed percentage (about 7%) of the volume of an organism’s body (West et al. 1999). Maximising the metabolic rate gives the three-quarters exponent. As masses get bigger, metabolism becomes more efficient per unit mass.
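Spelled out, with $A_{\text{eff}}$ and $V_{\text{eff}}$ denoting the effective exchange area and volume just described (notation mine), the exponent follows from two proportionalities:

$$B \propto A_{\text{eff}} \propto r^{3}, \qquad M \propto V_{\text{eff}} \propto r^{4} \quad\Longrightarrow\quad B \propto \bigl(M^{1/4}\bigr)^{3} = M^{3/4}$$

This is only the skeleton of the West et al. (1999) derivation, but it locates exactly where the fractal “internal” dimension does its work.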

One might argue that I have already faltered in my attempt to ground an SSG in just the fundamental laws and initial conditions by calling on natural selection, which is itself an SSG.Footnote 12 However, biologists working on abiogenesis have demonstrated that natural selection in fact operates as a molecular law (Follmann and Brownson 2009; Mills et al. 1967). Molecules better suited to binding with other molecules, remaining stable, or possessing any other property advantageous to replication and domination of the material within the primordial soup are more likely to ‘survive’ and populate the next ‘generation’. Given that these chemical properties supervene directly on the microphysical atomic properties, it seems that the earliest, simplest forms of natural selection can be grounded in the fundamental laws. The genes of animals whose metabolism is subject to Kleiber’s law are much more complex, but they are the descendants of those simple self-replicating molecules and of their instantiation of this law. (Helpfully, in this case, genetics is an innately historical area of science.)
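As a caricature of this molecular selection process, consider the following sketch in Python. The molecule labels and ‘fitness’ values are invented for illustration, not drawn from the abiogenesis literature; the point is only that fitness-weighted replication suffices for one replicator to come to dominate.

```python
# A toy of molecular natural selection: replication weighted by chemical
# 'fitness'. Species labels and fitness values are invented, standing in
# for the stability and binding properties of early self-replicators.
import random

fitness = {"A": 1.0, "B": 1.5, "C": 2.0}     # hypothetical replication propensities
pool = ["A"] * 50 + ["B"] * 30 + ["C"] * 20  # initial 'primordial soup'

random.seed(0)
for _generation in range(20):
    # Each generation resamples the pool in proportion to fitness:
    # better replicators populate more of the next generation.
    weights = [fitness[m] for m in pool]
    pool = random.choices(pool, weights=weights, k=len(pool))

print({m: pool.count(m) for m in fitness})  # the fittest replicator dominates
```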

Now to rule out the conspiracy. If the conspiracy derives from a sense of mystery at the way in which the fundamental particles bring about the macroscopic pattern, then the points above should be a solid starting point for removing this mystery. Start by abandoning the vague notion of “fundamental” particles, and focussing instead on molecules (which supervene, not too distantly, on the fundamental particles). Consider the metabolic pathway (one of many hundreds) by which the organic molecule cholesterol is produced: an eleven-stage, enzyme-facilitated reaction sequence which starts with the organic molecule acetyl coenzyme A (acetyl CoA) (Alberts et al. 1994, 83). This is a chemical reaction which could, in principle, be carried out under (meticulously) controlled conditions in the laboratory: it involves no more mystery or conceptual difficulty than many laboratory chemical experiments. In a cell, each pathway occurs for the same reason that it would in the laboratory: when certain molecules are in proximity to one another under the correct physical conditions, they react in ways that are dictated by their electron chemistry. The acetyl CoA doesn’t ‘know’ it is inside a cell any more than it ‘knows’ it is inside a mouse, or a cat, or a petri dish; it behaves strictly according to established chemical laws, which are—in comparative terms, at least—very well-understood from the point of view of fundamental physics. However, the role played by the cell is to manage the conditions and rate of the pathway using other biological pathways. Should, say, the acidity of the cell become unfavourable, other reactions will normalise the pH; should, say, too much cholesterol be produced, this will be detected and the activity of the enzymes which facilitate the reaction pathway will be reduced. For the same pathway in different animals, the activity of a particular enzyme will differ since the negative-feedback loop will activate at different levels of product, according to the different energy needs of the animal (determined partly by mass, according to Kleiber’s law).
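A toy model may help fix ideas about this feedback. The following sketch is emphatically not the kinetics of the cholesterol pathway: the linear inhibition rule, the parameters, and the ‘set-point’ construct are assumptions chosen purely to illustrate how one and the same mechanism yields different enzyme activities for different energy needs.

```python
# A toy negative-feedback loop for an enzyme-facilitated pathway.
# Purely illustrative: the parameters and the linear inhibition rule are
# assumptions, not the measured kinetics of cholesterol synthesis.

def simulate_pathway(set_point, base_rate=1.0, degradation=0.05,
                     steps=2000, dt=0.1):
    """Evolve product concentration under product-inhibited synthesis.

    set_point stands in for an animal's energy needs: the product level
    at which feedback fully suppresses the enzyme.
    """
    product = 0.0
    activity = 1.0
    for _ in range(steps):
        # Feedback: enzyme activity falls linearly as product approaches
        # the set-point, mimicking detection-and-adjustment in the cell.
        activity = max(0.0, 1.0 - product / set_point)
        synthesis = base_rate * activity
        product += (synthesis - degradation * product) * dt
    return product, activity

# Same pathway, different 'animals': a higher set-point yields a higher
# steady-state product level and higher residual enzyme activity.
for set_point in (5.0, 20.0):
    product, activity = simulate_pathway(set_point)
    print(f"set-point {set_point:5.1f}: product {product:5.2f}, "
          f"enzyme activity {activity:4.2f}")
```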

Crucially, even with the same fundamental laws, metabolic reactions would not exist (nor would animals, or anything particularly interesting from a dynamical perspective) were it not for the existence of out-of-equilibrium sources of low-entropy energy, such as our sun.Footnote 13 All of the thermodynamic processes with which we are familiar, and upon which life-sustaining metabolic processes depend, entail an increase in entropy, and biological systems require low-entropy energy in order to create (locally entropy-reducing) complex structures. The sun provides the source of low-entropy energy, which is fixed into complex organic molecules by plants through photosynthesis, and then released via respiration within those plants or the animals that consume them (and so on), releasing higher-entropy energy but permitting the (smaller) localised reductions in entropy required for the creation and maintenance of complex organic structures. In order for these processes to be possible, the overall entropy ‘budget’ has to be sufficiently far from equilibrium as to permit future entropy increases, which is why the universe’s very low-entropy initial state (PH) is so important.
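The entropy bookkeeping implicit in this paragraph can be stated in a single line: the Second Law constrains only the total, so a local decrease is permitted whenever the environment absorbs a compensating increase:

$$\Delta S_{\text{total}} = \Delta S_{\text{organism}} + \Delta S_{\text{environment}} \geq 0$$

Hence $\Delta S_{\text{organism}} < 0$ is possible only if $\Delta S_{\text{environment}} \geq \lvert \Delta S_{\text{organism}} \rvert$, which is precisely what ingesting low-entropy matter and excreting high-entropy waste achieves.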

In some general sense, then, the AL account is on the right track: SSGs can be understood through their relations to the fundamental laws and initial conditions. This is not the same as saying that SSGs are likely when you conditionalise on the fundamental laws and the initial conditions; but the former is much more easily established than the latter, and once the former is in place there is no reason to doubt the latter, and good reason to support it. So Albert and Loewer are partially right, but they set themselves a practically infeasible project when they could instead justify their claims in terms of subsystem genealogy. Is this enough to ground claims about SSG typicality? That depends. I doubt I have succeeded in making CC any more confident that AL’s probability claims will come out right every time, not least because it is still not clear what those calculations would look like, or whether they are even calculable, if the set of realisers is not closed. However, I hope I have done enough to make them realise that they need not worry about those calculations, because there are other ways to have confidence in the relationship between SSGs and the laws and initial conditions from which they derive. So while the AL theory might not be established, it has hopefully been buttressed. Another advantage of my account is that grounding SSGs in their conceptual, scientifically observed relationships to other subsystems and the initial conditions avoids the worry about multiple realisability. Whatever the bases for the objects whose patterns SSGs pick out, stories such as the one just told ought to be possible, and their genealogies will have functional similarities even if their exact details differ.

One might object that this is all well and good, but that I have simply demonstrated something other than the AL theory. There is, after all, a difference between showing that an SSG is typical (according to a particular measure, more to the point) and giving an explanation in light of which it seems more probable.Footnote 14 AL seek to establish the first, and CC charge them with having failed to do so; I have provided the second. To this, I have several responses. First, the distinction between the two need not be so clear-cut. Typicality, as it is used to establish thermodynamic behaviour, operates as a probabilistic explanation: it explains, on the basis of combinatorial arguments, why thermodynamic behaviour is overwhelmingly likely (Lazarovici and Reichert 2015). My story about Kleiber’s law seems to do something similar (albeit on the basis of some vague sense of likelihood in relation to scientific objects and processes as a whole, rather than a particular measure). Indeed, many theories of explanation require that an explanation renders the explanandum expectable, increases its probability, or reduces the surprise of the explainee (e.g. Hempel 1965; Mellor 1976). Demonstrating typicality for an SSG is not so easily distinguished from explaining it, and one would hope that the two are correlated. My explanation of Kleiber’s law—if it is indeed explanatory, which only the reader can judge—makes the law seem expectable to any charitable reader, and it does so by reference to the fundamental laws and initial conditions. In any case, if I really have established something other than the AL theory, so be it; it may bring a fresh perspective to a debate in which locked intuitions have led to something of a stalemate.

The single, long-winded explanation just offered glosses over many complicated details, but hopefully it helps to combat the charge of conspiracy by indicating the directions in which thorough, piece-by-piece scientific dissections of SSGs might proceed. It also shows that the initial-condition conspiracy can be straightforwardly averted by considering the ways in which subsystems descend from the initial conditions, according to the fundamental laws.

7 Conclusion

If the subsystem genealogy amendment to AL is coupled with an anarchist theory of SSGs, we are in possession of a theory of special-science projectibility that is at once able to accept laws of the special sciences, acknowledge and protect their explanatory autonomy, and forgo their ontological autonomy in order to explain their operation. Let’s measure this theory against the desiderata:

  • Projectibility: SSGs are projectible, since BBS is set up that way. But we can also explain why they are projectible, by considering that the histories of systems and their environments severely restrict the ways in which they can behave.

  • Supervenience: The theory is compatible with supervenience: there is the Humean mosaic, and there are laws, which are systematisations of the facts and events within the mosaic.

  • Autonomy: The SSGs are methodologically autonomous from the fundamental physics because they are selected as a result of simplicity-strength competitions which are blind to the lower levels. But note that this blindness is only a black-box construct: since AL rejects ontological autonomy, the laws are selected in the knowledge that they do bear particular relations to the fundamental physics. So methodological autonomy is, in a certain sense, superfluous, but it still has practical value. Explanatory autonomy is supported.

  • Conspiracy: The conspiracy is defeated because we acknowledge the connectedness and origins of subsystems within the universe at all scales, and therefore understand why their constituent particles are bound to behave in particular ways, ways which turn out to be the SSGs.

The table below summarises the merits of the three theories against the virtues required of a successful theory of special-science projectibility. Noting that ontological autonomy is expected to be lacking if one believes there is unity in science, the amended version of AL emerges triumphant.

 

                           AL     BBS    AL + SG
Projectibility             Yes    Maybe  Yes
Supervenience              Yes    Yes    Yes
No ontological autonomy    Yes    No     Yes
Methodological autonomy    No     Yes    Yes
Explanatory autonomy       Maybe  Yes    Yes
Conspiracy-free            Yes    No     Yes

The table seems to indicate that when ontological autonomy is renounced, conspiracies can be defeated, and that wherever ontological autonomy is insisted upon, conspiracies persist. This correlation is not an accidental feature of the accounts that have been explored; rather, it reveals the necessity of ontological relations for militating against ostensible mismatches between different perspectives on the Humean mosaic. The details of that correlation are not yet firmly in place, but it pays to pose the question this way: is there a plausible way of saying “yes” to ontological autonomy and “no” to conspiracies? Is there a plausible way of understanding, or setting the stage for, conspiracies except in relation to, or as formulated through, failures of ontological autonomy? If there is not, are approaches such as CC’s set to fail as a semantic, definitional matter? My instinct is that they are. If I am right, this does not make AL (or its amended version) trivial, since the connection between projectibility and ontology still needs explaining. In other words, the contention, or suspicion, that the unity of science is the right way to understand the mosaic, and that the laws should be understood accordingly, does not shorten the road ahead; it only ensures that it is the right one.

Kleiber’s law is just one skeletal example; telling the story of each subsystem and the patterns in which it is implicated is a complicated task for scientists. Philosophers need only be convinced that it can be done; as in Carnap’s project, the in-principle possibility of the grounding is what is important: the rest is just detail (Carnap 1967). As Lazarovici and Reichert point out, while the effort to corroborate insights mathematically is valiant, “Good physics and good philosophy of physics […] is also about appreciating where our understanding of an issue depends on rigorous formalization and technical proof and where it doesn’t” (Lazarovici and Reichert 2015, 714).

To conclude, then: subsystems of interest are inextricably related to the global universe system by (1) interactions with their immediate environment, and (2) their genealogy and the historical interactions therein, as revealed by taking due account of, for example, the cosmological ancestry of the subsystem. Adopting a Humean view of laws that is relativised with respect to domains of study within science allows generalisations of the special sciences to be admitted as laws of nature provided they have the best combination of simplicity and strength within their own domain. In this way, it brings into focus a unified picture of science, best seen as a collection of facts arranged into layers of generalisations, each legitimised by its current utility (e.g. its role within explanations), but inextricably linked to the other layers, not least because all are ultimately ontologically reducible to the Humean mosaic.