1 Introduction

It is common to encounter philosophers who recommend replacing one principle concerning theory choice, Ockham’s Razor:

OCKHAM’S RAZOR: Don’t multiply entities beyond necessity.

with an alternative principle that Schaffer (2015) dubs the Laser:

THE LASER: (i) Don’t multiply fundamental entities without necessity, and

(ii) Multiply non-fundamental entities all you like.Footnote 1

The Razor (as I’ll call it) and the Laser are associated with differing conceptions of what ontological parsimony (qua theoretical virtue) consists in: parsimony with respect to total ontology and parsimony with respect to fundamental ontology, respectively. Thus the Razor reflects the thought that it is theoretically advantageous to reduce the number of entities in one’s ontology as much as possible. The Laser instead reflects the thought that it is only theoretically advantageous to minimise the number of fundamental entities in one’s ontology, and that there is no theoretical advantage to be had in minimising the number of non-fundamental entities one posits.Footnote 2

An impressive battery of arguments has been brought to bear in favour of accepting the Laser over the Razor. Broadly speaking, they divide into three categories. Arguments from the nature of non-fundamentality attempt to motivate the Laser by appeal to various observations about what it is to be non-fundamental. Arguments from cases describe hypothetical or actual cases, and allege that only the Laser accords with our intuitive judgements about them. Arguments from analogy claim that ontological parsimony is analogous to conceptual economy, and that this analogy recommends the Laser.

My aim in this paper is to argue that all of these arguments fail. In doing so, I’ll demonstrate that those antecedently sympathetic to the Razor have no reason to ‘swap sides’ and accept the Laser instead.Footnote 3 I begin with arguments from the nature of non-fundamentality.Footnote 4

2 Arguments from the nature of non-fundamentality

Three supposed features of the nature of non-fundamentality have been cited in support of the Laser. Those features are: that the existence of non-fundamental entities is necessitated by that of the fundamental entities; that non-fundamental entities are in an important sense ontologically innocent; and that non-fundamental entities play no role in fundamental explanations. I’ll consider each such way of motivating the Laser in turn.

2.1 The argument from necessitation

Bennett proposes an argument for the Laser based on the observation that the existence of the non-fundamental entities is typically claimed to be necessitated by the existence (and nature) of the fundamental entities. She writes:

Let T− and T+ be two theories that agree on all fundamental matters. According to T− that’s all there is; according to T+, there are also a variety of nonfundamental matters. My claim is that T+’s extra ontological commitments do not tell against its simplicity in a way that makes it less likely to be true […]. The key point is that according to T+, its extra ontological commitments are necessitated by the fundamental matters. Thus T+’s statements about the non-fundamental matters NF are—by its lights—entailed by statements about the fundamental matters F. And the following is a theorem of the probability calculus:

if A ⊢ B, Pr(A) = Pr(A&B).

It follows that according to T+, the probability of F is the same as the probability of F and NF. This means that—again according to T+—T+ is exactly as likely as T−.

(2017: p. 223, emphasis original).

Bennett concludes from this that adding non-fundamental posits to a theory does not make that theory any less likely to be true (as long as the theory says that these posits are necessitated by the fundamental entities already posited) (2017: p. 225). On the innocuous assumption that theoretical virtues are those the possession of which makes a theory ceteris paribus more likely to be true, Bennett’s argument would, if successful, establish that we should take the relevant theoretical virtue to be parsimony qua minimising only fundamental entities, rather than parsimony qua minimising entities simpliciter. This is in line with the Laser, not the Razor.Footnote 5

The problem with Bennett’s argument is that it misconstrues what the probability calculus tells us about how entailment claims affect probability.Footnote 6 Consider again the theorem that Bennett cites:

if A ⊢ B, then Pr(A) = Pr(A&B).

In order to use this theorem in support of the Laser, we’d have to read it as telling us to disregard the effect of an entailed claim on the probability of the theory that entails it. For only then could we conclude from T+’s supposition that the existence of NF is entailed by the existence of F that the inclusion of NF in T+’s ontology has no negative impact on the probability that that ontology is correct. But the above principle does not tell us to disregard the effect of an entailed claim on the probability of the theory that entails it. If it did, then we wouldn’t be able to use the fact that a theory entails a contradiction as a reason to assign that theory a probability of 0, which is absurd. Rather, what the above principle tells us is that, if A entails B, then when a theory accepts A it is thereby incurring any negative impact on its probability associated with accepting B. Thus, for example, if A entails B, and B is a contradiction, then accepting A reduces the probability of a theory to 0 (even if that theory doesn’t explicitly accept B).
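To make this fully explicit, the derivation runs as follows (a routine consequence of the probability axioms, spelled out here for illustration): if A ⊢ B, then A is logically equivalent to A&B, so

Pr(A) = Pr(A&B) ≤ Pr(B).

Hence if B is a contradiction, Pr(B) = 0, and so Pr(A) = 0: a theory that accepts A thereby incurs the full probabilistic cost of whatever A entails, even if it never explicitly accepts B.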

In our case, the above principle does not allow us to reason that, since T+ already claims that F exists and that F’s existence entails NF’s existence, any negative impact on the probability of T+’s ontology that would otherwise be associated with positing NF can be disregarded. Rather, it tells us only that, given T+’s claim that the existence of F entails the existence of NF, any negative (or indeed positive) impact on the probability of T+’s ontology associated with positing NF was already incurred when T+ posited F. This is entirely consistent with there being (ceteris paribus) a negative impact on the probability of T+’s ontology associated with positing NF, which is all that supporters of the Razor must claim here.

So, specifically, here is what supporters of the Razor should say about Bennett’s case. Initially, our evidence suggests that we should assign a certain probability, say 0.7, to the claim that F exists. If we then discover, in line with T+, that F necessitates NF, what we have discovered is that positing F brings with it commitment to further entities than we previously thought. Since, in line with the Razor, we think that (ceteris paribus) extra ontological commitments lower the probability of a theory’s truth, we should now in turn think that (ceteris paribus) the claim that F exists is less probable than we previously thought. On the other hand, if we decide, in line with T−, that F does not necessitate NF, then we have no reason to revise down the probability that we originally assigned to the claim that F exists. That is, given the Razor, the claim that F exists receives a different probability assignment depending on whether it is taken to require positing NF or not. This is perfectly consistent with the probability calculus, since it doesn’t involve any violation of the claim that if F necessitates NF then Pr(F) = Pr(F&NF): what supporters of the Razor are saying here is that if F doesn’t necessitate NF then Pr(F) = 0.7, but if F does necessitate NF then Pr(F) = Pr(F&NF) = n for some n < 0.7. In this way supporters of the Razor can maintain their claim that T+’s commitment to NF has a negative impact on the probability of its truth in comparison to that of T−.
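To put the point in numbers (the figures are purely illustrative): before any necessitation claim is on the table, our evidence supports Pr(F) = 0.7. If, as T+ has it, F necessitates NF, then Pr(NF|F) = 1, and so

Pr(F&NF) = Pr(F) × Pr(NF|F) = Pr(F).

The Razor then directs us to revise Pr(F) itself downwards, say to 0.6, to reflect the newly discovered ontological commitment, yielding Pr(F) = Pr(F&NF) = 0.6 < 0.7. The theorem that Pr(F) = Pr(F&NF) is respected throughout; what changes is the value assigned to both sides.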

In general, it is consistent with the probability calculus to regard extra non-fundamental posits as (ceteris paribus) having a negative impact on a theory’s probability, as the Razor implies. This is true even if non-fundamental posits are necessitated by the fundamental ones. Thus Bennett’s argument from necessitation does not succeed.

2.2 The argument from ontological innocence

Bennett also thinks the alleged ontological innocence of non-fundamental entities provides support for the Laser (2017: pp. 221–223). Her thought is this. Whilst it's not unusual to hear philosophers (such as Lewis, 1991: p. 81) claiming non-fundamental entities to be ontologically innocent [or, equivalently, an ontological free lunch (Armstrong, 1989: p. 56), or nothing over and above the fundamental entities, etc.], it's not obvious how we can make sense of this idea, especially given that non-fundamental entities are not generally taken to be identical to fundamental entities. But we apparently can explain this if we accept the Laser: claiming that non-fundamental posits don't count against a theory's parsimony in the way that matters for theory choice arguably captures and explains the sense in which those posits are ontologically innocent. This is a point in favour of the Laser.

But the argument from ontological innocence is unsuccessful, because there’s a way of capturing the thought that non-fundamental entities are ontologically innocent that doesn’t require accepting the Laser.Footnote 7 For, as Hawley notes (2014: §2), another way to secure the ontological innocence of the non-fundamental is to say that commitment to the fundamental entities automatically carries with it commitment to the non-fundamental entities. Non-fundamental entities would then be ontologically innocent in the sense that explicitly committing oneself to them doesn’t add to one’s ontological commitments at all (as commitment to them was already implicit in prior commitment to the fundamental entities). As I argued in the previous section, this is consistent with the thought that commitment to non-fundamental entities nonetheless still counts against the probability of a theory’s truth, and is thus consistent with the Razor. So the ontological innocence of the non-fundamental doesn’t give us any reason to abandon the Razor.

2.3 The argument from explanation

Korman argues that the relationship between (non-)fundamentality and explanation provides us with an argument for the Laser. He presents his argument as follows:

The most parsimonious theory is the one that explains all that needs to be explained using the fewest resources. Since fundamental objects are those in terms of which everything is explained, it only makes sense to measure ontological parsimony in terms of which items are taken to be fundamental.

(2015a: p. 306; repeated in 2015b: pp. 75–76).

Now, taken at face value, this argument is invalid. If the most parsimonious theory is the one that explains all that needs to be explained using the fewest resources, and if we can explain everything that needs to be explained by appealing only to fundamental objects, then what follows is not that our measure of parsimony should be blind to non-fundamental entities, but rather that the most parsimonious theory is the one that eliminates all non-fundamental objects (for on those assumptions, any theory that posits non-fundamental entities in addition to fundamental ones will explain nothing more but will posit more resources than a theory that posits only the fundamental entities). This conclusion is consistent with the Razor.

But immediately after the passage quoted above, Korman writes, ‘the mere fact that a theory’s fundamental mode of being is enjoyed by a wide range of objects is no strike against the parsimoniousness of that theory, since one need not suppose that those objects themselves all enter into fundamental explanations’ (2015a: p. 306; 2015b: p. 76; emphasis mine). With this in mind, I think we should take Korman to be arguing as follows:

(1) The most parsimonious theory is the one whose fundamental explanations explain all that needs to be explained using the fewest resources.

(2) Fundamental objects are those in terms of which everything is fundamentally explained.

(3) So we should measure ontological parsimony in terms of which items are taken to be fundamental.

(1)–(3) is a valid argument for the Laser. But are (1) and (2) true?

To answer, we need to know what a fundamental explanation is. Unfortunately, Korman doesn't provide an explicit definition of this notion. But a clarification of what he means by it might be implicit in what he says about what it is for an object (or an ‘item’) to be fundamental:

The basic idea is that an item’s fundamentality should be a function of the way in which it features in metaphysical explanations. [… We] could say that (i) A is fundamental simpliciter iff it features in facts that do not obtain in virtue of any other facts, and (ii) A is more fundamental than B if some B-involving facts obtain partly or wholly in virtue of A-involving facts and never vice versa.

(2015a: p. 305; repeated in 2015b: p. 73).

This is suggestive of the following definition:

FUNDAMENTAL EXPLANATION: p is a fundamental explanation = df p is a fact in virtue of which at least one further fact obtains but which does not itself obtain in virtue of other facts.

The problem is, if (1) and (2) are read as employing the notion of a fundamental explanation at issue in FUNDAMENTAL EXPLANATION then Korman’s argument becomes question-begging against supporters of the Razor.Footnote 8 For, as is clear from the above quotation, Korman takes it to be definitive of what it is for an item to be fundamental that that item features in facts that do not obtain in virtue of any other facts. But given FUNDAMENTAL EXPLANATION, every fundamental explanation is a fact that does not obtain in virtue of any other facts. It follows that, by definition, fundamental explanations only make reference to fundamental entities. So (1) in the argument above is equivalent to:

(1*) The most parsimonious theory is the one whose explanations that only make reference to fundamental entities explain all that needs to be explained using the fewest resources.

‘Resources’ as it appears in (1*) can only mean fundamental resources; only those antecedently sympathetic to the Laser will agree that parsimony is a matter of reducing the numbers of only those resources, as opposed to all resources, including non-fundamental ones.

Perhaps Korman could attempt to make his argument non-question-begging by continuing to accept FUNDAMENTAL EXPLANATION but divorcing the definition of a fundamental entity from that of a fundamental explanation.Footnote 9 That would prevent it from being analytic that a fundamental explanation is one that only features fundamental entities, and so would block the analytic equivalence between (1) and the question-begging (1*).

But it’s still not clear this would prevent the argument from begging the question against those who don’t antecedently accept the Laser; at the very least, (1) still seems unmotivated without a background commitment to the Laser. For what reasons have we been given for thinking that parsimony only concerns minimising the amount of resources required for fundamental explanations, rather than all explanations? Those attracted to the Razor will presumably be inclined to think that considerations of parsimony pressure us to instead eliminate non-fundamental explanations and the non-fundamental entities they make reference to. Nothing Korman has said motivates thinking otherwise. I conclude that his argument from explanation for the Laser does not succeed.

3 Arguments from cases

I turn now to two attempts to motivate the Laser by appealing to hypothetical or real-world cases.

3.1 The argument from the case of Esther and Feng

Schaffer asks us to consider the following case. Suppose that Esther formulates a scientific theory according to which there are 100 types of fundamental particle, and that her theory is widely accepted. Then:

Feng comes along and – in a moment of genius – builds on Esther’s work to discover a deeper fundamental theory with 10 types of fundamental string, which in varying combinations make up Esther’s 100 types of particle. This is intended to be a paradigm case of scientific progress in which a deeper, more unified, and more elegant theory ought to replace a shallower, less unified, and less elegant theory. Feng’s theory is evidently better in every relevant methodological respect.

(2015: p. 648)

This, says Schaffer, tells in favour of the Laser. For given the Razor, there is apparently at least one respect in which Feng’s theory is not better than Esther’s: since Feng posits everything Esther posits plus the additional strings, Feng’s theory would apparently be disfavoured by the Razor, all else being equal. The Laser, on the other hand, correctly favours Feng’s theory for its smaller fundamental ontology. As Schaffer has it:

So, by the lights of The Razor, Feng’s theory is an affront to ontological economy for positing these additional strings. It is to be strongly dispreferred, all else equal. This is obviously backwards, as far as sound methodological counsel is concerned.

Feng’s theory is obviously no affront to ontological economy, but – when judged purely by the methodological virtues – is evidently a more economical, tighter, and more unified improvement. It is The Laser that gets this right.

(2015: p. 648)

In response, others have pointed out that it’s consistent with the Razor to think that Feng’s theory is superior to Esther’s all things considered, because Feng’s theory secures other, weightier theoretical virtues such as explanatory power and theoretical unification (Baron & Tallant, 2018: p. 599; Da Vee, 2020: p. 3681; Fiddaman & Rodriguez-Pereyra, 2018: p. 343). This is all well and good, but it leaves the door open to Schaffer to rejoin by insisting again that his intuition in this case is that Feng’s theory is ‘obviously no affront to ontological economy’, not merely that Feng’s theory is all things considered superior to Esther’s. Pointing to the greater explanatory power (etc.) of Feng’s theory over Esther’s does nothing to show that the Razor is consistent with this intuition. Absent some reason for thinking that Schaffer’s intuition can safely be discounted here, it’s not clear that this line of response to Schaffer’s argument for the Laser from the case of Esther and Feng is successful.

Instead, supporters of the Razor should offer a way of explaining away the intuition that Feng’s theory is ‘no affront to ontological economy’ that doesn’t require accepting that ontological parsimony is blind to non-fundamental ontology. They should say that that intuition is instead generated by the perception, created by Schaffer’s description of the case, that Feng’s theory is true, or at least correct in positing the extra layer of fundamental entities below the entities that Esther posits. Schaffer says a number of things to encourage this perception: he describes Feng’s formulation of his theory as a ‘moment of genius’, and the theory itself as a ‘deeper fundamental theory’ in comparison to Esther’s; most tellingly, he says that Feng’s building upon Esther’s theory to propose his own ‘is intended to be a paradigm case of scientific progress’ (2015: p. 648). To be clear, I don’t say that Schaffer’s description of the case of Esther and Feng is logically inconsistent with the idea that Esther’s theory might be true after all (so e.g. it is probably consistent—though a bit odd—to describe Feng’s replacement of Esther’s theory with his own as a ‘moment of genius’ even though Esther’s theory is true): my point is just that it’s easy and natural to read what Schaffer says and assume that in the universe he describes Feng is right to say that there is an extra layer of entities below the ones that Esther posited. And if we do read Schaffer in this way, then we have a way of explaining the intuition that Feng’s theory is no affront to ontological parsimony that doesn’t require swapping the Razor for the Laser. For the Razor only tells us to avoid positing unnecessary entities. But if we assume that the extra entities that Feng posits exist, then we must also accept that it was necessary to posit them. Thus Feng’s theory doesn’t offend against the Razor at all, since it doesn’t multiply entities beyond necessity. So on the assumption that, in the universe that Schaffer describes, Feng’s extra entities really do exist, we can hold on to the Razor whilst agreeing that Feng’s theory is no affront to ontological parsimony.

Schaffer might reply that the implication that Feng’s theory is true is a red herring, and that the Razor would get the wrong result in a case analogous to that of Esther and Feng that doesn’t smuggle in or encourage the assumption that Feng’s theory is true. But it seems to me that the presumed truth of Feng’s theory is precisely what drives the intuition that Feng’s theory is no affront to ontological parsimony, at least to those not independently drawn to the Laser. Strip away all the admiring language with which Schaffer describes Feng’s theory, and what we are left with is a case in which we have two theories, T1 and T2, where T1 posits 100 fundamental entities, and T2 posits 10 fundamental entities and 100 non-fundamental entities. Nothing about this case seems to add anything to the dialectic: those who find the Laser independently plausible will judge that T2 is ontologically simpler in the way that matters, those who find the Razor independently plausible will judge that T1 is ontologically simpler in the way that matters, and undecided parties will continue to be undecided, having been given no reason to make up their minds one way or the other. What was apparently so compelling about Schaffer’s original argument was that even those not antecedently sympathetic to the Laser (including those who find the Razor independently plausible) were likely to have the intuition that Feng’s theory is no affront to ontological parsimony, and to the extent that this intuition was suggestive of the Laser, everyone therefore had a reason to be attracted to the Laser. But once we strip away the confounding assumption that Feng’s theory is correct in positing its extra layer of fundamental entities below those of Esther’s, those not antecedently sympathetic to the Laser are no likelier than they were before considering the case of Esther and Feng to have the pro-Laser intuition.

So, in sum: either the case of Esther and Feng is described in such a way as to encourage the assumption that Feng’s extra entities really do exist, in which case the Razor agrees with Schaffer’s intuition that Feng’s extra entities are no affront to ontological parsimony, or it is described without this implication, in which case only those who already find the Laser plausible on independent grounds will think that the Razor is wrong to regard Feng’s extra entities as genuine (though outweighable) costs of his theory. Either way, the argument from the case of Esther and Feng doesn’t give us any reason to abandon the Razor for the Laser.Footnote 10

3.2 The argument from a bias towards the built

Bennett claims that a commitment to the Laser is latent in actual scientific practice (2017: pp. 220–221).Footnote 11 In particular, she argues that we are implicitly attracted to a scientific methodology that includes a bias towards the built. For Bennett, to be built is to be non-fundamental, and so a bias towards the built is the bias in favour of claiming entities in our ontology to be non-fundamental, rather than fundamental. Such a bias would suggest an implicit preference on our part for the Laser over the Razor, as only the Laser is consistent with the thought that there is theoretical advantage to be had in shifting ontological commitments away from our fundamental ontology—thereby minimising the number of fundamental entities we posit—and into our non-fundamental ontology.

Is it plausible that a bias towards the built is implicit in scientific methodology? Bennett provides the following motivation for thinking so:

[… W]e all think that things ought to be explained wherever possible. We don’t rest content believing in water; we want to know what water is made of, and how exactly those components come together to behave as water does. This is what drives science: we want to account for some things in terms of other things. All else equal, we prefer things to be built. Indeed, we prefer things to be built from components to which we are already committed. When scientists are faced with some interesting new phenomenon, they first try to explain it in terms of things they already believe in. Of course, they may eventually have to posit some new fundamental entity or force to explain it, or may even have to accept the phenomenon as itself fundamental. But that is a last resort […] All this is to say that we have a bias towards the built.

(2017: p. 221)

To evaluate this, we need to know what ‘phenomenon’ means. On one plausible interpretation, ‘phenomenon’ means something like empirical data. On this reading it is plausible that scientists do indeed try to explain new phenomena ‘in terms of things they already believe in’, but this fact is suggestive only of a bias on the part of scientists against unnecessarily positing new entities, fundamental or not, rather than of a bias towards the built. This is consistent with the Razor. So Bennett must be using ‘phenomenon’ to mean something like object or process or event, such that accepting the phenomenon as genuine or real (as opposed to illusory) is equivalent to accepting ontological commitment to that phenomenon. Then a bias towards explaining new phenomena in terms of entities that are already in our ontology would amount to a bias towards the built (as long as the sort of explanation at issue is metaphysical explanation, since saying that a phenomenon is built or non-fundamental is to say that it is metaphysically explained by the entities of which it is derivativeFootnote 12).

But on this reading of ‘phenomenon’, we don’t have any reason to think that scientists have a generalised preference for saying that new phenomena are (metaphysically) explained by other entities already in their ontology. First, whilst it’s true that scientists in general ‘don’t rest content’ simply believing in a given phenomenon, and rather try to find out what it is made of, this is evidence only of the fact that scientists want to find out whether a given phenomenon is built out of smaller components, rather than that they are antecedently inclined to believe that the phenomenon is built out of smaller components (compare: a scientist who attempts to discover whether some particle is charged or not need not have a ‘bias towards the charged’). Further, in cases in which it does seem plausible that scientists would intuitively prefer to regard the new phenomenon as built out of smaller components, we can explain this preference without appeal to a general bias towards the built. Whilst a scientist who (for example) discovers water for the first time may well prefer a theory that predicts that water is made up of smaller components, this seems likely to be because experience has taught her that macroscopic phenomena have in the past always turned out to be made from smaller particles (perhaps accompanied by the intuition that macroscopic extended simples are inherently dubious). Absent a case in which such considerations don’t plausibly explain a scientist’s preference for regarding the new phenomenon in question as probably built out of smaller components, we don’t have a good reason for agreeing with Bennett that scientists (or we) have an implicit bias towards the built.

So I don’t think that Bennett’s appeal to actual scientific practice demonstrates that we have a bias towards the built, and so I don't think her argument from a bias towards the built for the Laser succeeds.

4 Arguments from analogy

Schaffer (2015: §§4–5) offers two arguments for the Laser based on an alleged analogy between ontological parsimony and conceptual economy, which concerns the minimisation of concepts invoked by a theory.

4.1 The argument from the Conceptual Laser

Schaffer’s first argument from analogy with conceptual economy is the most direct. He begins (2015: p. 649) by asking us to consider two candidate principles concerning conceptual economy:

THE CONCEPTUAL RAZOR: Do not invoke concepts without necessity.

THE CONCEPTUAL LASER: (i) Do not invoke primitive concepts without necessity.

(ii) Multiply non-primitive concepts all you like.Footnote 13

Each principle is suggestive of a different way of measuring conceptual economy. The Conceptual Razor suggests that conceptual economy consists in minimising the total number of concepts; the Conceptual Laser suggests that conceptual economy consists in minimising the total number of primitive concepts, and that multiplying the number of defined (aka derivative) concepts invoked doesn’t count against a theory’s conceptual economy at all.

Schaffer thinks that it’s defeasibly reasonable to suppose that conceptual economy and ontological parsimony are analogous, and thus that ‘it is defeasibly reasonable to expect that the apt measures of economy will be parallel’ between conceptual economy and ontological parsimony (p. 649). Since primitive concepts appear analogous to fundamental entities, and defined concepts appear analogous to non-fundamental/derivative entities, the Conceptual Razor appears to be directly analogous to the Razor (i.e. Ockham’s Razor), whilst the Conceptual Laser appears to be directly analogous to the Laser. Further, Schaffer thinks that the Conceptual Laser is the right measure of conceptual economy, arguing that only the Conceptual Laser is consistent with our intuitions in various cases he cites (pp. 649–651). By analogy, then, he concludes that the Laser is the right measure of ontological parsimony.

Now, to be clear, if conceptual economy is to be analogous to ontological parsimony in a way that might support the inference from the Conceptual Laser to the Laser, then we must understand conceptual economy as being truth conducive (as opposed to e.g. merely making for more aesthetically pleasing or pragmatically useful theories). The Razor and the Laser are supposed to imply that theories that do not multiply entities (or fundamental entities) beyond necessity are, all else equal, more likely to be true than more ontologically profligate theories. If the Conceptual Razor and the Conceptual Laser are to be analogous to these principles, then we must understand them as implying that theories that do not multiply concepts (or primitive concepts) beyond necessity are, all else equal, more likely to be true than more conceptually profligate theories.

With this in mind, I think Schaffer is right to say that the Conceptual Laser is the correct measure of conceptual economy, but wrong to conclude from this that the Laser is by analogy the correct measure of ontological parsimony.Footnote 14 Indeed, I think that examination of the reason why the Conceptual Laser is the correct measure of conceptual economy reveals a crucial disanalogy between conceptual economy and ontological parsimony, which suffices to block the conclusion that the Laser is the correct measure of ontological parsimony.Footnote 15

First, then, the reason we should endorse the Conceptual Laser has to do with the way in which defined concepts are eliminable from the theories that employ them, in the following sense: for any theory Td that employs defined concepts d1, …, dn that it defines in terms of primitive concepts p1, …, pn, there is a theory Tp that is equivalent to Td but that replaces each di of Td with its definiens, and that thus employs only p1, …, pn. This much follows from the nature of a definition, which guarantees that we can always preserve meaning (and certainly truth) by replacing an instance of a definiendum with an instance of its definiens. For example, suppose that T1 says that objects a and b overlap, and defines ‘overlap’ in terms of the primitive concept of parthood, in the usual way: then there is an equivalent theory, T2, that makes no mention of overlap, and that instead says only that a and b have a part in common.
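For definiteness, the usual definition runs as follows, where P is the primitive parthood predicate:

O(x, y) = df ∃z (P(z, x) & P(z, y)).

T1 asserts O(a, b); T2 asserts ∃z (P(z, a) & P(z, b)). Since the latter is just the result of replacing the definiendum in the former with its definiens, the two theories say exactly the same thing.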

Now assume for reductio that defined concepts count against a theory’s conceptual economy. Then T1 is less conceptually economical than T2, because T1 employs both parthood and overlap, whilst T2 employs only parthood. So T1 is less likely to be true than T2. But T1 and T2 are equivalent. So T1 and T2 must be equally likely to be true. Contradiction. So defined concepts do not count against a theory’s conceptual economy. That means that the Conceptual Laser is the right measure of conceptual economy (cf. Cowling, 2013: p. 3893).
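In symbols: the assumption for reductio yields Pr(T1) < Pr(T2), all else being equal, whilst the equivalence of T1 and T2 yields Pr(T1) = Pr(T2); these cannot both hold.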

In general, since we can always reformulate a theory that employs defined concepts in addition to primitive concepts so that it employs only its primitive concepts, without changing the meaning or probability of truth of that theory, it makes sense to ignore the defined concepts when measuring conceptual economy.

The reason that conceptual economy and ontological parsimony are crucially disanalogous, then, is that that same motivation doesn’t carry across to the ontological case, because non-fundamental ontological commitments are not eliminable in the requisite sense.Footnote 16 That is, we cannot ‘reformulate’ a theory that contains ontological commitment to both fundamental and non-fundamental entities in such a way as to remove all ontological commitment to non-fundamental entities without thereby changing the meaning of the theory. For example, suppose we start with a theory that posits both fundamental simples and some non-fundamental mereological fusions of those simples; then suppose we strip away from that theory all ontological commitment to the non-fundamental mereological fusions, reformulating it so that any sentence implying the existence of a composite F is replaced with a sentence instead implying the existence only of simples arranged F-wise. The result of this would be a theory distinct from (i.e. non-equivalent to) the one we started with, for the theory we’d end up with would be consistent with mereological nihilism, whilst the one we started with was not. The non-fundamental fusions that our original theory posited, then, were not eliminable in the requisite sense, and so we cannot infer that they shouldn’t count against a theory’s ontological parsimony.
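Schematically, with ‘F’ an illustrative predicate: the original theory asserts something of the form ∃x (x is a composite F), whilst the reformulated theory asserts only ∃xx (xx are simples arranged F-wise). The latter can be true in a world containing no composite objects whatsoever; the former cannot. So the reformulation changes what the theory says, in a way that replacing a definiendum with its definiens never does.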

To sum up, then: the very consideration that warrants ignoring defined concepts when calculating a theory’s conceptual economy, namely the eliminability of defined concepts, does not apply in the case of ontological parsimony, for non-fundamental entities are not eliminable in the requisite sense. In this way, ontological parsimony is importantly disanalogous to conceptual economy. As such, there are no grounds for inferring from the fact that the Conceptual Laser is the right measure of conceptual economy that the Laser is the right measure of ontological parsimony.

4.2 The argument from bang for the buck

Schaffer (2015: pp. 651–653) proposes a further argument for the Laser from analogy with conceptual economy. His argument takes as its starting point the thought that ontological parsimony is relevant to theory choice only insofar as it is relevant to the question of how much bang for the buck a theory secures. The best theories are those that, ceteris paribus, find the best balance between minimising their buck whilst maximising their bang. Schaffer thinks that the right formulation of the principle of ‘bang for the buck’ that concerns ontology is:

ONTOLOGICAL BANG FOR THE BUCK: Optimally balance minimisation of fundamental entities with maximisation of derivative entities (especially useful ones).

ONTOLOGICAL BANG FOR THE BUCK plainly recommends the Laser over the Razor, as only the Laser is consistent with the injunction to minimise fundamental posits whilst maximising non-fundamental ones. Given this, it’s clear that no-one who isn’t antecedently attracted to the Laser will find ONTOLOGICAL BANG FOR THE BUCK independently plausible. But Schaffer thinks that this principle is well-motivated by the fact that conceptual economy and ontological parsimony are analogous, and the following principle concerning conceptual economy is correct:

CONCEPTUAL BANG FOR THE BUCK: Optimally balance minimisation of primitive concepts with the maximisation of defined concepts (especially useful ones).

This time we need not dispute that conceptual economy and ontological parsimony are analogous in the way that Schaffer needs them to be to support his argument. For CONCEPTUAL BANG FOR THE BUCK cannot be correct in the first place: it cannot be that, ceteris paribus, theories are more likely to be true if they optimally balance minimisation of primitive concepts with the maximisation of defined concepts (especially useful ones). The reason for this has to do again with the notion that defined concepts are eliminable from the theories that employ them. Consider T3, a theory that uses a mixture of primitive concepts and defined concepts, and T4, which is the result of reformulating T3 so as to replace its defined concepts with combinations of its primitive ones. As I argued in the previous section, the nature of definition means that T3 and T4 are equivalent. But CONCEPTUAL BANG FOR THE BUCK says that T3 is superior to—i.e. more likely to be true than—T4. This is a contradiction, so CONCEPTUAL BANG FOR THE BUCK must be false. Thus Schaffer’s argument from this principle for ONTOLOGICAL BANG FOR THE BUCK, and therefore for the Laser, fails.Footnote 17

5 Conclusion

I’ve examined all seven arguments for replacing the Razor with the Laser that appear in the literature: three that appeal to various facets of the nature of non-fundamentality, two that appeal to hypothetical or real-world cases, and two that appeal to an alleged analogy with conceptual economy. I’ve argued that none of these arguments for the Laser succeed.

In closing, it is worth considering a final way of trying to justify accepting the Laser over the Razor, one that doesn’t appeal to any argument that has the Laser as its conclusion: perhaps those who support the Laser can simply maintain that they find the idea that non-fundamental ontological posits do not count against the ontological simplicity of a theory directly intuitively plausible. Indeed, the very fact that there are so many philosophers offering arguments for the Laser may be taken to be suggestive of the prevalence of this intuition.Footnote 18 It would help to shore up the defence of the Razor presented here if something could be said in response to this way of justifying the Laser.

To that end, I think supporters of the Razor can first legitimately raise doubts about the idea that the Laser is really as directly intuitive as it perhaps appears to be. Isn’t it possible that what supporters of the Laser really find intuitive is that non-fundamental entities are ontologically innocent, or that it is theoretically virtuous to explain things wherever possible, etc., and that they claim to find the Laser directly intuitive simply because they are (perhaps implicitly) convinced of the arguments from these sorts of claims to the Laser? If this is right, then what I’ve done here in showing that the Razor is consistent with the ontological innocence of non-fundamental entities (§2.2), and with the thought that it is theoretically virtuous to explain things wherever possible (§3.2), and more generally in finding fault with arguments for the Laser from apparently intuitively true premises, is to show that there is no intuitive grounding for the Laser after all.

That said, supporters of the Razor ultimately need not rely on the claim that no-one really finds the Laser directly intuitive (and so they need not worry about recalcitrant supporters of the Laser who insist that they find it to be an intuitive principle independently of considerations of ontological innocence, etc.). For they can retreat to the defence that, whilst some philosophers may well find the Laser to be directly intuitive, there’s evidence that other philosophers think that it’s the Razor that is the intuitively correct principle. After all, if the existence of arguments for the Laser counts as prima facie evidence that the authors of those arguments find the Laser to be directly intuitive, then the existence of defences of the Razor against those arguments (including that of this paper), as well as of positive arguments for accepting the Razor over the Laser (see Baron & Tallant, 2018: pp. 603ff.; Da Vee, 2020: §3), should similarly count as prima facie evidence that the authors of those arguments find the Razor to be directly intuitive. We seem to have no reason, then, for thinking that there’s any more intuitive support for the Laser than there is for the Razor.Footnote 19

So the extant arguments for the Laser fail, and consideration of the alleged bare intuitiveness of the Laser doesn’t seem to tip the scales in its favour either. I conclude that we currently have no reason to replace the Razor with the Laser.