Before looking into the MOND arguments in detail from that angle, we need to briefly introduce the main ideas behind meta-empirical theory assessment. Meta-empirical theory confirmation (Dawid, 2013, 2019, 2022) claims that a significant degree of trust in a theory’s viability can be generated in the absence of empirical confirmation based on certain meta-level observations. These observations are not of the kind that can be predicted by the theory under scrutiny. Therefore, they cannot empirically confirm the theory. But, it is argued, they do change the credence that the theory is viable in an indirect way.
While the role of the described meta-level observations can be seen most clearly in cases where theories are trusted in the absence of empirical confirmation, Dawid (2018) argues that their significance is not confined to the assessment of theories that lack empirical support; the trust invested in an empirically confirmed theory must be based on meta-empirical considerations as well. In straightforward cases of empirical testing, those considerations are not made explicit and remain uncontroversial. In cases where the nature and significance of empirical support is more difficult to evaluate, the meta-empirical aspects are sometimes addressed explicitly. To cover cases where meta-level observations are deployed in support of empirical confirmation, the broader concept of meta-empirical theory assessment has been introduced (Dawid, 2021). It will be our claim that discussing the epistemic significance of the MONDian’s reasoning in support of their theory requires an explication of these meta-empirical aspects.
According to meta-empirical assessment, there are specific characteristics of the research process that can be expected if very few or no possible alternatives to the given theory exist, but that are very improbable if there are many alternatives. Observing those characteristics can therefore serve as an indicator that the number of possible alternatives to the theory under scrutiny is very small or zero, or, in other words, that scientific underdetermination is strongly limited. If there are very few or no scientific alternatives to the theory, and if one assumes that a viable scientific theory in the given context exists at all, chances are good that the scientific theory one has found is actually viable. This conclusion provides an epistemic basis for trusting the given theory.
What are these meta-level characteristics? Three meta-level observations support three specific arguments of meta-empirical assessment: (i) The no alternatives argument (NAA): Scientists tend to trust a theory if they observe that, despite considerable efforts, no alternative theory that can account for the corresponding empirical regime is forthcoming. (ii) The unexpected explanation argument (UEA): Scientists tend to trust a theory if they observe that the theory turns out to be capable of explaining significantly more than what it was built to explain. (iii) The meta-inductive argument (MIA): Scientists tend to have increased trust in a theory that fulfills the first or the first two criteria if it is their understanding that previous theories in their research field that satisfied those criteria have usually turned out empirically successful once tested.
Importantly, none of the three can carry a high degree of significance in isolation. Each of the three meta-empirical observations in isolation could be explained without giving reason to assume a small number of possible alternatives to the theory to which meta-empirical assessment is applied. The fact that scientists don’t find alternatives could be explained by their limited capability or diligence. Unexpected explanation could be explained by the viability of some deeper underlying principle that is embodied in the theory under scrutiny and itself has no possible alternatives, rather than by the lack of alternatives to the theory under scrutiny itself. The fact that there has been a tendency of predictive success among theories that in some respect were similar to the theory under scrutiny could be countered by pointing at dissimilarities between those theories and the theory under scrutiny.
Note, however, that each specific argument of meta-empirical assessment requires a different alternative explanation to be countered. The force of these alternative explanations can be considerably weakened if more than one meta-empirical argument can be formulated. For example, a strong tendency of predictive success among theories that have been supported by an NAA renders the hypothesis that scientists are incapable of finding the possible theories less plausible. Thus, in order to be significant, meta-empirical assessment needs to be based on at least two, if not all three, arguments in conjunction. In the context of the MONDian defense, we identify two possible arguments in support of an NAA for MOND and one in support of a UEA for MOND.
MOND’s attempt at a no alternatives argument, part 1
Unlike other alleged instances of successful meta-empirical confirmation or meta-empirical assessment in science (the Higgs particle before its discovery, string theory, inflation), MOND has a rival to reckon with that is supported by a majority in the discipline. Any NAA in favor of MOND can therefore only be effective if it can be argued that dark matter, despite its popularity, cannot constitute an adequate rival theory for galaxy phenomenology. One strategy is to argue that \(\Lambda \)CDM is simply unscientific given some acceptable demarcation criterion. We have already rejected the strictly Popperian or Lakatosian readings of the MONDian arguments as being at odds with the goal of the defenders of MOND. Here, we read them through the lens of meta-empirical assessment, that is, with explicit epistemic import.
As described in Sect. 3, defenders of MOND aim to argue that dark matter cannot be a properly scientific rival to MOND because it is unfalsifiable, both at the level of \(\Lambda \)CDM and at the level of dark matter candidates. The rejection of dark matter as unscientific provides the foundation for a no alternatives argument in favor of MOND. Dark matter and MOND are currently the only two available theories of galaxy scales. If dark matter is rejected because it is unscientific, that leaves MOND as the only possible alternative. Note that one might argue that the NAA in this case is even stronger: dark matter and MOND can be assumed to exhaust the space of possible theories of galaxy scales (cf. also Sect. 2). With the rejection of dark matter, MOND remains not just the only developed, but the only possible theory of galaxy scales.
MOND’s attempt at a no alternatives argument, part 2
The previous NAA relies on the acceptance of specific scientificality conditions and the assessment that dark matter fails to satisfy them. These are two strong claims to make. The second line of argument in support of an NAA for MOND is more charitable towards dark matter, in that it does not reject dark matter as unscientific. Instead, it argues that, even if the dark matter hypothesis were scientific, it could not constitute a rival to MOND because it fails to adequately explain MOND phenomenology. At face value, this claim may sound wildly implausible. After all, dark matter was first introduced because of anomalous observations at galactic scales (the flat rotation curves). How can MOND claim that dark matter cannot be part of a rival explanation for galaxy phenomenology?
The key to the MONDian claim lies in how dark matter could figure in an explanation of galaxy phenomenology in the context of \(\Lambda \)CDM. The first concern is that, in and of itself, \(\Lambda \)CDM makes no clear predictions for galaxy scales:
The observed mass discrepancy–acceleration relation does not occur naturally in \(\Lambda \)CDM. Indeed, \(\Lambda \)CDM makes no clear prediction for individual galaxies. One must resort to model building. The argument then comes down to what constitutes a plausible model. I have spent many years trying to construct plausible \(\Lambda \)CDM models. I have never published any, because none are satisfactory. All I can tell you so far is what does not work. (McGaugh, 2015, p. 6)
What McGaugh refers to here is that the success of \(\Lambda \)CDM on cosmological scales, at which gravity is the dominant interaction, does not obviously extrapolate to galactic scales. When deriving predictions from simulations implementing \(\Lambda \)CDM for galactic and cluster scale phenomenology, non-gravitational interactions are no longer negligible. Thus, simulations need to include some representation of astrophysical processes (e.g. star formation, stellar evolution, supernova feedback, feedback from active galactic nuclei), but McGaugh submits that there is currently no plausible way of doing so.
In contrast to McGaugh’s skepticism, various hydrodynamical simulations have claimed success in recovering BTFR and MDAR. For instance, the Illustris simulation project—with tagline “towards a predictive theory of galaxy formation”—reported some initial success in reproducing the BTFR within the available observational constraints, although they recognize that their results still show more scatter than McGaugh’s (2012) data (Vogelsberger et al., 2014, pp. 1541–1542). Similarly, SIMBA, a different suite of galaxy formation simulations, proved capable of broadly reproducing the observed BTFR. The authors recognize that there are various possible sources of slight deviation from observations depending on what definition of the circular velocity is used. However, insofar as they aimed to prove that SIMBA could be used to study BTFR, they claim success (Glowacki et al., 2020).
Those successes do not make \(\Lambda \)CDM any more explanatory with respect to galaxy phenomenology, according to defenders of MOND, however:
The failure of the natural \(\Lambda \)CDM galaxy formation model drives simulators to consider feedback. Feedback in the context of galaxy formation invokes the energy created by baryonic processes like supernovae to rearrange the distribution of mass in model galaxies. This is an inherently chaotic process, so it does not naturally lead to the observed organization. [...] Such models are of necessity highly fine-tuned. Fine-tuning is always possible in dark matter models. There are many free parameters, and we are always free to add more. So I do not doubt that it is possible to mimic the data. [...] The question then becomes whether the real universe operates that way. My fear is that feedback has become a modern version of the epicycle. (McGaugh, 2015, p. 7)
So, simulations implementing feedback cannot be explanatory because they are inevitably fine-tuned. And if \(\Lambda \)CDM fails to explain MOND phenomenology, it cannot constitute a proper rival to MOND, implying once again that there is no alternative to MOND.
Our reconstruction of this second argument in support of an NAA is in line with Massimi’s (2018) reconstruction of the debate. Massimi argues that, while \(\Lambda \)CDM is successful on cosmological scales (where MOND clearly fails), it fails to explain galaxy phenomenology:
In spite of its extraordinary success at explaining large-scale structure (i.e. structure formation, the matter power spectrum, galaxy clusters, and so on), \(\Lambda \)CDM is not equally well-equipped to explain phenomena such as [BTFR] and MDAR at the scale of individual galaxies [...]. This scale has been traditionally regarded as favoring alternative models, such as MOND, which naturally explains [BTFR] and MDAR because they are natural consequences of MOND formalism. (Massimi, 2018, p. 33)
The problem that \(\Lambda \)CDM faces at galactic scales is that, due to the complexity and so-called context-sensitivity of computer simulations, \(\Lambda \)CDM is incapable of offering satisfactory causal explanations of, e.g., BTFR.
Note, however, that Massimi does not fully buy into the MONDian assessment of \(\Lambda \)CDM’s failure to explain MOND phenomenology. She agrees that if computer simulations are able to retrieve MOND phenomenology, “this is success enough, and must count as success enough for \(\Lambda \)CDM” (Massimi, 2018, p. 34). Massimi’s caveat suggests that the MONDian argument that \(\Lambda \)CDM lacks explanatory power assumes certain standards of explanation that go beyond mere empirical adequacy. As will be discussed in detail in Sect. 6.1, it is this shifting of standards that makes this second argument for an NAA in support of MOND unwarranted.
MOND’s attempt at an unexpected explanation argument
The final argument we identify in the MONDian defense is spelled out as a novel confirmation argument. In a sense, this argument is almost an inverse of the second NAA’s rejection of \(\Lambda \)CDM: while \(\Lambda \)CDM is incapable of explaining MOND phenomenology, MOND itself provides a simple explanation of a wide range of phenomena on galactic scales, including galaxy rotation curves, BTFR, MDAR and more. It is surprising that MOND explains such a large set of observations, since it had been developed to account for a much narrower class of phenomena. Indeed, much of the work written in defense of MOND uses the same argumentative structure: a long list of predictions is derived from Milgrom’s proposal. For each prediction, it is shown that (1) the prediction is a ‘natural consequence’ of the MOND formalism even though the MOND formalism was not developed with this prediction in mind (the exception, of course, being flat galaxy rotation curves); and (2) the prediction is corroborated by observations.
Examples of this argumentative structure can be found in recent work from three of the most vocal defenders of MOND. Consider the following, from Milgrom:
Today one can ask: ‘Without the umbrella of MOND, why should the \(a_0\) that enters and determines the asymptotic rotational speed in massive disc galaxies be the same as the \(a_0\) that enters and determines the mean velocity dispersions in dwarf satellites of the Milky Way and Andromeda galaxies? And why should these be the same \(a_0\) that enters and determines the dynamics in galaxy groups, which are hundreds of times larger in size and millions of times more massive than the dwarfs [...]? And why should these appearances in local phenomena in small systems be related to the accelerated expansion of the Universe at large?’ (Milgrom, 2020, p. 175)
The obvious conclusion, according to Milgrom, is that this unexpected success of MOND must be due to the fact that MOND is getting ‘something’ right.
In a similar vein, McGaugh (2020) goes through fourteen different properties of galaxies and for each of them asks (1) whether the data corroborate the prediction from MOND; (2) whether the prediction was made a priori; and (3) what dark matter predicts. McGaugh concludes:
We have been surprised at every turn: these were startling facts, when new. Only one theory succeeded in predicting these phenomena in advance: MOND. It has met the gold standard of scientific prediction repeatedly for a wide variety of phenomena. [...] I do not see how this can be a fluke. (McGaugh, 2020, pp. 22–23)
McGaugh recognizes that there are three possible conclusions one could draw from these findings: the data corroborates MOND because there is something to it, galaxy formation somehow mimics MOND, or some new yet undiscovered physics is responsible. McGaugh submits that the first of these three is the most plausible (ibid., p. 24).
Finally, Merritt (2021a) contrasts the novelty of the MONDian predictions with the mere accommodation by \(\Lambda \)CDM as well:
Several of Milgrom’s successful predictions [...] clearly satisfy both of Leplin’s conditions for novelty. Information about these observed regularities did not contribute in any way to the formulation of Milgrom’s theory: indeed they were not observationally established until some years after 1983. And, [...] the competing theory (the standard cosmological model) provides no “viable reason to expect” these regularities to exist. And at least since the addition (c. 1980) of the postulates relating to dark matter, the standard model can claim no comparable successes of novel prediction. (Merritt, 2021a, p. 204)
As discussed in Sect. 3, such successful novel prediction is part of Merritt’s argumentation for MOND’s progressiveness as a research program.
The argument is further strengthened, according to the MOND-defenders, by the fact that different observed correlations that were predicted by MOND, like BTFR or MDAR, lead to the same value for Milgrom’s constant \(a_0\), as already suggested by the above quote from Milgrom. Merritt similarly draws explicit parallels between converging values for \(a_0\) providing support for MOND as a theory, and Perrin’s determination of Avogadro’s number or early measurements of Planck’s constant providing evidence for atomic theory or quantum mechanics, respectively. Now, Merritt admits that mere convergence of measurements of a specific parameter does not obviously lend confirmation to the theory in which that parameter plays a role. However, in certain cases (like those of Perrin and Planck, and, allegedly, MOND), such convergence can confirm the broader theory:
This, perhaps, is a basis for the intuitive judgments of Perrin and Planck: namely that the convergence of the measured value of a ‘constant of nature’ implies a tight connection between facts that would otherwise not have been considered related. (Merritt, 2020, p. 217)
So, the argument in support of MOND goes beyond the explanation of general regularity patterns. There is empirical convergence on a specific value for Milgrom’s constant between those different regularity patterns that, at face value, would not be expected to be obviously related to one another.
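To see why independent regularities can pin down the same constant, consider a standard back-of-the-envelope derivation (our illustrative sketch, not drawn from the quoted authors): in the deep-MOND regime \(a \ll a_0\), the effective acceleration is \(a = \sqrt{a_N a_0}\), where \(a_N = G M_b / r^2\) is the Newtonian acceleration produced by the baryonic mass \(M_b\). Imposing the circular orbit condition yields the BTFR directly:

```latex
% Deep-MOND regime: effective acceleration a = sqrt(a_N a_0),
% combined with the circular orbit condition a = v^2 / r.
\begin{align}
  \frac{v^2}{r} &= \sqrt{\frac{G M_b}{r^2}\, a_0}
    && \text{(circular orbit, } a \ll a_0\text{)} \\
  \Rightarrow \quad v^4 &= G M_b\, a_0
    && \text{(independent of } r\text{: flat rotation curve, BTFR)}
\end{align}
```

The measured normalization of the BTFR thus fixes \(a_0\) (empirically \(a_0 \approx 1.2 \times 10^{-10}\,\mathrm{m\,s^{-2}}\)); the MONDian point is that the MDAR and the dynamics of dwarf satellites independently return the same value.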
For this argument to work in favor of MOND, it is necessary that novel confirmation gives some additional confirmation value to a hypothesis, over and above mere accommodation of observations. This means that defenders of MOND need to rely on a philosophical perspective that can provide an epistemic foundation for acknowledging confirmation value that reaches beyond the formal comparison of a theory’s predictions and empirical data. Note that this is no trivial task. For instance, if defenders of MOND were to adopt a fully Popperian view (as they seem to do at face value), a novel confirmation argument would be meaningless since Popper rejects the concept of confirmation across the board. And even moving away from a strictly Popperian perspective, a wide range of positions in the philosophy of science (logical empiricism, empiricist readings of Bayesian confirmation theory) that acknowledge the usefulness of the concept of confirmation nevertheless deny the extra confirmation value of novel confirmation over accommodation.
In line with this paper’s agenda, we will analyse an embedding within meta-empirical assessment, where, as we will show, novel confirmation does provide additional confirmation value over accommodation. We do not deny that other philosophical embeddings may be possible, and that these may play out differently in the given case. But we take our analysis to demonstrate that some embedding must be provided, since the nature of such an embedding has strong effects on the epistemic significance of novel confirmation.
So where does the additional confirmation value come from in cases of novel confirmation, according to meta-empirical theory assessment? It comes from the UEA. Recall that the UEA claims that scientists tend to trust a theory if that theory can explain more than what it was built to explain. Consider the following scenario. Assume that a given number of scientific problems wait to be solved in a given scientific context. Assume further that the given scientific context (that is, scientific background knowledge and the scientifically well explained set of phenomena) only allows for a very small number of scientific theories to be constructed. In such a scenario, one can expect that a theory developed in order to solve one problem will solve other problems as well. The scarcity of unconceived alternatives ensures that the theories that can be built will, on average, solve more than one problem. If, to the contrary, far more theories can be developed in the given scientific context than there are problems to be solved, no such expectation is justified. Therefore, if a theory is found that solves one problem, and that theory then provides significant unexpected explanation, this increases the credence that only few theories can be constructed in the given context. This, in line with all meta-empirical assessment, increases the credence in the given theory’s viability.
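The credence shift just described can be rendered in a toy Bayesian form (our expository illustration, in the spirit of Bayesian analyses of such arguments; the numerical values are purely for exposition). Let \(F\) stand for “only few theories can be constructed in the given context” and \(E\) for “the theory provides significant unexpected explanation”:

```latex
% F: only few theories can be constructed in the given context
% E: the theory provides significant unexpected explanation
\begin{align}
  P(F \mid E) &= \frac{P(E \mid F)\, P(F)}
                     {P(E \mid F)\, P(F) + P(E \mid \neg F)\, P(\neg F)} \\
  \intertext{With expository values $P(F)=0.5$, $P(E \mid F)=0.8$,
             $P(E \mid \neg F)=0.2$:}
  P(F \mid E) &= \frac{0.8 \times 0.5}{0.8 \times 0.5 + 0.2 \times 0.5} = 0.8
\end{align}
```

Since a found theory is more likely viable when few candidates exist, i.e. \(P(V \mid F) > P(V \mid \neg F)\) for viability \(V\), raising the credence in \(F\) raises the credence in \(V\).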
Initially, the UEA was analysed for cases of unexpected explanation that did not amount to agreement with novel empirical data. The UEA can, however, also be applied to cases of novel empirical confirmation (Dawid, 2021). The argument of novel confirmation is based on the observation that a theory turns out to be capable of predicting or explaining significantly more empirical data than what it was built to explain. The reasoning described above can be fully applied in this case. From the perspective of meta-empirical theory assessment, the reason why a theory is ‘more confirmed’ if a novel empirical prediction is corroborated than if that theory accommodates the same observation post hoc is based on the UEA.
Returning to the case at hand, UEA provides exactly the conceptual basis needed for establishing the epistemic significance of novel confirmation that is asserted by MONDians. Ostensibly, MOND was a phenomenological theory, introduced for the sole purpose of explaining (some) galaxy rotation curves. But after its introduction, it has become clear that MOND can account for a broad range of phenomena at galactic scales in a ‘natural’ way. This is taken to provide significant support for MOND, as it would be according to a UEA.