1 Introduction

The bibliography is divided into seven sections. The first section (17.2) covers methodological issues in astrophysics: how do astrophysicists make observations, interpret data, and solve problems that arise in the process? The section includes case studies on gravitational waves, astroparticle physics, dark matter, extra-galactic objects, and others.

The second and largest section (17.3) covers topics related to astrophysical modelling and computer simulations: their epistemic value, their limits, and their application to major problems in the field. This section contains case studies on galactic modelling, analogue experiments, cosmological simulations, and code comparisons.

The third section (17.4) concerns perhaps the oldest debate within the contemporary philosophy of astrophysics, namely that between astrophysical realists and anti-realists, initiated by the work of Ian Hacking in the 1980s. The section contains case studies on astroparticle physics, gravitational lensing, dark matter, and stellar physics and classification.

The fourth section (17.5) covers the relationships between astrophysical theory, observation, confirmation, and more. This section contains case studies on singularities, general relativity, dark matter, interstellar interlopers, and gravitational waves.

The fifth section (17.6) is somewhat tangential to mainstream work in the philosophy of astrophysics, but nevertheless contains a number of articles of which philosophers should be aware and with which they should be prepared to engage. The section covers issues in the sociology of scientific knowledge (SSK), as well as other social issues related to astrophysics. The section contains case studies on gravitational waves, the Hubble Space Telescope, astronomy and high-energy physics, the Herschel Space Observatory, “star-crushing”, the Gemini Telescopes, Pluto, and the use of visualizations.

The sixth section (17.7) contains works on typicality, the anthropic principle, and extra-terrestrial life. Most of the existing literature on typicality has an explicitly cosmological focus, but philosophers of astrophysics may offer fresh perspectives on these debates. The articles were chosen for their potential interest to philosophers in this field, though few have explicit astrophysical content. Section six, then, serves as an invitation for philosophers of astrophysics to explore a field largely untouched by those with their knowledge and skillset.

The seventh and final section (17.8), compiled by Siska De Baerdemaeker, explores recent work related to dark matter and MOND (Modified Newtonian Dynamics) on both astrophysical and cosmological scales. This section is by no means a comprehensive overview of the philosophical literature on MOND, but the entries included have been chosen for their specific relevance to the philosophy of astrophysics.

At the end of each section, there is a list of articles which deal with the section’s theme, but whose primary theme warranted placing them somewhere else.

The reader will notice a number of articles which focus on cosmology or astronomy, rather than astrophysics. These articles were chosen in virtue of their potential applicability to problems in the philosophy of astrophysics, as judged by me in discussion with other philosophers in the field. Every effort was made to avoid inflating the bibliography beyond its natural bounds, but some material from adjacent fields was necessary to provide a comprehensive overview of the state and future of the philosophy of astrophysics.

Especially given the relatively small size and recent vintage of this field (there are only 87 entries in this bibliography, 66 of which are from 2010 onwards, and 32 since 2020), the articles in this volume constitute a significant and timely addition.

2 Methodologies in Astrophysics

  • Anderl, S. (2016). Astronomy and astrophysics. In P. Humphreys (Ed.), The Oxford Handbook of Philosophy of Science (Vol. 1). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199368815.013.45.

    • A comprehensive, readable introduction to the main debates in philosophy of astronomy and astrophysics, this article offers a great starting point for those new to the field. Anderl (Frankfurter Allgemeine Zeitung; Institut de Planétologie et d’Astrophysique de Grenoble) argues that astrophysics is not vulnerable to Ian Hacking’s charge of antirealism due to its unique methodology, which incorporates aspects of both the historical and experimental sciences (including the “cosmic laboratory”), as well as simulations, models, and analyses of large amounts of data.

  • Cleland, C. E. (2002). Methodological and epistemic differences between historical science and experimental science. Philosophy of Science, 69(3), 447–451. https://doi.org/10.1086/342455.

    • A useful introduction to the distinction between historical and experimental sciences – a distinction central to debates over the reliability of astrophysical findings. Cleland (CU Boulder) contends that the different kinds of evidential reasoning practiced by experimental and historical scientists are underwritten by an objective feature of nature, namely, the time asymmetry of causation between present and past events, and present and future events. Historical sciences exploit information about relations between present and past events, while experimental sciences exploit information about relations between present and future events. Thus, each type of science is doing something different, and neither is more objective or rational than the other.

  • De Baerdemaeker, S. (2021). Method-driven experiments and the search for dark matter. Philosophy of Science, 88(1), 124–144. https://doi.org/10.1086/710055.

    • Given target X, how do scientists argue that their method(s) will be effective in probing X? De Baerdemaeker (Stockholm University) discerns two “logics” of method choice, namely “target-driven” and “method-driven”, and argues that scientists employ the latter in situations where previous knowledge about the target system is sparse or unreliable, as illustrated by dark matter production and detection experiments. However, the assumptions built into method-driven logic make it difficult to mount traditional robustness arguments.

  • Elder, J. (2020). The Epistemology of Gravitational-Wave Astrophysics. Ph.D. dissertation. University of Notre Dame. https://curate.nd.edu/show/3f462517k8t.

    • The first comprehensive study in the epistemology of gravitational wave (GW) astrophysics, Elder (Black Hole Initiative) discusses the distinction between “direct” and “indirect” observations of gravitational waves, raises a circularity problem facing model-dependent observations (and explains how it is mitigated by GW astronomers), and elaborates on the virtues of multi-messenger astrophysics for creating more robust dependency relations between sources and traces of data, among other topics.

  • Elder, J. (2022). On the “direct detection” of gravitational waves [unpublished manuscript]. https://www.jameeelder.com/uploads/1/2/1/6/121663585/elder__2021__direct_detection_du_cha%CC%82telet.pdf.

    • The authors of the LIGO-Virgo collaboration’s “discovery paper” for the binary black hole merger GW150914 claim to have made a “direct detection” of gravitational waves and a “direct observation” of the merger. Elder (Black Hole Initiative) seeks to disambiguate the meaning of terms like “direct”, “indirect”, “observation” and “measurement” in a way which is both philosophically adequate and true to how scientists use these terms. Elder argues that the LIGO-Virgo team can only be said to have indirectly detected a binary black hole merger due to their reliance on model-based inferences, thereby raising some important epistemic challenges that gravitational wave astrophysicists must overcome.

  • Falkenburg, B. (2014). On the contributions of astroparticle physics to cosmology. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 46, 97–108. https://doi.org/10.1016/j.shpsb.2013.10.004.

    • Although cosmology proceeds top-down (from theory to data and from large-scale to small scale) and astroparticle physics proceeds bottom-up (from detection of particles to theorizing about their cosmic sources), Falkenburg (TU Dortmund) argues that these disciplines pursue complementary strategies of scientific explanation while aiming at theoretical unification – a fact inadequately captured by contemporary philosophical accounts of scientific explanation and realism. Given this, Falkenburg urges the philosophical community to pay greater attention to astroparticle physics and the way in which it contributes to the empirical basis of cosmology.

  • Hudson, R. G. (2007). Annual modulation experiments, galactic models and WIMPs. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 38(1), 97–119. https://doi.org/10.1016/j.shpsb.2006.05.002.

    • Groups studying WIMPs have generated apparently incompatible data. Hudson (University of Saskatchewan) argues that these data are only incompatible given certain ancillary assumptions involved in data processing, and that we can reconcile the discordant results into an empirically adequate model à la van Fraassen (though we cannot be realists about this model).

  • Hudson, R. G. (2009). The methodological strategy of robustness in the context of experimental WIMP research. Foundations of Physics, 39(2), 174–193. https://doi.org/10.1007/s10701-009-9271-3.

    • Although central to the methodologies of sciences like psychology, robustness is not valued as highly among astroparticle physicists, who often pursue alternative strategies such as “model-independence” in assuring the reliability of their results. Hudson (University of Saskatchewan) contends that in these experimental contexts, robustness may be pragmatically fruitful (it may give us multiple lines of support to fall back on in response to countervailing evidence) while adding no epistemic value.

  • Hudson, R. G. (2013). Dark matter and dark energy. In R. Hudson, Seeing Things: The Philosophy of Reliable Observation. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199303281.001.0001.

    • Using 2006 observations of the Bullet Cluster and mid- to late-1990s observations of Type Ia supernovae as his case studies, Hudson (University of Saskatchewan) argues that robustness reasoning does not play a significant justificatory role in astrophysical theorizing about dark matter or dark energy. Instead, Hudson contends that astrophysicists in these contexts employ the epistemically meritorious methodological strategy of “targeted testing”, wherein multiple techniques are used to address an observational question (à la robustness) but where alternate techniques are aimed at a specific “strategic goal”. For Hudson, mere convergence of results should not be considered epistemically significant in the absence of this targeted approach, despite how some astrophysicists have reflectively justified their conclusions.

  • Meskhidze, H. (2021). Can machine learning provide understanding? How cosmologists use machine learning to understand observations of the universe. Erkenntnis. https://doi.org/10.1007/s10670-021-00434-5.

    • Can cosmological “black-box” machine learning algorithms provide genuine scientific understanding? Meskhidze (UC Irvine) distinguishes between black-boxes themselves and black-boxing as a methodology – what she calls the “method of ignoration” – and argues that machine learning algorithms can deliver scientific understanding when they are used as part of this “method of ignoration” to investigate emergent statistical relations in the simulations within which they are employed. More broadly, Meskhidze contends that the epistemic value of machine learning algorithms is heavily context-dependent.

  • Salmon, W. C. (1998). Quasars, causality, and geometry: A scientific controversy that did not occur. In W. Salmon, Causality and Explanation. Oxford University Press. https://doi.org/10.1093/0195108647.003.0026.

    • Astrophysicists have argued on the basis of a “causal argument” that the rapid variability in the brightness of quasars requires that their sources be extremely compact. Salmon (d. 2001, form. University of Pittsburgh) identifies the “cΔt size criterion” – according to which the region of brightness-variation cannot be larger than the distance light travels in its time of variation – as a crucial premise in this causal argument. Salmon claims that scientists have treated this criterion (or at least have often appeared to treat it) as a law of nature derived from special relativity, but that in fact it is “egregiously fallacious”. If the criterion has any use at all, it is as a plausibility principle for fixing Bayesian priors when attempting to construct quasar models, and not as a physical requirement which such models must satisfy. (A short worked illustration of the criterion appears at the end of this section.)

  • Shapere, D. (1982). The concept of observation in science and philosophy. Philosophy of Science, 49(4), 485–525. https://doi.org/10.1086/289075.

    • A classic and wide-ranging discussion of observation and inference in science which uses the detection of solar neutrinos as its primary case study. Shapere (d. 2016, form. Wake Forest University) contends that philosophical skepticism regarding the use of the term “observation” in astrophysics (for instance, in the claim that solar neutrinos allow us to “observe” the sun’s interior) and other domains is unwarranted, especially since even ordinary and uncontroversial cases of observation involve inference and filtering through one’s beliefs and background context.

  • Valore, P., Dainotti, M. G., & Kopczyński, O. (2020). Ontological categorizations and selection biases in cosmology: The case of extra galactic objects. Foundations of Science. https://doi.org/10.1007/s10699-020-09699-5.

    • Using Gamma Ray Bursts (GRBs) as a case study, the authors argue that philosophical analysis of ontological categorizations in astrophysics can help illuminate the limits and distortions of our scientific methods, as well as the theoretical and metaphysical presuppositions which undergird them and our understanding of reality as a whole.

  • Weinstein, G. (2021). Coincidence and reproducibility in the EHT black hole experiment. Studies in History and Philosophy of Science Part A, 85, 63–78. https://doi.org/10.1016/j.shpsa.2020.09.007.

    • Weinstein (University of Haifa) analyzes the Event Horizon Telescope (EHT) black hole experiment in light of philosophical themes from Ian Hacking, Nancy Cartwright, and Peter Galison. The author argues that EHT scientists employed an “argument from coincidence” in order to establish trust in their results, but that this method is problematic when used for this purpose.

  • Wilson, K. (2021). The case of the missing satellites. Synthese, 198(S21), 1–21. https://doi.org/10.1007/s11229-017-1509-6.

    • Wilson (University of Melbourne) provides an overview of the missing satellites problem in galactic astrophysics and analyzes how researchers have attempted to solve the problem. According to Wilson, these researchers have “black-boxed” their simulations by treating them as self-contained worlds in which simulated phenomena are epistemically significant, and they have blended these simulated results with real-world observations in generating their solution to the problem. This process of blending can make simulated worlds not merely possible, but plausible.

For further articles relevant to this category, see Boyd 2018, Curiel 2019, De Baerdemaeker and Boyd 2020, Gueguen 2020, Gueguen 2021, Massimi 2018, and Meskhidze 2017.
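As a concrete illustration of the reasoning Salmon scrutinizes, the following sketch works through the cΔt size criterion with a hypothetical one-hour variability timescale; the numbers are illustrative only and are not drawn from Salmon’s paper or from any particular quasar observation.

```python
# Illustrative arithmetic for the c*delta_t size criterion (hypothetical numbers,
# not from Salmon 1998): a source whose brightness varies on timescale delta_t is
# taken to be no larger than the distance light travels in that time.

c = 2.998e8        # speed of light, m/s
AU = 1.496e11      # astronomical unit, m

delta_t = 3600.0   # assumed variability timescale: one hour, in seconds
R_max = c * delta_t

print(f"Implied size bound: {R_max:.2e} m (~{R_max / AU:.1f} AU)")
# ~1.1e12 m, i.e. roughly 7 AU -- which is why rapid variability is read as evidence
# that quasar engines are extremely compact. Salmon's point is that this bound works
# at best as a plausibility constraint on quasar models, not as a law of nature.
```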

3 Models and Simulations

  • Anderl, S. (2018). Simplicity and simplification in astrophysical modeling. Philosophy of Science, 85(5), 819–831. https://doi.org/10.1086/699696.

    • Should astrophysical models strive to be “complete” (i.e., to capture all the details of the available data), or simple? Anderl (Frankfurter Allgemeine Zeitung; Institut de Planétologie et d’Astrophysique de Grenoble) argues that in many cases, simplicity is a valuable representational ideal because simple models facilitate (1) faster, more comprehensive exploration of the parameter space, and (2) internal validation of a model and the concomitant use of “physical intuition” which is so important for good model building.

  • Bailer-Jones, D. M. (2000). Modelling extended extragalactic radio sources. Studies in History and Philosophy of Modern Physics, 31(1), 49–74. https://doi.org/10.1016/S1355-2198(99)00028-3.

    • This article discusses practical and epistemological issues associated with scientific modeling of novel phenomena, using extended extragalactic radio sources (EERSs) as a case study. Bailer-Jones (d. 2006, form. University of Heidelberg) argues that models are ways of representing the causal mechanisms behind poorly understood phenomena (“representation [caus.mech.]”), and that they also serve as conventional means of representing the unity of explanations of such mechanisms (“representation [conv.]”). Although models are “a central form of knowledge about empirical phenomena” (69), they can rarely be taken to constitute a definitive statement of what the world is really like; their epistemological status is thus quite complicated.

  • Boyd, N. M. (2015). Are astrophysical models permanently underdetermined? [Unpublished manuscript]. http://jamesowenweatherall.com/wp-content/uploads/2014/10/Boyd_SoCal_060615.pdf.

    • Against Hacking (1989) and Ruphy (2011), Boyd (Siena College) argues that we ought to be more optimistic about the prospects of breaking underdetermination in representation-driven astrophysical modeling. Using case studies from research into supernovae, dark matter, structure formation, and gamma ray bursts, Boyd articulates a framework according to which models with identifiable distinguishing features can be evaluated separately in light of new empirical evidence. In other words, Boyd argues that the underdetermination of astrophysical models is more often transient than permanent, and that the epistemic status of such models therefore remains significant.

  • Crowther, K., Linnemann, N. S., & Wüthrich, C. (2021). What we cannot learn from analogue experiments. Synthese, 198(S16), 3701–3726. https://doi.org/10.1007/s11229-019-02190-0.

    • Contrary to Dardashti et al. (2017; 2019), Thébault (2019), and Evans and Thébault (2020), Crowther et al. argue that analogue experiments used to investigate inaccessible target phenomena (for instance, fluid “dumb holes” used to investigate astrophysical black holes) are no more confirmatory than analogical arguments – which is to say, hardly confirmatory at all. More specifically, the authors argue that analogue experiments cannot confirm whether a particular inaccessible phenomenon (such as Hawking radiation) actually exists, and they criticize their opponents for unjustifiably assuming the physical adequacy of analogue modelling frameworks, thereby begging the question. Despite this, the authors admit that analogue experiments can be useful scientific tools for exploring the relevant modeling framework and for demonstrating robustness of the phenomena of which they are designed to be analogues.

  • Dardashti, R., Thébault, K. P. Y., & Winsberg, E. (2017). Confirmation via analogue simulation: What dumb holes could tell us about gravity. The British Journal for the Philosophy of Science, 68(1), 55–89. https://doi.org/10.1093/bjps/axv010.

    • Using Hawking radiation as a case study, the authors argue that analogue models of inaccessible astrophysical phenomena can be used to confirm predictions about such phenomena given (1) a robust syntactic isomorphism between the modelling frameworks of the analogue and the target systems, (2) diverse analogue realizations of the phenomena under study, and (3) valid universality arguments.

  • Dardashti, R., Hartmann, S., Thébault, K. P. Y., & Winsberg, E. (2019). Hawking radiation and analogue experiments: A Bayesian analysis. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 67, 1–11. https://doi.org/10.1016/j.shpsb.2019.04.004.

    • Extending Dardashti et al. (2017)’s discussion of universality arguments, the authors provide a quantitative Bayesian model for investigating the inferential structure and confirmatory power of analogue black hole experiments. Their formal model shows how to link evidence about analogue systems to target systems, accounts for the confirmatory relevance of “saturation” (when multiple types of analogues are used to probe the same targets), and shows that the more confident we are about the physics underlying a particular analogue, the less it can teach us about the target system.

  • Evans, P. W., & Thébault, K. P. Y. (2020). On the limits of experimental knowledge. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 378(2177), 20190235. https://doi.org/10.1098/rsta.2019.0235.

    • Using stellar nucleosynthesis and Hawking radiation as case studies, Evans (University of Queensland) and Thébault (University of Bristol) analyze how scientific models and experiments (both conventional and analogue) justify inductive inferences about unmanipulable and/or inaccessible target systems. The paper is framed as a response to inductive skeptics who doubt the possibility of gaining inductive knowledge. The authors argue that scientists can use inductive triangulation – the validation of one mode of inductive reasoning via independent modes of inductive reasoning – to justify claims about unmanipulable and/or inaccessible target systems and to assuage reasonable doubt about inductive knowledge.

  • Field, G. (2021a). Putting theory in its place: The relationship between universality arguments and empirical constraints. The British Journal for the Philosophy of Science. https://doi.org/10.1086/718276.

    • Field (Cambridge University) argues that universality arguments such as those discussed in Dardashti et al. 2017 and 2019 cannot fill the empirical gap between analogue black hole experiments and their target systems unless at least one of the following conditions is met: (1) we know that the micro-physics of the two systems are relevantly similar, or (2) we can empirically access the macro-behavior of the systems. These conditions help clarify the confirmatory status of analogue black hole experiments, while emphasizing the need for empirical evidence in determining this status.

  • Field, G. (2021b). The latest frontier in analogue gravity: New roles for analogue experiments [Unpublished manuscript]. http://philsci-archive.pitt.edu/20365/.

    • In this preprint, Field (Cambridge University) offers an interpretation of the role of analogue black hole experiments which is at odds with conventional interpretations thereof (such as those due to Crowther et al. 2021, Dardashti et al. 2017 & 2019, Evans and Thébault 2020, and Thébault 2019). According to Field, analogue black hole experiments are valuable not only (or primarily) for their ability to confirm the existence or characterize the behavior of some inaccessible target phenomenon (usually Hawking radiation), but also for the way they can be used – and increasingly are being used – to directly detect instances of more general gravitational phenomena (in this case, the “Hawking process”), to explore the intrinsically interesting behavior of the analogue systems themselves, and to investigate the robustness of predicted phenomena which may contribute to a two-way knowledge flow between analogue and target systems. The paper also contains a helpful discussion of the history of analogue black hole experiments, and explains how old experiments may be reinterpreted using the author’s framework.

  • Gueguen, M. (2020). On robustness in cosmological simulations. Philosophy of Science, 87(5), 1197–1208. https://doi.org/10.1086/710839.

    • Scientists use numerical simulations to determine the mass distribution of dark matter halos. Numerical results are normally taken to be confirmatory when they are robust, i.e., when they resist some degree of fluctuation in the values of certain underlying parameters, as typically explored in “convergence studies”. However, Gueguen (Institute of Physics of Rennes 1) argues that robustness analysis in the form of convergence studies fails to exclude numerical artifacts, and that in fact convergence can result from artifacts; we need a better criterion for determining the trustworthiness of our simulations. (A toy illustration of a convergence study appears at the end of this section.)

  • Gueguen, M. (2021). A tension within code comparisons [unpublished manuscript]. http://philsci-archive.pitt.edu/19227/.

    • While convergence studies like those discussed in Gueguen (2020) are meant to test for the “internal robustness” of astrophysical simulations, code comparisons (which look for shared results across different simulation codes) appear to test for “external robustness”. However, Gueguen (Institute of Physics of Rennes 1) argues that the presence of shared results across different astrophysical simulations has little epistemic significance in practice, and that even in principle (with a perfectly constructed ensemble of codes to compare), the requirement that the codes bear on comparable targets is inevitably in tension with the requirement that they differ with respect to their components. Thus, code comparisons cannot help us decide whether to trust a given simulation.

  • Jacquart, M. (2020). Observations, simulations, and reasoning in astrophysics. Philosophy of Science, 87(5), 1209–1220. https://doi.org/10.1086/710544.

    • Using collisional ring galaxies as a case study, Jacquart (University of Cincinnati) argues that computer simulations in astrophysics play three epistemic roles: (1) hypothesis testing (e.g., testing possible explanations for how a galaxy could form a ring-shape), (2) exploring possibility space (e.g., to establish the parameter boundaries in which ring galaxy-formation occurs), and (3) amplifying observations (i.e., using the simulation to develop a context in which to interpret observational data).

  • Jebeile, J. (2017). Computer simulation, experiment, and novelty. International Studies in the Philosophy of Science, 31(4), 379–395. https://doi.org/10.1080/02698595.2019.1565205.

    • Can computer simulations provide genuinely new knowledge? Using the “dark” galaxy called VirgoHI21 as a test case, Jebeile (University of Bern) argues affirmatively that although only concrete experiments can confound scientists and refute theories, simulations can still provide new knowledge qua knowledge obtained for the first time which adds to existing knowledge (this is the “first time” criterion of novelty). Importantly, the ability of simulations to generate new knowledge does not depend on features that they share with experiments.

  • Jebeile, J., & Kennedy, A. G. (2015). Explaining with models: The role of idealizations. International Studies in the Philosophy of Science, 29(4), 383–392. https://doi.org/10.1080/02698595.2015.1195143.

    • On the typical representationalist view of model explanation, idealized models are less explanatory than de-idealized models. Using galactic simulations as a case study, Jebeile (University of Bern) and Kennedy (Florida Atlantic University) contend that de-idealization is not always in itself explanatorily beneficial; sometimes, comparisons between idealized and de-idealized models allow researchers to extract important explanatory information not available in the de-idealized model alone. Furthermore, the authors argue that model explanation ought to be understood not as a product or feature of models, but as a user-dependent activity.

  • Massimi, M. (2018a). Perspectival modeling. Philosophy of Science, 85, 335–359. https://doi.org/10.1086/697745.

    • It is intuitive that using a plurality of models to represent one target system stifles the quest for scientific realism. However, Massimi (University of Edinburgh) argues that the problem of inconsistent models can be solved if we reconceptualize the role of models as representing not actual or fictional states of affairs, but possibilities in a possibility space. If we do so, using and testing a plurality of models can help narrow this space, which is inherently valuable for the realist goal of achieving true or approximately true theories. This article provides an interesting rebuttal to the model anti-realism of Ruphy (2011), and represents a potentially fruitful framework for interpreting inconsistent astrophysical models.

  • Meskhidze, H. (2017). Simulationist’s regress in laboratory astrophysics [unpublished manuscript].

    • Extending the idea of the “experimenter’s regress” from Collins (1985) and of the “simulationist’s regress” from Gelfert (2011), Meskhidze (UC Irvine) argues that the widespread use of modular models and bootstrapping methods in astrophysics renders the field susceptible to irresolvable situations of regress. The paper also includes interesting discussions of internal, external, and construct validity.

  • Reutlinger, A., Hangleiter, D., and Hartmann, S. (2018). Understanding (with) toy models. The British Journal for the Philosophy of Science, 69(4), 1069–1099. https://doi.org/10.1093/bjps/axx005.

    • Can simplified and idealized scientific models (“toy models”) provide genuine understanding? In this paper, the authors divide such models into two types – those which are embedded within an empirically well-confirmed framework theory, and those which are autonomous from any such framework – and argue that the former can provide “how-actually” understanding and the latter “how-possibly” understanding. Given that astrophysical models are sometimes quite simplified and idealized, this article provides a framework for understanding the epistemic role of such models.

  • Ruphy, S. (2011). Limits to modeling: Balancing ambition and outcome in astrophysics and cosmology. Simulation & Gaming: An Interdisciplinary Journal, 42, 177–194. https://doi.org/10.1177/1046878108319640.

    • Ruphy (École normale supérieure – PSL) argues that in galactic astrophysics, there are often numerous empirically adequate submodels available for researchers to choose from, and the choice of a particular submodel at a given stage constrains the range of available submodels at later stages. This renders models path-dependent and contingent. Combined with the plasticity and stability of such models, these features can lead to persistent incompatible model pluralism, which thwarts the goal of accurately representing the world; accordingly, we should be anti-realists about galactic models.

  • Thébault, K. (2019). What can we learn from analogue experiments? In R. Dardashti, R. Dawid, and K. Thébault (Eds.), Why Trust a Theory? Epistemology of Fundamental Physics (pp. 184–201). Cambridge University Press. https://doi.org/10.1017/9781108671224.014.

    • Analogue experiments, for example the use of fluid models to investigate Hawking radiation, can provide us with evidence of the same confirmatory type (and plausibly even of the same confirmatory degree) as conventional experiments. This is because, according to Thébault (University of Bristol), we can externally validate analogue black holes and thus take them to stand in for their astrophysical cousins.

For further articles relevant to this category, see Anderl 2016, Elder 2020, Elder 2021/2, Elder 2022, Meskhidze 2021, Salmon 1998, Suárez 2013, Sundberg 2010, Sundberg 2012, and Wilson 2021.
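To make the notion of a “convergence study” discussed in the Gueguen entries more concrete, here is a minimal toy sketch (not Gueguen’s case study, nor any production simulation code): a quantity is re-computed at increasing numerical resolution and checked for stability. The profile, normalization, and resolutions are assumptions chosen purely for illustration.

```python
import numpy as np

# Toy "convergence study" (illustrative only; not drawn from Gueguen's papers or any
# real cosmological code): recompute a halo-like quantity at increasing resolution
# and check whether successive answers stabilize.

def enclosed_mass(n_shells, r_max=10.0, r_s=1.0):
    """Midpoint-rule estimate of the mass inside r_max for an NFW-shaped density profile."""
    dr = r_max / n_shells
    r = (np.arange(n_shells) + 0.5) * dr             # shell midpoints
    rho = 1.0 / ((r / r_s) * (1.0 + r / r_s) ** 2)   # NFW shape, arbitrary normalization
    return float(np.sum(4.0 * np.pi * r**2 * rho * dr))

previous = None
for n in (100, 1_000, 10_000, 100_000):
    m = enclosed_mass(n)
    change = abs(m - previous) / m if previous is not None else float("nan")
    print(f"resolution {n:>7d}: enclosed mass = {m:.6f}  relative change = {change:.2e}")
    previous = m

# If successive resolutions agree to within some tolerance, the result is declared
# "converged". Gueguen's worry is that such agreement can itself be produced by shared
# numerical artifacts, so convergence alone does not certify trustworthiness.
```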

4 Realism and Antirealism

  • Falkenburg, B. (2012). Pragmatic unification, observation and realism in astroparticle physics. Journal for General Philosophy of Science, 43(2), 327–345. https://doi.org/10.1007/s10838-012-9193-1.

    • This article discusses how the historical and contemporary practices of astroparticle physicists evince a commitment to scientific realism. Falkenburg (TU Dortmund) argues that scientists working in astroparticle physics employ various strategies of pragmatic unification and theories of observation which can only be explained in realist terms, and thus that a commitment to realism is necessary for the coherence of the discipline. See Gava (2019) for a constructive empiricist response to Falkenburg.

  • Gava, A. (2019). Astroparticle physics, a constructive empiricist account. Science & Philosophy, 7(1), 21–40. https://doi.org/10.23756/sp.v7i1.450.

    • A direct response to Falkenburg (2012), Gava (Paraná State University) contests Falkenburg’s claim that the theory and practice of astroparticle physics are unintelligible except from a realist perspective. Instead, Gava argues that astroparticle physicists’ realist-sounding claims can be recast in an antirealist light, without doing injustice to the science itself (similar to arguments made by Bas van Fraassen in relation to other disciplines).

  • Hacking, I. (1982). Experimentation and scientific realism. Philosophical Topics, 13(1), 71–87.

    • This article contains an early formulation of Hacking’s “argument from engineering”, according to which the reality of unobservable entities in experimental physics (and science more generally) is guaranteed by our ability to manipulate the entities’ causal powers in order to generate new phenomena – i.e., to interfere with nature. Hacking (University of Toronto, emeritus) elaborates on this argument in his classic book, Representing and Intervening (1983), and uses it to advocate explicitly for antirealism about astrophysical entities in his (1989).

  • Hacking, I. (1989). Extragalactic reality: The case of gravitational lensing. Philosophy of Science, 56(4), 555–581. https://doi.org/10.1086/289514.

    • The locus classicus for contemporary astrophysical antirealism. Despite being quite confident that gravitational lens systems exist in certain regions of the sky, Hacking (University of Toronto, emeritus) argues that our inability to manipulate those systems or observe them directly, combined with the fact that they are usually explained using different, incompatible, and literally false models, shows that we can only be constructive empiricists – rather than realists – about them.

  • Martens, N. C. M. (2022). Dark matter realism. Foundations of Physics, 52(1), 16. https://doi.org/10.1007/s10701-021-00524-y.

    • Given the current lack of empirical evidence regarding the nature of dark matter, Martens (University of Bonn) argues that we ought to be anti-realists about it, at least for now. He advocates for a form of “semantic” anti-realism in light of the thinness and vacuousness of the concept of dark matter, but leaves open the possibility that further discoveries will thicken the concept and thereby discredit his anti-realist stance.

  • Rockmann, J. (1998). Gravitational lensing and Hacking’s extragalactic irreality. International Studies in the Philosophy of Science, 12(2), 151–164. https://doi.org/10.1080/02698599808573589.

    • In this critical response to Hacking’s astrophysical antirealism, Rockmann (Deutsche Lufthansa AG) offers a realist interpretation of gravitational lenses which is grounded in their observability, in astrophysical common cause arguments, and in “home truths”.

  • Ruphy, S. (2010). Are stellar kinds natural kinds? A challenging newcomer in the monism/pluralism and realism/antirealism debates. Philosophy of Science, 77(5), 1109–1120. https://doi.org/10.1086/656544.

    • Breaking new ground in the debate between natural kind monists/pluralists and realists/antirealists, Ruphy (École normale supérieure – PSL) argues that monism and realism about stellar kinds are both untenable. Furthermore, essentialism (the view that members of natural kinds share essential properties) and structuralism (the view which defines kind membership in terms of structural properties) can come apart, despite usually being presented as a package deal.

  • Sandell, M. (2010). Astronomy and experimentation. Techne, 14(3), 252–269. https://doi.org/10.5840/techne201014325.

    • Sandell (Discover Hawaii Science) argues that since Ian Hacking’s experimental realism requires that unobservables be used in the production of “real” (as opposed to artefactual) experimental data, and since real experimental data are produced by something extra-instrumental, Hacking needs independent justification for realism in order for his own version of realism to work. Furthermore, even if Hacking’s view were correct, astronomy would still count as an experimental science because astronomers do manipulate the causal powers of the objects they study.

  • Shapere, D. (1993). Astronomy and antirealism. Philosophy of Science, 60(1), 134–150. https://doi.org/10.1086/289722.

    • A wide-ranging critique of Ian Hacking’s experimental realism and conception of science, Shapere (d. 2016, form. Wake Forest University) claims that astronomy is as much of a science as any other, and that Hacking’s antirealism depends on an overly static understanding of science. Furthermore, Shapere argues that Hacking’s 1989 article on gravitational lenses cherry picks its data, interprets these data too narrowly, and falsely concludes that the use of incompatible models renders realistic treatment of astrophysical phenomena impossible.

  • Suárez, M. (2013). Fictions, conditionals, and stellar astrophysics. International Studies in the Philosophy of Science, 27(3), 235–252. https://doi.org/10.1080/02698595.2013.825499.

    • Using models of stellar structure as a case study, Suárez (Complutense University of Madrid) contends that the main assumptions of such models are best understood as useful fictions, but that scientists can nevertheless maintain a realist agenda by (1) treating such assumptions as background knowledge required for the generation of “fictional conditionals”, or (2) treating such assumptions as components of the antecedents of these conditionals and employing a non-truth-functional semantics for them.

For further articles relevant to this category, see Anderl 2016, Boyd 2015, Hudson 2007, Massimi 2018, and Ruphy 2011.

5 Theories and Testing

  • Boyd, N. M. (2018). Scientific Progress at the Boundaries of Experience. Ph.D. dissertation. University of Pittsburgh. http://d-scholarship.pitt.edu/id/eprint/33843.

    • In this PhD dissertation, Boyd (Siena College) articulates a new empiricist philosophy of science and a non-internalist conception of scientific progress according to which the accumulating corpus of empirical data available to us (despite its theory-ladenness) constrains viable theories and constitutes growing knowledge about the world. Although Boyd offers a general account of scientific progress, her discussion is largely furnished with examples from the observational sciences, especially astrophysics and cosmology. Case studies include Arecibo telescope data, Babylonian astronomical tables, dark energy, and cosmic inflation. For the published version of the dissertation’s third chapter, see Boyd, N. (2018). Evidence enriched. Philosophy of Science 85, 403–421. https://doi.org/10.1086/697747.

  • De Baerdemaeker, S., & Boyd, N. M. (2020). Jump ship, shift gears, or just keep on chugging: Assessing the responses to tensions between theory and evidence in contemporary cosmology. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 72, 205–216. https://doi.org/10.1016/j.shpsb.2020.08.002.

    • When comparing predictions from the ΛCDM model with high-resolution astronomical observations, we face three dark matter-related “small-scale challenges”: the Missing Satellites problem, the Too Big to Fail problem, and the Cusp/Core problem. De Baerdemaeker (Stockholm University) and Boyd (Siena College) note three potential responses scientists can take to these problems, namely to jump ship (i.e., abandon ΛCDM for something like MOND), to shift gears (i.e., modify ΛCDM with something like warm dark matter), or to keep on chugging (i.e., focus on improving ΛCDM simulations by incorporating known baryonic physics). Based on the heuristics of epistemic conservatism and individuating causal factors, the authors argue that scientists ought to keep on chugging, and they conclude by outlining potential future scenarios in dark matter research.

  • Elder, J. (2023). Black hole coalescence: Observation and model validation. In L. Patton and E. Curiel (Eds.), Working Towards Solutions in Fluid Dynamics and Astrophysics: What the Equations Don't Say. Springer. ISBN-13: 9783031256851.

    • The models of binary black hole mergers used by researchers at the LIGO-Virgo collaboration are vital for connecting high-level gravitational theory with the observational data produced by the instruments, thereby granting empirical access to gravitational waves and their sources. However, recalling Collins’ (1985) “experimenter’s regress”, Elder (Black Hole Initiative) suggests that these models pose an epistemic circularity problem insofar as they are used to validate the observations, while the accuracy of the observations depends upon the validity of the models. LIGO-Virgo scientists attempt to circumvent this circularity using a variety of tests, including the “residuals test” and the “IMR consistency test”.

  • Horvath, J. E. (2009). Dark matter, dark energy and modern cosmology: The case for a Kuhnian paradigm shift. Cosmos and History: The Journal of Natural and Social Philosophy, 5(2), 287–303. https://www.cosmosandhistory.org/index.php/journal/article/view/161.

    • Horvath (University of São Paulo) argues that current debates over dark matter and dark energy are marked by features characteristic of pre-paradigm shift science, including attempts to isolate and characterize the problematic explanandum, the flourishing of philosophical/methodological analysis, the accelerating proliferation of proposed alternatives, and a sense of despair and discomfort within the community.

  • Kosso, P. (2013). Evidence of dark matter, and the interpretive role of general relativity. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 44(2), 143–147. https://doi.org/10.1016/j.shpsb.2012.11.005.

    • Kosso (Northern Arizona University) offers a lucid and accessible discussion of the theory and history of dark matter, with special attention paid to the question of whether it is possible to detect dark matter independently of general relativity (GR). Using the Bullet Cluster as his primary case study, Kosso contends that the part of GR employed in detecting dark matter through gravitational lensing (namely, the Einstein Equivalence Principle) is common to all metric theories of gravity. Given that all viable theories of gravity are metric, any such theory can be employed when investigating dark matter lenses – the specifics of GR or any other theory are only required to determine the amount of dark matter present. Thus, contrary to the “dark matter double bind” proposed by Vanderburgh (2003; 2005), Kosso claims that dark matter can be detected without assuming the truth of GR. See Sus (2014) and Vanderburgh (2014b) for responses.

  • Martens, N. C. M., & Lehmkuhl, D. (2020a). Dark matter = modified gravity? Scrutinising the spacetime–matter distinction through the modified gravity/dark matter lens. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 72, 237–250. https://doi.org/10.1016/j.shpsb.2020.08.003.

    • Most proposed solutions to the dark matter problem are either matter-based (e.g., WIMPs) or gravity-based (e.g., MOND). These types of solutions are typically represented as conceptually distinct, owing to a deeper distinction between matter and spacetime. In this paper, Martens and Lehmkuhl (both University of Bonn) argue that a strict matter-spacetime distinction is untenable, and likewise for the distinction between matter- vs. gravity-based solutions to the dark matter problem. Their analysis draws heavily on the recent literature on superfluid dark matter, the scalar field φ of which they interpret both as a kind of dark matter and as a modification of gravity. This paper constitutes the first part of a pair of articles, the second being Martens and Lehmkuhl (2020b).

  • Martens, N. C. M., & Lehmkuhl, D. (2020b). Cartography of the space of theories: An interpretational chart for fields that are both (dark) matter and spacetime. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 72, 217–236. https://doi.org/10.1016/j.shpsb.2020.08.004.

    • Following up from Martens and Lehmkuhl (2020a), the authors advance a “cartographic” taxonomy of interpretations for “Janus-faced” theories like superfluid dark matter (according to which a single scalar field is both a dark matter field and a modification of gravity, in certain contexts). Their taxonomy contains three classes of interpretations with nine subclasses, and they argue that four such subclasses remain viable ways of understanding superfluid dark matter. See p. 231 for their chart of interpretations.

  • Matarese, V. (2022). ‘Oumuamua and meta-empirical confirmation. Foundations of Physics, 52(4). https://doi.org/10.1007/s10701-022-00587-5.

    • Astrophysicist Abraham Loeb has suggested that the interstellar interloper 1I/2017 ‘Oumuamua is a piece of alien technology. To empirically confirm or confute his hypothesis would require significant expenditure of financial and intellectual resources – for instance by sending a probe to ‘Oumuamua, as proposed by Project Lyra. How can we be sure that Loeb’s hypothesis is viable and thus worth pursuing at all? Matarese (University of Bern) argues that we should use a meta-empirical framework to answer this question, one which provides information about the capacity of Loeb’s hypothesis to adequately represent potential future empirical data. Furthermore, Matarese contends that meta-empirical confirmation does not violate the empiricist spirit since it can be fruitfully applied even in empirically grounded research contexts such as this one.

  • Patton, L. (2020). Expanding theory testing in general relativity: LIGO and parametrized theories. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 69, 142–153. https://doi.org/10.1016/j.shpsb.2020.01.001.

    • Using LIGO as a case study, this paper explains how parametrized theories – specifically the parametrized post-Einsteinian (ppE) framework – can allow for more and better tests of General Relativity (GR). Patton (Virginia Tech) argues that formal reasoning on the theoretical structure of GR can broaden its empirical reach by removing barriers to empirical testing that have been encoded into the theory’s formal structure (and into existing testing frameworks, such as those used in the creation and interpretation of LIGO results).

  • Sus, A. (2014). Dark matter, the Equivalence Principle and modified gravity. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 45, 66–71. https://doi.org/10.1016/j.shpsb.2013.12.005.

    • In this critical response to Kosso (2013), Sus (University of Valladolid) argues that although all viable alternative theories of gravity satisfy the Einstein Equivalence Principle (EEP), Kosso is wrong to think that gravitational lensing (the primary source of evidence for dark matter in the case of Bullet Cluster observations) is a direct consequence of the EEP. Specifically, Sus claims that different metric theories of gravity (including MONDian alternatives like TeVeS) may countenance different conclusions concerning the location and physical properties of the lensing matter. Sus also accuses Kosso of being unclear about whether he takes his argument to support the very basic conclusion that gravitational lensing provides evidence for matter which cannot be luminously detected, or for the more controversial claim that this matter is non-baryonic. See also Sus’s interesting discussion of direct vs. indirect evidence.

  • Vanderburgh, W. L. (2001). Dark Matters in Contemporary Astrophysics: A Case Study in Theory Choice and Evidential Reasoning. Ph.D. Dissertation. Western University. https://philpapers.org/rec/VANDMI-4.

    • Vanderburgh’s (CSU San Bernardino) PhD dissertation covers the foundations of the dynamical dark matter problem in twentieth century astrophysics, raises the “dark matter double bind” as an in-principle difficulty we must face when solving the problem, and attempts to identify and evaluate patterns of inference involved in evidential arguments for candidate solutions thereof.

  • Vanderburgh, W. L. (2003). The dark matter double bind: Astrophysical aspects of the evidential warrant for general relativity. Philosophy of Science, 70(4), 812–832. https://doi.org/10.1086/378866.

    • Is our confidence in the applicability of general relativity (GR) to galactic and supra-galactic scales warranted, given currently available tests of GR? Vanderburgh (CSU San Bernardino) answers in the negative, noting that in order to evaluate the empirical adequacy of competing theories of gravitation at galactic scales, the mass distribution of test galaxies must first be known; however, because of the well-known discrepancy between dynamical mass and luminosity mass, we cannot feel confident in our measurements of mass distribution. In order to infer the distribution, we must assume a gravitational law (whether GR, MOND, Weyl gravity, or something else), but this is illegitimate given that the validity of our gravitational laws is precisely what is being tested. This is the “dark matter double bind”.

  • Vanderburgh, W. L. (2005). The methodological value of coincidences: Further remarks on dark matter and the astrophysical warrant for general relativity. Philosophy of Science, 72(5), 1324–1335. https://doi.org/10.1086/508971.

    • In this follow-up to his (2003), Vanderburgh (CSU San Bernardino) addresses the question of whether apparent agreement between four ways of measuring the masses of galaxies and larger structures – namely through rotation curves, the Virial Theorem, observed X-ray emissions, and gravitational lensing – gives us strong evidential warrant for the applicability of general relativity at those scales. Vanderburgh contends that these measurements do lend support to GR, at least compared to its rivals, but that this support is weak and defeasible.

  • Vanderburgh, W. L. (2014a). Quantitative parsimony, explanatory power and dark matter. Journal for General Philosophy of Science, 45(2), 317–327. https://doi.org/10.1007/s10838-014-9261-9.

    • Alan Baker (2003) has argued that quantitative parsimony (the principle that theories which posit fewer entities are superior) is legitimately virtuous since quantitatively parsimonious theories have greater explanatory power. Using dark matter as a case study, Vanderburgh (CSU San Bernardino) challenges Baker’s account, and argues more generally that we ought to avoid artificially separating quantitative parsimony from other varieties of parsimony in actual theory choice situations.

  • Vanderburgh, W. L. (2014b). On the interpretive role of theories of gravity and ‘ugly’ solutions to the total evidence for dark matter. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 47, 62–67. https://doi.org/10.1016/j.shpsb.2014.05.008.

    • Peter Kosso (2013) has argued that evidence from observations of the Bullet Cluster provides evidential warrant for the equivalence principle in a way that avoids Vanderburgh’s (2001; 2003) “dark matter double bind”. Vanderburgh (CSU San Bernardino) responds that even if this is the case, we are still unable to perform the kind of precision tests of general relativity that would confirm its applicability to galactic and supra-galactic scales. Vanderburgh also countenances the possibility that we cannot rule out “ugly” solutions to the dark matter problem which incorporate both dark matter and modified theories of gravity.

6 SSK and Social Issues

  • Collins, H. M. (1985). Changing Order: Replication and Induction in Scientific Practice. Sage Publications. ISBN-10: 0226113760.

    • A classic text in the sociology of scientific knowledge which introduces the notion of the “experimenter’s regress” using Joseph Weber’s attempts to detect gravitational waves as a case study. According to Collins (Cardiff University), it often happens in frontier science that the best or only check on a result is the proper functioning of the apparatus used to generate it, while the best or only check on the proper functioning of the apparatus is the result; this is the experimenter’s regress. For Collins, there is usually no rational way out of the regress – instead, scientists resort to heuristics, rhetoric, compulsion, etc. For those interested specifically in the Weber case study and regress, see Chapter 4 (pp. 79–111).

  • Collins, H. M. (2004). Gravity’s Shadow: The Search for Gravitational Waves. University of Chicago Press. ISBN: 9780226113791.

    • The authoritative social history of gravitational waves from the 1960s to 2004, Collins’ (Cardiff University) book touches on key issues of scientific knowledge, expertise, and consensus. It serves as an important resource for philosophers interested in gravitational waves who seek to understand the scientific process at a more concrete level. For Collins’ other books on the science and sociology of gravitational waves, see Collins, H. (2010). Gravity’s Ghost: Scientific Discovery in the Twenty-first Century. University of Chicago Press; and Collins, H. (2017). Gravity’s Kiss: The Detection of Gravitational Waves. MIT Press.

  • Curiel, E. (2019). The many definitions of a black hole. Nature Astronomy, 3(1), 27–34. https://doi.org/10.1038/s41550-018-0602-1.

    • In this article, Curiel (Munich Center for Mathematical Philosophy; Black Hole Initiative) discusses the phenomenon whereby different communities of physicists define black holes in distinct and conflicting ways. The author presents a helpful sample of such definitions in three boxes, the first of which focuses on those offered by astrophysicists of various specializations. Given that physicists across different fields seek to collaborate on questions of mutual interest, Curiel recommends that each investigative team fix an explicit list of properties and phenomena which they take to be characteristic of black holes in order to ensure shared understanding and to avoid miscommunication.

  • English, J. (2017). Canvas and cosmos: Visual art techniques applied to astronomy data. International Journal of Modern Physics D, 26(04), 1730010. https://doi.org/10.1142/S0218271817300105.

    • An extensive overview of the production as well as cultural, aesthetic, and scientific value of astronomical outreach images (such as those created by the Hubble Heritage Team). English (University of Manitoba) contends that such images, which are created by applying techniques from visual art to representations of data, contribute meaningfully to both the “culture of science” and “culture of art”, retaining scientific significance despite being crafted to satisfy aesthetic ends (as evidenced by their widespread incorporation into both research papers and popular media).

  • Greenberg, J. (2004). Creating the ‘Pillars’: Multiple meanings of a Hubble image. Public Understanding of Science, 13, 83–95. https://doi.org/10.1177/0963662504042693.

    • Using the public reception and interpretation of the HST’s original (1995) image of the Eagle Nebula as a case study, Greenberg (A. P. Sloan Foundation) argues that when scientific images are black-boxed (presented as pure, unquestionable scientific objects, usually by the media and/or by scientists’ press releases), it becomes easier for lay-people to augment them with additional, “non-scientific” meanings which build on their supposed status as unadulterated representations of reality. This is a helpful article for those interested in the public perception of astronomy and the sociology of astronomical knowledge.

  • Heidler, R. (2017). Epistemic cultures in conflict: The case of astronomy and high energy physics. Minerva, 55(3), 249–277. https://doi.org/10.1007/s11024-017-9315-3.

    • The discovery of dark energy suddenly increased the mutual dependency between astronomy and high energy physics, such that physicists had to rely on astronomical instruments and data to answer their own questions (functional dependency), while both disciplines integrated and coordinated their scientific goals (strategic dependency). Heidler (German Research Foundation) argues that these dependencies fostered an epistemic conflict between the disciplines, leading to transgression of social and cognitive boundaries, turbulence in epistemic practices, and self-reflection on the scientists’ identities and the moral economy of which they are a part.

  • Jebeile, J. (2018). Collaborative practice, epistemic dependence and opacity: The case of space telescope data processing. Philosophia Scientiæ. Travaux d’histoire et de Philosophie Des Sciences, 22(2), 59–78. https://doi.org/10.4000/philosophiascientiae.1483.

    • Employing Susann Wagenknecht’s (2014) distinction between opaque and translucent epistemic dependence, Jebeile (University of Bern) analyzes the social epistemological relationships between collaborators working with the Herschel Space Observatory. Jebeile identifies cases of opaque epistemic dependence therein, and argues that sources of opacity include not only lack of expertise, but also the non-disclosure of data, failure to understand relevant instrumental processes, and epistemic inaccessibility of numerical calculations.

  • Kennefick, D. (2000). Star crushing: Theoretical practice and the theoretician’s regress. Social Studies of Science, 30(1), 5–40. https://doi.org/10.1177/030631200030001001.

    • An important study in the sociology of astrophysical knowledge and simulations which uses the 1990s controversy over “star-crushing” as a case study. Kennefick (University of Arkansas) introduces the notion of the “theoretician’s regress” – a play on Harry Collins’ “experimenter’s regress” (Collins 1985) – to explain why the controversy over the star-crushing effect could not be resolved by strictly “scientific” debate.

  • McCray, W. P. (2000). Large telescopes and the moral economy of recent astronomy. Social Studies of Science, 30(5), 685–711. https://doi.org/10.1177/030631200030005002.

    • McCray (UC Santa Barbara) contends that the Gemini 8-Meter Telescopes Project illustrates a tension in American optical astronomy between the “haves” (those with access to telescopes through their institutions) and the “have-nots” (those who must compete for time at federally funded national observatories), while also showing how non-scientific political concerns play into the debate around investment in big science.

  • Messeri, L. R. (2010). The problem with Pluto: Conflicting cosmologies and the classification of planets. Social Studies of Science, 40(2), 187–214. https://doi.org/10.1177/0306312709347809.

    • This paper offers a historical and sociological account of the “demotion” of Pluto by the IAU in 2006. Messeri (Yale University) focuses on the relationships between scientific and cultural cosmologies, and the ways that the public influenced the debate surrounding the definition of “planet”. Messeri argues that the IAU’s definition of “planet” privileged one cosmology over others, thereby fracturing discourse about planets and Pluto in particular.

  • Metzger, P. T., Grundy, W. M., Sykes, M. V., Stern, A., Bell, J. F., Detelich, C. E., Runyon, K., & Summers, M. (2022). Moons are planets: Scientific usefulness versus cultural teleology in the taxonomy of planetary science. Icarus, 374, 114768. https://doi.org/10.1016/j.icarus.2021.114768.

    • In this lengthy and detailed article, Metzger et al. contend that the IAU’s 2006 definition of “planet” gave too much credence to folk taxonomy while ignoring the importance of scientific taxonomy. They argue that a purely geophysical definition of “planet”, according to which a planet is an object with a certain amount of geological complexity, has stronger historical and pragmatic grounding than the current dynamical definition (and that contemporary planetary scientists already use the geophysical definition anyway). As such, the authors claim that we ought to revise our educational materials for the sake of an improved scientific and cultural planetary taxonomy.

  • Sovacool, B. (2005). Falsification and demarcation in astronomy and cosmology. Bulletin of Science, Technology & Society, 25(1), 53–62. https://doi.org/10.1177/0270467604270151.

    • In this sociological article, Sovacool (University of Sussex; Aarhus University) analyzes how and to what extent contemporary astronomers and cosmologists rely on the ideas of Karl Popper to resolve crises related to methodology, legitimacy, and testability. Sovacool concludes that Popper’s ideas play an important implicit and explicit role in such crises, and he argues more broadly for the relevance of philosophy to scientific practice.

  • Sundberg, M. (2010). Cultures of simulations vs. cultures of calculations? The development of simulation practices in meteorology and astrophysics. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 41, 273–281. https://doi.org/10.1016/j.shpsb.2010.07.004.

    • Sundberg (Stockholm University) uses “modern” and “postmodern” “computer cultures” as a lens for analyzing contemporary numerical simulation practices in meteorology and astrophysics. Sundberg argues that, by and large, there seems to be a shift occurring towards a more “postmodern” culture of simulations which emphasizes the value of surface-level research using black-box simulations, entertaining visualizations of simulation results, and the playful exploration of simulation codes as a learning tool.

  • Sundberg, M. (2012). Creating convincing simulations in astrophysics. Science, Technology, & Human Values, 37(1), 64–87. https://doi.org/10.1177/0162243910385417.

    • Sundberg (Stockholm University) examines the methods that astrophysicists use to convince themselves and others of the reliability and credibility of their simulations, especially when those simulations deliver outputs which are uncertain or difficult to interpret. In the process, Sundberg analyzes the distinction between “numerical” and “real” effects, arguing that the two cannot be distinguished simply on the basis of what they derive from.

For another article relevant to this category, see Hudson 2007.

7 Typicality and Extra-Terrestrials

  • Ćirković, M. M. (2006). Too early? On the apparent conflict of astrobiology and cosmology. Biology and Philosophy, 21(3), 369–379. https://doi.org/10.1007/s10539-005-8305-2.

    • Olum’s problem, a generalized form of Fermi’s paradox, relies on a dehistoricized understanding of the universe. In opposition to this dehistoricized stance, Ćirković (Astronomical Observatory of Belgrade; Future of Humanity Institute) argues that the universe becomes more hospitable to life as time passes, meaning that it is too early in cosmological history for the absence of detected extraterrestrial life to constitute a genuine paradox.

  • Lacki, B. (2021). The noonday argument: Fine-graining, indexicals, and the nature of Copernican reasoning [unpublished manuscript]. arXiv:2106.07738v1 [physics.hist-ph].

    • Lacki (UC Berkeley) offers a new theory of typicality called Fine Graining with Auxiliary Indexicals (FGAI). He argues that it avoids the paradoxes (such as the Doomsday Argument) faced by other theories of typicality by fine-graining our macrotheories with microhypotheses and by separating indexical from physical facts.

  • Lewis, G. F. & Barnes, L. A. (2021). The trouble with “puddle thinking”: A user’s guide to the Anthropic Principle. Journal & Proceedings of the Royal Society of New South Wales, 154(1), 6–11. ISSN 0035–9173/21/010006–06.

    • In this short, popular introduction to the anthropic principle and to Douglas Adams’ notion of “puddle thinking”, Lewis (University of Sydney) and Barnes (Western Sydney University) argue that the problem of fine-tuning does not concern the question of how we find ourselves in a universe with conditions amenable to human life, but the question of why a universe with such conditions exists at all. This question is, they concede, a necessarily philosophical one.

  • Satta, M. (2021). Evil twins and the multiverse: Distinguishing the world of difference between epistemic and physical possibility. Synthese, 198(2), 1153–1160. https://doi.org/10.1007/s11229-019-02092-1.

    • Brian Greene and Max Tegmark have claimed that if the universe is infinite and matter is roughly evenly distributed within it, then every possible material arrangement of particles must exist in an infinite number of instantiations. Satta (Wayne State University) argues that Greene and Tegmark’s claims rely on a conflation of physical with epistemic possibility, and that they ignore potential macro-level constraints on possibility from psychology, biology, and sociology.

For another article relevant to this category, see Matarese 2022.

8 Dark Matter and MOND

  • Abelson, S.S. (2022). The fate of tensor-vector-scalar modified gravity. Foundations of Physics, 52(31). https://doi.org/10.1007/s10701-022-00545-1.

    • Abelson (Indiana University, Bloomington) reviews the case that the LIGO neutron star merger detection made against TeVeS, long considered the most plausible relativistic extension of MOND. Abelson argues that the physicists’ use of the language of falsification in a strict Popperian sense was unwarranted; instead, Abelson offers an alternative interpretation of the result as a corroboration of the null hypothesis, along the lines of Mayo’s error-statistical account.

  • De Baerdemaeker, S. & Dawid, R. (2022). MOND and meta-empirical theory assessment. Synthese 200(344). https://doi.org/10.1007/s11229-022-03830-8.

    • De Baerdemaeker and Dawid (both Stockholm University) critically examine some of the philosophical arguments that have been offered by defenders of MOND in terms of different views on theory assessment. They argue, first, that on a standard reading of Popper and Lakatos, the arguments fail. Second, they argue that the strongest philosophical defense of MOND takes the form of meta-empirical theory assessment, but that, according to that account as well, the arguments fail to be convincing.

  • Jacquart, M. (2021). ΛCDM and MOND: A debate about models or theory? Studies in History and Philosophy of Science Part A, 89, 226–234. https://doi.org/10.1016/j.shpsa.2021.07.001.

    • Jacquart (University of Cincinnati) extends Massimi’s (2018b) analysis of the debate between proponents of ΛCDM and MOND as one about the challenges of multiscale modeling. Instead of interpreting the debate in terms of theoretical disagreement (as is commonly done), Jacquart advocates for a model-based understanding. According to that interpretation, both rivals are successful within their intended domain of application, but there is a need to be critical of attempts to extend them to new domains. Such extension requires justification that the model accurately represents explanatory dependencies in the target system.

  • Martens, N. C. M., Carretero Sahuquillo, M. A., Scholz, E., Lehmkuhl, D., & Krämer, M. (2022). Integrating dark matter, modified gravity, and the humanities. Studies in History and Philosophy of Science, 91, A1–A5. https://doi.org/10.1016/j.shpsa.2021.08.015.

    • This editorial introduces the aims of a Special Issue on dark matter and MOND. The authors all work on an interdisciplinary research project, The Epistemology of the LHC, of which one sub-project concerns the LHC, dark matter, and gravity. They provide two motivations for interdisciplinary work at the interface between dark matter and modified gravity. The first is to improve communication and reduce the polemics between physicists on either side of the divide. The second is to begin extending the philosophical literature on dark matter, which remains limited despite dark matter being one of the central problems of contemporary fundamental physics. The editorial includes an extensive reference list of philosophical discussions of dark matter and MOND.

  • Massimi, M. (2018b). Three problems about multi-scale modelling in cosmology. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 64, 26–38. https://doi.org/10.1016/j.shpsb.2018.04.002.

    • Massimi (University of Edinburgh) argues that the debate between ΛCDM and MOND in contemporary cosmology can best be understood as a debate about the challenges of multi-scale modeling. Massimi argues for five claims: (i) ΛCDM and MOND work best at different scales, i.e., the macro-scale and the meso-scale, respectively; (ii) both face challenges when modeling across more than one scale; (iii) the downscaling problem for ΛCDM is one of explanatory power, while the upscaling problem is one of consistency with general relativity; (iv) hybrid models, which try to unify the best of ΛCDM and MOND, face a problem of predictive novelty; (v) ultimately, a successful cosmology cannot avoid solving these problems.

  • McGaugh, S. (2015). A tale of two paradigms: The mutual incommensurability of ΛCDM and MOND. Canadian Journal of Physics, 93(2), 250–259. https://doi.org/10.1139/cjp-2014-0203.

    • McGaugh (Case Western Reserve University) reviews the ongoing disagreement between ΛCDM and MOND in terms of a Kuhnian picture of incommensurable paradigms. McGaugh submits that both have significant empirical support, but nonetheless offer inconsistent worldviews. However, McGaugh is concerned about the detectability of dark matter: without a positive dark matter particle detection, there are concerning parallels between the dark matter hypothesis and aether theory.

  • Merritt, D. (2017). Cosmology and convention. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 57, 41–52. https://doi.org/10.1016/j.shpsb.2016.12.002.

    • Merritt (Rochester Institute of Technology) assesses the current concordance model of cosmology against Popper’s definition of conventionalist stratagems. According to Merritt, dark matter and dark energy were ad hoc auxiliary hypotheses introduced to save the concordance model from falsifying evidence. Merritt further argues that the usual convergence arguments offered in support of the concordance model fail. As such, cosmology has, plausibly, entered the phase of what Lakatos called a degenerating problem shift.

  • Merritt, D. (2020). A Philosophical Approach to MOND: Assessing the Milgromian Research Program in Cosmology. Cambridge University Press. https://doi.org/10.1017/9781108610926

    • Merritt (Rochester Institute of Technology) offers a book-length review of the history of MOND, from its initial proposal in the early 1980s to the present. The philosophical framing of the book is in terms of Lakatos’ theory of progressive and degenerative research programs. Merritt argues that the earlier theories of the research program, in particular, were highly successful at predicting novel facts (assessed according to different proposed philosophical accounts of novelty). While the most recent theories are potentially less successful at novel prediction, the overall research program is deemed progressive.

  • Merritt, D. (2021). Feyerabend’s rule and dark matter. Synthese 199, 8921–8942. https://doi.org/10.1007/s11229-021-03188-3.

    • Building on Feyerabend’s work, Merritt (Rochester Institute of Technology) starts from the claim that, under specific circumstances, the lack of an experimental result can refute one theory while confirming another. Merritt applies this to the current concordance model of cosmology, and argues that there are several examples of such refuting negative results, including the failure to detect dark matter particles, the failure to detect primordial dwarf galaxies, and dynamical friction.

  • Milgrom, M. (2020). MOND vs. dark matter in light of historical parallels. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 71, 170–195. https://doi.org/10.1016/j.shpsb.2020.02.004.

    • Milgrom (Weizmann Institute) reviews the case for MOND, from its initial proposal, up until its scientific status today. The paper draws multiple comparisons between the history of MOND and well-known episodes in the history of science, including the Copernican revolution and the development of quantum theory. It also draws parallels between dark matter and aether-theory.

For further articles relevant to this category, see De Baerdemaeker 2021, De Baerdemaeker and Boyd 2020, Horvath 2009, Hudson 2007, 2009, 2013, Kosso 2013, Martens 2022, Martens and Lehmkuhl 2020a,b, Sus 2014, Vanderburgh 2001, 2003, 2005, 2014a,b, and Wilson 2021.