1 Introduction

In (French 2020) it is claimed that the reification of scientific theories can be found both implicitly and explicitly within the philosophy of science. It is implicit in recent discussions of representation, for example, where paintings or drawings typically function as the relevant aesthetic comparators. It is also explicit, firstly in certain interpretations of the so-called Semantic Approach, according to which theories are identified as abstract set-theoretic entities (Giere 2008); and secondly, and famously, within Popper’s ‘Three Worlds’ framework, according to which both theories and artworks inhabit ‘World Three’ (Popper 1972; cf. Thomasson 2004, where artworks are argued to be ‘abstract artefacts’). Such reificationist views have been criticised on the grounds that they are unable to accommodate a detailed understanding of the heuristics of theory development and generate an assortment of problems more generally (French 2020).

These problems can be sidestepped, it is argued, if we adopt an eliminativist stance towards theories (French 2020). Of course, to say that there are no such things as theories does not imply that there are no such things as those entities to which theories are apparently referring—eliminativism towards theories does not imply a similar stance towards unobservable entities.Footnote 1

Nevertheless, it must be recognised that such a stance is not an easy one to take, not least because of the discourse of scientists, philosophers of science and ‘lay’ folk. We talk about theories, their development, their evidential support or lack thereof and we attribute certain qualities to them: we say that quantum mechanics, for example, is ‘elegant’ or ‘difficult to understand’ or ‘empirically successful’ and so on. How can we make sense of such talk if there are no theories? Here we can co-opt a form of ‘truth-maker’ theory previously deployed in the service of a similarly eliminativist account of artworks (Cameron 2008). On this view, a statement can be regarded as true, even if the things it apparently refers to do not exist, because the relevant truth-makers are taken to be something else. So, we can utter the statement, ‘Michelangelo’s statue of David is beautiful’, and regard it as true, whilst denying that there are such things as statues per se, by taking the truth-makers of the statement to be the requisite set of elementary particles arranged in a particular way (obviously there is more to say here, not least about what is meant by ‘arrangement’ in this context). In the case of statements about theories, the relevant truth-makers are taken to be the associated practices undertaken by the scientific community. Thus, the statement that ‘quantum mechanics is empirically successful’ is made true by certain experimental practices, whereas the claim ‘quantum mechanics is elegant’ is made true by certain theoretical practices, involving, for example, the manipulation of symbols on paper or whiteboards and so on (French 2020, 191–202).

It should be noted that this is not to reduce theories to such practices, whether these are theoretical or experimental or some combination of the two. To do so would be to assert that theories are constituted by these practices, at least partially (cf. Cartwright, Shomar and Suárez 1996), raising all the usual mereological concerns with such a constitutive relationship. As in the case of mereology more generally, adopting an eliminativist stance allows its advocate to sidestep such concerns, given the insistence that there are no theories to be constituted or reduced in the first place! Nor does this move amount to replacing reference to theories by reference to practices—as already stated, we can still talk about, make claims about and refer to theories, on the understanding that what makes such talk, claims and referrals true are the relevant practices.

Nevertheless, there are clearly connections to be drawn with what has come to be referred to as ‘philosophy of science in practice’ (see for example, Ankeny et al. 2011). Adherents of such a move have insisted that in large part the philosophy of science has operated in ‘nearly complete isolation from real scientific practice’ (Ankeny et al. 2011) and in response have urged ‘the promotion of conscious, detailed, and systematic study of scientific practice that nevertheless does not dispense with concerns about truth and rationality’ (Ankeny et al. 2011). The question then naturally arises as to the extent to which the approach advocated here and in (French 2020) differs from such urging, in practical terms.Footnote 2 In large part this depends on how the above concerns about truth and rationality are accommodated within practice-based accounts. Indeed, it has been alleged that by not paying sufficient attention to such concerns, such accounts have swung too far away from the core aims of the philosophy of science as standardly conceived (see Dresow 2020, 60).

However, as French has argued (French 2020, 225–239) and as just noted, a realist stance can still be adopted within the theory-eliminativist framework and in this regard it might be viewed as offering a middle way within the philosophy of science, between ‘standard’ and practice-based accounts. Certainly, the inclusion of practices as truth-makers within this framework allows us to accommodate talk—by scientists as well as historians and philosophers of science—about theories and their various features, such as their truth, empirical adequacy, beauty or whatever, without being committed to their existence. In this respect it might be compared to van Fraassen’s ‘constructive empiricism’ in that the latter can accommodate scientists’ discourse about theories without the contortions that earlier forms of empiricism had to endure.Footnote 3 Similarly, by shifting focus to the practices, the approach being explored here may not seem to be that different from practice-based accounts but, crucially, it allows us to retain many of the devices and moves employed within the philosophy of science, appropriately understood, as we shall see.

What this all amounts to, then, is that when comparing such approaches we should be appropriately sensitive to how we frame such comparisons ‘in practical terms’. When it comes to the initial description of a particular scientific episode, there may seem to be little difference in these terms between this form of eliminativism and practice-based accounts. However, insofar as the former allows us to retain theory-talk and the kinds of meta-level devices that philosophers of science find useful, in those terms the two may diverge, perhaps considerably.

Having said all that, one might wonder about the nature of these practices that are playing the role of truth-makers in this context and, in particular, where we should draw the line between those practices that count as ‘scientific’ and those that do not. Obviously, this looks very much like the old problem of demarcation rearing its ugly head again (see, for example, Turunen et al. 2021) and, as with the latter, we could adopt a laissez-faire attitude and allow the class of truth-makers to include all kinds of different practices or, more plausibly perhaps, we could insist that where we draw the line is a local and contextual matter, dependent on both the scientific domain in question and perhaps other factors.Footnote 4

Leaving this issue to one side, my aim here is to tease out some of the implications of this approach first of all, for how we, philosophers of science, should view the history of science; secondly, for how we should understand the devices that we use in our own philosophical practices; and thirdly, for how we might think about the relationship between the history of science and the philosophy of science.

2 Implications for How we (Philosophers) Should View History of Science

In a well-known and much discussed critical analysis of then extant approaches to the problem of achieving an understanding of political and philosophical texts, Skinner wrote,

[A]s soon as we see that there is no determinate idea to which various writers contributed […] then what we are seeing is equally that there is no history of the idea to be written […] (Skinner 1969, 38).

Arabatzis then took this as forming the basis of a challenge for the historian of science: how can she, in good conscience, write a history of the concept of, say, ‘electron’ without presupposing the kind of determinacy that Skinner warns against and which may bring with it, at least implicitly, a realist stance? His response was to adopt what he called a ‘biographical’ approach according to which the historian of science should focus on the experimental situations and theoretical practices associated with the concepts or terms under consideration (Arabatzis 2005).

The correspondences with the theme of this paper should be obvious: in effect Skinner adopted a kind of eliminativism with regard to the notion of an ‘idea’ in the history of political thought and Arabatzis exported this to the history of science, urging his fellow historians to focus, instead, on the relevant practices, both theoretical and experimental. We can sketch how such a focus leads to a shift in our historical understanding using the example of the development of quantum mechanics (whilst bearing in mind the differences between practice-based accounts and the form of theory-eliminativism being explored here).

The usual (simplistic) story runs as follows: following the collapse of the ‘old’ quantum theory associated with Bohr and Sommerfeld, new approaches were sought, resulting in the so-called ‘matrix’ mechanics as developed by Born, Heisenberg and Jordan, on the one hand, and, on the other, Schrödinger’s wave mechanics. Following initial attempts by Schrödinger himself, these were shown to be equivalent by von Neumann, in the sense of being different representations of—supposedly—one underlying theory formulated in terms of vectors and operators in Hilbert space (for a detailed account of the development of this equivalence result, see Muller 1997). This result helped effect what came to be called the ‘quantum revolution’ which in turn was followed by what might be called the ‘Copenhagen Cohesion’ as the majority of physicists came to accept a particular interpretation of the new theory associated with Bohr and his school.

This crude synopsis ignores crucial details of course but more importantly, the kind of story that it exemplifies, with its central emphasis on the emergence of the theory, offers a distorted historical representation that in turn has been used to support dubious philosophical claims. Let’s briefly consider how the story might be told from the eliminativist perspective.Footnote 5

The most obvious feature of such a narrative would be its de-centering of the above emphasis and putting in its place a more nuanced consideration of the various approaches in play at the time, where this would include not only matrix and wave mechanics but also the transformation theory developed by Dirac and Jordan (again), as well as Weyl’s group theoretic approach, for example. The multiple inter-relationships between these approaches, the different mathematical resources that they drew upon and the implications that were drawn from them all deserve close analysis that is sympathetic to the relevant contexts (see for example Duncan and Janssen 2019). Of course, such a consideration should not ignore or simply dismiss von Neumann’s result but certainly the move from establishing a certain formal equivalence to establishing ‘the’ theory of quantum mechanics is a problematic one.

Mitsch, for example, has persuasively argued that the Hilbert space formulation should be understood along Hilbertian lines as an ‘axiomatic completion’. This involves a clear separation between the relevant physical ‘facts’, or content, and the mathematical formalism, with the aim of ordering the field of knowledge constituted by the former and orienting future research (Mitsch 2022). Consequently, the outcome of this method should be understood as relational, in the sense that it is relative both to the facts and the analytic apparatus. Furthermore, such outcomes should also be taken as provisional in that the method is a continual process that may lead to the revision of any given set of axioms.

Von Neumann then followed this method in capturing what he took to be certain ‘essential facts’ from the field of knowledge covered by the theoretical and experimental context of quantum mechanics. These involved, first, the standard features of probability theory, secondly, the usual—that is, Heisenberg’s—understanding of the Uncertainty Principle and, finally, that the quantities to be represented should be those that are effectively measurable (Mitsch 2022).Footnote 6 Not only is the formulation relative to these assumptions, but it is also relative to the formalism of Hilbert space, of course (see also Bacciagaluppi 2022). Furthermore, it was clearly provisional. As von Neumann himself recognised, one could consider a different such ‘completion’, yielding a correspondingly different formalism, as in the case of Bohmian mechanics for example.Footnote 7 The theory eliminativist can then appropriate this analysis and use it to undermine the claim that von Neumann’s formulation captures or represents ‘the’ theory of quantum mechanics. As an ‘axiomatic completion’ the formulation can instead be understood as bringing order to a particular field of knowledge, where the latter incorporates a range of theoretical and experimental practices that in turn ‘make true’ certain claims, such as, again, ‘quantum mechanics is elegant’.

As a result of such theory elimination, the claim that there was a ‘quantum revolution’ can also be brought into doubt. Of course, this depends on what is meant by ‘revolution’ here but let’s take as an example—albeit, again, in rather simplistic form—the Kuhnian understanding, whereby the old paradigm (in this case classical physics) encounters an accumulation of anomalies, leading to a sense of crisis which provokes its replacement by a new one (quantum mechanics). This would fit the ‘usual story’ sketched above,Footnote 8 but when Kuhn himself interviewed a number of the participants in the development of quantum mechanics for the American Institute of Physics Oral Histories project he found that at least some resolutely denied there was any ‘air of crisis’ in the field (Seth 2010, 265). It is perhaps for this reason that The Structure of Scientific Revolutions actually contains few details about the development of quantum mechanics itself, beyond a brief mention of Pauli’s state of confusion prior to the development of matrix mechanics. Indeed, as Seth has put it:

in seeing crisis as an aspect of the community as a whole, scholars have given up the presence or absence of “crisis talk” as a marker for more fundamental divisions. Explaining why some saw a crisis and a revolution and some did not, requires a deep rethinking of our understanding of these terms (Seth 2010, 267–268).

What Seth calls the ‘romance of revolutions’ (Seth 2010, 269) among historians and philosophers of science has been encouraged by an adherence to a deep-seated theory-oriented framework and abandoning the latter will help facilitate the kind of ‘deep rethinking’ that Seth sees as required here (French 2020).

The point is further reinforced by the recognition that the very distinction between classical and quantum physics, in terms of which any claim as to the existence of a revolution must be articulated, is one that in fact was elaborated over a period of years and can be seen as imposed well after the core developments took place (Gooday and Mitchell 2013). In this context retrospective reflection by the participants assumes a significant role, one that is then amplified by the reliance of historians and philosophers of science on autobiographical material.Footnote 9 And not surprisingly, such reflections typically adopt a distortive theory-oriented framework which reinforces the sharp juncture presumed by talk of revolutions. One of the features of the ‘deep rethinking’ facilitated by the adoption of a theory eliminativist stance involves a concomitant sceptical attitude towards such reflections and there is indeed interesting work to be done concerning the use of such autobiographies as historical ‘evidence’ (see, for example, Söderqvist 2007). More broadly, the above recognition of the lack of a sharp division between these supposed ‘paradigms’ meshes with a more nuanced story in which various multiply intertwined theoretical and experimental practices can be discerned, with certain of the relationships involved selected retrospectively and used to cement into place the (romantic) narrative of revolution centred around the emergence and establishment of ‘the’ theory.

A crucial aspect of that ‘cementing into place’ of what amounts to a false narrative involved the imposition of what I’ve called above the ‘Copenhagen Cohesion’. The manner in which alternative interpretations were crushed or dismissed in the late 1920s and after, and the ‘Copenhagen hegemony’ established, is now well-documented (see Cushing 1994). De Broglie’s attempt to elaborate an alternative view was famously and brutally attacked by Pauli at the Solvay conference of 1927, under false pretences, as it turned out, since the argument deployed was fallacious. Likewise, Rosenfeld set out to publicly demolish Bohm’s account at the 1957 Bristol symposium on ‘Observation and Interpretation in the Philosophy of Physics’ (see Körner 1957) and Everett’s ‘relative state’ interpretation was also famously dismissed by Bohr and his cohorts and languished for many years as a result. Despite the presence of such alternatives, the ‘myth’ of the Copenhagen interpretation came to be broadly accepted across the physics community, spilling over into the work of historians and philosophers, even though the label itself was only adopted many years after the fact (see Camilleri 2009). Abandoning a theory-oriented framework helps to further dispel this myth and reveal the multifarious approaches and views that were in play at the time, such as, to give another example, the significantly distinct ‘Princeton’ school that formed around von Neumann and Wigner (see Freire Jr 2015).Footnote 10

This is just one example of course but the upshot can, I suggest, be generalised: the adoption of a theory-oriented framework with, in particular, the assumption that there exists, as an abstract entity or in some theory ‘space’, the theory pertaining to a certain set of phenomena, which all relevant developments attempt to attain, can obscure or even efface entirely the alternatives.Footnote 11 Dropping this framework allows the possibility of articulating a richer narrative, not only from the perspective of the history of science but also with regard to the resources that can be made available for philosophers of science (I shall come back to this).Footnote 12

3 Implications for How we (Philosophers) Should Understand the Devices we Use

We recall that on Arabatzis’ biographical approach the history of the electron should be understood as a history of representations of the electron. So, in order to introduce some useful examples of the ‘devices’ deployed by philosophers of science, let’s briefly consider the recent discussions of the nature of scientific representation (see van Fraassen 2008; Suárez 2010; Frigg and Nguyen 2018). One prominent view takes the apparently plausible idea that representation involves some form of similarity holding between the ‘source’ and the ‘target’ and proposes that this can be formally captured through the set-theoretic notion of (partial) isomorphism. This latter, in turn, is typically situated within the so-called ‘Semantic Approach’ which—putting things rather crudely—characterises theories in terms of families of set-theoretic models (we’ll come back to this; see again van Fraassen 2008; Bueno and French 2011). An alternative account, also prominent, argues that the above idea is in fact deeply problematic and eschews such formal characterisations in favour of a weaker conception that has at its heart the claim that a representation, ‘must have the capacity to be employed by an informed and competent user to draw valid inferences regarding the target’ (Suárez 2010, 98). Now, how should we understand representation, as captured by these different devices, from the perspective afforded by the theory eliminativist stance?

According to French (2020, 231–236), an answer can be given by carefully distinguishing between the ‘object’-level at which various scientific practices, both theoretical and experimental, take place and the ‘meta’-level, where we find the philosophical accounts of those practices using one or other of the afore-mentioned devices. What then stands in the relationship of representation is a meta-level construction that we, as philosophers of science (or, interestingly perhaps, scientists when they are in ‘philosophical’ mode), deploy when we try to make sense of scientific practice. As an example, let us consider differing accounts of the development of the so-called ‘London and London’ model of superconductivity.

Summarising the bare (historical) bones of this development, at the beginning of the 1930s, physicists struggled to explain the phenomenon of superconductivity in terms of the recently developed quantum theory. The breakthrough came with the observation of the so-called ‘Meissner Effect’, in which magnetic flux is expelled from a material as its temperature is taken below a critical point and it becomes superconducting. This led the London brothers to propose a ‘macroscopic’ model of superconductivity, which helped provide the basis for a possible mechanism (London 1935).
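By way of illustration (what follows is the standard textbook rendering of the proposal, given in simplified modern notation rather than that of the 1935 paper itself), the Londons’ ‘macroscopic’ model is usually summarised by the two London equations,

$$\frac{\partial \mathbf{j}_s}{\partial t} = \frac{n_s e^2}{m}\,\mathbf{E}, \qquad \nabla \times \mathbf{j}_s = -\,\frac{n_s e^2}{m}\,\mathbf{B},$$

where $\mathbf{j}_s$ is the supercurrent density, $n_s$, $e$ and $m$ are the density, charge and mass of the superconducting electrons, and $\mathbf{E}$ and $\mathbf{B}$ are the electric and magnetic fields. Combining the second equation with Ampère’s law, $\nabla \times \mathbf{B} = \mu_0 \mathbf{j}_s$, yields $\nabla^2 \mathbf{B} = \mathbf{B}/\lambda_L^2$ with penetration depth $\lambda_L = \sqrt{m/(\mu_0 n_s e^2)}$, so that the magnetic field decays exponentially inside the material: just the flux expulsion characteristic of the Meissner Effect.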

Cartwright, Shomar and Suárez (1996) famously presented this as an example of ‘phenomenological’ model construction from the ‘bottom-up’ as the model was not obtained via a process of ‘deidealisation’ from some over-arching theory. As a result, they argued, this construction could not be captured by the Semantic Approach, which they characterised (along with others) in terms of the identification of theories with families of models. French and Ladyman (1997), in response, countered this by claiming that a more detailed consideration of the Londons’ work revealed that their model building was informed by theory in a ‘top-down’ manner and, moreover, that it could, in fact, be straightforwardly accommodated by the Semantic Approach. The debate then continued over a span of years with each side contributing further arguments and examples (see, for example, Bueno et al. 2012).

More recently, Potters has argued that both sides in this exchange have erred in offering characterisations of the Londons’ representation of the phenomena that are too crude and, crucially, too rigid (2019). Thus, he writes,

the interpretation of a model – that which it represents – should not be taken as something established at the moment when it is constructed, but rather as something that is historically elaborated over time […] in such a way that it can then be successfully projected backwards on the ‘moment of construction’ (Potters 2019, 22).

As a result, he continues,

it is difficult to speak about the connection between the phenomenon and the model, since neither side of the connection should be seen as already stable in itself, but rather as constructed ensembles of different elements – experiments, theories, assumptions, models, etc. – that changed over time (Potters 2019, 23).

And he concludes,

the debate between [Cartwright, Shomar and Suárez] and [French and Ladyman and their co-authors] is shaped by a particular philosophical conception of how scientific representation should be studied: as the discovery of a connection between a given experimental phenomenon and the meaning of the new equations. This approach clashes with the historical episode, but at the same time the episode also suggests an alternative way to study scientific representation: as the establishment, over time, of a connection between historically stabilized constellations of different elements that, through discussion and engagement with alternative views and approaches, come to constitute phenomenon and meaning (Potters 2019, 23; my emphasis).

As with Mitsch’s analysis of von Neumann’s formulation, Potters’ revisionary approach to this debate can also be appropriated by the theory eliminativist. In effect, the concern here is that both ‘sides’ assume that there is a London and London model that is related in a particular way to the relevant phenomena, and the disagreement has to do with how that model and its development are characterised and with the nature of that relationship with the phenomena. However, if we drop that assumption and focus on the various moves made over time—that is, on the various theoretical and experimental practices—then we will obtain a better grasp of that particular historical episode.

Thus, with regard to Potters’ first point—that what a model is taken to represent may actually be elaborated over time and then projected backwards, as it were, onto some ‘moment of construction’ (which may itself be disputed)—this is something that can be seen throughout the history of science and certainly in that of quantum physics. Dorling, for example, claimed that Planck, in fact, did not appreciate what he had done in his famous 1900 paper that is typically cited as the foundational work in quantum theory and that when he remarked on his achievement to his son at the time, he in fact had something else in mind (French 2020, 209–210). Likewise, Kuhn has noted that later accounts of a ‘discovery’ typically redescribe it in terms of the vocabulary of the time, something that he relates to a view of discovery—that he, like the eliminativist, rejects—according to which theories lie hidden, to be groped towards by their discoverers (French 2020, 214–215). No doubt various sociological or even psychological explanations can be given of such retrospective attributions but the point here is that they further support and sustain the view of theories as things that are ‘out there’ in some sense, to be discovered or uncovered by the scientists concerned. Rejecting these attributions is, of course, consistent with the eliminativist position more generally and Potters’ point can be drawn on in service of the latter.

However, his conception of both phenomena and models as constructed ensembles of diverse elements may raise concerns. We recall the point made in the introduction that an eliminativist about theories isn’t compelled to be an anti-realist about physical entities. Thinking of phenomena as constructed could generate obvious worries about a slide into some form of anti-realist ‘social constructivism’. Whatever Potters had in mind here, the theory eliminativist can resist such a slide by adopting the obvious realist response: granted that the delineation of the nature of the phenomenon and its features may not be established through just one observation, or set thereof, as the brief characterisation of the Meissner Effect above might suggest, but may occur over a period of time involving an array of different approaches and methods, this does not imply that the phenomenon per se is a construction, social or otherwise.Footnote 13 On the other hand, regarding models as constructed ensembles might be taken as suggestive of a kind of reificationist view that retains them as things, whether abstract or otherwise, and indeed, Potters’ conception here does seem to sail close to that espoused by Cartwright, Shomar and Suárez whose account he otherwise rejects. This too the eliminativist rejects, instead arguing that the appearance of models as such constructed ensembles is drawn from the relevant practices, of course, and should not be taken as having any ontological significance in the requisite sense. Those practices should be understood, rather, as possible truth-makers for the diverse claims that appear to reflect this ensemble-like nature.

Having said all that, more generally, this view of scientific practice as involving the establishment of a connection between historically stabilized constellations of different elements can also be appropriated by the theory eliminativist. In the absence of theories ‘out there’ in some Popperian World Three or whatever, waiting to be discovered, or acting as the attractors, in some sense, of the scientific process, concerns may arise that such practice would have to be conceived of as unmoored, unconstrained or anarchic, even. On the contrary, the eliminativist can argue, abandoning the view of theories as things opens the door to an appropriately nuanced conception of scientific practice as fluid and complex, without being completely unconstrained. That last aspect can be captured by the well-known panoply of heuristic moves and manoeuvres (see Post 1971) that the eliminativist is happy to accept (French 2020) and which can be invoked as generating the contextualised stabilization argued for by Potters.

There is a caveat to this appropriation, of course, that follows from the above, namely that constellations of different elements should not be taken as constituting or otherwise forming a theory or model but rather that, together with the inter-relationships that hold between them, they should be understood as making true certain statements (putatively) ‘about’ the theory or model. Nevertheless, given Potters’ re-analysis, it must be acknowledged that both Cartwright, Shomar and Suárez, on the one hand, and French and Ladyman et al., on the other, missed the richer history that has now been brought to our attention. This is partly because the Meissner Effect was only stabilised as a phenomenon over a period of time but, more importantly perhaps, because of the focus by both sides in this debate on the production of a fully formed model, with the implicit commitment to a reificationist position that led to their simplistic characterisations in terms of ‘bottom-up’ or ‘top-down’ model construction.

In this regard it is worth noting here the different philosophical (and therefore, ‘meta-level’) ‘devices’ deployed by the two sides in this debate: although Cartwright, Shomar and Suárez do not state a particular view of representation in their initial paper, in subsequent works Cartwright and Suárez have advocated a ‘thin’ and naturalistic account that articulates this notion in terms of the inferential capacity of the model in question, as indicated above (Suárez 2015). French and Ladyman and their associates, on the other hand, have explicitly adopted a ‘thick’, formal mapping account that characterises representation in terms of the establishment of a ‘partial isomorphism’ between the model and its target (and hence this sits within the ‘partial structures’ variant of the Semantic Approach; Bueno and French 2011). In their response to French and Ladyman, Cartwright and Suárez have insisted that theories and models themselves do not incorporate these formal features of the mapping approach (Suárez and Cartwright 2008). However, Bueno, French and Ladyman have replied that that is to be expected since these are features of a (meta-level) device, deployed by philosophers of science (and, perhaps, certain scientists who might be so inclined) in order to appropriately capture those features of (object-level) scientific practices in accordance with their philosophical aims.Footnote 14
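To fix ideas, here is a rough sketch of the kind of formal device at issue, along the lines familiar from the partial structures literature (the notation is a simplified illustration rather than a reproduction of the formulations in the works just cited). A partial structure is a set-theoretic construct

$$A = \langle D, R_k \rangle_{k \in K}, \qquad R_k = \langle R_k^{+}, R_k^{-}, R_k^{\circ} \rangle,$$

where $D$ is a non-empty domain and each partial relation $R_k$ is divided into the tuples taken to satisfy it ($R_k^{+}$), those taken not to ($R_k^{-}$), and those for which the matter is left open ($R_k^{\circ}$). A partial isomorphism between two such structures is then, roughly, a bijection between their domains that preserves the $R^{+}$ and $R^{-}$ components while leaving the open $R^{\circ}$ components unconstrained; it is this last feature that is supposed to capture the incompleteness and openness of the information deployed in the relevant practices.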

This latter response by Bueno, French and Ladyman can also be taken up by the theory eliminativist who, of course, will dismiss any talk of ‘theories and models themselves’ as incorporating any sort of feature as obfuscatory at best. As French has noted (2020, 1–50), there is a tendency of adherents of the alternative so-called ‘Syntactic’ and ‘Semantic’ approaches to identify theories and models with sets of propositions and families of (set-theoretical) models respectively. Bueno, French and Ladyman’s response in the above debate can then be appropriated by the eliminativist to answer the question, what do such approaches and the formal devices they incorporate do, if not underpin the identity conditions of theories? Succinctly put, the answer is that they enable us to frame a given case study of scientific practice in a certain way, for certain purposes. In the case of Bueno, French and Ladyman’s analysis the relevant framing is explicitly given in terms of the Semantic Approach, whereas, as noted, Cartwright, Suárez and Shomar equally explicitly reject the latter.

Another useful example here is provided by the establishment of the equivalence of wave and matrix mechanics, already touched on above. Muller has argued that the various moves, formal and otherwise, that were made in this case can best be captured via the Semantic Approach, not least because this makes available certain set-theoretical devices—such as isomorphisms and the like—in terms of which inter-theoretical relationships, in particular, can be described (Muller 1997). However, from the eliminativist perspective, this ‘capturing’ should not be taken as equivalent to the representation of something at the object-level of the practices themselves. Instead, it should be seen as the presentation, at the level of the history and philosophy of science, of a certain construction, framed via the Semantic Approach, that serves a particular historical or philosophical purpose.

The further question is what that purpose might be. One answer is that such a construction enables us, as philosophers of science, to understand how science works, in much the same way as von Neumann’s presentation of the Hilbert space formulation at the object-level of that particular set of practices enabled the physicists concerned to understand how wave and matrix mechanics were related (Mitsch 2022). That then raises the issue of what it is to understand at this level, the level at which the philosophy of science is practised.

Now, there has been considerable discussion of the nature of understanding at the level of scientific practice, with various accounts proposed recently (see, for example, de Regt 2017 and Elgin 2017) and an obvious move would be to adopt one such and apply it at the meta-level of the philosophy of science. Again, it can be regarded as a useful device that we can appropriate. To give just one example, Dellsén has argued that,

one understands a phenomenon just in case one grasps a sufficiently accurate and comprehensive model of the ways in which it or its features are situated within a network of dependence relations; one’s degree of understanding is proportional to the comprehensiveness and accuracy of such a model. (Dellsén 2020, 1262).

Here Dellsén takes a ‘model’ to be an information structure of some kind that is interpreted so as to represent its target.Footnote 15 If we replace ‘phenomenon’ in the above passage with, for example, ‘scientific episode’ and situate the term ‘model’ at the meta-level of philosophical analysis, then we can appropriate his approach and re-configure it as one that offers an account of understanding, not within science, but within the philosophy of science. The claim then would be that the Semantic Approach offers a framework that enables us to understand a scientific episode by virtue of (more or less) accurately and comprehensively ‘modelling’ the ways in which certain features of that episode are situated within a network of dependency relations. More specifically, it enables us to capture such dependency relations as are involved in certain equivalence claims, as in the case of wave and matrix mechanics or in scientific representation (formally in terms of partial isomorphisms and the like, of course). Likewise, one of the central claims of Bueno, French and Ladyman is that the Semantic Approach in effect does something similar when it comes to the various dependency relations that can be extracted from the development and construction of the Londons’ model. Hence, this appropriation of Dellsén’s account represents a way of accommodating such claims within a more general framework of philosophical understanding.

Having said that, neither the Semantic Approach nor Dellsén’s account of understanding is the ‘only show in town’, as it were. Indeed, Cartwright, Suárez and Shomar could make similar claims to the above with regard to the inferentialist framework, insisting that this better captures the fluidity of the situation when it comes to representation in terms of the inferences that could be drawn but without all the formal machinery of the Semantic Approach.Footnote 16 Other devices are also available, including Category Theory (Landry 2017), as well as those typically associated with the so-called Syntactic Approach (Lutz 2015). We can then engage in a meta-level debate as to which device best captures the relevant dependency relations or, more generally, is best suited for certain purposes. Alternatively, a pluralistic approach may be adopted, according to which different such devices are taken as more suitable for different aims.Footnote 17 And of course, there are alternatives to Dellsén’s framework which might be argued for and pressed into service. Again, however, these are considerations for future work.

4 What are the Implications for the History of Science—Philosophy of Science Relationship?

There has also been a great deal written about this relationship but here I want to note, again, that in many cases, this relationship is cashed out in theory-oriented terms. Consider, for example, the claim that science is (at times) inconsistent. The historical evidence for this is typically given in terms of specific theories that exhibit some form of inconsistency, such as, famously, Bohr’s theory of the atom or, more contentiously perhaps, classical electrodynamics. The disputes over whether the ‘evidence’ then supports such claims or not crucially hinge on different characterisations of what is taken to be the given theory and adopting theory eliminativism then opens up fruitful avenues for resolution (see Vickers 2013). More generally, reformulating such debates along eliminativist lines can enhance the engagement between the history of science and the philosophy of science, not least by encouraging reflection on the reasons for focussing on particular aspects of a given episode (Vickers 2014).

Thus, in the debate over the Londons’ model of superconductivity, we have the alternative philosophical claims that the model building was either independent from theory in methods and aims, or not. The historical evidence is set out through detailed case studies presenting the construction of the model. The dispute between the two sides over whether this ‘evidence’ supports the relevant claim then hinges on differences in the case study details that, crucially, shape different characterisations of ‘the’ model (as a ‘thing’ that can be so characterised). Adopting an eliminativist stance undermines any claim that one such characterisation is ‘closer to the truth’ than the other in the sense of better characterising the model, regarded as an abstract entity existing in some ‘theory space’ or whatever. Instead, we can focus on the support given, or not, by the details of the case studies to the (meta-level) claims made by the two sides in the debate.Footnote 18

So, for example, consider the above claims regarding theory independence. According to Potters, both sides assume that the historical episode concerns the discovery of a particular ‘representational connection’ between the phenomenon—as manifested in the Meissner Effect—and the model and that this then provides the basis for the dispute over whether this discovery was independent from theory or not (Potters 2019). However, he argues, the historical facts do not support this assumption, because both the phenomenon and the model were ‘open for discussion’ (Potters 2019, 23). From the eliminativist perspective, the assumption that Potters identifies is itself underpinned by another, that is even more basic, namely that we can identify a model, as an entity, in the first place. Dropping this allows for a more nuanced appreciation of the relevant scientific practices that support the different claims made in the debate (cf. Vickers 2013).

Thus, a crucial aspect of the debate concerns an analogy that London and London supposedly drew between superconducting behaviour and diamagnetism.Footnote 19 Cartwright, Shomar and Suárez took the crucial heuristic role of this analogy as illustrating the afore-mentioned independence from theory, insofar as the model could not then be regarded as derived from Maxwell’s equations of electromagnetism. Bueno, French and Ladyman, on the other hand, argued that it exemplified a form of theory-dependence by virtue of certain structural commonalities with the model. Potters shows that in both cases, the presentations of the relevant features of the practice exemplified by the analogy with diamagnetism are, in effect, shaped by the prior philosophical conceptions of the two sides about the nature of the model and its relationship with such practice. Recognising this opens up the opportunity for an alternative understanding, on which the diamagnetic analogy can be seen as offering a programme of theoretical development whereby superconductivity was to be explained in quantum mechanical terms (Potters 2019, 23–24).Footnote 20

From the eliminativist standpoint, then, Potters has taken precisely the right steps: having identified the ways in which the debate has been distorted by prior conceptions of the nature of the Londons’ model, he then turned to the details of the relevant practice and offered a more nuanced account of this particular episode.Footnote 21 Moreover, in doing so, he has illustrated the care that must be exercised in taking the features of such practice as truth-makers for the diverse claims made. In the light of this, the claims—that the development of the model was/was not independent of theory, respectively—cannot simply be taken as true or false but must be further refined, in particular with regard to what is meant by ‘independent’, of course. Thus, we can begin to see how adopting a theory eliminativist stance can encourage us to interrogate more closely the nature of certain claims made in the philosophy of science and how, by excising any implicit (or otherwise) reification and shifting focus to the practices that act as putative truth-makers for these claims, we can attain a more sophisticated philosophical understanding of the episode concerned.Footnote 22

Of course, as was mentioned at the beginning of this essay and should now be clear, adopting this stance does not require us to abandon the kinds of devices that have been used within the philosophy of science to explicate such claims. It is often insisted that the Semantic Approach amounts to the assertion that a theory is a family of (set-theoretic) models. Dropping that assertion and instead regarding this approach as a (meta-level) device that philosophers of science can use to characterise and present relevant features of scientific practices opens up the possibility that, so understood, it can capture the openness and fluidity manifested in these practices without being committed to the reification of the Londons’ model, say, in set-theoretic terms. In other words, the Semantic Approach can represent (again, at the ‘meta-level’ of the philosophy of science) this model characterised as a ‘historically stabilized constellation of different elements’, with the relations between such constellations (meta-)represented via partial isomorphisms, for example. However, what Potters’ analysis shows is that care must be taken to ensure that this meta-representation does not distort the historical ‘facts’ as given in terms of the relevant practices.Footnote 23 And, again, this is not to deny that other approaches such as Category Theory and elements of the so-called Syntactic Approach (also considered in this way as devices) may likewise be deployed to represent such characterisations, albeit in different terms of course.

Adopting such a stance may also shed light on how to accommodate the myriad meta-level formulations of the relationship between the history of science and the philosophy of science. These range from the evidential or, more broadly, confrontational, with historical case studies taken to falsify or confirm philosophical claims, to suggestions that a weaker notion of ‘reinforcement’ should be adopted, to the insistence that the relationship must be approached in an iterative or hermeneutic manner, to claims that it should be understood in terms of the relationship between the abstract and the concrete. Indeed, as Arabatzis and Schickore have noted,

“The plurality of HPS approaches […] suggests that there is not one single best way of integrating historical, philosophical, and other perspectives on science” (Arabatzis and Schickore 2012).

From a theory eliminativist perspective and given what has been said above, this plurality should come as no surprise. Within the community of those committed to the development of an ‘integrated’ form of history and philosophy of science (broadly understood) there will be expressed not only differences with regard to which features of the given scientific practices are seized upon and differences in the devices used to characterise these features and frame the relevant case studies (again, we can see both kinds of difference on display in the debate over the Londons’ model), but also broadly epistemological differences as to how such studies should bear on the philosophical claims made (thus, both Cartwright et al. and French et al. adopt a form of confirmation/falsification approach to each other’s claims, whereas Potters suggests—albeit only briefly—the adoption of a hermeneutic line). As Dresow has argued,

Rather than search for a single model of how history and philosophy of science should interact, we should instead characterize how different methodological approaches in philosophy of science engage productively with history, including both primary sources and historical scholarship (Dresow 2019, 57).

In addition, I would argue, the plurality of approaches to the history of science-philosophy of science relationship is generated, at least in those contexts that cover scientific ‘theorising’, again broadly understood, by differences in the characterisations of the objects of such theorising. This can be seen perhaps most acutely in the manifestation within this ‘integrated’ context of the debate over the status of scientific realism. A core feature of this debate—again putting things quite generally—is the use of certain historical episodes that purportedly demonstrate sharp breaks in this theorising to undermine any form of realism that incorporates some form of ‘cumulativism’, in the—again, broad—sense that scientific progress is conceptualised in terms of accumulating ‘truths’ or moving closer to ‘the’ truth. We have already encountered one such purported ‘break’ here, with the example of the so-called ‘quantum revolution’. In this case there appears to be an obvious ‘break’ in the theorising as we move from classical to quantum mechanics, which is then taken to result in an equivalent break in the proposed ontology, such that a ‘cumulativist’ form of realism cannot be sustained. We have already expressed doubts regarding the supposed revolutionary nature of this transition and further historical work in recent years has revealed the extent of the commonalities that were involved at the time (Duncan and Janssen 2019).

However, these breaks may not only be associated with scientific revolutions, but also with the development of particular theories. Thus, when it comes to theories about light, to give one prominent example, the shifts within the relevant theorising, as revealed by the history of science, are taken to correspond to shifts in the underlying ontology that, again, the realist supposedly has difficulty accommodating. The dispute between said realist and her critics who invoke such cases then typically collapses into diverging considerations as to how this theorising is characterised. A recent and well-known example of this can be seen in the debate over structural realism, in which advocates of this position point to the structural commonalities across theory development (as in the above example of theories of light but also in the case of quantum mechanics); their critics argue, on the basis of an alternative characterisation, that such commonalities are only apparent or superficial; the structural realist then responds either by arguing that these features are in fact in some way essential to how the theory should be understood, or that there are further, deeper structural features that will underpin the relevant commonality; and so on (see French 2014 and Ladyman 2020). Here the implicit assumption of theories as ‘things’, whether abstract or otherwise, to which scientists are supposedly heuristically drawn and whose core features can be straightforwardly identified, becomes apparent.

Thus, what such considerations reveal is that not only are we faced with different historical reconstructions, manifesting different virtues and deployed for different purposes, as well as different kinds of philosophical claims made and perspectives adopted, involving distinct methodological claims regarding justification, or discovery and heuristics, and having to do with different debates or whatever, but also, crucially, that theories do not, and cannot, act as fixed points for these ways of integrating the history of science and the philosophy of science. In other words, what this plurality should be taken to reveal is that theories should not be regarded as ‘strange attractors’ in some kind of abstract theory space, to which scientists naturally gravitate. In effect then, theory eliminativism can function therapeutically by helping us to become reconciled to such a plurality of relationships holding between such historical reconstructions and the relevant philosophical claims. And again, this should not stymie us in expressing such philosophical claims, once we acknowledge that the truth-makers for them are not the supposed characteristics of theories qua entities but rather, the features of certain practices.

This is not to say that theory eliminativism somehow ‘resolves’ the realism-anti-realism debate, one way or the other. What it does do is reshape the debate so as to allow, again, for a more nuanced appreciation of the core issues involved (see French 2017; 2020 Ch. 9). So, consider an assertion that lies at the heart of the debate, namely, ‘such-and-such theory is true (or approximately so)’. Typically the arguments over the status of such a claim proceed on the basis of the presupposition that the ‘truth’ is some kind of feature of said theory, at least so far as the latter can be expressed in propositional form (on the conventional understanding of ‘truth’). This also holds for at least certain anti-realists, such as constructive empiricists, who agree that a given theory may be true but maintain that we cannot know whether it is or not. According to the theory eliminativist we should drop this presupposition and instead consider the possible truth-makers for the above assertion. However, it is easy to see that the set of practices in which those truth-makers are to be sought must be expanded beyond those that are typically delineated as ‘scientific’. This is because the realist will typically point to a certain feature of scientific practice, namely the production of novel predictions, and couple that with some form of Inference to the Best Explanation to justify the above assertion that ‘such-and-such theory is true (or approximately so)’. Realists’ insistence that this form of inference is an aspect of scientific practice has been dismissed as question-begging and both anti-realists and at least some realists now acknowledge that it constitutes a philosophical move that may be disputed. Thus, from the eliminativist perspective, the relevant set of practices must be broadened beyond the scientific to include those that relate to philosophical analysis, with realists and anti-realists differing as to what may legitimately be included in the latter. The constructive empiricist, for example, will insist that when it comes to unobservable entities, at least (as opposed to mice behind skirting boards, say), Inference to the Best Explanation is an illegitimate move and as a result is excluded from their empiricist-informed set of practices. Focusing on the relevant practices in this way thus further motivates and underpins the recent articulation of the contours of the realism-anti-realism debate in terms of different ‘stances’ (see, for example, Chakravartty 2018).

However, we can go even further on an eliminativist basis. Let’s return to Skinner’s text, from which, paraphrasing Arabatzis, I took the title of this paper. He wrote, speaking of the history of ideas:

it has I think become clear that any attempt to justify the study of the subject in terms of the “perennial problems” and “universal truths” to be learned from the classic texts must amount to the purchase of justification at the expense of making the subject itself foolishly and needlessly naive (Skinner 1969, 50).

It is now commonplace to accept that this is the case when it comes to the methodology of science, with variation acknowledged across the history of science and between scientific disciplines. Recently a similar view has been advocated in the context of the realism debate, signalling a move away from ‘universal’ grounds for adopting a realist stance, such as the ‘No Miracles Argument’ applied across the board, as it were, towards more ‘local’, context-specific reasons (Saatsi 2017). Again, these can be situated within the relevant practices and hence theory eliminativism can be invoked to further motivate such a move. Indeed, it can be argued that it removes one of the central loci of the tendency to ‘go universal’.

From such a perspective, then, case studies not only from the history of science but also from current scientific developments help to reveal—if we let them—the essential variety of scientific assumptions, heuristics and methodologies adopted in the practice of science. As Skinner, again, puts it, ‘[i]t is in this, moreover, that their essential philosophical […] value can be seen to lie.’ (Skinner 1969, 52). The suggestion is that this attitude can be extended to the ‘targets’ of scientific thinking, namely theories and models, but rather than reifying them as universal entities, whether abstract or otherwise, we should resist that temptation, whilst acknowledging that we can retain ‘theory talk’ as backed up by the relevant practices.

5 Conclusion

My aim in this work has been to extend the framework offered in (French 2020) and consider in a little more detail its implications for how we, philosophers of science, should view, first, the history of science, secondly, the devices and tools that we ourselves have at our disposal and finally, the relationship between the history of science and the philosophy of science. Thinking of theories as entities in some kind of abstract theory space, or Popperian World Three, to which scientists are drawn in some fashion, yields a distorted narrative of the history of science and hence an impoverished philosophy of science. Acknowledging that there are no theories (as things) may then open the door to more fruitful ways of conceiving of the relationship between the history of science and the philosophy of science, whilst still being able to understand talk ‘about’ them, by virtue of the relevant practices acting as truth-makers for such talk.