1 Introduction

According to scientific realism, successive theories of modern science bring us progressively closer to the truth, that is, to a depiction of nature’s processes and entities as they actually are. As such, this doctrine carries substantial implications for our understanding of scientific progress and of the relation between scientific theories and the world. This paper critically scrutinizes one argument for scientific realism, namely, that the history of science demonstrates a continuity between successive theories that is interpretable as the gradual refinement and elaboration of previously attained partial truth.

In greater detail, the thesis under investigation posits that an analysis of the history of science unveils that theories are not wholly rejected and replaced, but that certain pivotal features of older theories persist and successive theories enhance our knowledge of these. The history of science may thus be seen as a process during which points necessitating revision are corrected, while elements that accurately captured reality are preserved.

By contrast, we argue that what is preserved during theory change often does not fulfil the requirements needed to draw the realist conclusion. Realists and anti-realists alike would agree a priori that a certain amount of continuity must exist in the history of science: if there were none, the successes of older theories could not be replicated by their successors. This type of continuity, however, is typically the outcome of what is termed “emergence” in the philosophy of the exact sciences. Emergence denotes the appearance of novel patterns and regularities, unexpected on the basis of the fundamental laws of a theory, within a restricted part of that theory’s domain of application. Such emergent patterns typically show up in coarse-grained quantities, while fine-grained accounts demonstrate that the patterns in question are in a sense deceptive, since the often drastically different laws of the fundamental theory still apply.

In essence, our argument rests on the idea that continuity between theories is required for the preservation of empirical success. This replicability requirement leads to (approximate) continuity in predictions for observable phenomena and in patterns not too distant from what is observable. However, if the old theory emerges from the new one, this does not produce continuity in explanations or continuity regarding ontology. In other words, emergence does not ensure continuity in how “reality is carved at its joints”. In this situation, the belief that the progress of science supplements and refines an already partially and approximately correct depiction of the world becomes problematic.

2 Scientific Realism and the Pessimistic Meta-Induction

2.1 The No-Miracle Argument (NMA)

The No-Miracle Argument is a central pillar of scientific realism. It can be summarized as follows: if scientific theories did not genuinely capture the ontology and workings of the world, both observable and unobservable, their predictive and explanatory successes would be nothing short of miraculous. More formally, the NMA asserts that the best explanation for the empirical success of scientific theories is that they represent, at least partially and approximately, the true nature of the world. The realist argues that if scientific theories were merely instrumental tools lacking correct insight into the principles of reality underlying observed phenomena, the predictive and explanatory power of these theories would be inexplicable.

The NMA thus relies on the assumption that truth is an indispensable factor in the empirical success of scientific theories. This assumption links the NMA to the view that the history of science is a cumulative process in which theories gradually increase their truth content, progressing towards the ultimate goal of providing a true description of reality. The premise that empirical success would be miraculous without true theoretical underpinnings supports the realist’s belief that our current empirically successful theories must be partially and approximately true.

2.2 The Pessimistic Meta-Induction (PMI)

The Pessimistic Meta-Induction (Laudan 1981) presents a critical challenge to the No-Miracle Argument by questioning its core assumption that truth is an indispensable and essential factor in the explanation of empirical success. The PMI suggests that the history of science is filled with examples of once empirically successful theories that were later discarded and replaced by schemes with radically different ontological commitments. If accurate, this observation casts doubt on the realist’s claim that successful theories are likely to be true or approximately true: by the standards of newer theories in the history of science, the older theories were deemed false. If this is the case, an induction on the history of science suggests that our current theories are also likely to be false, and their empirical success does not imply (or make probable) the truth of their assumptions concerning underlying processes and entities.

2.3 Continuity as a Realist Counter to the PMI

A crucial element in the realist response to the Pessimistic Meta-Induction involves conceding that theories may undergo significant changes in their ontological commitments and that replaced theories may be deemed false as a whole. However, realists argue that some features of older theories, such as sets of equations or selected axioms, are typically retained and incorporated into successor theories. This partial continuity demonstrates that while older theories undoubtedly contained false assumptions, they also included components that were true or approximately true. These true parts were essential for the empirical successes of the theory (Psillos 1994, 2009, 2022; Alai 2021), which therefore were not miraculous. As science advances, false aspects of theories are gradually discarded while true components are retained, enhancing the set of known truths and refining our understanding of the world.

To illustrate this realist response to the PMI, consider the transition from Maxwell’s 19th-century electromagnetic theory to Einstein’s electrodynamics. Maxwell’s theory aimed to explain electromagnetic phenomena as manifestations of mechanical processes, like vibrations, in the “ether”. Einstein’s electrodynamics, however, dispensed with any ether-like mechanical substratum, which represented a major change in ontology. Despite this shift, Maxwell’s equations relating charges, currents, and fields remained unchanged during the transition, and it was these equations that were responsible for the theory’s successful predictions. The structural features expressed by these equations may be viewed as a kernel of truth that survived the Einsteinian revolution (Worrall 1989).

Another proposed example is the transition from the caloric theory of heat to modern thermodynamics. The central idea of the caloric theory was that heat is a conserved substance called “caloric”, composed of particles that repelled each other but were attracted by other matter. This theory had considerable empirical success (e.g., it supplied elegant explanations for the expansion of materials when heated, for the flow of heat from hot to cold places and not the other way around, and for many other thermal phenomena) but was later abandoned entirely. According to its successor, thermodynamics, heat is not a material substance but rather a form of energy. Work, another form of energy, can be transformed into heat, so that heat turns out not to be a conserved quantity. Despite this radical rejection of the caloric theory’s core idea and ontology, elements of certain “caloric explanations” are claimed to be preserved in explanations provided by thermodynamics. For example, in some cases, when there is no transformation of work into heat, conservation of energy might replace the earlier principle of the conservation of caloric. In other specific cases, caloric played the same role as something that was later classified differently, such as nitrogen. Thus, it might be argued that the caloric theory was partly, approximately, and “locally” on the right track after all and contained parts of the final truth (Psillos 1994).

However, these attempts to identify modern truth in old theories remain controversial (e.g., Cordero 2011; Chang 2003; see also the overview and references in Psillos 2022). In the sections that follow, we aim to contribute to the criticism of the realist’s continuity argument by highlighting a general feature of theory replacement, namely the central role played by emergence.

3 Emergence in Theory Change

3.1 Emergence

Emergence may be defined as the appearance of novel and robust patterns of behavior within particular regimes of application of a theory, usually associated with large mass, time, or length scales, or with a high number of degrees of freedom. These patterns differ from the typical behavior governed by the fundamental principles of the underlying theory. Therefore, emergent behaviors, structures, or patterns require additional specific information for their explanation, beyond the principles of the given theory alone. This additional information can include the number of particles, temperatures, mass and length scales, boundary conditions, and the desired accuracy of the description. Coarse-grained patterns in macroscopic quantities, which differ significantly from the fine-grained, microscopic behavior that the underlying (sub)microscopic theory primarily addresses, provide numerous examples of emergent phenomena.

The basic ontology of a theory, along with its fundamental laws, produces descriptions with a broad scope. But in the case of emergence there will typically also be effective descriptions, which possess approximate validity within specific and restricted domains of application of the theory. The emergent patterns that characterize these effective descriptions function as the “laws” of effective theories. But, from the perspective of the basic theory, they are only contingent regularities between non-fundamental and sometimes even non-existent quantities.

When a successful scientific theory is replaced by a new one, the new theory should obviously be capable of reproducing the former’s successes. For example, the successes of phenomenological thermodynamics were reproduced by statistical mechanics, and the successes of classical mechanics were reproduced by relativity theory, and later by quantum mechanics. Even the successes of Aristotelian mechanics were reproduced by classical mechanics. What is common in all these instances is that the old successful predictions are not exactly reproduced, but rather approximated. Moreover, from the viewpoint of the new theories the old successful patterns will only be contingently valid, dependent on conditions that define a narrow domain of the theory’s application. In other words, the old successes appear as emergent patterns, part of effective and non-fundamental descriptions.

The occurrence of emergence in the transition from one theory to the next suggests that the relationships between successive theories are usually not just about refinement or incremental improvement, but involve the discovery of new frameworks that were not previously anticipated. Therefore, emergence could challenge the realist’s assumption of gradual accumulation of truth or approximate truth. To assess the severity of this challenge, we will further investigate two concrete examples of emergence.

3.2 Aristotelian Laws from Classical Mechanics

The beginnings of modern science are usually associated with the rejection of Aristotelian physics: it is widely accepted that Aristotelian theorizing about the physical world was misguided, and that the earnest search for scientific truth only commenced with the Scientific Revolution, culminating in Newton’s Principia.

One of Newton’s revolutionary axioms stated that material bodies on which no forces act persist in a state of uniform motion: no force is needed to maintain the motion of an object moving with a constant velocity along a straight line. Forces, if present, are responsible for accelerations, as articulated by the celebrated formula \(F = ma\). It follows that a force is needed to slow down and stop a moving material body. This is in complete contrast to what Aristotelian mechanics tells us. According to Aristotle, it is natural for a body to remain at rest until forces compel it to move. Forces are needed to produce a velocity, and instead of the Newtonian law of motion \(F = ma\), Aristotle proposed the principle \(v = F/R\), where \(v\), \(F\) and \(R\) denote the velocity of the moving body, the force exerted on it, and the resistance offered by the surrounding medium, respectively.

However, even though Newton’s “classical mechanics” pictures the physical universe in a way that is utterly incompatible with the Aristotelian view, one would expect a certain continuity between the two theories. Indeed, it would be miraculous if Aristotle’s mechanics had survived for so long without any empirical support. Clearly, at first glance and without too much attention to quantitative accuracy, the Aristotelian predictions are reasonable: objects around us do not begin to move on their own, and we need to exert a force to sustain motion. Empirical facts of this sort must be explainable by classical mechanics as well. In other words, even though Aristotle’s and Newton’s mechanics possess very different structures, and classify the entities and processes in the physical world with entirely different categories, they must agree at least approximately on certain predictions.

It is not difficult to see how this works. In cases where a body moves in a medium that offers resistance to its motion, the Newtonian law of motion \(F = ma\) must be supplemented with a friction term, so that it becomes \(F = ma + Rv\), with \(R\) quantifying the strength of the friction. This equation can be solved for the velocity \(v\), and it turns out that the solution tends to a uniform motion as time progresses.Footnote 1 If the friction is substantial, this limit of uniform motion is reached quickly; the final velocity, which remains constant, is \(F/R\). This is exactly what the Aristotelian theory predicts. So, in situations where significant friction counteracts the accelerating force, the fundamental Newtonian relationship between force and acceleration is obscured, and it appears that the force is responsible for a velocity rather than an acceleration. Under these conditions, Aristotelian relations emerge as an approximation to the laws derived from Newtonian physics.
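
For concreteness, here is a minimal sketch of the solution, under the additional assumptions of a constant applied force \(F\) and zero initial velocity:

\[
m\frac{dv}{dt} = F - Rv, \qquad v(0) = 0 \quad\Longrightarrow\quad v(t) = \frac{F}{R}\left(1 - e^{-Rt/m}\right).
\]

For times long compared with the relaxation time \(m/R\), which is short when friction is strong, the exponential term is negligible and the velocity settles at the constant Aristotelian value \(v = F/R\).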

The existence of this kind of continuity is to be expected because Newtonian mechanics must reproduce the limited empirical success of Aristotelian mechanics. Is there anything more profound to be gleaned from the continuity between Aristotelian and Newtonian mechanics? Could this continuity be used to argue that Aristotelian mechanics contained a core of truth, which Newton managed to preserve? In a sense, the answer is yes. Aristotle correctly identified certain empirical regularities, and these phenomena were preserved by Newton’s theory. This shared part might be considered a preserved piece of approximate truth. But this approximate retention of patterns is on the level of observable phenomena and does not represent the kind of deeper truth preservation that scientific realists are after. Realists are interested in the actual mechanisms operating in the world, which, they contend, we approach closer through progressively better scientific theories. From that perspective, Aristotle’s physics is a non-starter. It does not succeed in identifying any mechanism of motion that is preserved and described in a more detailed way by classical mechanics.

3.3 Classical Particles from Quantum Theory

Our daily world, and the world of classical physics, is a world populated by objects. Objects possess well-defined physical properties, first of all positions and velocities, and follow continuous trajectories through space. In classical mechanics, the concept of an object is embodied by “material particles”, a notion that is absolutely central to the theory. According to classical mechanics no two particles can ever occupy the same position, so that particles can always be told apart on the basis of where they are (they may also have other identifying properties, like their masses, though this is not the case if the particles are of the same kind, like two electrons). Moreover, the particles of classical mechanics can be followed and re-identified over time, because of their continuous trajectories. Thus, classical particles, like the objects of everyday experience, are individuals, with synchronic and diachronic identities.

Surprisingly, this familiar picture of the physical world as built up from material particles is difficult to reconcile with modern physics.Footnote 2 According to relativistic quantum field theory it is impossible to have a physical system that with certainty will be found within a spatial domain of a given finite extension (see, e.g., Halvorson and Clifton 2002; Dieks 2023b). Therefore, the physical “things” that are allowed by relativistic quantum field theory cannot be localized, which poses a problem for a particle picture. The difficulties do not stop here. A further unexpected result is that even if we try to think of particles as non-localizable and non-classical entities, the so-called Unruh effect shows that their presence will generally be observer-dependent. For example, if an inertial observer measures a vacuum, without particles, the findings of an accelerated observer may be completely different: this observer may find evidence that there are particles after all (Wald 1994, Ch. 5; Halvorson and Clifton 2002). This is obviously difficult to reconcile with the idea of particles as robust entities whose existence is objective and independent of observation.

Despite these and other seemingly bizarre results, it is clear that quantum physics should be able to make contact with the world of daily experience and the part of physical reality that can be handled by classical mechanics. The classical particle concept must become applicable at some level when transitioning from the quantum to the classical world (Dieks and Lubberdink 2020; Dieks 2023a). The way this connection can be made is by first moving from relativistic quantum field theory to non-relativistic quantum mechanics, considering only velocities and energies that are so low that complications due to relativity theory will not be practically noticeable. Subsequently, we focus on the predictions of non-relativistic quantum theory in the classical limiting case, the case in which physical systems possess large masses and where so-called decoherence processes make quantum effects difficult to detect. Both steps involve a loss of generality and a loss of precision. There is a loss of generality in the first step due to the restriction to low energies and processes in which no particle creation and annihilation takes place, and in the second step due to the focus on (semi-)macroscopic situations, with systems interacting with an environment containing many degrees of freedom. There is also a loss of precision: in principle, quantum effects could be detected even within the classical domain, but this would requires sophisticated techniques whose use is usually disregarded. A classical picture therefore only appears on the condition that we are satisfied with a coarse-grained description.
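
One standard textbook illustration of the second step, offered here only as a sketch and not as the specific treatment of the works cited above, is Ehrenfest’s theorem:

\[
\frac{d\langle x\rangle}{dt} = \frac{\langle p\rangle}{m}, \qquad
\frac{d\langle p\rangle}{dt} = -\left\langle \frac{\partial V}{\partial x}\right\rangle \approx -\frac{\partial V(\langle x\rangle)}{\partial x}.
\]

The final approximation holds only for wave packets that remain narrow compared with the scale on which the potential \(V\) varies; under that condition the expectation values approximately obey Newton’s equation of motion, even though the underlying quantum state remains, strictly speaking, non-classical.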

In non-relativistic quantum mechanics, the theoretical description of systems that we intuitively refer to as collections of particles of the same kind, such as electrons, does not resemble the description of a collection of individual entities, each having its own identity. Instead, the quantum description of the total system exhibits features that are reminiscent of the classical description of one global wave-like phenomenon. However, when position measurements are performed, the results can create the impression of separate individuals being investigated. This is the case if strings of measured positions are found that come close to what we expect from spatial trajectories in classical particle schemes. In a famous essay, Schrödinger (1950) put it as follows:

Such a string gives the impression of an identifiable individual, just as in the case of any object in our daily surrounding. It is in this way that we must look upon the tracks in the cloud chamber or in a photographic emulsion, and on the (practically) simultaneous discharges of Geiger counters set in a line, which discharges we say are caused by the same particle passing one counter after another. In such cases it would be extremely inconvenient to discard this terminology. There is, indeed, no reason to ban it, provided we are aware that, on sober experimental grounds, the sameness of a particle is not an absolute concept. It has only a restricted significance and breaks down completely in some cases.

To make “particle” a usable concept, strings of measurement results must be interpretable as trajectories of distinct individuals. The results must therefore not lie too close to each other, so that the constructed trajectories do not intermingle. This requirement can be translated into a mathematical criterion for the approximate applicability of the notion of a particle (Schrödinger 1950; Dieks 2023a). As was to be expected, this criterion is satisfied in everyday circumstances, which makes it understandable that we do not experience problems with the application of the particle concept in daily life.
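
A commonly used condition of this general kind, given here merely as an illustration (the precise criterion of Schrödinger 1950 and Dieks 2023a may differ in its details), compares the thermal de Broglie wavelength of the particles with their mean separation:

\[
n\,\lambda_{\mathrm{dB}}^{3} \ll 1, \qquad \lambda_{\mathrm{dB}} = \frac{h}{\sqrt{2\pi m k_{B} T}},
\]

where \(n\) is the number density, \(m\) the particle mass, \(T\) the temperature, \(h\) Planck’s constant and \(k_{B}\) Boltzmann’s constant. When this inequality holds, the wave packets associated with different detection events do not appreciably overlap, so that the constructed trajectories remain separate; in ordinary terrestrial circumstances it is satisfied by many orders of magnitude.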

In conclusion, according to quantum mechanics, patterns in events may arise that create the impression of particle-presence. This happens in a tiny corner of the total domain of quantum mechanics; but, of course, a corner that is highly significant for us humans. Even within this subdomain, the particle picture will only work well if no sophisticated experiments are performed that can reveal quantum effects. Quantum features are present in principle, and their detection would prove the classical particle picture incorrect.Footnote 3 Classical particle behavior thus shows all the characteristics of emergence. It is behavior that is uncharacteristic of how things typically are according to the fundamental principles of the underlying theory; its validity is approximate and depends on the fulfilment of conditions defining a restricted domain; and it is robust as long as these conditions are satisfied, allowing an effective theory (in this case classical mechanics) to work, for all practical purposes, within that domain.

4 Emergence and Continuity

In the transitions from Aristotle to Newton and from classical to quantum mechanics there is certainly continuity. In both cases, the old theory is contained in the new one as an effective description, approximately valid in a small part of the new theory’s domain. This seems to live up to the continuity expectations of scientific realists, who claim that there will be continuity because of truth preservation.

However, the example of Aristotelian mechanics as a limiting case of Newtonian theory should give us pause. As discussed in Sect. 3.2, there is only a small class of phenomena for which Aristotle’s theory yields predictions close to the Newtonian ones. Within this domain, the pattern that emerges from Newton’s theory exists on the level of events, but does not extend to causal links and explanations. Aristotle’s framework, revolving around such concepts as natural places, natural versus forced motion, and the law \(v = F/R\), stands in such stark contrast to the Newtonian account that Aristotle’s mechanics is often not even considered a scientific predecessor of classical mechanics at all. None of the principles of motion used by Aristotle was taken over by Newton. From this perspective, the transition from Aristotle’s theory of motion to classical mechanics does not support the claim of truth preservation.

Still, it is undeniable that there are phenomena falling within the scope of Newton’s theory that can also be accommodated by Aristotelian mechanics, even though with the help of principles that from the Newtonian viewpoint are utterly false. Doesn’t this overlap cry out for explanation, and isn’t the only reasonable explanation some common element of underlying truth, as suggested by the No-Miracle Argument?

The obvious answer is that a very simple explanation exists for the continuity between Aristotle and Newton, one that does not require a shared core of truth of the kind pursued by the scientific realist. This explanation is that Newton’s theory has to reproduce the empirical success of Aristotle’s theory—if it had proved unable to do so, this would have been a very serious objection to Newton’s theory. Scientists and philosophers of whatever persuasion agree that successor theories must be able to reproduce the empirical success of their predecessors. This innocent demand of retention of empirical success is enough to understand that successive theories must have a common part, namely the set of observable events described by both theories. Aristotle and Newton were both able to describe bodies moving through a medium that offers resistance.

The existence of continuity on the level of observable phenomena is what can be expected a priori, and can be fully explained by empiricists. What is more, empiricists will expect a continuity that goes deeper than retention of success on the level of what is directly observable. This is because scientific theories do not contain, within their conceptual frameworks, built-in demarcation lines between what is humanly observable and what is not. Scientific theories have the form of objective descriptions not depending on the presence of humans or their capabilities of perception. Therefore, concepts applicable to humanly observable phenomena will also be applicable, according to a given theory, to processes and events that defy direct human observation, for example because they involve objects that are too small to be seen. Thus, Aristotle’s theory of motion predicted not only that heavy objects that are visible fall down (striving to reach their natural places) but also that very tiny grains of material would do the same. This absence of dividing lines between the observable and the unobservable holds across the board of scientific theories. Therefore, if a successor theory is able, as it should be, to reproduce the observational success of a predecessor, it should be expected to also reproduce the predictions of the old theory in a regime going beyond what is directly observable. In the example of Aristotle and Newton, the set of approximately identical predictions contains not only humanly observable motions but also motions too subtle to be perceived.

So, purely on the basis of empirical adequacy and common-sense insights regarding the form of scientific theories, empiricists expect and explain continuities between successive theories. Of course, there must be limits to these continuities. First, even on the level of the observable, the sameness of predictions will only be approximate. Second, we may assume that the old theory was superseded for a reason: the predictions of the two theories have to start diverging at some point. When we go some distance beyond the realm of the observable, or consider situations that were not envisaged by the old theory, non-negligible differences between old and new descriptions are to be expected. In the case at hand, an example can be taken from situations with very low friction. Aristotelian theory tells us that when even a minute force is exerted on a body while friction is virtually absent, this body will acquire a huge (but constant) velocity. By contrast, Newtonian mechanics predicts a small but constant acceleration if a small force is exerted; the velocity may remain small for a long time but will grow indefinitely in the long run. Of course, the latter behavior exemplifies the general mechanism behind motion according to Newton’s theory, while the Aristotelian law in this case gives us a false prediction.
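
The divergence can be made explicit with the solution sketched in Sect. 3.2 (again assuming a constant force \(F\) and zero initial velocity):

\[
v(t) = \frac{F}{R}\left(1 - e^{-Rt/m}\right) \approx \frac{F}{m}\,t \qquad (t \ll m/R).
\]

For very small \(R\) the relaxation time \(m/R\) becomes enormous, so over any realistic time span Newtonian mechanics predicts a steadily growing velocity \(v \approx (F/m)\,t\), whereas the Aristotelian law prescribes the huge constant value \(F/R\) from the start.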

Aristotelian physics is often considered not to belong to science proper. However, the relation between the Aristotelian and Newtonian doctrines of motion is very similar to that between classical and quantum mechanics and, more generally, to the relation between old and new scientific theories when old theories emerge, as effective schemes, from their successors.

As a case in point we may compare and contrast the concepts of material particle in quantum mechanics and classical physics. As described in Sect. 3.3, the notion of a particle as a spatially localized entity possessing a well-defined identity is foreign to the structure of quantum theory. Nevertheless, as explained by Schrödinger, events predicted by quantum mechanics can be effectively organized with the help of a classical particle picture if certain conditions, typical of the conditions in which we live, are satisfied. This confirms what had to be expected: quantum mechanics is able to reproduce the empirical predictions of classical mechanics in the circumstances of everyday experience. But the continuity in predicted patterns goes further than what is directly observable, and also encompasses situations in which classical mechanics works well while the particles and forces are not directly observable. This is analogous to what we saw in the case of Aristotle and Newton, and is unsurprising because of the requirements of empirical adequacy and the simple observation that scientific theories do not distinguish between processes that can and those that cannot be directly perceived by humans.

Just as we saw in the comparison between Aristotle and Newton, the explanatory principles central to classical mechanics are entirely incorrect from the quantum perspective. For example, according to the quantum description there cannot even exist localized particles that move uniformly through space, which contradicts one of Newton’s laws of motion. It is relevant that such seemingly weird quantum assertions can be experimentally verified, in principle, even within the classical realm. It is only when we do not probe the quantum structure of the things in our environment that an effective classical description may be used.

The change of status of theoretical schemes, from fundamental theories to emergent and effective descriptions, is a quite general phenomenon in the history of science. Further examples are the emergence of the classical theory of absolute space and time from special and, later, general relativity, and the emergence of classical electrodynamics from quantum electrodynamics. In such cases there is often an incompatibility concerning explanatory principles and basic ontologies. Nevertheless, there is always continuity and accumulation of knowledge: empirical success is retained, increased, and refined.

5 Emergence and Truth

Realists typically maintain that existing empirically successful theories already possess a great deal of partial truth. This truth is believed to have been preserved from earlier successful theories and will probably be preserved in theories to come. Processes and entities that have withstood theory change are likely to represent already traced bits of the truth. However, the examples that we have discussed challenge this perspective.

As we have argued, there are significant cases where new and more general theories transform older schemes into effective descriptions that are only approximately valid. This transformation occurs within limited areas of the domains of the new theories. Moreover, the new fundamental principles drastically differ from the effective ones. In such cases, it is hard to justify the belief that what is continuous (the effective part) represents a piece of truth.

Of course, the effective part does represent a regularity that is approximately correct. Here, “approximately” has to be understood as “not easily proven wrong by human experimentation”. However, this approximate correctness within a restricted domain does not seem to deserve the honorific title of objective truth. The main weapon of the realist, the no-miracle argument (or inference to the best explanation), is aimed at establishing the objective correctness of principles that are stable and explanatorily satisfactory (principles that even provide “best explanations”). Contingent instrumental systematizations of phenomena are not sufficient for realist purposes.

Objective correctness is thus not achieved by relying on the ontologies and principles of effective schemes. Hence, the continuity exhibited when an old theory is a limiting case of a very differently structured successor can hardly support realist claims. This objection is strengthened by the consideration that continuity observable in cases of emergence can be understood merely as a consequence of empirical adequacy and the retention of empirical success. There is no need for the retention of truth in a sense that exceeds what is acknowledged by the empiricist.

As we pointed out in Sect. 3.2, the case of Aristotle and Newton illustrates this argument. Despite the continuity between Aristotle’s and Newton’s theories of motion, few realists would argue that Aristotle was on the right track or that Newton preserved the truth discovered by Aristotle.

In Sect. 3.3, we discussed how the basic and central ontology can drastically change from a theory to an effective theory, even in mature science. On the fundamental level of quantum field theory, there are no individual particles, while such particles constitute the essence of classical mechanics. This case is interesting because it is a standard realist claim that physics has gradually taught us more about particles like the electron. This case deserves a more extensive discussion than allowed here, so we will only provide a few additional remarks to what was said in Sect. 3.3.

Bain and Norton (2004) list more than ten successive electron theories, each with substantially different ontological commitments. Each of these was generally accepted during some period in the last hundred and fifty years. The descriptions of the electron provided by these theories range from vibrations in the electromagnetic ether, via massive charged particles, to the quantum field characterization associated with the current Standard Model. Despite the dramatic differences between these theoretical descriptions, Bain and Norton argue that there is a stable core of electron properties that has been retained and refined over time.

However, the stable properties they refer to are measurable quantities like electrical charge and mass. These quantities have been determined with increasing precision by increasingly sophisticated experimental techniques. As Bain and Norton comment, new electron theories have not only been able to reproduce the empirical successes of their predecessors, but they have also improved the accuracy of these predictions. This is precisely the type of continuity that empiricists expect, without committing to any already achieved truth about the nature of the electron. As such, the continuity identified by Bain and Norton lacks the theoretical robustness required to sustain realist claims.

In quantum field theory, if an individual particle, the “electron”, does not exist, how could it possibly have the same electrical charge as Thomson’s electron? From an empiricist viewpoint, the parameters that are measured with increasing precision are essential for the refinement of our descriptions and explanations of observable phenomena. This understanding does not necessitate a commitment to already achieved truth about unobservable sub-microscopic particles.

According to the view that we propose, empirical adequacy and retention of empirical success act as a bridge between theories. This bridge enables theories to build on each other’s empirical successes. While the retention of empirical success is sufficient for ensuring continuity between theories, it does not provide convincing support for the realist’s claim of convergence towards truth.