In 1991 Larry Laudan and Jarret Leplin proposed a solution for the problem of empirical equivalence and the empirical underdetermination that is often thought to result from it. In this paper we argue that, even though Laudan and Leplin’s reasoning is essentially correct, their solution should be accurately assessed in order to appreciate its nature and scope. Indeed, Laudan and Leplin’s analysis does not succeed in completely removing the problem or, as they put it, in refuting the thesis of underdetermination as a consequence of empirical equivalence. Instead, what they show is merely that science possesses tools that may eventually lead out of an underdetermination impasse. We apply their argument to a real case of two empirically equivalent theories: Lorentz’s ether theory and Einstein’s special relativity. This example illustrates the validity of Laudan and Leplin’s reasoning, but also shows the importance of the reassessment we argue for.
Suppose that the hypotheses H and H’ are rivals, that (H ∧ A) → e, that (H’ ∧ A) → ¬e, and that e is observed—so that H is confirmed and H’ disconfirmed. The Duhem-Quine thesis implies that it is always logically possible to change A into an A’ such that (H’ ∧ A’) → e. Therefore, it is always logically possible to create EE between H and H’.
In probabilistic theories we have to refine the criterion: we should require that the probability of the evidence is the same according to the theories in question.
As Laudan and Leplin state, ‘A number of deep epistemic implications, roughly collectable under the notion of “underdetermination”, have been alleged for empirical equivalence. For instance, it is typical of recent empiricism to hold that evidence bearing on a theory, however broad and supportive, is impotent to single out that theory for acceptance, because of the availability or possibility of equally supported rivals. Instrumentalists argue that the existence of theoretically noncommittal equivalents for theories positing unobservable entities establishes the epistemic impropriety of deep-structure theorizing, and with it the failure of scientific realism. Some pragmatists infer that only nonepistemic dimensions of appraisal are applicable to theories, and that, accordingly, theory endorsement is not exclusive, nor, necessarily, preferential’ (ibid., 459–460). In this paper we will deal only with the first and third dimensions of the problem that Laudan and Leplin mention in this quote, not with the second one. For a general assessment of the problem of EE and UD as a problem for the realist, see (Psillos 1999).
See also (De Regt and Dieks 2005). There it is argued that ‘scientific understanding’, and a fortiori ‘explanation’, are pragmatic, context-dependent features. A phenomenon P is understood if there is an intelligible theory about P; and a theory T is intelligible if scientists are able to recognize qualitatively characteristic consequences of T without performing exact calculations. Different ‘conceptual toolkits’ can work as sources of intelligibility for a theory—visualization, causal explanations and unifications. The crucial point is that none of these explanatory virtues can be asserted as necessary or sufficient in order to obtain intelligibility for a theory; rather, which tools can provide intelligibility depends on contextual features.
For constructive empiricists it is possible to accept both theories at the same time. Since they are not committed to the non-empirical content of the theories, they can accept both as empirically adequate and express a pragmatic preference if the context so requires. This stance only works if we are willing to accept that empirical adequacy is enough; that is, if empirical adequacy is the basic and sufficient feature that we should expect from a theory in order to accept it. The cost would be to give up demands for understanding from scientific theories, for example. We think that a more general solution is available. There are arguments that show that a way out is possible regardless of whether one is a constructive empiricist, a realist, or what have you.
‘T. H. Huxley’s aphorism about “the great tragedy of Science—the slaying of a beautiful hypothesis by an ugly fact—which is so constantly being enacted under the eyes of philosophers” aptly describes the lag of aesthetic appreciation behind empirical assessment. The perceived beauty of a hypothesis is a function of the observational success of anteceding theories aesthetically similar to it; the novel fact appears as yet ugly because unassimilated within a theory of which the aesthetic qualities have been sufficiently weighted by the community. In time the community’s indicators of beauty will evolve to render the theory erected about the new fact a structure of sovereign beauty and the disproven hypothesis merely passé’ (ibid., 39–40).
‘Metarationalism is clearly responsible for the genesis of indicators of truth because their inclusion among the desiderata of theories derives entirely from the a priori definition of the goal of science, the complete and true explanatory account of the universe. The requirements of internal consistency or predictive accuracy are prized not because they have previously been witnessed to accompany verisimilitude but because they are the elements of an explication of that very concept: indicators of truth appear in other terms to provide not a mere ampliative connotation but rather an analytic definition of truthlikeness. It remains of course possible for indicators of truth to be inductively learned by a scientific community but this is irrelevant to the a priori logical status of such criteria’ (ibid., 38). In order to retain neutrality regarding the realism-antirealism schism, we can replace ‘indicators of truth’ with ‘indicators of empirical success’.
Furthermore, in times of scientific crisis there is no unique canon of beauty (if there ever is). A good example is given by the four-dimensional formulation of special relativity by Hermann Minkowski. Some scientists (such as Sommerfeld and Laue) considered the chrono-geometric formulation as expressing aesthetic virtues (based on simplicity, mainly), whereas others (e.g., P. Frank, at least for some time) considered it as expressing a non-empirical flaw (given the loss of intuitive visualizability involved). See (Illy 1981) and (Walter 2010).
Laudan and Leplin acknowledge that van Fraassen would not accept this thesis. However, they claim that ‘we reject [van Fraassen’s] implicit assumption that conditions of observability are fixed by physiology. Once it is decided what is to count as observing, physiology may determine what is observable. But physiology does not impose or delimit our concept of observation. We could possess the relevant physiological apparatus without possessing a concept of observation at all. The concept we do possess could perfectly well incorporate technological means of detection. In fact, the concept of observation has changed with science, and even to state that the (theory-independent) facts determine what is observable, van Fraassen must use a concept of observation that implicitly appeals to a state of science and technology’ (Laudan and Leplin 1991, 452).
For a detailed explanation of the Ramsey sentence and Craig’s theorem, and of why both failed to accomplish the logical positivist goal, see (Suppe 1974, 27–35).
John Norton provides a similar reason to dismiss Kukla’s algorithm. Even if we accept that T and T* have the same empirical consequences, the theoretical terms and entities in T are necessary for the derivation of those consequences in both theories—the theoretical apparatus of T is required to derive the empirical consequences of T*, yet T* denies it (see the example of intentional psychology below). Therefore, by negating those terms and entities T* gets gratuitously impoverished: ‘If we assume that the algorithm is applied to a well-formulated theory T whose theoretical structure is essential to T’s generation of observational consequences, then the construction of T’ [Kukla’s T*] amounts to a gratuitous impoverishment of theory T, the denial of structures that are essential to the derivation of observational consequences that are well confirmed by them’ (Norton 2008, 39–40).
‘It seems to me that the whole philosophical dispute between the received-viewers and Laudan and Leplin comes down to the issue of distinguishing genuine theoretical competitors from logico-semantic tricks. Laudan and Leplin represent the issue as being concerned with the existence or nonexistence of empirical equivalents. But it is evident, both from my example as well from the example they reject in a footnote, that there do exist empirically equivalent propositions to any theory. The only question is whether these structures fail to satisfy some additional criteria for genuine theoreticity. The received-viewers are satisfied with their examples of empirical equivalence. The burden is on Laudan and Leplin to explain why empirical equivalence isn’t enough’ (Kukla 1993, 5).
It is still possible to weaken the algorithm and take it as merely stating that T’ asserts that T holds when we are observing, but that it does not hold when nobody is looking. As a theory, this would be far too bizarre to be considered genuinely scientific. However, the weakened algorithm can still be taken as an instance of the evil-genius argument—as an instance of the fact that, from a logical point of view, there are many hypotheses that are consistent with the information of our senses but deny that they provide reliable information about reality. In this case, though, the algorithm is no longer a problem for the philosophy of science, but for metaphysics.
We consider, unlike constructive empiricists, that explanation and understanding are essential aspects of science—see (De Regt and Dieks 2005).
It is important to emphasize that theoreticity constraints serve as a tool for blocking algorithms that automatically yield EE theories; the main point of this subsection is to discuss the first premise of our problem, that given any theory T there is an EE rival T’. The universal scope of this premise crucially depends on the effectiveness of algorithms. But theoreticity requirements preclude their outputs from being considered genuinely scientific hypotheses or theories. When it comes to EE between genuine scientific theories, these basic theoreticity requirements are fulfilled by the theories involved, by definition—otherwise the theories would not be genuinely scientific—so they cannot function as criteria that provide a way out of the choice problem. These remarks prevent a possible objection. The reader might complain that in Section 3 non-empirical virtues were dismissed as a full solution of the problem because of their context-dependency, but that now another context-dependent feature, theoreticity, is being used as part of the defended solution. However, as we just mentioned, theoreticity constraints block algorithms and so undermine the first premise of the problem. We are not using theoreticity as a criterion for choosing between EE ‘real life’ theories. For example, even if the degree of plausibility of a certain hypothesis or theory may not be objectively assessed in some cases of ‘real-life’ science, in the case of ‘algorithmic theories’ it is clear that the algorithms involved do not include any recipe to provide their outputs with the mentioned property.
More precisely, it has not been demonstrated that algorithms of this kind cannot exist. However, it is extremely unlikely—given the non-a priori character of the theoreticity requirements—that an algorithmic procedure could include a recipe for obtaining plausible hypotheses. In ‘real life’ science, plausibility for a new hypothesis usually originates in scientists’ creativity and ingenuity, so it is difficult to see how an algorithm could contain a recipe for this property to be included in its output.
Notice that theoreticity constraints also block the holist Duhem-Quine thesis as providing support for the universal scope of the first premise of the problem. As Adolf Grünbaum showed, neither the Duhem-Quine thesis ‘nor other logical considerations can guarantee the deducibility of O’ [the class of observational consequences] from an explanans constituted by the conjunction of H and some non-trivial revised set A’ of the auxiliary assumptions which is logically compatible with A under the hypothesis H’ (Grünbaum 1960, 77). Suppose rival hypotheses H and H’ are given, and suppose that a crucial experiment to test them favors H’. The Duhem-Quine thesis implies that it is always logically possible to save H by rearranging the set of auxiliary assumptions A, replacing it with an A’ that accommodates the outcome of the experiment. In that case, we would always have a case of EE between H and H’. Grünbaum shows that this logical feature is not enough to prove that there will be a suitable A’ of non-trivial assumptions that allows H to accommodate the observations. In our context, we can simply replace ‘non-trivial assumptions’ with ‘assumptions that comply with theoreticity constraints’.
Boyd’s own position is that, in a case of EE between T and T’, the compliance of T with the form of causal explanation present in empirically successful theories in background knowledge counts as an indicator of the truth of T that is lacking in T’—for the explanations in T’ do not have the mentioned form. The principle of confirmation just defended weakens Boyd’s original position in the sense that it is detached from any realist commitments (Boyd considers the problem of EE and UD as a threat to the realist), and at the same time generalizes it in the sense that possible friction with background knowledge is not given only by divergence from the canonical form of causal explanations.
The epistemic justification of the principle we have extracted from Boyd’s argument is a very basic goal of science: mutual consistency between accepted theories. Suppose that T and T’ are EE, that T is consistent with another well-confirmed theory P, and that T’ is at odds with it. The evidential support for P counts as empirical evidence against T’, granted that we agree that consistency between the theories we accept is a basic principle of science. If we want our theories to be mutually consistent, then Boyd’s argument should be taken as a principle in the dynamics of empirical confirmation. This is a very plausible stance, of course: if we aspire to obtain knowledge of reality by means of scientific theories, then if the set of scientific theories we accept were inconsistent, we would hardly call such a set ‘knowledge’. Suppose that in a certain domain of physics a theory T is introduced and all of its predictions are confirmed, that in a different domain a theory P is proposed and all of its predictions are confirmed, and that P and T are incompatible. This situation would of course be taken as a serious problem for science, and scientists would be expected to undertake efforts to show that one of the theories must be given up.
As Okasha asserts (1997, 254), Laudan and Leplin’s argument can be schematized this way:

i) H₁ and H₂ are EE;
ii) T ⇒ H₁;
iii) T ⇏ H₂;
iv) T ⇒ H;
v) H ⇒ e;
vi) H₁ ⇏ e;
vii) H₂ ⇏ e;

therefore, viii) e confirms T (this requires the converse consequence condition), and then ix) e confirms H₁ (this requires the special consequence condition); but e does not confirm H₂.
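For clarity, the two Hempelian confirmation conditions invoked in this schema can be stated explicitly. The notation Conf(e, H), abbreviating ‘e confirms H’, is ours, not Okasha’s or Laudan and Leplin’s:

```latex
% Special consequence condition (Hempel 1945): evidence confirming a
% hypothesis also confirms that hypothesis's logical consequences.
\mathrm{Conf}(e, H) \wedge (H \Rightarrow H') \;\longrightarrow\; \mathrm{Conf}(e, H')

% Converse consequence condition: evidence confirming a hypothesis also
% confirms any hypothesis (e.g., a theory T) that entails it.
\mathrm{Conf}(e, H) \wedge (T \Rightarrow H) \;\longrightarrow\; \mathrm{Conf}(e, T)
```

Hempel himself noted that the converse consequence condition, if applied without restriction, yields paradoxical results: any evidence could then be made to confirm any theory via suitable conjunctions. The schema therefore presupposes a suitably restricted use of the condition.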
In this passage Laudan and Leplin cannot be using the term ‘systems of the world’ in its canonical meaning. See (Hoefer and Rosenberg 1994).
Actually, even if we interpret Laudan and Leplin’s argument as directed only against the universality and generality of the problem, they do not even mention that a remaining ‘local’ problem of EE and UD still stands—let alone that this remaining problem has important epistemic dimensions.
The case of Bohmian mechanics—originally introduced in the early 1950s—and standard quantum mechanics seems to be an example of EE and UD that cannot be readily solved à la Laudan and Leplin (and Boyd). In this case, thus, the epistemic underpinning of the reasons for the dominant position of the latter theory is a subject for philosophical discussion. Actually, J. Cushing (1994, chapter 11), in the context of the case of Bohm vs. Bohr, assesses Laudan and Leplin’s analysis in a way that comes close to what we say here. We defer an examination of this case to a future paper.
For an excellent presentation and analysis of Lorentz’s theory see (Janssen 1995, chapter 3).
For an analysis of the physical and epistemological framework of Lorentz’s theory see (McCormmach 1970).
For the scientific context and motivations of Lorentz’s invention of his theory, see (Janssen and Stachel 2004).
Poincaré’s contributions and corrections to Lorentz’s work were the following: i) he showed that ‘local time’ was not a mere mathematical tool, as Lorentz originally claimed, for it was connected to observable effects in the behavior of moving clocks; ii) he introduced a fictitious fluid in the ether that carried an amount of electromagnetic momentum; iii) he introduced the Poincaré pressure, which kept the moving electron stable and precluded its explosion due to Coulomb forces; iv) he corrected Lorentz’s expressions for the transformation of velocities and charge density between moving frames; and v) he showed that the Lorentz transformations form a group, and thereby that they are fully symmetric. Only with these amendments does Lorentz’s theory become completely predictively equivalent to special relativity. See (Darrigol 1995).
Poincaré published his On the Dynamics of the Electron—the work in which he introduced the amendments and developments of Lorentz’s theory that make it predictively equivalent to special relativity—in 1906.
For a historical treatment of the formulation and early reception of special relativity see (Miller 1981).
See (Brush 1999).
‘Lorentz pointed out that his black-body formula agrees with the long wavelength limit of the quantum formula that Planck had derived in 1900, a coincidence which struck him as highly remarkable considering the widely different assumptions in the two cases. It was characteristic of Lorentz to spell out what was incomplete in his work and what was still unknown; he stressed that his theory is valid only for long wavelengths and that Planck’s applies to the whole spectrum. So it was Lorentz, an originator of the electron theory, who first intimated the possible limits of the theory. Starting from the electron theory and from a mechanism appropriate to the theory, he arrived at the limiting case of the radiation law; and he did not see how to extend his theory to Planck’s general case.’ (McCormmach 1970, 486–487).
Unlike the relation between Lorentz’s theory and quantum physics, the connection between special relativity and general relativity was not historically relevant. General relativity was completed in 1916—and empirically tested in 1919—and by then special relativity was already generally accepted by the scientific community, whereas Lorentz’s theory had been put aside. Our claim that the connection between special and general relativity grounds a reason to choose Einstein’s theory instead of Lorentz’s is thus only conceptual, not historical.
Balashov, Y., & Janssen, M. (2003). Presentism and relativity. British Journal for the Philosophy of Science, 54, 327–346.
Bangu, S. (2006). Underdetermination and the argument from indirect confirmation. Ratio, 19, 269–277.
Boyd, R. (1973). Realism, underdetermination, and a causal theory of evidence. Noûs, 7, 1–12.
Brush, S. (1999). Why was relativity accepted? Physics in Perspective, 1, 184–214.
Bunge, M. (1961). The weight of simplicity in the construction and assaying of scientific theories. Philosophy of Science, 28, 120–141.
Cushing, J. (1994). Quantum mechanics: historical contingency and the Copenhagen hegemony. Chicago: The University of Chicago Press.
Darrigol, O. (1995). Henri Poincaré’s criticism of Fin de Siècle electrodynamics. Studies in History and Philosophy of Modern Physics, 26, 1–44.
Dawid, R. (2013). String theory and the scientific method. Cambridge: Cambridge University Press.
De Regt, H. W., & Dieks, D. (2005). A contextual approach to scientific understanding. Synthese, 144, 137–170.
Grünbaum, A. (1960). The Duhemian argument. Philosophy of Science, 27, 75–87. Reprinted in S. Harding (ed.), 1976, 116–131.
Hempel, C. (1945). Studies in the logic of confirmation (II). Mind, 54, 97–121.
Hoefer, C., & Rosenberg, A. (1994). Empirical equivalence, underdetermination, and systems of the world. Philosophy of Science, 61, 592–607.
Illy, J. (1981). Revolutions in a revolution. Studies in History and Philosophy of Science, 12, 173–210.
Janssen, M. (1995). A comparison between Lorentz’s ether theory and special relativity in the light of the experiments of Trouton and Noble. PhD dissertation, University of Pittsburgh.
Janssen, M. (2002a). Reconsidering a scientific revolution: the case of Lorentz versus Einstein. Physics in Perspective, 4, 421–446.
Janssen, M. (2002b). COI stories: explanations and evidence in the history of science. Perspectives on Science, 10, 457–522.
Janssen, M. (2003). The Trouton experiment, E = mc², and a slice of Minkowski space-time. In A. Ashtekar, R. S. Cohen, D. Howard, J. Renn, S. Sarkar, & A. Shimony (Eds.), Revisiting the foundations of relativistic physics: festschrift in honor of John Stachel (pp. 27–54). Dordrecht: Kluwer.
Janssen, M. (2009). Drawing the line between kinematics and dynamics in special relativity. Studies in History and Philosophy of Modern Physics, 40, 26–52.
Janssen, M., & Stachel, J. (2004). The optics and electrodynamics of moving bodies, preprint, Max Planck Institute for the History of Science.
Kox, A. J. (2013). Hendrik Antoon Lorentz’s struggle with quantum theory. Archive for History of Exact Sciences, 67, 149–170.
Kukla, A. (1993). Laudan, Leplin, and underdetermination. Analysis, 53, 1–7.
Kukla, A. (1996). Does every theory have empirically equivalent rivals? Erkenntnis, 44, 137–166.
Laudan, L., & Leplin, J. (1991). Empirical equivalence and underdetermination. The Journal of Philosophy, 88, 449–472.
Leplin, J., & Laudan, L. (1993). Determination underdeterred: reply to Kukla. Analysis, 53, 8–16.
McAllister, J. (1989). Truth and beauty in scientific reason. Synthese, 78, 25–51.
McCormmach, R. (1970). H.A. Lorentz and the electromagnetic view of nature. Isis, 61, 459–497.
Miller, A. I. (1981). Albert Einstein’s special theory of relativity: emergence (1905) and early interpretation (1905–1911). New York: Springer.
Norton, J. (2008). Must evidence underdetermine theory? In M. Carrier, D. Howard, & J. Kourany (Eds.), The challenge of the social and the pressure of practice: science and values revisited (pp. 17–44). Pittsburgh: University of Pittsburgh Press.
Okasha, S. (1997). Laudan and Leplin on empirical equivalence. British Journal for the Philosophy of Science, 48, 251–256.
Psillos, S. (1999). Scientific realism: how science tracks truth. London: Routledge.
Suppe, F. (1974). The search for philosophical understanding of scientific theories. In F. Suppe (Ed.), The structure of scientific theories (pp. 3–241). Urbana: University of Illinois Press.
Van Fraassen, B. (1980). The scientific image. Oxford: Clarendon Press.
Walter, S. (2010). Minkowski’s modern world. In V. Petkov (Ed.), Minkowski spacetime: a hundred years later. New York: Springer.
We thank two anonymous referees for their helpful comments and suggestions on an earlier version of this paper.
Acuña, P., Dieks, D. Another look at empirical equivalence and underdetermination of theory choice. European Journal for Philosophy of Science, 4, 153–180 (2014). https://doi.org/10.1007/s13194-013-0080-3
- Empirical equivalence
- Theory choice
- Non-empirical virtues
- Empirical evidence
- Special relativity
- Hendrik Lorentz