Abstract
A theory is scientific rather than pseudoscientific if it is capable of receiving genuine ‘support’ from the ‘facts’. One scientific theory is better than another rival theory if it is better supported by the facts than its rivals. Although some would reject the term ‘support’ and replace it by ‘confirm’ or ‘corroborate’, most recent attempts to provide an objective and generally applicable criterion of scientific merit have started essentially from these two assumptions. But when does a fact provide genuine support for a theory? And when do the facts support one theory better than another?
Keywords
- Background Knowledge
- Scientific Research Programme
- Explanatory Content
- Auxiliary Assumption
- Powerful Heuristic
Notes
See, e.g. Popper [1963], p. 390: “‘background knowledge’… is… all those things which we accept (tentatively) as unproblematic while we are testing the theory”. There is a slight difficulty here. Popper requires the background knowledge to a theory to be consistent with it. But as he himself points out it is one mark of a very good theory if it corrects (i.e. is inconsistent with) previously accepted factual statements. This means that one cannot know in advance of the proposal of a theory what its background knowledge will be! Popper requires that the previously accepted factual statements contradicted by the theory drop out of background knowledge. Thus a theory is no more severely tested by a test whose result it predicts to be different from the result predicted by background knowledge, than it is in a case where background knowledge remains silent about the result. Indeed, on this account, a theory receives less credit for successfully contradicting accepted knowledge than it receives for successfully going against a result which accepted knowledge makes ‘highly probable’. This is surely contrary to the spirit of the Popperian programme. On an historical note, this counter-intuitive consequence of Popper’s corroboration theory seems to me to have arisen because of the attempt to make background knowledge serve two distinct purposes. It was originally meant to consist of those extra assumptions, both singular and universal, required in the deduction of testable consequences from a scientific theory (see below, pp. 52–4). This is indicated in the quotation from Popper above. However it was then pressed into service to eliminate trivial confirmations (or corroborations) of theories — a theory should not get credit for simply predicting something that was already part of background knowledge. 
Indeed, speaking informally, in Popper’s definition of the severity of a test whose outcome is e for an hypothesis h given background knowledge b, a definition which makes the severity depend on p(e, h ⋅ b) minus p(e, b), b plays one role in the first probability function (there it is the set of those extra assumptions we have to make in order to derive e from h) and a second role in the second probability function (there it is the set of already accepted knowledge). In what follows we are essentially investigating how successfully background knowledge performs its second role — that of ruling out trivial confirmations of theories.
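Stated schematically, the severity measure just paraphrased compares two probabilities (the functional form is taken from Popper’s definition as described here; the explicit display and the label “severity” used as a function name are mine):

```latex
% Popper's severity of a test with outcome e of hypothesis h,
% relative to background knowledge b:
\mathrm{severity}(e, h, b) \;=\; p(e,\, h \cdot b) \;-\; p(e,\, b)
% In the first term, b supplies the auxiliary assumptions needed to
% derive e from h; in the second, b stands for already accepted
% knowledge -- the two distinct roles discussed in the text.
```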
Lakatos’s [1970] account is essentially the same as this.
See for example his [1971], p. 104.
See e.g. Adler, Bazin and Schiffer [1965].
See Whittaker [1910], pp. 135–6.
See especially Zahar [1973], §3.1.
Elie Zahar in his [1973] argues that this widespread belief is ill-founded, for there were completely independent reasons within Lorentz’s programme for giving the specific value to this parameter. Thus the Michelson-Morley result did in fact support Lorentz’s programme. But of course Zahar would agree that had Lorentz’s explanation been arrived at in the way described in the text, it would not have been supported by the Michelson result. For the sake of logical clarity I should add that although I speak here of classical theory being provided with a new parameter, in a sense the parameter was already implicit. That is, it was already assumed that there was no contraction of rigid rods. If some such assumption had not already been made, this new assumption (unless it introduced inconsistency) could not affect the theory’s predictions.
I should make it clear that the methodology of research programmes does not condemn the practice of, for example, reading off the values of parameters from some experimental results. This happens in all the best research programmes. For example, the wave theory arrives at the values of the wavelengths λi of various kinds of monochromatic light by predicting various interference fringe spacings as functions of λi, and then reading off the value of λi from the observed fringe spacings. The methodology merely states that, having used these facts to construct their theory, wave theoreticians must look to other facts to support the theory. (See my [1975] for the details of the Fresnel case.)
In fact Popper’s discussion of conventionalist stratagems indicates that he had already spotted the problem in 1934. He meets the problem (more or less) head on in his [1957] paper ‘The Aim of Science’. The problem had often been discovered before. For example, Duhem recognised that it is not difficult to construct ‘purely artificial’ theoretical systems, in which ‘we see in the hypotheses on which [such a system] rests, statements skillfully worked out so that they represent the experimental laws already known’ (Duhem [1906], p. 28); it is only by avoiding such artificial systems that we can hope to progress toward the ‘natural classification’.
This justification can for example be based on Popper’s requirement that a theory be given credit only when it has ‘stuck out its neck’.
This point is made as a criticism of Lakatos’s [1970] criterion of scientific progress by Zahar on p. 102 of his [1973].
Musgrave [1974].
Whereas the Popperian account makes the empirical support relation a three-place relation ES(h, e, b) between a hypothesis, some evidence and background knowledge, our new account makes it a three-place relation ES(h, e, b′), where b′ is only the background knowledge used in the construction of a theory.
This is really the basis of Musgrave’s claim (see above, p. 50) that this approach to empirical support reduces to absurdity.
See pp. 60–1: Whether some fact was used in the construction of a theory is an objective matter, quite separate from any question about whether the theory’s inventor knew of or ‘was aware of’ the fact. In the above case of the two scientists who introduce the same theory, if the first has to use some fact in order to construct his theory, whilst the second does not, then the second scientist has shown that there are theoretical considerations which are supported by this fact (although the first scientist was not aware of it). Thus in deciding whether some fact, according to this new account, supports a theory one will ask such questions as “Did x’s programme give him independent reasons for fixing this parameter in this theory at this value, or did its value have to be ‘read off’ from some observations?” and not such questions as “Did x know of this fact or have this fact in mind when he developed this theory?”
Below, p. 58ff.
Kuhn [1962]. Similar points were made by Agassi in his attack on what he calls Boyle’s rule (see Agassi [1966]) and by Feyerabend (see for example his [1963] and his [1975]).
The fact that a decision is involved here is particularly well emphasised by Popper (see especially his discussion of Fries’s trilemma in his [1934] pp. 93–111).
What, for example, if the meter-reader was drunk or had bad reflexes?
In the best research programmes the heuristic may give us some indication which auxiliary assumption needs to be replaced.
It was a mistake on Lakatos’s part to think that a ‘protective belt’ could get constructed in this way. Simply adding extra assumptions to a theoretical system cannot block the derivation of a false observational consequence.
For an example of a ‘degenerating research programme’ of whose historical accuracy I am more confident, see Chapter 3 of my [1975]. (The example is Biot’s development of the corpuscular optics research programme.)
This was already pointed out by Lakatos (see his [1970], pp. 184–8).
This would reduce it to the sophisticated falsificationist account which is essentially that given by Watkins above.
This is one important way in which the criterion of progress I have been advocating differs from the one due to Popper; although of course it owes a good deal to the Popper who rejects ‘conventionalist stratagems’ and the like. Further differences are these: (i) (to repeat what I said above, p. 52) Popper’s corroboration appraisals cannot distinguish between any shifts between refuted theories (the group of Newtonian assumptions amended to include the new planet was still inconsistent with some observational results, e.g. about the Moon); (ii) Popper never applied these ideas to the Duhem-Quine problem; indeed he twice denied that such a problem exists by denying (without argument) that Duhem had shown the inconclusiveness of falsification (see Popper [1934], p. 78, footnote *, and [1963], p. 112); and (iii) that Popper was occasionally confused on these matters is well illustrated by the fact that there are two entries in the subject index of his [1963] (p. 413): ‘Marxism-refuted’ and ‘Marxism-made irrefutable’; these two claims are rather difficult to reconcile unless one has the idea of various versions of a Marxist research programme, which versions may differ in refutability. But even then the point is not that Marxism has been made completely irrefutable but that there has been no increase in genuine empirical content (and thus in refutability) in the various theory shifts that have been made in response to refutations of previous theories.
See Zahar [1973].
See especially my [1975], though some details are to be found in my [1976].
Zahar [1973].
See Whittaker [1910], pp. 132–6, and my [1975]. For another example (Bohr’s early quantum programme), see Lakatos [1970], pp. 140–154. I should add that having a powerful heuristic indicates only that the programme is likely to be progressive in the theoretical sense: it will produce theories with extra potential empirical support over their predecessors. Whether or not some of this extra content is empirically confirmed, so that the programme is also empirically progressive, is in the lap of the experimenters.
This seems to be true of the heuristic guidance offered to various classical programmes by the assumption of the existence of the ether. This guidance was very strong at the time of Fresnel but difficulties presented themselves and it had become very weak by the time of Lorentz (see Zahar [1973]; also Schaffner [1972]).
See Lakatos and Zahar [1975].
For this particular example see Urbach [1974]. When I speak of the strength of a heuristic I am referring to its wide applicability, relatively unexhausted state, and ability to operate independently of facts. There is another sense in which one might want to speak of a heuristic’s strength, namely how nearly it approaches being an algorithm. The heuristic of the Ptolemaic programme was strong in this second sense, but weak in mine.
Zahar argues in his [1973] that the classical programme progressed in Lorentz’s hands at least in the empirical sense: it derived new support from the result of the Michelson-Morley experiment. (Zahar argues however that Lorentz’s programme was not progressive in all senses for heuristically the classical programme had degenerated.)
After all, if it were ‘irrational’ to work on a degenerating programme we should have to pronounce irrational all those geniuses who took up some old idea which hitherto no one had successfully developed and who turned it into a progressive research programme. (See Section 5 of my [1976].)
See e.g. Feyerabend [1964].
They occur, however, rather less often than Feyerabend would have us believe. In his [1975], for example, he counts the loss of content about the specific gravity of phlogiston in the Chemical Revolution as an example of incommensurability. But, of course, losses in theoretical content do occur in revolutions; the interesting question is whether losses in empirical content occur.
The details of this story are fascinating. Light pressure was accepted as experimentally detected only after Stokes had shown that it could also be predicted on the basis of the version of the wave theory then current.
See Watkins above.
Even this explanation was far from uncontroversial. For the controversy see Wood [1905], Chapter vi (this was dropped from subsequent editions of Wood’s book).
© 1978 D. Reidel Publishing Company, Dordrecht, Holland
Worrall, J. (1978). The Ways in which the Methodology of Scientific Research Programmes Improves on Popper’s Methodology. In: Radnitzky, G., Andersson, G. (eds) Progress and Rationality in Science. Boston Studies in the Philosophy of Science, vol 58. Springer, Dordrecht. https://doi.org/10.1007/978-94-009-9866-7_3
Print ISBN: 978-90-277-0922-6
Online ISBN: 978-94-009-9866-7