Abstract
Perhaps no technological innovation has so dominated the second half of the twentieth century as has the introduction of the programmable computer. It is quite difficult if not impossible to imagine how contemporary affairs—in business and science, communications and transportation, governmental and military activities, for example—could be conducted without the use of computing machines, whose principal contribution has been to relieve us of the necessity for certain kinds of mental exertion. The computer revolution has reduced our mental labors by means of these machines, just as the Industrial Revolution reduced our physical labor by means of other machines.
I am grateful to Tim Colburn, M. M. Lehman, and the editors of this volume for critical comments and valuable suggestions regarding this essay.
Notes
William S. Davis, Fundamental Computer Concepts (Reading, MA: Addison-Wesley, 1986), p. 2.
Stephen C. Kleene, Mathematical Logic (New York: John Wiley and Sons, 1967), ch. 5.
Douglas Downing and Michael Covington, Dictionary of Computer Terms (Woodbury, NY: Barron’s, 1986), p. 117. On the use of the term “heuristics” in the field of artificial intelligence, see Avron Barr and Edward A. Feigenbaum, The Handbook of Artificial Intelligence, vol. I (Reading, MA: Addison-Wesley, 1981), pp. 28–30, 58, 109.
Examples of expert systems may be found in Avron Barr and Edward A. Feigenbaum, The Handbook of Artificial Intelligence, vol. II (Reading, MA: Addison-Wesley, 1982).
A discussion of various kinds of expert systems may be found in James H. Fetzer, Artificial Intelligence: Its Scope and Limits (Dordrecht, The Netherlands: Kluwer Academic Publishers, 1990), pp. 180–191.
Bruce G. Buchanan and Edward H. Shortliffe, eds., Rule-based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project (Reading, MA: Addison-Wesley, 1984).
Ibid., p. 74. The number “.6” represents a “certainty factor” which, on a scale from −1 to 1, indicates how strongly the claim has been confirmed (CF > 0) or disconfirmed (CF < 0); see also note 34 below.
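The arithmetic behind such certainty factors can be made concrete. The following is a minimal Python sketch of the parallel-combination rule for CFs described by Buchanan and Shortliffe (the function name and example values are illustrative, not drawn from the MYCIN source):

```python
def combine_cf(cf1, cf2):
    """Combine two certainty factors on the -1..1 scale using the
    parallel-combination rule: evidence that agrees reinforces,
    evidence that disagrees is discounted."""
    if cf1 >= 0 and cf2 >= 0:
        # Both confirming: the second CF closes part of the remaining gap to 1.
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        # Both disconfirming: symmetric case toward -1.
        return cf1 + cf2 * (1 + cf1)
    # Mixed evidence: the weaker factor partially cancels the stronger.
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two independent rules, each confirming a claim with CF .6, yield
# a combined CF of 0.6 + 0.6 * (1 - 0.6) = 0.84.
combined = combine_cf(0.6, 0.6)
```

Note that the combined value never reaches 1 from finitely many CFs below 1, which reflects the intended contrast with deductive certainty.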
Inference to the best explanation is also known as “abductive inference.” See, for example, James H. Fetzer and Robert Almeder, Glossary of Epistemology/Philosophy of Science (New York: Paragon House, 1993); and especially Yun Peng and James Reggia, Abductive Inference Models for Diagnostic Problem-Solving (New York: Springer-Verlag, 1990).
Barr and Feigenbaum, The Handbook of Artificial Intelligence, vol. II, p. 189. The tendency has been toward the use of measures of subjective probability in lieu of CFs; see note 34 below.
Buchanan and Shortliffe, eds., Rule-based Expert Systems, p. 4.
Ibid., p. 5.
On the project manager, see, for example, Neal Whitten, Managing Software Development Projects (New York: John Wiley and Sons, 1989).
Criteria for the selection of domain experts are discussed by D. A. Waterman, A Guide to Expert Systems (Reading, MA: Addison-Wesley, 1986).
The term “traditional” occurs here in contrast to the (far weaker) “artificial intelligence” conception of knowledge, in particular. On the traditional conception, see, for example, Israel Scheffler, Conditions of Knowledge (Chicago, IL: University of Chicago Press, 1965). On the use of this term in AI, see especially Fetzer, Artificial Intelligence, ch. 5, pp. 127-32.
See James H. Fetzer, Scientific Knowledge (Dordrecht, The Netherlands: D. Reidel, 1981), ch. 1.
The origins of distinctions between analytic and synthetic knowledge can be traced back to the work of eighteenth and nineteenth century philosophers, especially David Hume (1711–1776) and Immanuel Kant (1724–1804). Hume drew a distinction between knowledge of relations between ideas and knowledge of matters of fact, while Kant distinguished between knowledge of conceptual connections and knowledge of the world. While it would not be appropriate to review the history of the distinction here, it should be observed that it has enormous importance in many philosophical contexts. For further discussion, see Robert Ackermann, Theories of Knowledge (New York: McGraw-Hill, 1965); Scheffler, Conditions of Knowledge; and Fetzer and Almeder, Glossary. For a recent defense of the distinction, see Fetzer, Artificial Intelligence, pp. 106-9; and especially James H. Fetzer, Philosophy of Science (New York: Paragon House, 1993), chs. 1 and 3.
Thus, if the “many” who are honest were a large proportion of all the senators, then that degree of support should be high; if it were only a modest proportion, then it should be low; and so on. If the percentage were, say, m/n, then the support conferred upon the conclusion by those premises would presumably equal m/n. See, for example, Fetzer, Scientific Knowledge, Part III; Fetzer, Philosophy of Science, chs. 4-6; and note 34 below.
On the total-evidence condition, see Carl G. Hempel, Aspects of Scientific Explanation (New York: The Free Press, 1965), pp. 53-79.
This is a pragmatic requirement that governs inductive reasoning.
For further discussion, see, for example, Fetzer, Philosophy of Science, ch. 1.
See, for example, Carl G. Hempel, “On the Nature of Mathematical Truth” and “Geometry and Empirical Science,” both of which are reprinted in Herbert Feigl and Wilfrid Sellars, eds., Readings in Philosophical Analysis (New York: Appleton-Century-Crofts, 1949), pp. 222–37 and 238–49.
Thus, as Einstein observed, to the extent to which the laws of mathematics refer to reality, they are not certain; and to the extent to which they are certain, they do not refer to reality—a point I shall pursue.
For further discussion, see, for example, Fetzer, Scientific Knowledge, pp. 14–15.
The differences between stipulative truths and empirical truths are crucial for understanding computer programming.
Davis, Fundamental Computer Concepts, p. 20. It should be observed, however, that some consider the clock to be convenient for but not essential to computer operations.
David Nelson, “Deductive Program Verification (A Practitioner’s Commentary),” Minds and Machines, vol. 2, no. 3 (August 1992), pp. 283–307; the quote is from p. 289. On this and other grounds, Nelson denies that computers are properly described as “mathematical machines” and asserts that they are better described as “logic machines.”
Up to ten billion times as large, according to John Markoff, “Flaw Undermines Accuracy of Pentium Chips,” New York Times, November 24, 1994, pp. C1–C2. As Markoff illustrates, the difficulty involves division: Problem: \( 4{,}195{,}835 - \left[ {\left( {4{,}195{,}835 \div 3{,}145{,}727} \right) \times 3{,}145{,}727} \right] = ? \) Correct calculation: 4,195,835 − [(1.3338204) × 3,145,727] = 0. Pentium’s calculation: 4,195,835 − [(1.3337391) × 3,145,727] = 256.
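Markoff’s check can be sketched in a few lines of Python. A correct floating-point unit makes the residual of x − (x ÷ y) × y vanish (up to rounding); the flawed Pentium returned a quotient of roughly 1.3337391, producing a residual near 256. Modern hardware does not reproduce the bug, so the flawed quotient is simply plugged in by hand here:

```python
# The division check behind the Pentium FDIV flaw.
x, y = 4_195_835, 3_145_727

# Correct IEEE-754 double-precision division: quotient ~ 1.3338204.
quotient = x / y
residual = x - quotient * y   # essentially zero on a correct FPU

# The flawed Pentium's quotient (~1.3337391), substituted manually,
# leaves a residual of about 256.
flawed_quotient = 1.3337391
flawed_residual = x - flawed_quotient * y
```

The point of the example is that the error is not in the last decimal place but in the fifth significant digit, which is why the flaw was detectable with simple integer inputs.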
The remark is attributed to William Kahan of the University of California at Berkeley by Markoff, “Flaw Undermines Accuracy,” p. C1. A number of articles discussing the problem have since appeared, including John Markoff, “Error in Chip Still Dogging Intel Officials,” New York Times, December 6, 1994, p. C4; Laurie Flynn, “A New York Banker Sees Pentium Problems,” New York Times, December 19, 1994, pp. C1–C2; John Markoff, “In About-Face, Intel Will Swap Flawed Pentium Chip for Buyers,” New York Times, December 21, 1994, pp. A1 and C6; and John Markoff, “Intel’s Crash Course on Consumers,” New York Times, December 21, 1994, p. C1.
Including a security loophole with Sun Microsystems that was acknowledged in 1991, as Markoff observes in “Flaw Undermines Accuracy,” p. C2.
John Hockenberry, “Pentium and Our Crisis of Faith,” New York Times, December 28, 1994, p. A11; Peter H. Lewis, “From a Tiny Flaw, a Major Lesson,” New York Times, December 27, 1994, p. B10; and “Cyberscope,” Newsweek, December 12, 1994. Another example of humor at Intel’s expense: Question: What’s another name for the “Intel Inside” sticker they put on Pentiums? Answer: A warning label.
Hockenberry, “Pentium and Our Crisis of Faith,” p. A11.
As M. M. Lehman has observed, another—often more basic—problem can arise when changes in the world affect the truth of assumptions on which programs are based—which leads him to separate (what he calls) S-type and E-type systems, where the latter but not the former are subject to revision under the control of feedback. See, for example, M. M. Lehman, “Feedback, Evolution, and Software Technology,” IEEE Software Process Newsletter, April 1995, for more discussion.
Fetzer, Scientific Knowledge, p. 15. Other problems not discussed in the text include determining the precise conditions that must be satisfied for something to properly qualify as “scientific knowledge” (by arbitrating among inductivist, deductivist, and abductivist models, for example), and the appropriate measures that should be employed in determining degrees of evidential support (by accounting for the proper relations between subjective, frequency, and propensity interpretations of probability), a precondition for the proper appraisal of “certainty factors” (CFs), for example. These issues are pursued in Fetzer, Scientific Knowledge, and Fetzer, Philosophy of Science.
See, for example, Davis, Fundamental Computer Concepts, pp. 110–13.
Michael Marcotty and Henry E. Ledgard, Programming Language Landscape, 2d ed. (Chicago, IL: Science Research Associates, 1986), ch. 2.
See James H. Fetzer, “Philosophical Aspects of Program Verification,” Minds and Machines, vol. 1, no. 2 (May 1991), pp. 197–216.
See James H. Fetzer, “Program Verification: The Very Idea,” Communications of the ACM, vol. 31, no. 9 (September 1988), p. 1057.
Brian C. Smith, “Limits of Correctness in Computers,” Center for the Study of Language and Information, Stanford University, Report No. CSLI-85-35 (1985); reprinted in Charles Dunlop and Rob Kling, eds., Computerization and Controversy (San Diego, CA: Academic Press, 1991), pp. 632-46. The passage quoted here is found on p. 638 (emphasis in original).
Smith, “Limits,” p. 639. As Smith also observes, computers and models themselves are “embedded within the real world,” which is why the symbol for “REAL WORLD” is open in relation to the box, which surrounds the elements marked “COMPUTER” and “MODEL.”
See Fetzer, Philosophy of Science, pp. xii–xiii.
Indeed, on the deductivist model of scientific inquiry, which has been advocated especially by Karl R. Popper, even the adequacy of scientific theories is deductively checkable by comparing deduced consequences with descriptions of the results of observations and experiments, which are warranted by perceptual inference. This process is not a symmetrical decision procedure, since it can lead to the rejection of theories in science but not to their acceptance. The failure to reject on the basis of severe tests, however, counts in favor of a theory. See Karl R. Popper, Conjectures and Refutations (New York: Harper and Row, 1968). On the deductivist model, see Fetzer, Philosophy of Science. The construction of proofs in formal sciences, incidentally, is also an asymmetrical procedure, since the failure to discover a proof does not establish that it does not exist.
The use of the term “propensity” is crucial here, since it refers to the strength of the causal tendency. The general standard being employed may be referred to as the propensity criterion of causal relevance. See, for example, Fetzer, Scientific Knowledge, and Fetzer, Philosophy of Science, for technical elaboration.
The use of the term “frequency” is crucial here, since it refers to the relative frequency of an attribute. The general standard being employed may be referred to as the frequency criterion of statistical relevance. See, for example, Wesley C. Salmon, Statistical Explanation and Statistical Relevance (Pittsburgh, PA: University of Pittsburgh Press, 1971). But Salmon mistakes statistical relevance for explanatory relevance.
Strictly speaking, in the case of propensities, causal relations and relative frequencies are related probabilistically. See, for example, Fetzer, Scientific Knowledge, and Fetzer, Philosophy of Science.
Even when the chemical composition, the manner of striking, and the dryness of a match are causally relevant to its lighting, that outcome may be predicted with deductive certainty (when the relationship is deterministic) or with probabilistic confidence (when the relationship is indeterministic) only if no other relevant properties, such as the presence or absence of oxygen, have been overlooked. For discussion, see, for example, James H. Fetzer, “The Frame Problem: Artificial Intelligence Meets David Hume,” International Journal of Expert Systems, vol. 3, no. 3 (1990), pp. 219–32; and James H. Fetzer, “Artificial Intelligence Meets David Hume: A Response to Pat Hayes,” International Journal of Expert Systems, vol. 3, no. 3 (1990), pp. 239–47.
Laws of nature are nature’s algorithms. See Fetzer, “Artificial Intelligence Meets David Hume,” p. 239. A complete theory of the relations between models of the world and the world would include a defense of the abductivist model of science as “inference to the best explanation.” [Note added 2000: But see p. 178, n. 27, above.]
Smith thus appears to have committed a fallacy of equivocation by his ambiguous use of the phrase “theory of the model-world relationship.”
Whitten, Managing Software Development Projects, p. 13.
Jonathan Jacky, “Safety-Critical Computing: Hazards, Practices, Standards, and Regulations,” The Sciences, September/October 1989; reprinted in Dunlop and Kling, Computerization and Controversy, pp. 612-31.
As M. M. Lehman has observed (in personal communication with the author), specifications are frequently merely partial models of the problem to be solved.
C. A. R. Hoare, “An Axiomatic Basis for Computer Programming,” Communications of the ACM, vol. 12, no. 10 (October 1969), pp. 576–80 and 583; the quotation may be found on p. 576.
Smith, “Limits,” pp. 639–43. Other authors have concurred. See, for example, Alan Borning, “Computer System Reliability and Nuclear War,” Communications of the ACM, vol. 30, no. 2 (February 1987), pp. 112-31; reprinted in Dunlop and Kling, Computerization and Controversy.
For further discussion, see, for example, B. W. Boehm, Software Engineering Economics (New York: Prentice-Hall, 1981).
See Fetzer, “Program Verification,” pp. 1056–57.
See James H. Fetzer, “Author’s Response,” Communications of the ACM, vol. 32, no. 4 (April 1989), p. 512.
Hoare, “An Axiomatic Basis for Computer Programming,” p. 579.
See Fetzer, “Program Verification.”
Avra Cohn, “The Notion of Proof in Hardware Verification,” Journal of Automated Reasoning, vol. 5, no. 2 (June 1989), p. 132.
Bev Littlewood and Lorenzo Strigini, “The Risks of Software,” Scientific American, November 1992, p. 65.
Ibid., pp. 65-66 and 75.
David Shepherd and Greg Wilson, “Making Chips That Work,” New Scientist, May 13, 1989, pp. 61–64.
Ibid., p. 62.
Littlewood and Strigini, “The Risks of Software,” p. 75.
C. A. R. Hoare, “Mathematics of Programming,” BYTE, August 1986, p. 116.
David Parnas, quoted in William E. Suydam, “Approaches to Software Testing Embroiled in Debate,” Computer Design, vol. 24, no. 21 (November 15, 1986), p. 50.
Littlewood and Strigini, “The Risks of Software,” p. 63.
For an introduction to chaos theory, see James Gleick, Chaos: Making a New Science (New York: Penguin Books, 1988).
See Fetzer, “The Frame Problem,” pp. 228–29. Predictions based upon partial and incomplete descriptions of chaotic systems are obviously fallible—their reliability would appear to be unknowable.
Sometimes unverifiable or incorrect programs can even be preferred; see Stephen Savitzky, “Technical Correspondence,” Communications of the ACM, vol. 32, no. 3 (March 1989), p. 377. These include cases where a verifiably incorrect program yields a better performance than a verifiably correct program as a solution to a problem—where the most important features of a successful solution may involve properties that are difficult or impossible to formalize—and other kinds of situations.
For further discussion, see Fetzer, Scientific Knowledge, and Fetzer, Philosophy of Science.
M. M. Lehman, “Uncertainty in Computer Application,” Communications of the ACM, vol. 33, no. 5 (May 1990), p. 585 (emphasis added).
Nelson, “Deductive Program Verification” (supra note 27), p. 292.
Donald Gillies has informed me that Hoare now advocates this position.
The prospect of having to conduct statistical tests of nuclear weapons, space shuttle launches, etc., suggests the dimensions of the problem.
See, for example, Jacky, “Safety-Critical Computing,” pp. 622–27.
Ibid., p. 627. An excellent and accessible discussion of problems involving computer systems that affect many important areas of life is Ivars Peterson, Fatal Defect: Chasing Killer Computer Bugs (New York: Random House/Times Books, 1995).
© 2001 Springer Science+Business Media Dordrecht
Fetzer, J.H. (2001). Computer Reliability and Public Policy: Limits of Knowledge of Computer-Based Systems. In: Computers and Cognition: Why Minds are not Machines. Studies in Cognitive Systems, vol 25. Springer, Dordrecht. https://doi.org/10.1007/978-94-010-0973-7_11