Computer Reliability and Public Policy: Limits of Knowledge of Computer-Based Systems

  • Chapter
Computers and Cognition: Why Minds are not Machines

Part of the book series: Studies in Cognitive Systems (COGS, volume 25)

Abstract

Perhaps no technological innovation has so dominated the second half of the twentieth century as has the introduction of the programmable computer. It is quite difficult if not impossible to imagine how contemporary affairs—in business and science, communications and transportation, governmental and military activities, for example—could be conducted without the use of computing machines, whose principal contribution has been to relieve us of the necessity for certain kinds of mental exertion. The computer revolution has reduced our mental labors by means of these machines, just as the Industrial Revolution reduced our physical labor by means of other machines.

I am grateful to Tim Colburn, M. M. Lehman, and the editors of this volume for critical comments and valuable suggestions regarding this essay.

Notes

  1. William S. Davis, Fundamental Computer Concepts (Reading, MA: Addison-Wesley, 1986), p. 2.

  2. Stephen C. Kleene, Mathematical Logic (New York: John Wiley and Sons, 1967), ch. 5.

  3. Douglas Downing and Michael Covington, Dictionary of Computer Terms (Woodbury, NY: Barron’s, 1986), p. 117. On the use of the term “heuristics” in the field of artificial intelligence, see Avron Barr and Edward A. Feigenbaum, The Handbook of Artificial Intelligence, vol. I (Reading, MA: Addison-Wesley, 1981), pp. 28-30, 58, 109.

  4. Examples of expert systems may be found in Avron Barr and Edward A. Feigenbaum, The Handbook of Artificial Intelligence, vol. II (Reading, MA: Addison-Wesley, 1982).

  5. A discussion of various kinds of expert systems may be found in James H. Fetzer, Artificial Intelligence: Its Scope and Limits (Dordrecht, The Netherlands: Kluwer Academic Publishers, 1990), pp. 180–191.

  6. Bruce G. Buchanan and Edward H. Shortliffe, eds., Rule-based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project (Reading, MA: Addison-Wesley, 1984).

  7. Ibid., p. 74. The number “.6” represents a “certainty factor” which, on a scale from -1 to 1, indicates how strongly the claim has been confirmed (CF > 0) or disconfirmed (CF < 0); see also note 34 below.
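To make the convention concrete, here is a minimal Python sketch (an addition, not drawn from the text): it merely encodes the stated scale, on which CF > 0 indicates confirmation and CF < 0 disconfirmation; the function name and the sample values other than .6 are hypothetical.

```python
def interpret_cf(cf: float) -> str:
    """Interpret a MYCIN-style certainty factor on the scale from -1 to 1.

    CF > 0 means the evidence confirms the claim, CF < 0 means it
    disconfirms the claim, and CF = 0 means the evidence is neutral.
    (Illustrative helper; the name and sample values are not from the source.)
    """
    if not -1.0 <= cf <= 1.0:
        raise ValueError("certainty factors range from -1 to 1")
    if cf > 0:
        return "confirming"
    if cf < 0:
        return "disconfirming"
    return "neutral"

# The MYCIN rule cited in the text attaches a certainty factor of .6
# to its conclusion:
print(interpret_cf(0.6))   # -> "confirming"
print(interpret_cf(-0.3))  # -> "disconfirming"
```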

  8. Inference to the best explanation is also known as “abductive inference.” See, for example, James H. Fetzer and Robert Almeder, Glossary of Epistemology/Philosophy of Science (New York: Paragon House, 1993); and especially Yun Peng and James Reggia, Abductive Inference Models for Diagnostic Problem-Solving (New York: Springer-Verlag, 1990).

  9. Barr and Feigenbaum, The Handbook of Artificial Intelligence, vol. II, p. 189. The tendency has been toward the use of measures of subjective probability in lieu of CFs; see note 34 below.

  10. Buchanan and Shortliffe, eds., Rule-based Expert Systems, p. 4.

  11. Ibid., p. 5.

  12. On the project manager, see, for example, Neal Whitten, Managing Software Development Projects (New York: John Wiley and Sons, 1989).

  13. Criteria for the selection of domain experts are discussed by D. A. Waterman, A Guide to Expert Systems (Reading, MA: Addison-Wesley, 1986).

  14. The term “traditional” occurs here in contrast to the (far weaker) “artificial intelligence” conception of knowledge, in particular. On the traditional conception, see, for example, Israel Scheffler, Conditions of Knowledge (Chicago, IL: University of Chicago Press, 1965). On the use of this term in AI, see especially Fetzer, Artificial Intelligence, ch. 5, pp. 127-32.

  15. See James H. Fetzer, Scientific Knowledge (Dordrecht, The Netherlands: D. Reidel, 1981), ch. 1.

  16. The origins of distinctions between analytic and synthetic knowledge can be traced back to the work of eighteenth and nineteenth century philosophers, especially David Hume (1711–1776) and Immanuel Kant (1724–1804). Hume drew a distinction between knowledge of relations between ideas and knowledge of matters of fact, while Kant distinguished between knowledge of conceptual connections and knowledge of the world. While it would not be appropriate to review the history of the distinction here, it should be observed that it has enormous importance in many philosophical contexts. For further discussion, see Robert Ackermann, Theories of Knowledge (New York: McGraw-Hill, 1965); Scheffler, Conditions of Knowledge; and Fetzer and Almeder, Glossary. For a recent defense of the distinction, see Fetzer, Artificial Intelligence, pp. 106-9; and especially James H. Fetzer, Philosophy of Science (New York: Paragon House, 1993), chs. 1 and 3.

  17. Thus, if the “many” who are honest were a large proportion of all the senators, then that degree of support should be high; if it were only a modest proportion, then it should be low; and so on. If the percentage were, say, m/n, then the support conferred upon the conclusion by those premises would presumably equal m/n. See, for example, Fetzer, Scientific Knowledge, Part III; Fetzer, Philosophy of Science, chs. 4-6; and note 34 below.
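A minimal sketch of the proportional reading of support described above, assuming only that the support conferred is the ratio m/n; the function name and the sample figures (90 of 100, 55 of 100) are hypothetical illustrations, not from the text.

```python
from fractions import Fraction

def inductive_support(m: int, n: int) -> Fraction:
    """Degree of support conferred on the conclusion when m of the n
    members of the reference class have the attribute, on the
    proportional reading sketched in the note (support = m/n)."""
    if n <= 0 or not 0 <= m <= n:
        raise ValueError("require 0 <= m <= n and n > 0")
    return Fraction(m, n)

# Hypothetical figures: if 90 of 100 senators were honest, the premises
# would confer support of 9/10 on the conclusion; if only 55 of 100,
# support of 11/20, and so on.
print(inductive_support(90, 100))  # 9/10
print(inductive_support(55, 100))  # 11/20
```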

  18. On the total-evidence condition, see Carl G. Hempel, Aspects of Scientific Explanation (New York: The Free Press, 1965), pp. 53-79.

  19. This is a pragmatic requirement that governs inductive reasoning.

  20. For further discussion, see, for example, Fetzer, Philosophy of Science, ch. 1.

  21. See, for example, Carl G. Hempel, “On the Nature of Mathematical Truth” and “Geometry and Empirical Science,” both of which are reprinted in Herbert Feigl and Wilfrid Sellars, eds., Readings in Philosophical Analysis (New York: Appleton-Century-Crofts, 1949), pp. 222–237 and 238-49.

  22. Thus, as Einstein observed, to the extent to which the laws of mathematics refer to reality, they are not certain; and to the extent to which they are certain, they do not refer to reality—a point I shall pursue.

  23. For further discussion, see, for example, Fetzer, Scientific Knowledge, pp. 14–15.

  24. The differences between stipulative truths and empirical truths are crucial for understanding computer programming.

  25. Davis, Fundamental Computer Concepts, p. 20. It should be observed, however, that some consider the clock to be convenient for but not essential to computer operations.

  26. David Nelson, “Deductive Program Verification (A Practitioner’s Commentary),” Minds and Machines, vol. 2, no. 3 (August 1992), pp. 283–307; the quote is from p. 289. On this and other grounds, Nelson denies that computers are properly described as “mathematical machines” and asserts that they are better described as “logic machines.”

  27. Up to ten billion times as large, according to John Markoff, “Flaw Undermines Accuracy of Pentium Chips,” New York Times, November 24, 1994, pp. C1–C2. As Markoff illustrates, the difficulty involves division:
    Problem: \( 4,195,835 - \left[ \left( 4,195,835 \div 3,145,727 \right) \times 3,145,727 \right] = ? \)
    Correct Calculation: \( 4,195,835 - \left[ \left( 4,195,835 \div 3,145,727 \right) \times 3,145,727 \right] = 0 \)
    Pentium’s Calculation: \( 4,195,835 - \left[ 1.3337391 \times 3,145,727 \right] = 256 \)
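The check can be reproduced directly. The following Python sketch is an addition to the note: it uses exact rational arithmetic to show that the expression evaluates to 0 when the division is performed correctly, and it recovers an error of roughly 256 from the flawed quotient Markoff reports.

```python
from fractions import Fraction

a, b = 4_195_835, 3_145_727

# Exact arithmetic: 4,195,835 - [(4,195,835 / 3,145,727) * 3,145,727] = 0
print(a - Fraction(a, b) * b)        # 0

# The flawed Pentium effectively returned a quotient of about 1.3337391
# rather than roughly 1.33382, which is how the error of 256 arises:
print(round(a - 1.3337391 * b))      # 256
```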

  28. The remark is attributed to William Kahan of the University of California at Berkeley by Markoff, “Flaw Undermines Accuracy,” p. C1. A number of articles discussing the problem have since appeared, including John Markoff, “Error in Chip Still Dogging Intel Officials,” New York Times, December 6, 1994, p. C4; Laurie Flynn, “A New York Banker Sees Pentium Problems,” New York Times, December 19, 1994, pp. C1–C2; John Markoff, “In About-Face, Intel Will Swap Flawed Pentium Chip for Buyers,” New York Times, December 21, 1994, pp. A1 and C6; and John Markoff, “Intel’s Crash Course on Consumers,” New York Times, December 21, 1994, p. C1.

  29. Including a security loophole with Sun Microsystems that was acknowledged in 1991, as Markoff observes in “Flaw Undermines Accuracy,” p. C2.

  30. John Hockenberry, “Pentium and Our Crisis of Faith,” New York Times, December 28, 1994, p. A11; Peter H. Lewis, “From a Tiny Flaw, a Major Lesson,” New York Times, December 27, 1994, p. B10; and “Cyberscope,” Newsweek, December 12, 1994. Another example of humor at Intel’s expense: Question: What’s another name for the “Intel Inside” sticker they put on Pentiums? Answer: A warning label.

  31. Hockenberry, “Pentium and Our Crisis of Faith,” p. A11.

  32. As M. M. Lehman has observed, another—often more basic—problem can arise when changes in the world affect the truth of assumptions on which programs are based—which leads him to separate (what he calls) S-type and E-type systems, where the latter but not the former are subject to revision under the control of feedback. See, for example, M. M. Lehman, “Feedback, Evolution, and Software Technology,” IEEE Software Process Newsletter, April 1995, for more discussion.

  33. Fetzer, Scientific Knowledge, p. 15. Other problems not discussed in the text include determining the precise conditions that must be satisfied for something to properly qualify as “scientific knowledge” (by arbitrating among inductivist, deductivist, and abductivist models, for example), and the appropriate measures that should be employed in determining degrees of evidential support (by accounting for the proper relations between subjective, frequency, and propensity interpretations of probability), a precondition for the proper appraisal of “certainty factors” (CFs), for example. These issues are pursued in Fetzer, Scientific Knowledge, and Fetzer, Philosophy of Science.

  34. See, for example, Davis, Fundamental Computer Concepts, pp. 110–13.

  35. Michael Marcotty and Henry E. Ledgard, Programming Language Landscape, 2d ed. (Chicago, IL: Science Research Associates, 1986), ch. 2.

  36. See James H. Fetzer, “Philosophical Aspects of Program Verification,” Minds and Machines, vol. 1, no. 2 (May 1991), pp. 197–216.

  37. See James H. Fetzer, “Program Verification: The Very Idea,” Communications of the ACM, vol. 31, no. 9 (September 1988), p. 1057.

  38. Brian C. Smith, “Limits of Correctness in Computers,” Center for the Study of Language and Information, Stanford University, Report No. CSLI-85-35 (1985); reprinted in Charles Dunlop and Rob Kling, eds., Computerization and Controversy (San Diego, CA: Academic Press, 1991), pp. 632-46. The passage quoted here is found on p. 638 (emphasis in original).

  39. Smith, “Limits,” p. 639. As Smith also observes, computers and models themselves are “embedded within the real world,” which is why the symbol for “REAL WORLD” is open in relation to the box, which surrounds the elements marked “COMPUTER” and “MODEL.”

  40. See Fetzer, Philosophy of Science, pp. xii–xiii.

  41. Indeed, on the deductivist model of scientific inquiry, which has been advocated especially by Karl R. Popper, even the adequacy of scientific theories is deductively checkable by comparing deduced consequences with descriptions of the results of observations and experiments, which are warranted by perceptual inference. This process is not a symmetrical decision procedure, since it can lead to the rejection of theories in science but not to their acceptance. The failure to reject on the basis of severe tests, however, counts in favor of a theory. See Karl R. Popper, Conjectures and Refutations (New York: Harper and Row, 1968). On the deductivist model, see Fetzer, Philosophy of Science. The construction of proofs in formal sciences, incidentally, is also an asymmetrical procedure, since the failure to discover a proof does not establish that it does not exist.
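As a hedged illustration of the asymmetry (not part of the original note), the following Python sketch compares a universal hypothesis with a finite set of observations: a single counterexample suffices for rejection, while surviving every test never amounts to acceptance. The function, the sample claim, and the data are all hypothetical.

```python
from typing import Callable, Iterable

def severe_test(hypothesis: Callable[[float], bool],
                observations: Iterable[float]) -> str:
    """Compare a universal hypothesis ('for all observed x, P(x)') with data.

    A single counterexample deductively refutes the hypothesis; surviving
    every test only means it has not been rejected so far, never that it
    has been verified.
    """
    for x in observations:
        if not hypothesis(x):
            return f"refuted by counterexample {x}"
    return "not refuted by these observations (but not thereby proven)"

# Hypothetical example: the claim that every measured value lies below 10.0
claim = lambda x: x < 10.0
print(severe_test(claim, [2.3, 7.8, 9.9]))   # survives, yet remains fallible
print(severe_test(claim, [2.3, 12.4]))       # refuted by counterexample 12.4
```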

  42. The use of the term “propensity” is crucial here, since it refers to the strength of the causal tendency. The general standard being employed may be referred to as the propensity criterion of causal relevance. See, for example, Fetzer, Scientific Knowledge, and Fetzer, Philosophy of Science, for technical elaboration.

  43. The use of the term “frequency” is crucial here, since it refers to the relative frequency of an attribute. The general standard being employed may be referred to as the frequency criterion of statistical relevance. See, for example, Wesley C. Salmon, Statistical Explanation and Statistical Relevance (Pittsburgh, PA: University of Pittsburgh Press, 1971). But Salmon mistakes statistical relevance for explanatory relevance.

  44. Strictly speaking, in the case of propensities, causal relations and relative frequencies are related probabilistically. See, for example, Fetzer, Scientific Knowledge, and Fetzer, Philosophy of Science.

  45. Even when the chemical composition, the manner of striking, and the dryness of a match are causally relevant to its lighting, that outcome may be predicted with deductive certainty (when the relationship is deterministic) or with probabilistic confidence (when the relationship is indeterministic) only if no other relevant properties, such as the presence or absence of oxygen, have been overlooked. For discussion, see, for example, James H. Fetzer, “The Frame Problem: Artificial Intelligence Meets David Hume,” International Journal of Expert Systems, vol. 3, no. 3 (1990), pp. 219–32; and James H. Fetzer, “Artificial Intelligence Meets David Hume: A Response to Pat Hayes,” International Journal of Expert Systems, vol. 3, no. 3 (1990), pp. 239-47.

  46. Laws of nature are nature’s algorithms. See Fetzer, “Artificial Intelligence Meets David Hume,” p. 239. A complete theory of the relations between models of the world and the world would include a defense of the abductivist model of science as “inference to the best explanation.” [Note added 2000: But see p. 178, n. 27, above.]

  47. Smith thus appears to have committed a fallacy of equivocation by his ambiguous use of the phrase “theory of the model-world relationship.”

  48. Whitten, Managing Software Development Projects, p. 13.

  49. Jonathan Jacky, “Safety-Critical Computing: Hazards, Practices, Standards, and Regulations,” The Sciences, September/October 1989; reprinted in Dunlop and Kling, Computerization and Controversy, pp. 612-31.

  50. As M. M. Lehman has observed (in personal communication with the author), specifications are frequently merely partial models of the problem to be solved.

  51. C. A. R. Hoare, “An Axiomatic Basis for Computer Programming,” Communications of the ACM, vol. 12, no. 10 (October 1969), pp. 576–80 and 583; the quotation may be found on p. 576.

  52. Smith, “Limits,” pp. 639–43. Other authors have concurred. See, for example, Alan Borning, “Computer System Reliability and Nuclear War,” Communications of the ACM, vol. 30, no. 2 (February 1987), pp. 112-31; reprinted in Dunlop and Kling, Computerization and Controversy.

  53. For further discussion, see, for example, B. W. Boehm, Software Engineering Economics (New York: Prentice-Hall, 1981).

  54. See Fetzer, “Program Verification,” pp. 1056–57.

  55. See James H. Fetzer, “Author’s Response,” Communications of the ACM, vol. 32, no. 4 (April 1989), p. 512.

  56. Hoare, “An Axiomatic Basis for Computer Programming,” p. 579.

  57. See Fetzer, “Program Verification.”

  58. Avra Cohn, “The Notion of Proof in Hardware Verification,” Journal of Automated Reasoning, vol. 5, no. 2 (June 1989), p. 132.

  59. Bev Littlewood and Lorenzo Strigini, “The Risks of Software,” Scientific American, November 1992, p. 65.

  60. Ibid., pp. 65-66 and 75.

  61. David Shepherd and Greg Wilson, “Making Chips That Work,” New Scientist, May 13, 1989, pp. 61–64.

  62. Ibid., p. 62.

  63. Littlewood and Strigini, “The Risks of Software,” p. 75.

  64. C. A. R. Hoare, “Mathematics of Programming,” BYTE, August 1986, p. 116.

  65. David Parnas, quoted in William E. Suydam, “Approaches to Software Testing Embroiled in Debate,” Computer Design, vol. 24, no. 21 (November 15, 1986), p. 50.

  66. Littlewood and Strigini, “The Risks of Software,” p. 63.

  67. For an introduction to chaos theory, see James Gleick, Chaos: Making a New Science (New York: Penguin Books, 1988).

  68. See Fetzer, “The Frame Problem,” pp. 228–29. Predictions based upon partial and incomplete descriptions of chaotic systems are obviously fallible—their reliability would appear to be unknowable.

  69. Sometimes unverifiable or incorrect programs can even be preferred; see Stephen Savitzky, “Technical Correspondence,” Communications of the ACM, vol. 32, no. 3 (March 1989), p. 377. These include cases in which a verifiably incorrect program performs better as a solution to a problem than a verifiably correct one, where the most important features of a successful solution may involve properties that are difficult or impossible to formalize, as well as other kinds of situations.

  70. For further discussion, see Fetzer, Scientific Knowledge, and Fetzer, Philosophy of Science.

  71. M. M. Lehman, “Uncertainty in Computer Application,” Communications of the ACM, vol. 33, no. 5 (May 1990), p. 585 (emphasis added).

  72. Nelson, “Deductive Program Verification” (supra note 27), p. 292.

  73. Donald Gillies has informed me that Hoare now advocates this position.

  74. The prospect of having to conduct statistical tests of nuclear weapons, space shuttle launches, etc., suggests the dimensions of the problem.

  75. See, for example, Jacky, “Safety-Critical Computing,” pp. 622–27.

  76. Ibid., p. 627. An excellent and accessible discussion of problems involving computer systems that affect many important areas of life is Ivars Peterson, Fatal Defect: Chasing Killer Computer Bugs (New York: Random House/Times Books, 1995).

Copyright information

© 2001 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Fetzer, J.H. (2001). Computer Reliability and Public Policy: Limits of Knowledge of Computer-Based Systems. In: Computers and Cognition: Why Minds are not Machines. Studies in Cognitive Systems, vol 25. Springer, Dordrecht. https://doi.org/10.1007/978-94-010-0973-7_11

  • DOI: https://doi.org/10.1007/978-94-010-0973-7_11

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-1-4020-0243-4

  • Online ISBN: 978-94-010-0973-7
