Belief in The Singularity is Fideistic

Part of The Frontiers Collection book series (FRONTCOLL)

Abstract

We deploy a framework for classifying the bases for belief in a category of events marked by being at once weighty, unseen, and temporally removed (wutr, for short). While the primary source of wutr events in Occidental philosophy is the list of miracle claims of credal Christianity, we apply the framework to belief in The Singularity, surely—whether or not religious in nature—a wutr event. We conclude from this application, and the failure of fit with both rationalist and empiricist argument schemas in support of this belief, not that The Singularity won’t come to pass, but rather that regardless of what the future holds, believers in the “machine intelligence explosion” are simply fideists. While it’s true that fideists have been taken seriously in the realm of religion (e.g. Kierkegaard in the case of some quarters of Christendom), even in that domain orthodox believers such as Descartes, Pascal, Leibniz, and Paley find fideism to be little more than wishful, irrational thinking—and at any rate it’s rather doubtful that fideists should be taken seriously in the realm of science and engineering.

Keywords

  • Computing Machine
  • Turing Machine
  • Human Person
  • Reasonable Doubt
  • Deductive Proof



Notes

  1.

    Which shouldn’t be confused with the denomination known as ‘Greek Orthodox’—a denomination that does though happen to itself be orthodox/credal in our sense. An elegant characterization of orthodox Christianity is provided by Chesterton (2009). Along the same lines, and no doubt paying homage to his intellectual and spiritual hero, is Lewis’s (1960) Mere Christianity. A more mechanical and modern characterization is obtained by simply following Swinburne (1981) in identifying orthodox Christianity with the union of the propositional claims in its ancient creeds (e.g. Apostles’, Nicene, Athanasian), which then, declaratively speaking within this limited scope, harmonizes Catholicism and Protestantism.

  2.

    And—see footnote 1—Chesterton, Lewis, and Swinburne.

  3.

    We are happy to agree that believing in a person includes more than mere propositional belief, but this topic isn’t germane to our objectives herein.

  4.

    We recognize that The Singularity has now come to be associated with a group of events (e.g. the group often is taken to include the ability of human persons to exist in forms that are not bio-embodied), but to maintain a reasonable scope in the present paper we identify \(\mathcal S \) with only the “smart-machine” prediction, which is quite in line with e.g. the sub-title of the highly influential (Kurzweil 2000): “When Computers Exceed Human Intelligence.” This is also in alignment with the locus classicus: (Good 1965).

  5.

    We recognize that Turing’s optimism was constrained by certain conditions regarding how long a computing machine’s prowess on his test would last, but such niceties can be safely left aside.

  6.

    As a matter of fact, Turing, like—as we shall see—those predicting the coming \(\mathcal S \), would seem to be guilty of the same fatal sin: failing to give a rationalist (or even an empiricist) argument for the prediction in question. One of us rather long ago happily conceded that the Turing Test will be passed (Bringsjord 1992), but this concession was not accompanied by any timeline whatsoever—and if there had been a timeline, it would have been an exceedingly conservative one.

  7.

    A counter-argument can be found in (Bringsjord 2010).

  8.

    This is as good a spot as any to say that we could mine the supernatural event-claims of Islam and Judaism instead of those in credal Christianity, but we aren’t that familiar with these other two monotheistic religions, and Western philosophy, for better or for worse, has certainly focused on the event-claims of Christianity rather than those of the other two historical monotheistic religions.

  9.

    For example, our conclusion about believers in The Singularity would be obtained by turning instead to (Pollock 1974). This is as good a place as any to mention that both Chisholm’s scheme, and Pollock’s, are “computing-machine friendly.” One of us has made use of Chisholm’s strength-factor scheme to ground software for engineering argumentation; see (Bringsjord et al. 2008). And Pollock himself built an artificial agent on the basis of his epistemology; see for example (Pollock 1989, 1995).

  10.

    Some readers will inevitably ask: “Is there any such thing?!” We are of course well aware of the fact that even some axioms in some axiomatic set theories are controversial, and hence perhaps not certain. (Even the power set axiom in ZFC has its detractors.) Nonetheless, whatever one can deduce in deductively valid fashion from, say, 1 = 1, would be certain, and one would be well-advised to believe such a consequence. For instance, \(1 = 1 \vee Q\), for any proposition \(Q\), would be an acceptable disjunction for even a strong rationalist to believe.
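    This sort of certainty can even be checked mechanically; in a proof assistant such as Lean, the disjunction above is a one-line theorem (a sketch):

```lean
-- For any proposition Q, the disjunction 1 = 1 ∨ Q is provable outright,
-- since its left disjunct is certain.
example (Q : Prop) : 1 = 1 ∨ Q := Or.inl rfl
```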

  11.

    For example, we could distinguish between the strength of inferential links in the argument for wutr \(P\).

  12.

    Barbarically put, the principle states that an argument for \(Q\) is only as strong, overall, as the weakest inferential link in that argument. We leave aside the fascinating subject of fideism “forced” by decision-theoretic considerations. One who for example agrees with Pascal’s Wager may decide to believe even if the best propositional evidence is counter-balanced, just because the potential disutility of not believing is infinitely large.
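    The principle admits a simple computational reading; here is a minimal Python sketch, with link labels and strength values invented purely for illustration:

```python
# Weakest-link principle: an argument's overall strength is the minimum
# of the strengths of its inferential links (illustrative sketch only;
# the labels and values below are invented).

def argument_strength(links):
    """links: mapping from inferential-link label to a strength in [0, 1]."""
    if not links:
        raise ValueError("an argument needs at least one inferential link")
    return min(links.values())

# A hypothetical argument for a wutr proposition P:
links_for_P = {
    "premise-1 to lemma": 0.9,
    "lemma to bridge claim": 0.7,
    "bridge claim to P": 0.2,   # the weak link caps the whole argument
}

print(argument_strength(links_for_P))  # -> 0.2
```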

  13.

    That there are such humans in no way is inconsistent with results (e.g. those produced by the ingenious experimentation of Johnson-Laird 2000) showing that most humans fail to reason at the level of FOL. For additional evidence that some people are pretty darn good at deductive reasoning that coincides with FOL, see (Rips 1994).

  14.

    In our experience, the concept of intelligence as it’s used in communication between those believing in \(\mathcal S \) comes at least close to being conflated with the concept of power, or more precisely, information-acquisition power, conjoined with processing speed à la Moore’s Law. Once this conflation occurs, the notion that machines of the future will be ultraintelligent quickly arrives on the scene. Why? The point can be put in sci-fi terms: We imagine a Terminator 3-like event in which unmanned machines hooked into all digital information on the planet suddenly break through any and all privacy restrictions on use of this data, and proceed to exploit it. These machines are now able to do things that are unprecedentedly “intelligent.” For example, the machines may now be able to prevent human crimes before they happen. (E.g. machines with access to everyone’s email, and the processing power to check them for plans of foul play, could thwart criminals.) Needless to say, while this notion of information-theoretic super-intelligence is coherent, and may in fact even be likely to materialize, no fundamentally new functionality is in play, and hence, while in our interaction with believers in The Singularity we witness the conflation in question, the case for \(\mathcal S \) isn’t insulated from our counter-argumentation.

References

  • Bringsjord, S. (1992). What robots can and can’t be. Dordrecht: Kluwer.

  • Bringsjord, S. (2010). Meeting Floridi’s challenge to artificial intelligence from the knowledge-game test for self-consciousness. Metaphilosophy, 41(3), 292–312. http://kryten.mm.rpi.edu/sb_on_floridi_offprint.pdf

  • Bringsjord, S., Taylor, J., Shilliday, A., Clark, M., & Arkoudas, K. (2008). Slate: An argument-centered intelligent assistant to human reasoners. In F. Grasso, N. Green, R. Kibble, & C. Reed (Eds.), Proceedings of the 8th International Workshop on Computational Models of Natural Argument (CMNA 8) (pp. 1–10). Patras, Greece. http://kryten.mm.rpi.edu/Bringsjord_etal_Slate_cmna_crc_061708.pdf

  • Bringsjord, S., & van Heuveln, B. (2003). The mental eye defense of an infinitized version of Yablo’s paradox. Analysis, 63(1), 61–70.

  • Bringsjord, S., & Zenzen, M. (2003). Superminds: People harness hypercomputation, and more. Dordrecht: Kluwer Academic Publishers.

  • Chalmers, D. (2010). The singularity: A philosophical analysis. Journal of Consciousness Studies, 17, 7–65.

  • Chesterton, G. (2009). Orthodoxy. Chicago: Moody Publishers.

  • Chisholm, R. (1977). Theory of knowledge. Englewood Cliffs: Prentice-Hall.

  • Dennett, D. (2007). Breaking the spell: Religion as a natural phenomenon. New York: Penguin.

  • Floridi, L. (2005). Consciousness, agents and the knowledge game. Minds and Machines, 15(3–4), 415–444. http://www.philosophyofinformation.net/publications/pdf/caatkg.pdf

  • Glymour, C. (1992). Thinking things through. Cambridge: MIT Press.

  • Good, I. J. (1965). Speculations concerning the first ultraintelligent machines. In F. Alt & M. Rubinoff (Eds.), Advances in computing (Vol. 6, pp. 31–38). New York: Academic Press.

  • Habermas, G. (1984). The resurrection of Jesus: An apologetic. Lanham: University Press of America.

  • Hamkins, J. D., & Lewis, A. (2000). Infinite time Turing machines. Journal of Symbolic Logic, 65(2), 567–604.

  • Inhelder, B., & Piaget, J. (1958). The growth of logical thinking from childhood to adolescence. New York: Basic Books.

  • Johnson-Laird, P. N., Legrenzi, P., Girotto, V., & Legrenzi, M. S. (2000). Illusions in reasoning about consistency. Science, 288, 531–532.

  • Kierkegaard, S. (1986). Fear and trembling. New York: Penguin.

  • Kurzweil, R. (2000). The age of spiritual machines: When computers exceed human intelligence. New York: Penguin USA.

  • Leibniz, G. (1998). Theodicy. Chicago: Open Court.

  • Lewis, C. S. (1960). Mere Christianity. New York: Macmillan.

  • McGrew, T. (2010). Miracles. Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/miracles

  • Paley, W. (2010). Evidences of Christianity. Qontro Classic Books. Paley’s apology was first published in 1794; the full official title is A View of the Evidences of Christianity. The book is available through Project Gutenberg.

  • Pollock, J. (1974). Knowledge and justification. Princeton: Princeton University Press.

  • Pollock, J. (1989). How to build a person: A prolegomenon. Cambridge: MIT Press.

  • Pollock, J. (1995). Cognitive carpentry: A blueprint for how to build a person. Cambridge: MIT Press.

  • Rips, L. (1994). The psychology of proof. Cambridge: MIT Press.

  • Swinburne, R. (1981). Faith and reason. Oxford: Clarendon Press.

  • Swinburne, R. (1991). The existence of God. Oxford: Oxford University Press.

  • Swinburne, R. (2010). Was Jesus God? Oxford: Oxford University Press.

  • Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.

  • Vinge, V. (1993). The coming technological singularity: How to survive in the post-human era. VISION-21 Symposium, sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, Mar 30–31, 1993. NASA CP-10129. Also in Whole Earth Review. http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html

  • Vinge, V. (2007). What if the Singularity does NOT happen. Talk presented at Seminars About Long-Term Thinking, 15 Feb 2007. http://www-rohan.sdsu.edu/faculty/vinge/longnow/index.htm

  • Vinge, V. (2010). Species of mind. Talk presented at IAAI-10, 15 July 2010. http://www-rohan.sdsu.edu/faculty/vinge/misc/iaai10/

  • Yudkowsky, E. (1996). Staring into the Singularity. http://yudkowsky.net/obsolete/singularity.html

Author information

Correspondence to Selmer Bringsjord.

Appendices

Vernor Vinge on Bringsjord et al.’s “Belief in the Singularity is Fideistic”

It’s no surprise that pure rationalism is useless for discussing the possibility of the Singularity. Pure rationalism is not much use outside of mathematics. (And in computer science, pace Edsger Dijkstra, it’s not really useful outside of very simple situations.) In the sciences, the goal to strive for is rationalism combined with a focused empiricism consisting of cleverly planned observations and experiments that disprove as much as possible as quickly as possible.

Unfortunately, such a combination of rationalism and empiricism is rarely attainable in discussing future progress in science and engineering. (When it can be achieved, it amounts to Alan Kay’s famous advice that “The best way to predict the future is to invent it.”) For many planning environments, we must instead consider a variety of scenarios (e.g. Vinge 1993, 2007). Risks and symptoms and benchmarks can then be watched for and used to support further plans and action. In this process, some of the players may be somewhat fideistic. That’s fine. Without an element of fideism in our entrepreneurs, we’d have fewer failures, but we’d also lose or postpone many wonderful innovations.

As for Bringsjord et al.’s WUTR (weighty, unseen, and temporally removed) assessment of the Technological Singularity:

  • Weighty:

The possibility of the Singularity is certainly weighty. Progress along all the different paths to the Singularity is bringing into focus (and perhaps stark immediacy) a number of questions that have been endlessly debated over the last few thousand years (identity, consciousness, intelligence, mortality). Whether or not the Singularity happens, the technological interrogation of these issues has put us in a different playing field than all the philosophers of the past.

  • Unseen:

That there are no current examples of super-intelligence is not a surprise. On the other hand, the milestones already passed are not trivial, except as claimed to be so after they were attained. Bringsjord et al. propose an interesting milestone of their own, the problem of automatic program generation where the input is a simple function described in standard mathematical notation. Tell me more! This sounds like something that is doable with 2012-era computers/software, at least competitive with human performance.
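To make the milestone concrete, here is a deliberately toy Python sketch, invented here purely for illustration (it is not Bringsjord et al.’s actual proposal): a “generator” that turns a factorial-style recurrence, supplied in near-mathematical form, into runnable code.

```python
# Toy program generator: given a base case and a recurrence step written
# in terms of 'n' and 'rec' (the recursive call f(n-1)), emit and compile
# Python source. A tiny, invented instance of the "program generation
# from mathematical notation" milestone.

def generate_from_spec(name, base_value, step_expr):
    """Return (compiled function, generated source) for the recurrence."""
    src = (
        f"def {name}(n):\n"
        f"    if n == 0:\n"
        f"        return {base_value}\n"
        f"    rec = {name}(n - 1)\n"
        f"    return {step_expr}\n"
    )
    namespace = {}
    exec(src, namespace)  # compile the generated source in a fresh namespace
    return namespace[name], src

# f(0) = 1, f(n) = n * f(n-1)  ->  factorial
fact, source = generate_from_spec("fact", "1", "n * rec")
print(fact(5))  # -> 120
```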

Bringsjord et al. raise a much broader complaint in saying that Singularity enthusiasts don’t even specify the difference between human level intelligence and machine superhuman intelligence: “We have scoured the writings of pro-S thinkers for even an atom of an account of the difference, and have come up utterly empty.”

In discussing this point, they raise the possibility that superintelligence might be claimed as simply the running of a computer very fast—and they dismiss that possibility as irrelevant. I agree that 2012 software running very very fast would be an absurd contender, but that is the wrong comparison. For myself (and I expect most people) the really hard thing to accept is that human equivalent intellects could run on a computer. But that is a goal we have a moderately good criterion for, namely Turing’s Test (especially in the extended sense that Penrose describes in “The Emperor’s New Mind”, at the end of his generally skeptical discussion of the topic). Now imagine that such a Turing Test winner is run at much higher speed. In (Vinge 1993), I called such an achievement “weak superhumanity”. In fact, I used the word “weak” because I believe there would be a lot more to superhuman intelligence (Vinge 1993, 2010). Nevertheless, it provides a goal as specific as Turing’s Test for the discussion of superhuman intelligence.

  • Temporally removed:

Until it actually happens, the Singularity will have this characteristic. But in the absence of technological surprises and classical disasters (e.g. nuclear war), I expect to see automation gradually achieving more and more of what have been human-only capabilities. At the same time, I expect that human/computer teams will be ever more powerful; they may in fact guide the Singularity into being. The Teens should be interesting years.

Michael Anissimov on Bringsjord et al.’s “Belief in The Singularity is Fideistic”

The substance of Bringsjord et al.’s critique is in a single paragraph on pages 10–11 of their essay, P1 referring to Chalmers’ first assumption, “there will (eventually, barring defeaters) be Artificial Intelligence (of the human level)”:

There can be no denying that (P1) isn’t certain; in fact, all of us can be quite certain that (P1) isn’t certain. [\({\ldots }\)] suppose that human persons are information-processing machines more powerful than standard Turing machines, for instance the infinite-time Turing machines specified and explored by Hamkins and Lewis (2000), that AI (as referred to in A) is based on standard Turing-level information processing, and that the process of creating the artificial intelligent machines is itself at the level of Turing-computable functions. Under these jointly consistent mathematical suppositions, it can be easily proved that AI can never reach the level of human persons (and motivated readers with a modicum of understanding of the mathematics of computer science are encouraged to carry out the proof). So, we know that (P1) isn’t certain.

It is difficult to ascertain on what basis Bringsjord et al. are making the claim that human persons are information-processing machines “more powerful” than standard Turing machines. Occam’s razor, along with decades of evidence from cognitive science, seems to imply that the human brain and mind can be viewed as a massively parallel Turing machine.

Supposing that artificial intelligences will “never reach the level of human persons” is a claim with few academic citations. Generally, such statements appear to be appeals to intuitions of human exceptionalism—the notion that humans have something deeply special about them that could never be duplicated in a machine. Given this intuition, human exceptionalists are forced to retroactively search for supporting arguments. The notion that human brains somehow utilize extra-Turing information processing is one such argument.

The Church-Turing thesis is the idea that anything algorithmically computable is computable by a Turing machine. Given the nearly universally accepted supposition in the cognitive sciences that intelligence is made up of a collection of mental routines that are fuzzy algorithms, plus the Church-Turing thesis, we get the conclusion that intelligence is indeed computable by standard Turing machines. Acceptance of these two ideas is not universal in cognitive science and computer science, but the ideas are broadly accepted, with extensive discussions in the literature.
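The Turing-machine model invoked here is easy to make concrete. The following Python sketch, with a machine and encoding invented purely for illustration, simulates a one-tape deterministic machine of the kind the Church-Turing thesis is about:

```python
# A minimal deterministic Turing machine simulator (illustrative sketch).
# The Church-Turing thesis says any algorithmically computable function
# can be computed by such a machine; as a toy example, this machine
# complements every bit of its input.

def run_tm(transitions, tape, state="q0", blank="_", max_steps=10_000):
    """Run a TM; transitions maps (state, symbol) -> (state, symbol, move)."""
    tape = dict(enumerate(tape))  # sparse tape, defaulting to blanks
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:  # halt when no rule applies
            break
        state, tape[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Toy machine: flip each bit, moving right until the first blank.
flip = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
}

print(run_tm(flip, "10110"))  # -> 01001
```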

The history of science is filled with various examples of human exceptionalism that were proven wrong. For instance, the notion that human beings are animated by an immaterial soul has been replaced by the scientific notion of the brain as the director of behavior. Another example would be the pre-scientific notion of humans as separate from the animal kingdom, replaced by the idea of humans as a part of the animal kingdom. The notion that human beings are the only agents that can implement intelligence is being supplanted by the notion that intelligence is a bundle of algorithms that can be implemented by any suitable computer, whether carbon-based or silicon-based.

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Bringsjord, S., Bringsjord, A., Bello, P. (2012). Belief in The Singularity is Fideistic. In: Eden, A., Moor, J., Søraker, J., Steinhart, E. (eds) Singularity Hypotheses. The Frontiers Collection. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32560-1_19

  • DOI: https://doi.org/10.1007/978-3-642-32560-1_19

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-32559-5

  • Online ISBN: 978-3-642-32560-1
