
Software Immortals: Science or Faith?

  • Chapter
Singularity Hypotheses

Part of the book series: The Frontiers Collection ((FRONTCOLL))

Abstract

According to the early futurist Julian Huxley, human life as we know it is ‘a wretched makeshift, rooted in ignorance’. With modern science, however, ‘the present limitations and miserable frustrations of our existence could be in large measure surmounted’ and human life could be ‘transcended by a state of existence based on the illumination of knowledge’ (1957b, p. 16).


Notes

  1.

    The Singularity will infuse the universe with ‘spirit’ in the sense that, Kurzweil predicts, we will be able to convert much of the matter of the universe into ‘computronium’—the ‘ultimate computing substrate’ (2007b). Moravec too hypothesizes that the entire universe might be converted into ‘an extended thinking entity, a prelude to even greater things’ (1988, p. 116).

  2.

    Anselm 1078/1973, Chap. 16 (p. 257).

  3.

    Kurzweil calls the belief that death gives meaning to life the ‘deathist meme’ (Olson and Kurzweil 2006).

  4.

    In 2001 Kurzweil predicted that we would be able to build hardware matching the computational capacity of the human brain by 2010, and in 2006, he predicted software enabling a machine to match a human’s cognitive capacities by 2029—i.e., by 2029 machines will be able to pass the Turing test (Kurzweil 2001, 2006a). Tipler predicts human-level AI by 2030 (2007, p. 251).

  5.

    According to Goertzel (2007b), with a coordinated effort we could reach the Singularity even earlier—by approximately 2016.

  6.

    Other futurists, who are not techno-supernaturalists, make similar claims about the possibility or indeed feasibility of uploading: see e.g. Goertzel 2007a, b.

  7.

    According to Maimonides, a human being is composed of a ‘substance’ and a ‘form’ (c. 1178/1981, p. 38a). The afterlife will be made up of ‘separated souls’, which are ‘divested of anything corporeal’ (1191/1985, pp. 215, 216). Angels are ‘forms without substance’ (c. 1178/1981, p. 39a).

  8.

    Maimonides also endorsed the doctrine of physical resurrection. Prior to the world-to-come, God can return the soul to the body, enabling the individual to live another long life. ‘Life in the world-to-come follows the Resurrection’, Maimonides said (1191/1985, p. 217).

  9.

    Isaiah 25:8. Tanakh: A New Translation of THE HOLY SCRIPTURES According to the Traditional Hebrew Text. Philadelphia: The Jewish Publication Society, 1985.

  10.

    Acts 17:28. The Holy Bible, King James Version.

  11.

    1 Corinthians 15:44. The Holy Bible, New Revised Standard Version. New York: Oxford University Press, 1989.

  12.

    See too Steinhart 2008.

  13.

    Isaiah 26:19. Tanakh: A New Translation of THE HOLY SCRIPTURES According to the Traditional Hebrew Text.

  14.

    If true, how can we know that this life isn’t a simulation (Tipler 1994)? The notion of simulation resurrection leads to the ‘simulation argument’ (see Bostrom 2003b). On sceptical arguments based on simulation-resurrection (or ‘matrix’) thought-experiments, see further Weatherson 2003; Chalmers 2005; Brueckner 2008; Bostrom 2009b; Bostrom and Kulczycki 2011. On the simulation argument with a theological twist, see Steinhart 2010.

  15.

    Moravec does not share the view of the posthuman future as heaven—as he points out, ‘[s]uperintelligence is not perfection’ (1988, p. 125). See further the section ‘Doctrine and Faith’.

  16.

    According to Kurzweil, super-intelligent humans may engineer new universes (2007b)—another behaviour typically attributed to God.

  17.

    See too criticisms of Kurzweil’s claims about the history of computing (Proudfoot 1999a, b).

  18.

    Patternism typically addresses the brain, despite Moravec’s reference to brain and body.

  19.

    Goertzel (2007b) also uses the term ‘pattern’ (and ‘patternist philosophy of mind’), claiming that the mind is a ‘set of patterns’. According to Goertzel, ‘the mind can live on via transferring the patterns that constitute it into a digitally embodied software vehicle’. What lives on is a ‘digital twin’.

  20.

    Similarly, Bostrom says that a brain scan must be detailed enough to capture the ‘features that are functionally relevant to the original brain’s operation’ (Bostrom and Yudkowsky 2011). But which features are these?

  21.

    The locus classicus is Strawson 1959. Philosophers have also argued that, since human beings are animals, the appropriate persistence conditions for human persons are those for biological organisms (e.g. Olson 1997).

  22.

    On zombie thought-experiments, see Chalmers 1995; Block 1995; McCarthy 1995; Dennett 1995; Flanagan and Polger 1995; Sloman 2010.

  23.

    The classic statement of the duplication problem is found in Williams 1973a, p. 77; 1973b, p. 19. Making bodily continuity a necessary condition of persistence of persons still allows an analogous problem arising from ‘fission’ (see Parfit 1987, pp. 254–261).

  24.

    Here, as elsewhere in this essay, I suppress the symmetry step A = B ⊢ B = A.

  25.

    Likewise, if a back-up of A is a mere copy, then it is a mere copy even if in fact it is the only back-up: a mere copy that is actually created has no more claim to be A than any other back-up that might have been created. (Using the standard distinction, A’s duplicate is qualitatively, but not numerically, identical to A.)

  26.

    See Sainsbury 2009, pp. 107–109.

  27.

    Cf. Steinhart’s notion of a ‘variant’ (2002, pp. 311, 312).

  28.

    On the notion of A’s ‘surviving as’ (rather than being identical to) both B and C, see Parfit 2008. On ‘survival as’ a digital ‘ghost’, see Steinhart (2007, 2010). Chalmers (2010) also suggests this move.

  29.

    The proponent of replacing identity with survival-as regards the cost as minimal—‘this way of dying is about as good as ordinary survival’, Parfit claims (1987, p. 264).

  30.

    Bostrom gives mixed signals on the question of survival. He also claims that, as an uploaded mind file, one will have ‘the ability to make back-up copies of oneself (favorably impacting on one’s life-expectancy)’ (2005a, p. 7).

  31.

    Jack Copeland suggested this strategy to me, and I am indebted to him for helpful discussion of this point.

  32.

    See e.g. Zadeh (1975); Goguen (1969).

  33.

    See Copeland (1997).

  34.

    On person-specific Turing tests, see further Steinhart 2007.

  35.

    Luther c. 1530–2/1959, p. 78. For Luther, belief can be justified—by faith itself.

  36.

    According to Tipler (1994), God is the ‘Omega Point’—the ‘completion’ of all finite existence; the Omega point ‘loves us’ and for this reason will give us immortality (pp. 12, 14). Again this is unjustified anthropomorphism.

  37.

    Steinhart (2008) argues that posthumans (since they have been perfected) will be sensitive to their ‘ethical and epistemic obligations’, and so will simulate ‘all lesser civilizations’. However, this is still to anthropomorphize beings that are more like angels than humans. In response to the argument from evil, for example, many theologians and philosophers have insisted that we cannot deduce the moral attitudes of the divine—following this reasoning, there may be a ‘noseeum’ reason why posthumans will not recognize (or observe) Tipler’s ‘universal’ moral principle.

  38.

    Catechism of the Catholic Church, with modifications from the Editio Typica (New York: Doubleday, 1994), Part One, Chapter Three, Article 11, 1000 (p. 282).

  39.

    Hume (1757/1956), p. 30.

  40.

    Freud (1949), p. 30.

  41.

    Solomon et al. (2004), pp. 16, 17.

  42.

    Of course, this does not falsify the Singularity hypothesis—any more than it does the claims of supernaturalist religion.

  43.

    I am grateful to Jack Copeland and to Eric Steinhart for their valuable comments on an earlier draft of this paper.

References

  • Allen, P., & Greaves, M. (2011). The singularity isn’t near. Technology Review. October 12, 2011, http://www.technologyreview.com/blog/guest/27206.

  • Anselm. (1078/1973). The prayers and meditations of St Anselm (Translated and with an introduction by Sister B. Ward). London: Penguin Books.

  • Ayres, R. U. (2006). Review [of The singularity is near]. Technological Forecasting & Social Change, 73, 95–100.

  • Block, N. (1995). On a confusion about the function of consciousness. Behavioral and Brain Sciences, 18(2), 227–287.

  • Bostrom, N. (2003a). When machines outsmart humans. Futures, 35, 759–764.

  • Bostrom, N. (2003b). Are we living in a computer simulation? Philosophical Quarterly, 53(211), 243–255.

  • Bostrom, N. (2004). The future of human evolution. In C. Tandy (Ed.), Death and anti-death: Two hundred years after Kant, fifty years after Turing (pp. 339–371). Palo Alto, CA: Ria University Press. http://www.nickbostrom.com/fut/evolution.html.

  • Bostrom, N. (2005a). Transhumanist values. Journal of Philosophical Research, Special Supplement: Ethical Issues for the Twenty-First Century, F. Adams (Ed.), (pp. 3–14). Charlottesville, VA: Philosophy Documentation Center.

  • Bostrom, N. (2005b). A history of transhumanist thought. Journal of Evolution and Technology, 14(1). April, 2005, http://jetpress.org/volume14/freitas.html.

  • Bostrom, N. (2006). How long before superintelligence? Linguistic and Philosophical Investigations, 5(1), 11–30.

  • Bostrom, N. (2008a). Letter from Utopia. Studies in Ethics, Law, and Technology, 2(1). http://www.bepress.com/selt/vol2/iss1/art6.

  • Bostrom, N. (2008b). Why I want to be a Posthuman when I grow up. In B. Gordijn & R. Chadwick (Eds.), Medical enhancement and posthumanity (pp. 107–137). Dordrecht: Springer.

  • Bostrom, N. (2009a). The future of humanity. In J. K. Berg Olsen, E. Selinger, & S. Riis (Eds.), New waves in philosophy of technology (pp. 186–215). New York: Palgrave Macmillan.

  • Bostrom, N. (2009b). The simulation argument: Some explanations. Analysis, 69(3), 458–461.

  • Bostrom, N., & Kulczycki, M. (2011). A patch for the simulation argument. Analysis, 71(1), 54–61.

  • Bostrom, N., & Yudkowsky, E. (2011). The ethics of artificial intelligence. In W. Ramsey & K. Frankish (Eds.), Draft for Cambridge handbook of artificial intelligence. Accessed November 8, 2011, http://www.fhi.ox.ac.uk/selected_outputs_journal_articles.

  • Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139–159.

  • Brooks, R. A. (1995). Intelligence without reason. In L. Steels & R. A. Brooks (Eds.), The artificial life route to artificial intelligence. Hillsdale: Lawrence Erlbaum.

  • Brooks, R. A. (1999). Cambrian intelligence: The early history of the new AI. Cambridge, MA: MIT Press.

  • Brooks, R. A., Kurzweil, R., & Gelernter, D. (2006). Gelernter, Kurzweil debate machine consciousness. KurzweilAI.net. December 6, 2006, http://www.kurzweilai.net/articles/art0688.html?printable=1.

  • Brueckner, A. (2008). The simulation argument again. Analysis, 68(3), 224–226.

  • Chalmers, D. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.

  • Chalmers, D. (2005). The matrix as metaphysics. In C. Grau (Ed.), Philosophers explore the matrix (pp. 132–176). Oxford: Oxford University Press.

  • Chalmers, D. (2010). The singularity: A philosophical analysis. Journal of Consciousness Studies, 17, 7–65.

  • Clark, A. (1997). Being there: Putting mind, body, and world together again. Cambridge, MA: MIT Press.

  • Copeland, B. J. (1997). Vague identity and fuzzy logic. Journal of Philosophy, 94(10), 514–534.

  • Copeland, B. J. (2000). Narrow versus wide mechanism. Journal of Philosophy, 97(1), 5–32.

  • Dennett, D. C. (1995). The unimagined preposterousness of zombies. Journal of Consciousness Studies, 2(4), 322–326.

  • Devezas, T. C. (2006). Discussion [of The singularity is near]. Technological Forecasting & Social Change, 73, 112–121.

  • Dreyfus, H. L. (1992). What computers still can’t do. Cambridge, MA: MIT Press.

  • Dreyfus, H. L. (2007). Why Heideggerian AI failed and how fixing it would require making it more Heideggerian. Artificial Intelligence, 171, 1137–1160.

  • Dreyfus, H. L., & Dreyfus, S. E. (1986). Mind over machine. Oxford: Blackwell.

  • Else, L. (2009). Ray Kurzweil: A singular view of the future. New Scientist Opinion, 2707. May 6, 2009. http://new.scientist.com.

  • Evans, G. (1978). Can there be vague objects? Analysis, 38(4), 208.

  • Flanagan, O., & Polger, T. (1995). Zombies and the function of consciousness. Journal of Consciousness Studies, 2(4), 313–321.

  • Ford, K., & Hayes, P. (1998). On conceptual wings: Rethinking the goals of artificial intelligence. Scientific American Presents, 9(4), 78–83.

  • Freud, S. (1949). The future of an illusion. London: The Hogarth Press.

  • Gershenson, C., & Heylighen, F. (2005). How can we think the complex? In K. Richardson (Ed.), Managing organizational complexity: Philosophy, theory and application (pp. 47–61). Information Age Publishing.

  • Goertzel, B. (2007a). Human-level artificial general intelligence and the possibility of a technological singularity. A reaction to Ray Kurzweil’s The Singularity is Near, and McDermott’s critique of Kurzweil. Artificial Intelligence, 171, 1161–1173.

  • Goertzel, B. (2007b). Artificial general intelligence: Now is the time. KurzweilAI.net. April 9, 2007, http://www.kurzweilai.net/artificial-general-intelligence-now-is-the-time.

  • Goguen, J. A. (1969). The logic of inexact concepts. Synthese, 29, 325–373.

  • Hawkins, J. (2008). [Interviewed in] Tech luminaries address singularity, [and in] Expert view. IEEE Spectrum. June, 2008. http://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity.

  • Heylighen, F. (2012). A brain in a vat cannot break out: Why the singularity must be extended, embedded and embodied. Journal of Consciousness Studies, 19(1–2), 126–142.

  • Hofstadter, D. (2008). [Interviewed in] Tech luminaries address singularity. IEEE Spectrum. June, 2008. http://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity.

  • Horgan, J. (2008). The consciousness conundrum. IEEE Spectrum, 45(6), 36–41.

  • Hume, D. (1757/1956). The natural history of religion. H. E. Root (Ed.). London: Adam & Charles Black.

  • Huxley, J. (1927). Religion without revelation. London: Ernest Benn Ltd.

  • Huxley, J. (1957a). Transhumanism. In J. Huxley, New bottles for new wine (pp. 13–17). London: Chatto and Windus.

  • Huxley, J. (1957b). Evolutionary humanism. In J. Huxley, New bottles for new wine (pp. 279–312).

  • Huxley, J. (1964). The new divinity. In J. Huxley, Essays of a humanist (pp. 218–226). London: Chatto and Windus.

  • Joy, W. (2000). Why the future doesn’t need us. Wired Magazine, 8(4). http://www.wired.com/wired/archive/8.04/joy.pr.html.

  • Kurzweil, R. (1999). The age of spiritual machines: When computers exceed human intelligence. New York: Viking Press.

  • Kurzweil, R. (2001). The law of accelerating returns. KurzweilAI.net. March 7, 2001, http://www.kurzweilai.net/articles/art0134.html?printable=1.

  • Kurzweil, R. (2002). The material world: “Is that all there is?” Response to George Gilder and Jay Richards. In J. W. Richards (Ed.), Are we spiritual machines? Ray Kurzweil vs. the critics of strong A.I. Seattle: Discovery Institute.

  • Kurzweil, R. (2004). A dialogue on reincarnation. KurzweilAI.net. January 6, 2004, http://www.kurzweilai.net/articles/art0609.html?printable=1.

  • Kurzweil, R. (2006a). Why we can be confident of Turing test capability within a quarter century. KurzweilAI.net. July 13, 2006, http://kurzweilai.net/meme/frame.html?main=/articles/art0683.html.

  • Kurzweil, R. (2006b). Reinventing humanity: The future of machine-human intelligence. The Futurist, 40(2), 39–46.

  • Kurzweil, R. (2006c). Nanotechnology dangers and defenses. Nanotechnology Perceptions, 2, 7–13.

  • Kurzweil, R. (2006d). The singularity is near: When humans transcend biology. New York: Penguin Books.

  • Kurzweil, R. (2007a). Let’s not go back to nature. New Scientist, 2593, 19.

  • Kurzweil, R. (2007b). Foreword to The intelligent universe. KurzweilAI.net. February 2, 2007, http://www.kurzweilai.net/articles/art0691.html?printable=1.

  • Kurzweil, R. (2011). Don’t underestimate the singularity. Technology Review. October 19, 2011, http://www.technologyreview.com/blog/guest/27263.

  • Luther, M. (c. 1530–2/1959). Luther’s works, Vol. 23: Sermons on the Gospel of St. John, Chapters 6–8. J. Pelikan & D. E. Poellot (Eds.). St. Louis, Missouri: Concordia Publishing House.

  • Maimonides, M. (c. 1178/1981). Mishneh Torah: The book of knowledge. M. Hyamson (Ed.), New, corrected edition. Jerusalem and New York: Feldheim Publishers.

  • Maimonides, M. (1191/1985). Essay on resurrection. In Crisis and leadership: Epistles of Maimonides (A. Halkin, translation and notes; D. Hartman, discussion). Philadelphia: Jewish Publication Society of America.

  • McCarthy, J. (1995). Todd Moody’s zombies. Journal of Consciousness Studies, 2(4), 345–347.

  • McDermott, D. (2006). Kurzweil’s argument for the success of AI. Artificial Intelligence, 170, 1183–1186.

  • McDermott, D. (2007). Level-headed. Artificial Intelligence, 171, 1183–1186.

  • Modis, T. (2006). Discussion [of The singularity is near]. Technological Forecasting & Social Change, 73, 104–112.

  • Moore, G. E. (1965). Cramming more components onto integrated circuits. Electronics, 38(8), 114–117. (Reprinted in Proceedings of the IEEE, 86(1), 82–85, 1998.)

  • Moore, G. E. (1975). Progress in digital integrated electronics. Technical Digest, IEEE International Electron Devices Meeting, 21, 11–13.

  • Moore, G. E. (2008). [Interviewed in] Tech luminaries address singularity. IEEE Spectrum. June 2008, http://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity.

  • Moravec, H. (1988). Mind children: The future of robot and human intelligence. Cambridge, MA: Harvard University Press.

  • Moravec, H. (1992). Pigs in cyberspace. In B. R. Miller & M. T. Wolf (Eds.), Thinking robots, an aware internet, and cyberpunk librarians: The 1992 LITA President’s Program, presentation by Hans Moravec, Bruce Sterling, and David Brin. Chicago, Illinois: Library and Information Technology Association.

  • Moravec, H. (1998). When will computer hardware match the human brain? Journal of Evolution and Technology, 1. http://www.jetpress.org/volume1/moravec.htm.

  • Moravec, H. (1999). Robot: Mere machine to transcendent mind. Oxford: Oxford University Press.

  • Nordmann, A. (2008). Singular simplicity. IEEE Spectrum, 45(6), 60–63.

  • Olson, E. (1997). The human animal: Personal identity without psychology. Oxford: Oxford University Press.

  • Olson, S., & Kurzweil, R. (2006). Sander Olson interviews Ray Kurzweil. KurzweilAI.net. February 3, 2006, http://www.kurzweilai.net/articles/art0643.html?printable=1.

  • Parfit, D. (1987). Reasons and persons. Oxford: Oxford University Press.

  • Parfit, D. (2008). Personal identity. In J. Perry (Ed.), Personal identity (2nd ed.). Berkeley: University of California Press.

  • Pollack, J. B. (2006). Mindless intelligence. IEEE Intelligent Systems, 21(3), 50–56.

  • Proudfoot, D. (1999a). How human can they get? Science, 284(5415), 745.

  • Proudfoot, D. (1999b). Facts about artificial intelligence. Science, 285(5429), 835.

  • Proudfoot, D. (2011). Anthropomorphism and AI: Turing’s much misunderstood imitation game. Artificial Intelligence, 175, 950–957.

  • Proudfoot, D. (2012). Software immortals: Science or faith. In A. Eden, J. Søraker, J. Moor, & E. Steinhart (Eds.), The singularity hypothesis: A scientific and philosophical analysis. The Frontiers Collection. Springer.

  • Proudfoot, D., & Copeland, B. J. (2011). Artificial intelligence. In E. Margolis, R. Samuels, & S. P. Stich (Eds.), The Oxford handbook to philosophy and cognitive science (pp. 147–182). New York: Oxford University Press.

  • Pyszczynski, T., Greenberg, J., & Solomon, S. (1999). A dual-process model of defense against conscious and unconscious death-related thoughts: An extension of terror management theory. Psychological Review, 106(4), 835.

  • Richards, J. W. (Ed.). (2002). Are we spiritual machines? Ray Kurzweil vs. the critics of strong A.I. Seattle: Discovery Institute.

  • Sainsbury, R. M. (2009). Paradoxes (3rd ed.). Cambridge: Cambridge University Press.

  • Sandberg, A., & Bostrom, N. (2008). Whole brain emulation: A roadmap. Technical report #2008-3, Future of Humanity Institute, Oxford University. http://www.fhi.ox.ac.uk/reports/2008-3.pdf.

  • Shapiro, L. (2011). Embodied cognition. Milton Park: Routledge.

  • Sloman, A. (2008). The well-designed young mathematician. Artificial Intelligence, 172, 2015–2034.

  • Sloman, A. (2010). Phenomenal and access consciousness and the ‘Hard’ problem: A view from the designer stance. International Journal of Machine Consciousness, 2(1), 117–169.

  • Solomon, S., Greenberg, J., & Pyszczynski, T. (2004). The cultural animal: Twenty years of terror management theory and research. In J. Greenberg, S. L. Koole, & T. Pyszczynski (Eds.), Handbook of experimental existential psychology. New York: The Guilford Press.

  • Steinhart, E. (2002). Indiscernible persons. Metaphilosophy, 33(3), 300–320.

  • Steinhart, E. (2007). Survival as a digital ghost. Minds and Machines, 17, 261–271.

  • Steinhart, E. (2008). Teilhard de Chardin and transhumanism. Journal of Evolution and Technology, 20(1), 1–22. http://jetpress.org/v20/steinhart.htm.

  • Steinhart, E. (2010). Theological implications of the simulation argument. Ars Disputandi, 10, 23–37.

  • Strawson, P. F. (1959). Individuals: An essay in descriptive metaphysics. London: Methuen.

  • Tipler, F. J. (1994). The physics of immortality: Modern cosmology, God and the resurrection of the dead. New York: Doubleday.

  • Tipler, F. J. (2007). The physics of Christianity. New York: Doubleday.

  • Voltaire (1764/1971). Philosophical dictionary (T. Besterman, Ed. and Trans.). Harmondsworth, Middlesex: Penguin Books.

  • Weatherson, B. (2003). Are you a sim? Philosophical Quarterly, 53(212), 425–431.

  • Whitby, B. (1996). The Turing test: AI’s biggest blind alley? In P. Millican & A. Clark (Eds.), The legacy of Alan Turing (Vol. I, Machines and thought). Oxford: Oxford University Press.

  • Williams, B. (1973a). Are persons bodies? In B. Williams, Problems of the self: Philosophical papers (1956–1972). Cambridge: Cambridge University Press.

  • Williams, B. (1973b). Bodily continuity and personal identity. In B. Williams, Problems of the self: Philosophical papers (1956–1972). Cambridge: Cambridge University Press.

  • Zadeh, L. A. (1975). Fuzzy logic and approximate reasoning. Synthese, 30, 407–428.

  • Zorpette, G. (2008). Waiting for the rapture. IEEE Spectrum, 45(6), 32–35.


Author information

Correspondence to Diane Proudfoot.


Francis Heylighen on Proudfoot’s “Software Immortals: Science or Faith?”

The Continuity of Embodied Identity

I enjoyed reading Diane Proudfoot’s essay on “technological supernaturalism”, i.e. the belief that human individuals will be resurrected as immortal software entities by some future, God-like artificial intelligence(s) (Proudfoot 2012). Proudfoot thoroughly deconstructs the many dubious assumptions underlying this philosophy, as propounded by authors such as Kurzweil, Bostrom, Moravec and Tipler.

I particularly liked her arguments showing that this purportedly scientific vision is almost wholly parallel to the traditional religious vision in which our souls are promised an eternal life in heavenly bliss after our mortal bodies have passed away. The “terror management” theory (Pyszczynski et al. 1999) that she refers to indeed provides a plausible explanation for why people, whether religiously or scientifically inspired, seem to be drawn so strongly to the idea that their personhood would somehow survive physical death. But we may not even need such a psychological explanation for this glaring similarity between technological and religious supernaturalism: to me it seems obvious that the former is directly inspired by the latter. For example, while Tipler initially presented his ideas as purely scientific inferences, in further writing (Tipler 2007) he made it clear that he is a devout Catholic who takes doctrine rather literally. The motivation to rationalize a pre-existing faith may be less obvious in the case of more humanistic thinkers, like Bostrom or Moravec. But even a staunch atheist cannot avoid being influenced by such a pervasive meme as the belief in an afterlife, and may be tempted to defuse its power to convert people to religion by reinterpreting it scientifically.

After pointing out where I agree with Proudfoot, let me now indicate where we part ways. In my view, her paper falls into the common trap of what may be called “analytic nitpicking”. Philosophers in the analytic tradition investigate issues by making fine-grained distinctions between the different possible meanings of a concept, and then applying logic to draw out the implications of each possible interpretation, in particular to show how a given interpretation may lead to some inconsistency or counter-intuitive result. But these “technical distinctions”—to use Proudfoot’s phrase—are generally considered meaningful only by philosophers: scientists and practitioners typically do not care, because the distinctions tend to lack operational significance. A classic example is the zombie thought experiment about consciousness (Chalmers 1995): if a zombie by definition behaves indistinguishably from a normal human, then according to Leibniz’s principle of the identity of indiscernibles, a zombie must be a human. The zombie argument therefore fails to clarify anything about consciousness.
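
Heylighen’s appeal to Leibniz can be made explicit. The following is a standard second-order statement of the principle (the labelling and the application to the zombie case are my paraphrase of his argument, not part of the original text):

```latex
% Identity of indiscernibles: objects sharing all properties are identical.
\forall F \,\bigl( F(a) \leftrightarrow F(b) \bigr) \rightarrow a = b
% Heylighen's application: if a zombie z shares every behaviourally
% observable property with a human h, then z = h.
```

Note that the inference goes through only if behavioural indistinguishability exhausts the relevant properties, which is precisely what the zombie theorist denies; the formula at least makes clear where the disagreement lies.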

Proudfoot applies the analytic method to the problem of personal identity: to what extent can an “uploaded”, software personality be identical to the original flesh-and-blood person that it is supposed to resurrect? She argues that the various interpretations of the identity concept all lead to problems—such as a lack of transitivity, or the apparently nonsensical conclusion that two independent software instantiations, A and B, are actually one person. I consider this nitpicking because the identity concept, like practically any concept used in real life, is essentially vague and fluid. The recurrent error made by analytic philosophers is to assume that distinctions are absolute and invariant, whereas in the complex reality that surrounds us distinctions tend to vary across times, observers and contexts (Gershenson and Heylighen 2005).

Hence, apparently universal rules about the logical notion of identity (such as A = B and B = C, therefore A = C) are unlikely to be applicable to the much more fluid notion of personal identity. Proudfoot is to some degree aware of these difficulties, and therefore considers the alternative model of fuzzy logic. But fuzzy logic is still a kind of logic, and therefore built on invariant (albeit fuzzy) distinctions. The nature of personal identity is precisely that it is not invariant. It is not only the case—as the authors cited by Proudfoot point out—that since I was born about every atom in my body has changed, but also that about every bit of knowledge, experience or emotion in my mind has changed. My personality is substantially different from the personality I had when I was born, or even when I was 5, 10, 15, or 20 years old.
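
The failure of transitivity that this argument turns on can be sketched with a toy model (purely illustrative: the personality vectors, the overlap measure, and the 0.9 threshold are all invented for this sketch). If “same person” means “similar above some threshold”, the relation can hold between successive snapshots of a life yet fail between its endpoints:

```python
# Toy model: personal identity as graded similarity between "personality
# snapshots". All numbers and the threshold are invented for illustration.

def similarity(x, y):
    """Overlap between two snapshots (1.0 = identical)."""
    shared = sum(min(a, b) for a, b in zip(x, y))
    total = sum(max(a, b) for a, b in zip(x, y))
    return shared / total

THRESHOLD = 0.9  # arbitrary cut-off for counting as "the same person"

def same_person(x, y):
    return similarity(x, y) >= THRESHOLD

# Three snapshots of one life: each neighbouring pair is very similar,
# but the endpoints have drifted apart.
age_5  = [1.0, 1.0, 1.0, 1.0]
age_20 = [1.0, 1.0, 1.0, 0.7]
age_40 = [1.0, 1.0, 0.7, 0.7]

print(same_person(age_5, age_20))   # True:  similarity 0.925
print(same_person(age_20, age_40))  # True:  similarity ~0.919
print(same_person(age_5, age_40))   # False: similarity 0.85
```

A threshold on a graded similarity is not transitive, which is exactly why the strict logical rule A = B, B = C, therefore A = C fails to carry over to this fluid notion of identity.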

The only thing that allows me to state that the Francis Heylighen of today is somehow still the same as the Francis Heylighen of 40 years ago is continuity: during that time, there was a continuing distinction between Francis Heylighen and the rest of the world, even while the nature of that distinction was changing. This continuity was not one of consciousness (which waxed and waned along with my sleep-wake cycle), but of the rough outline of my body and personality. This continuity is precisely what is lacking in the resurrection scenarios of the technological supernaturalists. In such a scenario, my body and personality break down at my biological death, while my personality (or at least a software equivalent of it) is recreated by a super-intelligent AI many decades later, in a completely different (non-physical) environment.

Proudfoot is right to question the claim that the resurrected personality would be identical to my original personality (together with the more outlandish claims that the AI would feel compelled to resurrect every person that ever lived, or that the information about all these personalities would have survived the inevitable thermodynamic dissipation). However, rather than wandering through “technical distinctions” about identity, she would have done better to focus on the most glaring difference: the resurrected personality would lack both my body and my environment. While she mentions the situated and embodied perspective on cognition merely in passing, for me it is crucial: the ability to interact with the environment via bodily sensors and effectors is a defining feature of the notions of person, mind, consciousness and intelligence. As I have developed this point in more depth in my criticism of the common view of the Singularity as the emergence of a disembodied super-intelligence (Heylighen 2012), I won’t go into further details here.

However, note that this philosophy does not deny the possibility of attaining some sort of technological immortality: continuity of identity can in principle be maintained by gradually replacing my different body parts with various electronic circuits—as long as these maintain (or augment) my ability to interact with the world via high-bandwidth sensors and effectors. But now we are entering the domain of practical implementation, leaving behind both the metaphysical speculations of the techno-supernaturalists and the Platonic nitpicking of the analytic philosophers.


Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Proudfoot, D. (2012). Software Immortals: Science or Faith? In: Eden, A., Moor, J., Søraker, J., Steinhart, E. (eds) Singularity Hypotheses. The Frontiers Collection. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32560-1_18


  • DOI: https://doi.org/10.1007/978-3-642-32560-1_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-32559-5

  • Online ISBN: 978-3-642-32560-1

  • eBook Packages: Engineering (R0)
